
terrorjack

@terrorjack@functional.cafe

terrorjack, to random

one advantage of nixos on wsl is i get to use the unstable channel, practice using unfamiliar modules and administering a postgres/redis/elasticsearch thing or whatever, without worrying about screwing anything up. and the skills learnt here can be reapplied to my real server with a fair degree of confidence

terrorjack, to random

configuring nixos-wsl from a fresh install to my desired state is like playing dark souls. a bit harder admittedly, but it's for work and done in my working hours so i'm fine with that

terrorjack,

one step wrong and you need to start all over! but then i realized i can checkpoint the whole damn thing by running wsl --shutdown and copying ext4.vhdx elsewhere. which brings me back to memories of playing totono, one of the few visual novels with a meta narrative that involves nuking your saves, so i basically cheated by playing it in virtualbox and snapshotting the machine state whenever i clicked an option

terrorjack, to haskell

one aspect that still sucks is build parallelism:

  1. vanilla cabal builds are coarse grained and have component-level build dependencies

  2. cabal/ghc supports multiple home units now, but that's only for the repl for the time being

  3. cabal/ghc has semaphores now, so multiple ghc --make -jsem processes can share cpu cores without oversubscribing (see the sketch after this list). which is nicer, but not nice enough

  4. the semaphore format is homebrew and not something more standard like the make jobserver protocol, so it's hard to fit into external build systems

  5. external build systems resort to using oneshot mode instead of make mode, so one ghc invocation produces one .hi/.o pair, and a fair amount of cpu cycles are wasted compared to make mode due to repeatedly rebuilding context that could have been shared

  6. more importantly, once the .hi of an upstream module is emitted, and before that ghc -c invocation exits, downstream modules should be queued for compilation immediately. but this is tricky to implement and often omitted

  7. ironically the cpu cycles wasted in ghc oneshot mode are often compensated for by increased parallelism, because external build systems parse the cabal metadata but break through cabal's component-level dependency wall

  8. but then there's the cabal custom setup thing: for those packages you need to resort to actually respecting Setup.hs, and they can easily become bottlenecks of a build

  9. the people equipped with the knowledge to fix the situation thoroughly have tons of more important issues on their plates
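
not GHC's actual -jsem machinery, just a hedged sketch of the shared-semaphore idea from item 3, using System.Posix.Semaphore from the unix package; the semaphore name and the token count below are made up:

import Control.Exception (bracket_)
import System.Posix.Semaphore (OpenSemFlags (..), semOpen, semPost, semWait)

-- every build process takes a token before compiling and gives it back
-- afterwards, so total parallelism across processes stays bounded
withJobToken :: String -> IO a -> IO a
withJobToken name act = do
  -- create the semaphore with 8 tokens if it doesn't exist yet, otherwise
  -- attach to the existing one (8 cores is an assumption here)
  sem <- semOpen name (OpenSemFlags { semCreate = True, semExclusive = False }) 0o600 8
  bracket_ (semWait sem) (semPost sem) act

main :: IO ()
main = withJobToken "/my-jsem" (putStrLn "pretend we compile one module here")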

terrorjack, to random

TLS stands for thunk local storage. this name is almost pleasing enough to nerd-snipe me into actually writing a patch

terrorjack, to haskell

ok. sparks are indeed a nice way to get work-stealing nested parallelism for free in haskell, as long as you work with spark# directly and don't use par, pseq or anything built upon these combinators

terrorjack,

@leftpaddotpy spark# is state#-passing, so you can explicitly spawn a spark as a monadic operation in io or st, which feels natural. on the other hand, par's purity is an undesired burden because you now have evaluation order to worry about and have to litter your code with pseq. the entire "Strategy" thing and the "Eval" monad in the parallel package are a huge distraction in the same sense
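
for the curious, a minimal sketch of what "spark as a monadic operation" can look like; sparkIO is my own name here, not something from a library:

{-# LANGUAGE MagicHash, UnboxedTuples #-}

import GHC.Exts (spark#)
import GHC.IO (IO (..))

-- push a spark for the given thunk as an IO action; the returned value is
-- the same (still lazy) thunk, which the spark pool may evaluate in the
-- background
sparkIO :: a -> IO a
sparkIO a = IO (\s -> spark# a s)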

terrorjack,

@leftpaddotpy in all the workloads i have in mind, i only care about the final seq or deepseq on the root node of the computation. i'd even argue that you should avoid seq'ing at all if possible: just leave the thunk graph to the background and get on with the rest of the program logic

terrorjack,

@leftpaddotpy only when a spark needs to pattern match on another spark's result. and then there's a real risk the target spark hasn't been evaluated yet, so i'd argue it's better to simply bundle the continuation that performs the pattern matching into the same spark being spawned, since a monadic bind is sequential in nature anyway

terrorjack,

@leftpaddotpy of course you can spawn sparks from a spark. the real issue is when you:

x <- spark foo  
y <- if x then spark bar else spark baz  
pure $ qux x y  

then the spark for foo is likely to fizzle, because x gets demanded right away by the if. so when you use sparks and you need to pattern match, it's better to bundle the case continuation:

spark $ do
  x <- foo
  y <- if x then spark bar else spark baz
  pure $ qux x y

terrorjack,

there's another problem with sparks: the work-stealing queue in the rts cannot be resized at runtime, it's only configurable via an rts option. and that's not even the worst part: once the queue is full, later pushes silently fail, whereas the correct thing to do would be to make it a ring buffer and drop the head closures, since they were enqueued earliest and are therefore the most likely to have fizzled anyway. overall, a ghc patch to fix this situation is left as an exercise for the motivated reader
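
a small, hedged mitigation sketch from the library side rather than the rts fix asked for above: check numSparks before pushing, with the pool capacity passed in by the caller as an assumption:

{-# LANGUAGE MagicHash, UnboxedTuples #-}

import GHC.Conc (numSparks)
import GHC.Exts (spark#)
import GHC.IO (IO (..))

-- same sparkIO wrapper as in the earlier post
sparkIO :: a -> IO a
sparkIO a = IO (\s -> spark# a s)

-- only push a new spark when the local pool still has room, since a push
-- into a full pool is dropped silently anyway; assumedCapacity is whatever
-- you configured the spark pool size to be
sparkIfRoom :: Int -> a -> IO ()
sparkIfRoom assumedCapacity x = do
  n <- numSparks  -- sparks currently sitting in this capability's pool
  if n < assumedCapacity
    then () <$ sparkIO x
    else pure ()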

terrorjack, to random

can't feed whole manga series to gpt-s yet, sad :/

terrorjack, to random

agda seems to compile with the ghc wasm backend with just a bit of modification (remove custom setup & gitrev)! i'm not familiar with agda internals, but as long as it only involves file i/o and doesn't need to spawn processes to do real work, then in principle it shouldn't be hard to get a fully functional agda in the browser working :)

terrorjack, to random

portable way of putting rust code in a haskell package: just compile it to wasm and then to c via wasm2c and put it in cbits. no i'm not kidding
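
a hedged sketch of what the haskell side can end up looking like, assuming the wasm2c output in cbits is fronted by a tiny hand-written c shim that re-exports the rust function under a plain symbol; every name below is hypothetical:

{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CInt (..))

-- hypothetical symbol exported by a small wrapper in cbits/ around the
-- wasm2c-generated code, assuming the rust crate exposed an `add` function
foreign import ccall unsafe "hs_rust_add"
  c_hs_rust_add :: CInt -> CInt -> CInt

rustAdd :: Int -> Int -> Int
rustAdd x y = fromIntegral (c_hs_rust_add (fromIntegral x) (fromIntegral y))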

mistivia, to random Chinese

so that one day i can get full-remote global pay, do i still have to find a way to chew through Rust? C/C++ is dying, and the kira kira (shiny) startups that hire remote all love Rust.

terrorjack,

@fulkrum @mistivia aside from having your coins reported to the local government, the bigger problem is that when a mainland personal account receives a swift wire and converts it to rmb, it counts against the 50,000 usd annual quota. even if you yourself don't leave the country, your account at least has to! a few years ago standard chartered hong kong would still open accounts for mainland visitors and issue unionpay cards; receive the wire into the hong kong account, then withdraw cash at mainland atms with the unionpay card, and it doesn't count against the mainland forex settlement quota. no idea how that works these days

terrorjack, to random

gazing at graph-easy in amazement, and half in rage over the time i wasted in asciiflow manually counting grid cells and aligning boxes and arrows. i have no excuse not to include a ton of ascii diagrams in my code from now on

terrorjack, to random

did nasa broadcast just say they use unreal 5 for simulation

kripken, to random

Now you can run wasm-opt in wasm at much closer to the speed of a native build:

https://github.com/emscripten-core/emscripten/issues/15727#issuecomment-1960295018

It used to be 10x slower but Emscripten's mimalloc port makes it 5x faster, so now it is 2x slower. (aside from mimalloc, that also depends on wasm threads and wasm exceptions)

terrorjack,

@kripken re exceptions, would it make sense to compile with -fno-exceptions and have a __cxa_throw that simply traps? assuming binaryen doesn't make use of exceptions for normal inputs

terrorjack,

@kripken ah til. also nice work on the mimalloc emscripten port! i'll probably get to do a wasi-libc port later this year, and your work is very encouraging

terrorjack, to random

> You now have Claude API access!

thank you but you haven't been kind to gdpr region residents so what even is the point

wingo, to random

the more i work on garbage collectors, the less i want to use C libraries or whatever from GC languages; there are just too many pitfalls around finalization

terrorjack,

@wingo shouldn't all sane gc languages have a touch/keepalive primitive that pins the foreign resource for a while until it's fully consumed?

terrorjack,

@wingo fwiw, haskell has a keepAlive# primitive that does exactly this: it takes a closure (heap object) as well as a continuation as parameters. the closure is guaranteed to stay alive throughout the evaluation of that continuation, and it's implemented by pushing a stub stack frame that keeps the closure alive and prevents any attached finalizer from running. and it's used a lot
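
a minimal sketch of wrapping that primop for IO use (keepAliveIO is my own name, not a base export; iirc withForeignPtr in base is built on the same primop these days):

{-# LANGUAGE MagicHash, UnboxedTuples #-}

import GHC.Exts (keepAlive#)
import GHC.IO (IO (..))

-- keep obj (and whatever finalizer hangs off it) alive for the whole
-- duration of act, by pushing the work through the keepAlive# primop
keepAliveIO :: obj -> IO r -> IO r
keepAliveIO obj (IO act) = IO (\s -> keepAlive# obj s act)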

terrorjack,

@wingo i gotta be honest, we have indeed been bitten by compiler optimizations on this front before. but that doesn't count as "pray the compiler never changes": it's a compiler bug that gets fixed, and this primitive is an important part of the language core that standard libraries rely on to function properly!
