one advantage of nixos on wsl is that i get to use the unstable channel, practice unfamiliar modules, and administer a postgres/redis/elasticsearch thing or whatever, without worrying about screwing anything up. and the skills learnt here can be reapplied to my real server with a fair degree of confidence
configuring nixos-wsl from a fresh install to my desired state is like playing dark souls. a bit harder admittedly, but it's for work and done in my working hours so i'm fine with that
one step wrong and you need to start all over! but then i realized i can checkpoint the whole damn thing by running wsl --shutdown and copying ext4.vhdx elsewhere. which brings back memories of playing totono, one of the few visual novels with a meta narrative that involves nuking your saves, so i basically cheated by playing it in virtualbox and snapshotting the machine state whenever i clicked an option
one aspect where #haskell still sucks is build parallelism:
vanilla cabal builds are coarse-grained: build dependencies are tracked at the component level
cabal/ghc support multiple home units now, but that's repl-only for the time being
cabal/ghc have semaphore support now, so multiple ghc --make -jsem processes can share cpu cores without oversubscribing. which is nicer, but not nice enough
the semaphore protocol is homebrew, not something more standard like the make jobserver, so it's hard to fit into external build systems
external build systems resort to oneshot mode instead of make mode, so one ghc invocation produces one .hi/.o pair, and a fair amount of cpu cycles are wasted compared to make mode on repeatedly rebuilding context that could have been shared
more importantly, once an upstream module's .hi is emitted, the downstream module should be queued for compilation immediately, even before that ghc -c exits. but this is tricky to implement and often omitted
ironically, the cpu cycles wasted in ghc oneshot mode are often compensated by increased parallelism, because external build systems parse cabal metadata themselves and break through cabal's component-level dependency wall
but then there are packages with cabal custom setups, and for those you have to resort to actually respecting Setup.hs, so they easily become bottlenecks of a build
the people equipped with the knowledge to fix the situation thoroughly have tons of more important issues on their plates
ok. sparks are indeed a nice way to get work-stealing nested parallelism for free in #haskell, as long as you work with spark# directly and don't use par, pseq or anything built upon these combinators
@leftpaddotpy spark# is State#-passing, so you can explicitly spawn a spark as a monadic operation in io or st, which feels natural. on the other hand, par's purity is an undesired burden: you now have evaluation order to worry about and have to litter your code with pseq. the entire "Strategy" thing and the "Eval" monad in the parallel package are a huge distraction in the same sense
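a minimal sketch of that State#-passing style, assuming GHC.Exts re-exports the spark# primop (it does on recent ghc); sparkIO is my made-up wrapper name, not a real api:

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
module Main where

import GHC.Exts (spark#)
import GHC.IO (IO (..))

-- hypothetical helper: queue a thunk for parallel evaluation as a
-- plain monadic action, no par/pseq evaluation-order juggling needed
sparkIO :: a -> IO a
sparkIO a = IO (\s -> spark# a s)

main :: IO ()
main = do
  let x = sum [1 :: Int .. 1000]
  _ <- sparkIO x -- the spark may run on another capability, or fizzle
  print x        -- forcing x here is correct either way
```

compile with -threaded and run with +RTS -N to see sparks actually converted; without that they just fizzle, and the printed result (500500) is the same either way.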
@leftpaddotpy in all the workloads i have in mind, i only care about the final seq or deepseq on the root node of the computation. i'd even argue you should avoid seq'ing at all if possible: just leave the thunk graph in the background and get on with the rest of the program logic
@leftpaddotpy only when a spark needs to pattern match on another spark's result. and then there's a real risk the target spark hasn't been evaluated yet; i'd argue it's better to simply bundle the continuation that performs the pattern match into the same spark to be spawned, since a monadic bind is sequential in nature anyway
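roughly, with the same hypothetical State#-passing sparkIO wrapper as upthread (producer and the arithmetic are made up for illustration):

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
module Main where

import GHC.Exts (spark#)
import GHC.IO (IO (..))

-- hypothetical wrapper over the spark# primop
sparkIO :: a -> IO a
sparkIO a = IO (\s -> spark# a s)

producer :: Int -> (Int, Int)
producer n = (n * 2, n * 3)

main :: IO ()
main = do
  -- don't spawn `producer 7` as one spark and the pattern match as
  -- another: the match may find an unevaluated thunk. bundle the
  -- continuation into the producer's spark instead:
  r <- sparkIO (case producer 7 of (a, b) -> a + b)
  print r -- forcing r is fine whether the spark ran or fizzled
```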
there's another problem with sparks: the work-stealing queue in the rts cannot be resized at runtime, it's only configurable via an rts option. and that's not the worst part: once the queue is full, later pushes silently fail. the correct thing to do is to make it a ring buffer and drop the head closures, since they were enqueued earliest and are therefore most likely to have fizzled anyway. overall, a ghc patch to fix this situation is left as an exercise for the motivated reader
#agda seems to compile with the ghc wasm backend with just a bit of modification (removing the custom setup & gitrev)! i'm not familiar with agda internals, but as long as it only involves file i/o and doesn't need to spawn processes to do real work, then in principle it shouldn't be hard to get a fully functional agda working in the browser :)
gazing at graph-easy, half in amazement and half in rage over the time i wasted in asciiflow manually counting grid cells and aligning boxes and arrows. i have no excuse not to include a ton of ascii diagrams in my code from now on
It used to be 10x slower but Emscripten's mimalloc port makes it 5x faster, so now it is 2x slower. (aside from mimalloc, that also depends on wasm threads and wasm exceptions)
@kripken re exceptions, would it make sense to compile with -fno-exceptions and have a __cxa_throw that simply traps? assuming binaryen doesn't make use of exceptions for normal inputs
@kripken ah til. also nice work on the mimalloc emscripten port! i'll probably get to do a wasi-libc port later this year, and your work is very encouraging
the more i work on garbage collectors, the less i want to use C libraries or whatever from GC languages; there are just too many pitfalls around finalization
@wingo fwiw, haskell has a keepAlive# primitive that does exactly this: it takes a closure (heap object) and a continuation as parameters. the closure is guaranteed to stay alive throughout the evaluation of that continuation; this is implemented by pushing a stub stack frame that keeps the closure alive and prevents any attached finalizer from running. and it's used a lot
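a sketch of what that looks like at the source level, assuming GHC.Exts exports the keepAlive# primop (ghc >= 9.2); keepAliveIO is my wrapper name, not a real api:

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
module Main where

import Data.IORef
import GHC.Exts (keepAlive#)
import GHC.IO (IO (..))

-- hypothetical wrapper: x stays alive (and any attached finalizer
-- stays unfired) for the whole duration of the action
keepAliveIO :: a -> IO b -> IO b
keepAliveIO x (IO act) = IO (\s -> keepAlive# x s act)

main :: IO ()
main = do
  ref <- newIORef (42 :: Int)
  -- without the keepAlive, a sufficiently smart gc could in principle
  -- collect ref (and run its finalizers) while the action still runs
  v <- keepAliveIO ref (readIORef ref)
  print v
```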
@wingo i gotta be honest, we have indeed been bitten by compiler optimizations on this front before. but that doesn't count as "pray the compiler never changes": it's a compiler bug that gets fixed, and this primitive is an important part of the language core that standard libraries rely on to function properly!