ghc wasm backend jsffi has finally been implemented! what remains is source notes as well as user-facing documentation. not sending a #haskell discourse thread yet, but for curious eyes, here's what's supported so far (also an ama thread):
js src text can be any valid js expr or statements, using $1 etc to refer to the haskell arguments. JSVals are first-class haskell values and garbage collected.
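a minimal sketch of what an import looks like (function names are mine, and the exact syntax is whatever the wasm backend ends up shipping):

```haskell
-- sync import: js src text is an expression, $1/$2 refer to haskell arguments
foreign import javascript unsafe "$1 + $2"
  js_add :: Int -> Int -> Int

-- a JSVal result is a first-class, garbage-collected haskell value
foreign import javascript unsafe "document.getElementById($1)"
  js_get_element :: JSVal -> IO JSVal
```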
await is supported in js src text. calling an async import initiates the side effect immediately and returns a thunk that only blocks on promise resolution when forced, allowing concurrency without even needing to fork haskell threads. promise rejections are captured as haskell exceptions.
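sketched roughly like this (names illustrative, syntax from memory):

```haskell
-- async import: the js src text can await. the call kicks off fetch()
-- right away; the result thunk only blocks on the promise when forced
foreign import javascript safe "const r = await fetch($1); return r.text();"
  js_fetch_text :: JSVal -> IO JSVal
```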
yup, this converts any haskell function closure to a js callback that you can pass to 3rd party frameworks. it's garbage collected on the js side as well: the haskell closure will be dropped once the callback becomes unreachable in js.
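roughly like this ("wrapper"-style import; the name mkJSCallback is mine):

```haskell
-- turns a haskell closure into a js function value you can hand to
-- e.g. addEventListener via another import
foreign import javascript "wrapper"
  mkJSCallback :: (JSVal -> IO ()) -> IO JSVal
```

once js drops the last reference to the returned callback, the underlying haskell closure becomes collectable too.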
this will end up as a wasm export named "js_func_name", directly callable in js; it returns a promise of the final result that can be awaited.
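for example (the exported function body here is a made-up placeholder):

```haskell
-- hypothetical exported function
echo :: JSVal -> IO JSVal
echo v = pure v

foreign export javascript "js_func_name"
  echo :: JSVal -> IO JSVal

-- on the js side, roughly:
--   const r = await instance.exports.js_func_name(arg);
```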
the hardest part of all the above work is concurrency & re-entrancy: calling async js should be cheap, should not block the runtime, and should play well with the existing threading model in haskell. and hs can call into js that calls back into hs, indefinitely. i'll write up a more detailed explanation of how this is delivered
for the past few days i've been adding asan support to the #ghc rts. motivation: the rts is a c monolith that does complex memory management; segfaults are very rare, but they do occur, as people sometimes reach for help in the issue tracker and matrix channel. so i'm really hoping the rts development workflow can be backed by sanitizers and fuzzers to make this monolith more rock solid than it currently is.
after landing jsffi for ghc wasm backend, next thing i'll work on is template haskell support. and after that, threaded rts. yes, it'll be possible to run a haskell app in your browser that eats all your cpu cores (for a better purpose than crypto stuff, i personally hope)
join the petition to tax higher order functions out of programming! don't let devs get away without paying maintenance tax when they casually add a continuation parameter and pass in a function defined 100k lines away in the codebase
evil little trick learned from fosdem: to debug systemd, you can use a shell script as a stub init that spawns gdbserver to debug its own process, then execs into systemd
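roughly like this (paths and port are made up; the exact gdbserver incantation may need tweaking):

```shell
#!/bin/sh
# boot the kernel with init=/sbin/debug-init pointing at this stub.
# attach gdbserver to our own pid; ptrace keeps the process stopped
# until you connect a remote gdb and continue, so you catch systemd
# from its very first instruction after the exec below.
gdbserver --attach 0.0.0.0:1234 $$ &
exec /usr/lib/systemd/systemd
```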
ok. sparks are indeed a nice way to get work-stealing nested parallelism for free in #haskell, as long as you work with spark# directly and don't use par, pseq or anything built upon these combinators
@leftpaddotpy spark# is state#-passing, so you can explicitly spawn a spark as a monadic operation in io or st, which feels natural. on the other hand, par's purity is an undesired burden: you now have evaluation order to worry about and litter your code with pseq. the entire "Strategy" thing and the "Eval" monad in the parallel package are a huge distraction in the same sense
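a tiny sketch of the spark#-directly style (sparkIO is my name for the wrapper, not a library function):

```haskell
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}

import GHC.Exts (spark#)
import GHC.IO (IO (..))

-- spawn a spark as an explicit io action: no evaluation order to
-- worry about, no pseq littered everywhere
sparkIO :: a -> IO a
sparkIO a = IO $ \s -> spark# a s

main :: IO ()
main = do
  let expensive = sum [1 .. 10000000] :: Int
  _ <- sparkIO expensive  -- an idle capability may evaluate this in the background
  -- ... other work here ...
  print expensive         -- forcing it: either a spark got there first, or we do it now
```

without -threaded the spark simply fizzles and forcing does the work inline, so the program is correct either way.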
one aspect where #haskell still sucks is build parallelism:
vanilla cabal builds are coarse grained and have component-level build dependencies
cabal/ghc supports multiple home units now, but only for the repl for the time being
cabal/ghc has semaphores now, so multiple ghc --make -jsem processes can share cpu cores without oversubscribing. that's nicer, but not nice enough
the semaphore format is homebrew, not something more standard like the make jobserver protocol, so it's hard to fit into external build systems
external build systems resort to using oneshot mode instead of make mode, so one ghc invocation produces one .hi/.o pair, and a fair amount of cpu cycles is wasted compared to make mode due to repeatedly rebuilding context that could have been shared
more importantly, once an upstream module's .hi is emitted, a downstream module should be queued for compilation immediately, even before that ghc -c process exits. but this is tricky to implement and often omitted
ironically, the cpu cycles wasted in ghc oneshot mode are often compensated for by increased parallelism, because external build systems parse cabal metadata but break through the cabal component-level dependency wall
but then there's this thing called cabal custom setup: for those packages you need to resort to actually respecting Setup.hs, and they can easily become bottlenecks of a build
the people equipped with knowledge to fix the situation thoroughly have tons of more important issues on their plate
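for reference, the semaphore mode looks roughly like this (flag spellings from memory, check your ghc/cabal versions):

```shell
# cabal creates a system semaphore sized to -j and passes it to every
# ghc --make child via -jsem, so all processes share one token pool
cabal build -j8 --semaphore

# or drive it by hand: both invocations draw from the same named semaphore
ghc --make -jsem ghc_sem A.hs &
ghc --make -jsem ghc_sem B.hs &
wait
```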
oh yes, one case where i find laziness rocks in my current wip branch: calling an async import invokes the side effect (e.g. fetch()) immediately, but it returns a thunk and doesn't block the current thread unless forced. one does not even need to fork 10k haskell threads to do 10k fetch() calls concurrently, just a plain mapM and you get concurrency for free :)
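the pattern, sketched (import syntax from my wip branch, names illustrative):

```haskell
-- async import: each call starts fetch() immediately, result is a lazy thunk
foreign import javascript safe "const r = await fetch($1); return r.text();"
  js_fetch_text :: JSVal -> IO JSVal

fetchAll :: [JSVal] -> IO [JSVal]
fetchAll urls = mapM js_fetch_text urls
-- after this, all fetches are in flight concurrently; forcing an element
-- of the result list only blocks on that one promise
```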
@wingo another example of wasm stack switching probably worth mentioning is wasmtime, which supports async rust host functions as wasm imports and works similarly to js promise integration in v8