The pseq combinator is used for sequencing; informally, it evaluates its first argument to weak-head normal form, and then evaluates its second argument, returning the value of its second argument. Consider this definition of parMap:
parMap f [] = []
parMap f (x:xs) = y `par` (ys `pseq` y:ys)
  where y  = f x
        ys = parMap f xs
The intention here is to spark the evaluation of f x, and then
evaluate parMap f xs, before returning the new list y:ys. The
programmer is hoping to express an ordering of the evaluation: first
spark y, then evaluate ys.
The obvious question is this: why not use #Haskell’s built-in seq
operator instead of pseq? The only guarantee made by seq is that
it is strict in both arguments; that is, seq a ⊥ = ⊥ and seq ⊥
a = ⊥. But this semantic property makes no operational guarantee about order of evaluation. An implementation could impose this operational guarantee on seq, but that turns out to limit the optimisations that can be applied when the programmer only cares about
the semantic behaviour. Instead, we provide both pseq and seq
(with and without an order-of-evaluation guarantee), to allow the
programmer to say what she wants while leaving the compiler with
as much scope for optimisation as possible. https://simonmar.github.io/bib/papers/multicore-ghc.pdf
After extensively using the #lens #Haskell library for half a year at work I have now played around with the #optics library again. I am amazed by how much more helpful the error messages are with #optics. It’s an amazing library and I would recommend it over #lens whenever you have the choice.
@mangoiv Hasn’t been my impression with #optics yet. To me it feels like the type level stuff is "finished". As long as you only define and use existing types of optics you don’t need to worry about it.
@boarders This is one of my use cases. The great thing about recursion schemes is that they don’t require recursion. And all streaming can be done with recursion schemes, so also no recursion.
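To make "recursion schemes don't require recursion" concrete, here is a minimal hand-rolled sketch (the recursion-schemes package provides this machinery; all names below are defined locally). The only recursive definition is cata itself — user code like sumAlg is non-recursive:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Tie the recursive knot once, generically.
newtype Fix f = Fix { unFix :: f (Fix f) }

-- The base functor for lists: no recursion in sight.
data ListF a r = NilF | ConsF a r deriving Functor

-- A catamorphism: the one and only recursive definition here.
cata :: Functor f => (f b -> b) -> Fix f -> b
cata alg = alg . fmap (cata alg) . unFix

fromList :: [a] -> Fix (ListF a)
fromList = foldr (\a r -> Fix (ConsF a r)) (Fix NilF)

-- Summing a list without writing recursion ourselves.
sumAlg :: ListF Int Int -> Int
sumAlg NilF        = 0
sumAlg (ConsF a r) = a + r

main :: IO ()
main = print (cata sumAlg (fromList [1 .. 10]))  -- prints 55
```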
But currently it’s pretty basic. You can use recursive functions from other modules … as long as they don’t get inlined. And there’s no way to restrict imported recursive functions. But please, open issues with stuff you’d like to see.
@sellout ah fantastic, I want the same thing (I actually wanted it as a language extension, similar to an explicit letrec, but don’t currently have faith in the language proposal process as it attracts a lot of lawyer types)
For the guarded recursion thing, I had in mind coinductive programming, which can be done with recursion schemes over coalgebras or with an explicit tick operator as in guarded type theory / Agda
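The coalgebra side can be sketched the same way with an anamorphism; a minimal hand-rolled example (names local to this sketch), where laziness stands in for the guardedness condition of guarded type theory:

```haskell
{-# LANGUAGE DeriveFunctor #-}

newtype Fix f = Fix { unFix :: f (Fix f) }

-- The base functor for infinite streams.
data StreamF a r = ConsS a r deriving Functor

-- An anamorphism unfolds a (possibly infinite) structure from a
-- coalgebra; each step is productive, so the unfold is "guarded".
ana :: Functor f => (b -> f b) -> b -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

-- The stream of naturals, with no explicit recursion in user code.
nats :: Fix (StreamF Int)
nats = ana (\n -> ConsS n (n + 1)) 0

-- Observe finitely many elements (explicit recursion only here).
takeS :: Int -> Fix (StreamF a) -> [a]
takeS n _ | n <= 0     = []
takeS n (Fix (ConsS a r)) = a : takeS (n - 1) r

main :: IO ()
main = print (takeS 5 nats)  -- prints [0,1,2,3,4]
```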
I remember recently reading a new paper elaborating a novel presentation of graph algebras. But I can't for the life of me remember the title or the author.
The central idea was axiomatizing vertices as pairs of sets of all incoming and outgoing edges.
It also had example code in Haskell.
Does anybody have an idea of the title? I'd be very thankful for suggestions.
@someodd, here's a few thoughts on #haskell packaging/distribution:
Support the widest range of GHC versions, deps, and all related tools you can reasonably manage, reducing unnecessary friction.
Get your packages and their dependencies into Stackage nightly and keep them there. From there they will trickle into Stackage LTS, which is a starting point for many packaging systems.
You might say "Well, that's not a problem, just make those intermediate components that don't care about the connection polymorphic over MonadUnliftIO! Later, when composing the application, they will be handed a repository that does know about the ReaderT and the connection. Problem solved."
That's true, but what if I wanted those intermediate components to be as dumb as possible? Just plain records of functions in IO, no monad transformers, and no polymorphism over the monad, either?
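A minimal sketch of that "dumb" style, using only base (UserRepository and its fields are hypothetical names for illustration): the component is a plain record of functions in IO, and the wiring happens at composition time by closing over whatever state or connection the concrete implementation needs.

```haskell
import Data.IORef (newIORef, readIORef, modifyIORef')

-- A plain record of functions in IO: no transformer stack,
-- no polymorphism over the monad.
data UserRepository = UserRepository
  { findUser :: Int -> IO (Maybe String)
  , saveUser :: Int -> String -> IO ()
  }

-- An in-memory stand-in built over an IORef; a real implementation
-- would close over a database connection here instead, and the
-- consumers of UserRepository would never know the difference.
inMemoryRepo :: IO UserRepository
inMemoryRepo = do
  ref <- newIORef ([] :: [(Int, String)])
  pure UserRepository
    { findUser = \uid -> lookup uid <$> readIORef ref
    , saveUser = \uid name -> modifyIORef' ref ((uid, name) :)
    }

main :: IO ()
main = do
  repo <- inMemoryRepo
  saveUser repo 1 "alice"
  findUser repo 1 >>= print  -- prints Just "alice"
```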
Finding Success (and Failure) in Haskell by Type Classes is on sale on Leanpub! Its suggested price is $35.00; get it for $12.50 with this coupon: https://leanpub.com/sh/4sG8LBPo #Haskell
Here is a preprint of a fun paper that I've been working on which investigates the utilization of formal descriptions of instruction semantics to perform symbolic binary-level program analysis: https://doi.org/10.48550/arXiv.2404.04132
It includes a prototype implementation in Haskell which performs symbolic execution of RISC-V binary code without requiring the transformation to an intermediate representation (like LLVM IR).
This paper also includes an empirical comparison with prior work which I attempted to design in a reproducible way by using #Guix for the evaluation artifacts: https://doi.org/10.5281/zenodo.10925791
The latest and greatest GHC version (9.8.2) is now available in the Alpine Linux Edge repositories and will be included in the upcoming 3.20 stable release.
Relatedly, it would be a nice feature for Haddock if the instances local to a module could be sorted and grouped for the purposes of documentation and telling a story.
Like "These are the instances for tuples of various sizes" or "this is the base case and these are some instances that recurse".
Is there any future in languages like #haskell when AI reduces code to small, frequently and easily replaced glue and scraps, where whatever is most trained on, most hackable, and most easily replaced/iterable is king?
Are big pieces of software that benefit from the architectural assurances Haskell brings a dead paradigm?
AI is here to stay and I feel if something was not already in or out of orbit, it may never reach escape velocity
effectfully describes #Haskell as a beautiful and amazing language. In episode 46 of #TheHaskellInterlude, Wouter Swierstra and Joachim Breitner asked effectfully about how he found a new passion for programming. Listen to the episode here: https://haskell.foundation/podcast/46/