Haskell. Its toolchain and version management pose significant hurdles, admittedly.
While the language itself inspires wonder, I must confess that the tooling falls short when measured against the high standards set by contemporary languages like Go and Rust.
In all fairness, we should recognize its historical context, where Haskell competed with C. From that perspective, its advancements were indeed commendable, and it's miles ahead of the rather "horrible" tooling in C. #Haskell
@Amirography Oh well, I have to say that, to me, C simply doesn't have tooling. There are some things, but I've never seen a C project actually managing dependencies effectively and doing linting the proper way, for example.
Taking this absence into account of course makes Haskell look like the cool kid on the block.
Actually, the thing that irked me a lot at the time wasn't Cabal or Stack itself, but the fact that you had to know by heart which GHC extensions you were using.
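For anyone who hasn't hit this: each extension has to be switched on per file with a pragma, and forgetting one gives you an ordinary parse or type error with no hint about the missing extension. A minimal sketch (module and names made up):

```haskell
{-# LANGUAGE LambdaCase #-}
-- Without the pragma above, the \case below is a plain parse error;
-- GHC won't tell you that you merely forgot to enable an extension.
module Example where

describe :: Maybe Int -> String
describe = \case
  Nothing -> "nothing"
  Just n  -> "got " ++ show n
```

Extensions can also be enabled per package via `default-extensions` in the `.cabal` file, which at least centralises the list you have to keep in your head.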
@Amirography @cjk I think SPJ's perspective on this will probably have changed a bit. In this talk, he's basically reflecting on the early history of Haskell, and it's certainly been one of the principles of Haskell evolution not to get dragged into maintaining compatibility too early.
I think there's much more emphasis on stability now. The Haskell Foundation has a Stability Working Group, and for user-facing changes to GHC there is a clear GHC proposals process that has to be adhered to.

Nevertheless, there are still ongoing problems. One of the main ones is that the base package, which provides many of the most fundamental functions, is locked to GHC: if you upgrade GHC, you have to upgrade base. Other libraries then depend on certain versions of base and are incompatible with others. This can lead to scenarios where a security fix in a package you need comes with a dependency on a later base, which means you have to upgrade GHC, which means you have to make other changes. This sort of thing isn't great. It's also not trivial to fix, although it's clearly identified as a problem, and people are working on improving it over time.
Another potentially annoying problem is that lots of things are sensitive to compiler versions (also tooling such as HLS). This is unlikely to be solved ever, but can be worked around by other tooling such as Nix relatively effectively. But the underlying point is that GHC has an aggressive optimiser including all sorts of cross-module optimisations, and that means that internal formats change essentially all the time.
@tshirtman You haven't spoiled anything. I'd seen this in others' solutions already but couldn't quite believe it, so I was choosing to believe I'd missed their offset-calculating bits.
The cheat is that this information about the cycle periods being what they are is vital to a simple solution, but not easily obtainable...
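For what it's worth, once you grant that cycle-period assumption, the whole solution collapses to an lcm fold. A sketch (assuming, as the input apparently guarantees, that each path is a pure cycle whose offset equals its period, with no irregular prefix):

```haskell
-- First step at which all cycles coincide, given their periods.
-- Assumption (the "cheat"): each walker reaches its goal exactly
-- every `period` steps, so the answer is just the least common multiple.
firstCommonStep :: [Integer] -> Integer
firstCommonStep = foldr lcm 1
```

For example, `firstCommonStep [2, 3, 4]` is 12.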
@penwing Yeah, I've seen (and you probably have as well) people use dot to produce graphs of the paths, making it slightly clearer what's happening. But getting the idea to do that? I fear there might be more data exploration in the future, rather than relying entirely on the problem description.
I may bitch and gripe a bit about the insane learning curve of programming languages like #haskell... but I cut my teeth on #javascript and #php : I've seen worse. Of all the languages I've used, my favorite by far is #python, but only for processing. What's yours, and why?
@aeveltstra For personal use I jump between Python and JavaScript. (Sorry not sorry).
Design wise, Ruby is beautiful.
I've learned a little bit of Go, Lua, uxn/Forth (?) and others, but I come back to interpreted languages with dynamic type systems because I love rapid prototyping too much.
I feel like it encourages me to write better documentation and tests to compensate for the lack of a compile step.
With my background in math I should have looked into Haskell and Lisp but haven't done so.
@aeveltstra For me the biggest issue is finding bugs at compile time: I despise Python, it's even worse to debug than "proper" C. I simply cannot accept syntax or other weird type errors at runtime.
So it's mostly Rust (for serious stuff), Go (basically just a better Python), Kotlin (for the JVM), C (for exploits/PoCs) and Haskell/PureScript/Clojure (when the libraries around it allow it). What I dislike about Clojure and anything Scheme/Lisp is that typing isn't as native as it could be, as strict, explicit types usually help me catch errors early. I spend a lot of time fighting the type checker, but on reflection: all those errors would have been undebuggable without it.
I find Haskell/PureScript pretty easy to learn, it's simply taught pretty badly IMHO. Most of the time I don't really think about the monadic aspect of Monads.
Outside of the language aspects, I love the tooling around Rust and Clojure. Also, Haskell's Hoogle is awesome.
Starting at the end of September, I have been asked to commute three days a week, 135 miles each way from my home (I have not moved), to an office I was never required to attend before the pandemic.
If anyone needs a remote product/infrastructure/platform engineer or backend developer with 15 years of cloud deployment experience and data-center-to-cloud migration experience, email me at spotter@referentiallabs.com.
The GitHub repo of the #haskell basement and foundation packages was just archived, while it currently has 3764 indirect reverse dependencies, including cryptonite, pandoc, and accelerate. I don't know why, and I don't know what will happen now, but I am concerned.
Also, I'm concerned that this goes unnoticed. Hackage does not currently show maintenance status very well, so people might think these packages are still suitable to use in new projects.
You don't need side effects for I/O. Purely functional languages like #Miranda and #Haskell 1.0 got along just fine with dialogues for I/O, although whether they were pleasant to use is another matter. But 'side effect' is short for 'side effect of evaluation'; i.e., the computer calculates the value of this expression and oh by the way it does this I/O while it's doing that (as a side effect). The side effects of a drug are everything the drug does that's not its intended purpose; the side effects of an expression are everything the code for that expression does that's not 'calculate the value'. So you don't need side effects to do I/O. Just make the I/O part of the value and have the computer calculate the value, then do the I/O it calculated.
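In modern Haskell terms, "make the I/O part of the value" looks like this (a minimal sketch; the names are made up):

```haskell
-- An IO action is an ordinary value *describing* I/O;
-- constructing one performs nothing.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)

-- A pure list of I/O descriptions: still no I/O has happened.
greetings :: [IO ()]
greetings = map greet ["world", "Miranda"]

-- Only when the runtime evaluates main is the I/O actually performed.
main :: IO ()
main = sequence_ greetings
```

The program's value is the I/O plan; the runtime then carries out the plan it calculated, which is exactly the "no side effects of evaluation" discipline described above.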
@rml Yes, basically "Development of a Haskell EDSL Optimized for Open-Source Hardware" (read: RISC-V, and networked), but I am realizing my EDSL may lie anywhere in the spectrum between Nix and Scheme (given delay and force, vs. the let-rec of Nix).
Basically, here's the updated view, after feedback from some learned Haskell and Nix folks.
@mangoiv @maralorn Yes, I agree with this view. I don't think Maybe is the right analogy. The MVar being empty is not a case you have to explicitly deal with, it already has a behaviour attached to it (blocking). Regarding the missing entry in the square, isn't that just an IORef?
Yeah, it's kind of an IORef, but I thought that doesn't count because it has fewer concurrency guarantees.
But I think I get now why MVars are much more useful. I have even used TMVars myself as locks when the action I wanted to guard contained effects.
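That lock pattern can be sketched with a plain MVar; "empty" isn't a case you pattern-match on (as Maybe's Nothing would be), it's behaviour: takeMVar simply blocks. (Names are made up, and for exception safety you'd really use bracket or modifyMVar.)

```haskell
import Control.Concurrent.MVar

-- An MVar () used as a lock: full = free, empty = held.
withLock :: MVar () -> IO a -> IO a
withLock lock action = do
  takeMVar lock            -- blocks until the lock is released
  result <- action         -- the effectful critical section
  putMVar lock ()          -- release for the next waiter
  pure result

main :: IO ()
main = do
  lock <- newMVar ()
  withLock lock (putStrLn "critical section")
```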
@clementd @amyfou
A pair of stout field dogs! A church friend and her daughter have Willow Run Vizslas and have shown at the Westminster show in NYC. Ania was best junior handler with an English cocker veteran. Rachael's hounds are dual-track conformation and trials dogs.
Non-deterministic behaviour in a specification can be a headache for testing. This updated post explores the non-determinism in the JSONPath RFC 9535, describes how the Compliance Test Suite is being upgraded to deal with non-determinism, and shows how non-deterministic tests can be generated automatically. There's also an "explosive" challenge for Haskell programmers.
Found a couple of trivial optimisations of my Haskell code which sped it up by a factor of over 20,000 (for the "explosive" example mentioned in the above post). That's sufficient for now. 😉
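Purely as an illustration of why naive checks against non-deterministic output are "explosive" (this is not the post's actual code, and the optimisation there may differ):

```haskell
import Data.List (permutations, sort)

-- Naive check: accept the actual result if it is ANY permutation of
-- the expected items -- n! candidates, hence "explosive" for large n.
naiveMatch :: Eq a => [a] -> [a] -> Bool
naiveMatch expected actual = actual `elem` permutations expected

-- Cheap check: when order genuinely doesn't matter, comparing sorted
-- copies gives the same answer in O(n log n).
sortedMatch :: Ord a => [a] -> [a] -> Bool
sortedMatch expected actual = sort expected == sort actual
```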
Someone once said that laziness is what kept #Haskell pure, and that's the actually relevant feature. This makes me wonder: will theorem-proving languages like #lean, where logical consistency is what keeps them pure, deliver the same elegant experience, while avoiding some downsides of laziness (complex runtime, complicated performance characteristics)?
ok. sparks are indeed a nice way to get work-stealing nested parallelism for free in #haskell, as long as you work with spark# directly and don't use par, pseq, or anything built upon these combinators
@leftpaddotpy spark# is State#-passing, so you can explicitly spawn a spark as a monadic operation in IO or ST, which feels natural. On the other hand, par's purity is an undesired burden because you now have evaluation order to worry about and end up littering your code with pseq. The entire "Strategy" thing and the "Eval" monad in the parallel package are a huge distraction in the same sense.
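For contrast, here is the par/pseq pattern the complaint is about, as a sketch using Control.Parallel from the parallel package:

```haskell
import Control.Parallel (par, pseq)

-- Because par is pure, you must pin down evaluation order yourself:
-- spark `a`, force `b` on the current thread first so the spark has
-- time to run elsewhere, then combine. Get the pseq wrong and the
-- spark silently fizzles into sequential evaluation.
parSum :: [Int] -> [Int] -> Int
parSum xs ys = a `par` (b `pseq` (a + b))
  where
    a = sum xs   -- offered to another capability as a spark
    b = sum ys   -- evaluated here, on the current thread
```

This ordering bookkeeping is exactly what a monadic spark# interface would make explicit in the type rather than in your head.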
"Sadly, the hardware of the time often wasn’t powerful enough to make use of the solution. But today’s processors can easily manage the demands of Haskell and other purely functional languages."
Even today, software that requires performance is not written in purely functional languages like Haskell.
Maybe because purely functional languages are not designed with hardware in mind?
#RuinAFilm but programming language names:
Ghost in the #Haskell
#Perl Is For Heroes
In The Name Of The #Java
Schindler's #Lisp
Star Wars: The #FORTH Awakens
Manchester By the #C
The Truman #Go
Bringing Up #Ruby
Anyone get any other good ideas? #Puns #Joke