Another way to put it is that HasCallStack isn't optimized away by tail call optimization. And Haskell without tail call optimization will have huge stacks.
@mangoiv@jaror Inaccessible in what way? Cost? I doubt not colocating with ICFP would be an advantage, actually. The Haskell Symposium is an academic conference, and there's huge overlap between the audiences of ICFP and the Haskell Symposium. Moving HIW elsewhere sounds plausible to me, moving the Haskell Symposium does not.
Maybe the symposium should start catering more to industrial users, now that Haskell itself also seems to be moving more in that direction (e.g. more backwards compatibility). The symposium already allows experience reports and demos.
@jaror@mangoiv What would catering more to industrial users look like, in your opinion? And why would it be better to change the nature of the Haskell Symposium than to simply create / continue a different event? (FWIW, I've been an industrial user for many years now and have always felt very welcome at the Haskell Symposium. But I'm genuinely curious what you think could be done.)
@kosmikus@mangoiv I'm not really the right person to ask, having spent exactly zero time in industry. But I can imagine most industrial users have little interest in the main ICFP program and the other co-hosted workshops. So hosting the event separately at a smaller venue for just two days could make it possible to substantially lower the fees (and individual accommodation costs), which naturally makes the event more accessible. And I expect that fees are generally a bigger problem outside of academia, so this would cater more to industrial users and hobbyists.
@jaror@mangoiv I think compared to many commercial events targeted primarily at industrial users, ICFP + associated workshops is still not overly expensive. What I still don't understand is why it would be better to turn a historically primarily academic conference into something else rather than just try to create a different event. It's a bit sad that HaskellX isn't happening this year for unrelated reasons, but it used to be a good conference that was much less academic. So this can work. But if you decouple the Haskell Symposium from ICFP, you'll probably have to re-build it completely anyway, because it'll lose all the academic audience it now automatically gets due to ICFP, and then I'm not sure the brand is so important that it's better than using a different one. Also, it's extremely risky: if you moved it away from ICFP, it might not be possible to easily undo that change.
Also, one thing to keep in mind: I don't think the Haskell Symposium is suffering from a lack of attendance. It's primarily suffering from a lack of academic paper submissions. Other language workshops co-located with ICFP are mostly less serious (i.e., they encourage more workshop-like work in progress and don't necessarily require full-paper submissions). The Haskell Symposium is in a somewhat strange spot where it requires basically the same amount of work and effort you'd put into a "full" ICFP submission, but it has much less prestige to people outside of the community who try to quantify the research output of individual researchers in order to decide whether they're worth funding / hiring (which, btw, I hate, but it's nevertheless a reality). So I think the main way to fix this is to either make it less serious as well, or to try to make it more prestigious, the former being easier than the latter.
Perhaps making it less serious, but still co-locating with ICFP, could also make it more interesting to industrial participants in itself, because it would make it easier for people who aren't part of (or familiar with) the academic research community to get a presentation slot.
I think Idris' bang notation for performing effects in a do-block is pretty; in Haskell it could look like this:
main = do putStrLn ("You said: " ++ !getLine)
Today, you'd have to come up with a new variable name or figure out the right combinator names:
main = do line <- getLine; putStrLn ("You said: " ++ line)
main = putStrLn . ("You said: " ++) =<< getLine
But unfortunately there are more complicated cases:
main = do print (True || !getLine == "foo")
In a strict language with built-in short-circuiting logical operations the getLine would never be performed, but in Haskell || is just a normal function that happens to be lazy in its second argument. The only reasonable way to implement it seems to be to treat every function as if it were strict and always perform the getLine:
main = do line <- getLine; print (True || line == "foo")
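A runnable sketch of that naive desugaring, replacing getLine with a counting effect (my own stand-in, so the program needs no stdin) to show the effect always runs even though (||) short-circuits:

```haskell
import Data.IORef

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  -- stand-in for getLine whose executions we can count
  let countingLine = modifyIORef counter (+ 1) >> pure "bar"
  -- naive desugaring of: print (True || !countingLine == "foo")
  line <- countingLine
  print (True || line == "foo")
  -- the effect ran exactly once, even though (||) never looked at it
  readIORef counter >>= print
```

This prints True followed by 1: the result is unaffected by the effect, but the effect is performed anyway.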
Do you think this is confusing? Or is the bang notation useful enough that you can live with these odd cases? I'm not very happy with this naive desugaring.
@jaror will this interact with any future proposals towards DH? I'm worried that this increases the pile of chores one would have to do to finally reach the goal of implementing them, similarly to what happened with LinearTypes (and multiplicity polymorphism, for that matter) themselves.
The changes seem pretty modest as the costs and drawbacks section also says. But I wouldn't know how complicated it is to combine normal constraints with dependent types, let alone linear constraints.
@jaror yeah it seems fine at first; the issue is that someone probably has to do the theory part, and writing a new paper for every new addition seems tedious 😅
It's great to see Windows TUI support so soon after the vty release! Especially since Windows users are not known for their fondness of command lines. I hope this has lowered one barrier to entry to Haskell and will give a more pleasant experience to many Windows users.
It's still going strong. It's a pretty unique language. I use it among many other hobbyists and academics. Some well known companies like GitHub, Meta, and Tesla and of course many smaller companies use it too. Why aren't you using it?
Great point about Haskell seemingly becoming less and less easy to learn for beginners around 15:00. I hope some day we get a language levels system where you can start with a very simple subset and slowly expand to the full language.
@jaror Haskell 2010 is pretty simple. What do you imagine is the simpler starting point, if any? If Haskell 2010 is a good starting point, aren't language pragmas / extensions effectively the same as your "language levels"?
Type classes are a big cause of confusion among newcomers, and even parametric polymorphism can be.
If you want to see how simple a language can really get you should check out Hedy: https://www.hedycode.com/. It even removes string quotes (let alone variables) at the simplest level. Although it is too imperative for my taste.
@jaror@BoydStephenSmithJr Type classes have been in Haskell since forever. There’s no Haskell “level” that would avoid them while being a level of Haskell instead of some vague/generic “functional programming”.
If you want to teach Haskell - you teach Haskell, with its staples like type classes and laziness.
I'm not suggesting you should never explain type classes. I simply want to avoid having to explain type classes before I can explain how to add two integers. And more importantly, I don't want error messages to mention type classes until they have been understood.
@jaror@dpwiz I think without the type of polymorphism that Haskell uses type classes for, the language can never be more than a toy.
But, that doesn't mean it can't be didactically useful. A "Haskell--" with a JS-style Number for all numeric literals and replacing all numeric type classes with top-level operators on that type could be useful, for a bit.
Once you want to do indexing (e.g. Array) you need to distinguish between numbers like sqrt 5 and suitable indexes, tho. Enter polymorphism
There are so many options to work around type classes. As you say, we could be looser with our types and have one general number type that includes integers and floats. (And I don't even think array indexing is much of a problem, it could just throw an error or automatically round to an integer.)
Another option is to just have separate number types with separate functions for addition and multiplication etc. For example, OCaml has + for integers and +. for floats.
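A sketch of that OCaml-style approach in Haskell (the operator names (+!) and (+.) are made up for illustration):

```haskell
-- monomorphic addition for Int, in the spirit of OCaml's (+)
(+!) :: Int -> Int -> Int
(+!) = (+)

-- monomorphic addition for Double, in the spirit of OCaml's (+.)
(+.) :: Double -> Double -> Double
(+.) = (+)

main :: IO ()
main = do
  print (1 +! 2)
  print (1.5 +. 2.5)
```

A type error with these operators mentions only Int or Double, never a Num constraint.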
Perhaps more of a stepping stone towards full type classes would be a limited system where only a few pre-defined classes and instances exist. Then you'll never run into the dreadful could not deduce Num (a -> b) error message, but you can still use a nice overloaded + for both Int and Double.
@jaror@dpwiz Your first proposal is to sacrifice type safety. I reject that option; avoid success at all costs.
Your second actually increases complexity through semantic bifurcation. I reject that as a way to make a simpler language, even for didactic purposes.
No, discarding type classes without adopting something else that's worse (interface inheritance) is not easy, and may actually be impossible.
I'm not sure which options you are referring to, I had three options: a JS-style number type (with two suboptions for indexing: throwing errors or rounding), separate types, or a fixed set of classes and instances.
Your first point seems to be against the error throwing approach to array indexing with a JS-style number type. I kind of agree, but then we should also handle out of bounds indexing in a type safe way. I still don't see the problem with rounding to an integer, I think that's also what beginners intuitively expect if they write e.g. xs !! (length xs / 2).
Your second point seems to be against having separate types and separate operators like + and +.. I think I'd agree that semantically it is not much simpler, but programming is more than just semantics. For example, error messages will be much simpler if there's no Num type class involved (at least the error messages that GHC gives). Perhaps it is possible to develop a better error reporting mechanism for type classes, but that would require more research.
@jaror I absolutely disagree that error messages would be better if we didn't have type classes. Languages with the other popular ad-hoc polymorphism (interface inheritance) have similarly opaque (initially) error messages of approximately the same length OR promote the failure to a run time error, a much worse situation.
@jaror Implicit rounding or other conversion also undermines type safety. If the conversion is total, it's not unsafe in the strict sense of "well-typed programs don't go wrong", but it is in the sense that safety means our expectations align with the results, something easily violated by allowing floating-point indexes.
@jaror@dpwiz Your third option has been tried and failed multiple times, most recently in Elm. Programmers that need that kind of extension route around the denial and produce a (sometimes limited, often more awkward) library solution and then ship adaptors for all the primitives in the language, replacing the native (but non-extensible) name overloading.
This ends up making for a more complex ecosystem, and frustration when learners are taught native, but everyone uses library.
Anyway, I didn’t encounter many problems with type classes while teaching Haskell, not even as a first language. Maybe all of my students were okay with some suspense 😅
The GADT allows some constructors to be safely unhandled when the type parameter is known.
When consuming a ParsedImage, you always have to deal with both constructors. When consuming an Image px or an AnyImage, you also have to deal with both Image constructors. When consuming an Image Pixel8Bit, the type system proves that it couldn’t be constructed with the Image16Bit constructor, so you only have to deal with the Image8Bit constructor.
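A self-contained reconstruction of the types under discussion (the definitions are my guesses from the names in the thread; the real ones may differ):

```haskell
{-# LANGUAGE GADTs #-}

data Pixel8Bit  = Pixel8Bit
data Pixel16Bit = Pixel16Bit

-- the index px records which constructor was used
data Image px where
  Image8Bit  :: [Pixel8Bit]  -> Image Pixel8Bit
  Image16Bit :: [Pixel16Bit] -> Image Pixel16Bit

-- existential wrapper: the index is hidden, so both cases must be handled
data AnyImage where
  AnyImage :: Image px -> AnyImage

-- with the index fixed to Pixel8Bit, this single equation is exhaustive:
-- the type system rules out the Image16Bit constructor
width8 :: Image Pixel8Bit -> Int
width8 (Image8Bit pxs) = length pxs

main :: IO ()
main = print (width8 (Image8Bit [Pixel8Bit, Pixel8Bit]))
```

GHC's exhaustiveness checker accepts width8 without a warning, whereas a function over AnyImage would need both cases.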
But how? The parsing function can return any of the types, we don't know what was in the bytestring. So we'd need to deal with all the variations in any case, no?
Is the difference in that it becomes possible to pattern-match on a type of an element inside the structure, rather than on the structure as a whole? So as long as you don't need that element, you can access elements that are common without pattern-matching? I guess it's a marginal benefit...
The parsing function can return any of the types, we don’t know what was in the bytestring. So we’d need to deal with all the variations in any case, no?
Yes, the parsing function could return anything, but that’s not the exclusive source of values of the GADT.
So, yes, when consuming the output of the parsing function, you have to deal with all the variations. But, when you have types that reflect some information is known about the type index, you only have to deal with the matching constructors.
This is particularly important given the copy-paste transport of code from one context to another. You might have some code that works for a MyGADT MyType and doesn’t handle all the constructors. But, when you transport that code elsewhere and ask it to handle a MyGADT a, the type system will correctly warn you that you haven’t handled all the cases. This is an improvement over writing something like case x of { Right y -> stuff y; Left{} -> error “Can’t happen; won’t happen”; }, which I’m sure works fine in the right context, but doesn’t have a type that reflects what the programmer knows about x, so if it is transported to a context where that information/assumption is no longer true, that’s only discovered at runtime. (This isn’t the best example, but it’s what I have for you.)
Well, I guess I can see the value. Leaving the copy-pasting problem aside, you might, for instance, want to have a type for a message with a moderately complex envelope and a wide variety of possible payload types. It would be useful to have functions that act on the envelope and treat the payload as something opaque.
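That envelope idea can be sketched with plain parametric polymorphism (all names here are made up for illustration): functions on the envelope never inspect the payload at all.

```haskell
-- an envelope that is parametric in its payload type
data Envelope a = Envelope
  { sender :: String
  , body   :: a          -- opaque to envelope-level code
  }

-- acts only on the envelope; works for every payload type a
route :: Envelope a -> String
route env = "deliver to " ++ sender env

main :: IO ()
main = putStrLn (route (Envelope "alice" (42 :: Int)))
```

A GADT payload would refine this further: code that knows the payload's type index could then handle only the matching constructors.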