Twice in the past week I've read scholars who should know better repeat the urban legend that the QWERTY keyboard was designed to slow typists down, and thus prevent jamming, on early typewriters.
That'd be cool if it were true, but it's not, and the truth is even cooler. The QWERTY keyboard evolved over time, shaped by two forces: (1) since the early machines were used by telegraph operators, the keys were arranged to avoid common transcription errors; and (2) competing typewriter patents slightly rearranged the keyboard layout in order to qualify as new (and therefore patentable) designs.
Is there anything known about the efficiency implications of using HOAS to represent the (internal) syntax of a lambda calculus instead of de Bruijn syntax? Assuming one can get away with it, it seems like it could theoretically speed things up compared to looking things up in lists. OTOH functions can do arbitrary things, so I imagine much would depend on how you actually construct them.
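To make the contrast concrete, here is a minimal sketch (not a benchmark) of the two representations for an untyped lambda calculus; the names and the tiny evaluators are mine, chosen only to show where the list lookup lives in the de Bruijn version and where it disappears in the HOAS version.

```haskell
{-# LANGUAGE LambdaCase #-}

-- De Bruijn: a variable is an index; evaluation threads an explicit
-- environment and looks variables up by position (O(i) with a list).
data DB = DVar Int | DLit Int | DLam DB | DApp DB DB
  deriving (Eq, Show)

data Val = VLit Int | VClo [Val] DB
  deriving (Eq, Show)

evalDB :: [Val] -> DB -> Val
evalDB env = \case
  DVar i   -> env !! i                    -- list lookup at every use
  DLit n   -> VLit n
  DLam b   -> VClo env b
  DApp f a -> case evalDB env f of
    VClo env' b -> evalDB (evalDB env a : env') b
    _           -> error "application of a non-function"

-- HOAS: the binder is a host-language function, so "substitution" is
-- just Haskell application; there is no environment to search.
data HOAS = HLit Int | HLam (HOAS -> HOAS) | HApp HOAS HOAS

evalH :: HOAS -> HOAS
evalH = \case
  HApp f a -> case evalH f of
    HLam g -> evalH (g a)                -- host-level application
    f'     -> HApp f' (evalH a)
  t        -> t
```

Note that the HOAS type contains a function, so you give up deciding syntactic equality or printing terms for free, which is part of the trade-off the question asks about.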
By default, Haskell does not provide call stacks when errors occur. To get call stacks, one can add the HasCallStack constraint to any function to request it. However, did you know that doing this carelessly can cause memory usage to explode...
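The pattern the blurb refers to looks like this; `headOrDie` is a hypothetical helper name, used here only to show where the constraint goes. The `HasCallStack` constraint is an implicit extra argument threaded to every call site, which is what makes the stack available in the error message, and also why (as the article argues) adding it carelessly to hot recursive functions can carry a real memory cost.

```haskell
import GHC.Stack (HasCallStack)

-- A hypothetical helper: the HasCallStack constraint makes GHC pass an
-- implicit call-stack value along, so the error below can report where
-- headOrDie was *called*, not just where `error` appears.
headOrDie :: HasCallStack => [a] -> a
headOrDie (x:_) = x
headOrDie []    = error "headOrDie: empty list"
```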
The notion of computability for functions ℕᵏ → ℕ is uncontroversial today, and it was never controversial. The Church-Turing Thesis is generally accepted by mathematicians, computer scientists, logicians and philosophers. Such a function is computable if and only if there is a Turing machine that computes it.
Moreover, the notion of computability with finite objects, such as trees, graphs, tables, etc. is also uncontroversial, by reduction to the above.
Computation with finite objects can be reduced to computation with natural numbers, by Gödel-type encodings.
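As a small illustration of the shape such encodings take (not Gödel's original numbering, but the same idea), the Cantor pairing function is a bijection ℕ × ℕ → ℕ, and iterating it encodes finite objects such as lists as single naturals:

```haskell
-- Cantor pairing: a bijection ℕ × ℕ → ℕ.
pair :: Integer -> Integer -> Integer
pair x y = (x + y) * (x + y + 1) `div` 2 + y

unpair :: Integer -> (Integer, Integer)
unpair z = (w - y, y)
  where
    w = (isqrt (8 * z + 1) - 1) `div` 2
    y = z - w * (w + 1) `div` 2
    -- a floating-point isqrt is fine for a sketch; use an exact
    -- integer square root for very large numbers
    isqrt :: Integer -> Integer
    isqrt = floor . sqrt . (fromInteger :: Integer -> Double)

-- Lists of naturals as single naturals: 0 encodes [],
-- and n + 1 unpairs into (head, encoded tail).
encodeList :: [Integer] -> Integer
encodeList = foldr (\x acc -> pair x acc + 1) 0

decodeList :: Integer -> [Integer]
decodeList 0 = []
decodeList n = let (x, rest) = unpair (n - 1) in x : decodeList rest
```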
One would not, of course, usually use such encodings to compute in practice, but, in order to ask the question "is this function computable?", such encodings are perfectly fine.
But then comes higher-type computability. In its simplest form, it is about computation with the types (ℕ → ℕ) → ℕ, ((ℕ → ℕ) → ℕ) → ℕ, etc., where one starts with the type ℕ of natural numbers (and perhaps the booleans 𝟚), and closes under function spaces (and perhaps under products × and coproducts +). These parenthetical remarks don't really make a difference. In fact, very much like (first-order) computability over ℕ is really about computability with finite objects, computability over higher types such as the above is really about computability with infinite objects.
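For readers who think in Haskell, here is what inhabitants of the first two of these types look like; the names `phi` and `psi` are mine, purely for illustration. The point is that the argument of `phi` is an infinite object, yet any terminating run of `phi` can only query it at finitely many points.

```haskell
type Nat = Integer

-- A type-2 functional, of type (ℕ → ℕ) → ℕ: it consumes an infinite
-- object (a function on all of ℕ) but can only inspect it at finitely
-- many arguments in any terminating computation.
phi :: (Nat -> Nat) -> Nat
phi f = f 0 + f (f 1)

-- A type-3 functional, of type ((ℕ → ℕ) → ℕ) → ℕ, one level up.
psi :: ((Nat -> Nat) -> Nat) -> Nat
psi g = g (+ 1)
```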
There seem to be two canonical answers: (1) HEO, the "hereditarily effective total functionals", which are modelled by Kleene's first combinatory algebra K₁, and (2) C, "the continuous functionals", which are modelled by Kleene's second combinatory algebra K₂.
Now it turns out that if one tries to cook up sensible notions of higher-type computability, one invariably ends up landing in (1) HEO or (2) C, discussed above.
There is a nice paper by John Longley, called "On the ubiquity of certain type structures", which explains mathematically, rather than philosophically, why this is the case.
I also recommend Longley and Normann's 2015 book "Higher-order computability", particularly if you are a functional programmer who often works with higher types.
There is much, much more to say, and perhaps I will say it as a future continuation of this thread.
But here is the main lesson: when you compute with infinite data, it does matter whether the data is given by an algorithm (this is K₁-computability) or by an "oracle" (this is K₂-computability).
For example, is the continuous weather forecast, or the news updates, reported on my phone given by an algorithm or by an oracle? Does your phone give you the Gödel number of the supposed program that generates the news? If so, you are the lucky owner of a K₁ phone. If not, you are the lucky owner of a K₂ phone, for which the Cantor type is compact, as it should be. 🙂
A circular program is a program that depends on its own result. It may be surprising that this works at all, but laziness makes it possible if output becomes available sooner than it is required. In this final episode of 2023, which will be longer than usual (probably 45-60 minutes), we will take a look at several examples of...
This was a fun episode. I was introduced to breadth first labeling and attribute grammars by Doaitse Swierstra at the Applied Functional Programming summer school in Utrecht. He was an inspiring figure.
The biggest disadvantage of circular programs is that it is very easy to get into infinite loops when you make mistakes. I wish there were an easy way to guarantee statically that circular programs are terminating (perhaps using types).
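The classic example of the technique being discussed is Bird's repmin, sketched below: the result tree mentions `m`, the minimum, which is computed by the very same traversal that builds the tree. Laziness makes the output available before the feedback value is demanded, in a single pass.

```haskell
data Tree = Leaf Int | Node Tree Tree
  deriving (Eq, Show)

-- repmin: replace every leaf by the tree's minimum, in one traversal.
-- The pair (m, t') is circular: t' refers to m, and m is computed
-- while producing t'. Forcing m too early would loop forever.
repmin :: Tree -> Tree
repmin t = t'
  where
    (m, t') = go t
    go (Leaf n)   = (n, Leaf m)              -- uses the *final* minimum m
    go (Node l r) = (min ml mr, Node l' r')
      where
        (ml, l') = go l
        (mr, r') = go r
```

This also illustrates the pitfall mentioned above: make `go` strict in `m` (say, by pattern matching on it) and the program silently becomes an infinite loop.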
There is a TDD exercise I run in a number of courses and workshops that involves creating a recently used list class (as in a recently open files menu or MRU cache).
One thing I find interesting is that the majority of developers do not use the most obvious name for the class, i.e., RecentlyUsedList, which is the title of the exercise and the term used consistently throughout the description. Instead, they seem to almost go out of their way to avoid any reference to the terminology used.
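For reference, the exercise's core behavior fits in a few lines; this is my own minimal sketch of the usual specification (most recently used item first, no duplicates), using the exercise's own name for the type.

```haskell
-- A minimal RecentlyUsedList: the head is the most recently used item,
-- and using an item again moves it to the front instead of duplicating it.
newtype RecentlyUsedList a = RecentlyUsedList [a]
  deriving (Eq, Show)

empty :: RecentlyUsedList a
empty = RecentlyUsedList []

use :: Eq a => a -> RecentlyUsedList a -> RecentlyUsedList a
use x (RecentlyUsedList xs) = RecentlyUsedList (x : filter (/= x) xs)
```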
Tired of every null tracker having their own annotations and implementing their own logic? Then JSpecify is for you! Here's Kevin Bourrillion presenting it on my live stream:
The polynomial functors book by Nelson Niu and David Spivak finally dropped in a finished form, free on arXiv as usual https://arxiv.org/abs/2312.00990
Oleg's gists - Core Inspection (oleg.fi)
Haskell's problem with exploding call stacks (www.channable.com)
[Well-Typed] The Haskell Unfolder Episode 17: circular programs (www.youtube.com)
Haskell Symposium 2023 Videos (www.youtube.com)