When I first came across Voigtländer's paper on speeding up free monads [1] and some of the methods that Hinze mentions [2], I was a bit bemused about why category theory had anything to say about program optimization. But now it seems obvious. Much of optimization is a lot like algebraic manipulation, where you're rearranging while hoping to keep the value the same. In particular, a really common optimization move is to rewrite f(g(x)) as (f∘g)(x), where (f∘g) is somehow simpler (or more reusable) than applying g and then f separately. I.e. associativity of composition, which is one of the laws of category theory. I think this step also accounts for almost all of the computational reasons for using linear algebra. E.g. graphics pipelines make good use of this kind of associativity.
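A hypothetical Haskell sketch of the move: two separate passes over a list are fused into one pass with the composed function, using the map/map fusion law (`map f . map g == map (f . g)`); the names here are made up for illustration.

```haskell
-- Two separate traversals: first double everything, then add one.
twoPasses :: [Int] -> [Int]
twoPasses = map (+ 1) . map (* 2)

-- Fused: one traversal with the composed function. Same result,
-- but the data is touched once. In graphics, the analogous move is
-- multiplying the transformation matrices once and applying the
-- product to every vertex.
onePass :: [Int] -> [Int]
onePass = map ((+ 1) . (* 2))
```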
I have a lot of professional experience teaching people how to program — usually in a pairing environment. One thing I noticed almost universally is that people start off wanting to 'forward chain' all their code, meaning they try to predict the leaves of the code and then build them up. The other thing they do is leave their code in an inoperable (even non-parsing) state until the very last minute, when it is supposedly completely done — and then they are completely flummoxed when it doesn't work.
The problem with the first practice is that, in most cases, it is entirely hopeless to predict the small finishing pieces of code that will accomplish some goal or purpose. Naturally, ideas and rational thinking about the purpose of some code give rise to decompositions of the problem at hand, whereas just thinking about all the things you have on hand gives rise to no useful decomposition whatsoever.
It is a real struggle to get people to (1) think from the root to the leaves, and (2) write their code in small steps after which it is always possible to take stock and see if you understand what you have written.
I learned to write code this way by being a Haskell and Agda programmer — I was lucky enough to be professionally writing Haskell around the time they introduced typed holes, which made this methodology much more systematic and applicable than it had been before.
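A small illustration of the workflow (the function and its name are invented for this example): you write the type first, leave a hole, and let the compiler tell you what is needed at each step, so the file type-checks at every stage.

```haskell
-- Root-to-leaves development with typed holes. At an intermediate
-- step one writes:
--
--   zipWithIndex :: [a] -> [(Int, a)]
--   zipWithIndex xs = _
--
-- and GHC replies with "Found hole: _ :: [(Int, a)]" plus the types
-- of everything in scope. The hole is then refined in small,
-- always-checkable steps until it is filled:
zipWithIndex :: [a] -> [(Int, a)]
zipWithIndex = zip [0 ..]
```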
I wrote a blog post about Agda Core! It's a bit long and not very polished, but I'm happy to have it finished (thanks to a long and distraction-free train ride to Swansea).
I prefer to call it "definitional equality" rather than "judgmental equality".
There is a difference between having "0 + x = x" by definition, and "x + 0 = x" by proof.
People complain that type theory "suffers" from this "defect".
But this is the case even in Peano arithmetic. The definition gives one of the equalities, and a proof is required for the other (by induction).
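For concreteness, here is the situation in Lean 4 (whose `Nat` addition recurses on the second argument, so, unlike in Agda, it is `x + 0 = x` that holds definitionally and the other direction that needs induction):

```lean
-- `rfl` type-checks because both sides compute to the same term:
example (x : Nat) : x + 0 = x := rfl

-- The flipped equation needs an actual proof (by induction, packaged
-- up in the library lemma Nat.zero_add):
example (x : Nat) : 0 + x = x := Nat.zero_add x
```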
What is different in type theory from, e.g., Peano arithmetic (and more generally first-order logic with equality) is this: in Peano arithmetic we can "literally" substitute along both definitional and "proved" equalities, while in type theory we can substitute "literally" only along definitional equalities, and along "proved" equalities only via transport.
This may be seen as a "defect" of dependent type theory (and many people view it like that), but it may also be seen as a virtue, allowing the existence of HoTT/UF.
I am among the people who see this as a virtue.
This view is very reasonable if you see dependent type theory as "proof relevant as articulated by the Curry-Howard interpretation". Why should "proved" equality be different from other "proved" things?
Definitional equality is "literally" substitutional, while "proved" equality is substitutional "up to (proof) transport".
Definitional equality doesn't have witnesses, and this is why it can be "substituted literally".
Proved equality does have witnesses, which may not be definitionally equal themselves.
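A minimal Lean 4 sketch of the contrast: a proved equality can only be used through explicit transport (the `▸` operator), precisely because it carries a witness.

```lean
-- To use `p : P a` where `P b` is wanted, we must rewrite along the
-- witness `h : a = b` explicitly, transporting `p` with `▸`:
example (P : Nat → Prop) (a b : Nat) (h : a = b) (p : P a) : P b := h ▸ p
```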
It is rather remarkable how "proof relevant" equality matches ∞-categorical thinking quite precisely. It is the kind of mathematical coincidence that makes me love mathematics.
W3C tech in use on the web: "Printing music with CSS Grid"
"CSS Grid allows us to align other symbols inside the notation grid too. Chords and lyrics, dynamics and so on can be lined up with, and span, timed events: " https://cruncher.ch/blog/printing-music-with-css-grid/
Tuples and curried functions are nice for toys, but they are industrial Haskell's worst enemy. If you're going to jump into a big repo and understand stuff, you need to see record field names.
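A hypothetical example of the difference (all names made up): with a curried or tuple style, a call site like `connect "db.internal" 5432 False` tells the reader nothing about what each argument means; with a record, every field is named at the use site.

```haskell
-- Record fields document themselves at every construction site:
data ConnInfo = ConnInfo
  { host   :: String
  , port   :: Int
  , useTLS :: Bool
  }

conn :: ConnInfo
conn = ConnInfo { host = "db.internal", port = 5432, useTLS = False }
```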
6️⃣ Here's the 6th installment of my series of posts highlighting key new features of the upcoming v256 release of systemd.
In the 2nd installment of this series we already discussed system and service credentials in systemd a bit. Quick recap: these are smallish blobs of data that can be passed into a service in a secure way, to parameterize and configure it, and in particular to pass secrets to it (passwords, PINs, private keys, …).
Environment variables, after all, suck for passing secrets: they are inherited down the process tree by default (even across privilege changes), are placed in swappable memory, cannot be recalled, have no access-control concept (i.e. they are not locked to a UID/GID), and so on.
There's one particularly nice facet of systemd's credentials concept: they are not just service credentials, but also system credentials.
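A minimal sketch of a unit file pulling in a service credential (the service name and file path are made up for illustration); the service then reads the secret from the directory named by `$CREDENTIALS_DIRECTORY` rather than from its environment:

```ini
# example.service (hypothetical). The file at the given path is made
# available to the service read-only, as $CREDENTIALS_DIRECTORY/db.password,
# outside of the environment block.
[Service]
ExecStart=/usr/bin/example-daemon
LoadCredential=db.password:/etc/example/db.password
```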
A great example of how cool the PureScript backend optimizer is: the PureScript code makes multiple map and filter calls and also calls into two functions with complicated logic. The generated output shows that the PS code was aggressively inlined into a tiny bit of JS code.
A topology is a mathematical device to define what a continuous function is, in a general setting.
(1) A topological space is a set X together with a collection of subsets of X, called open sets, such that finite intersections of open sets are open, and arbitrary unions of open sets are open.
(2) A function between topological spaces is continuous if inverse images of open sets are open.
What is the intuition behind (1) and (2)?
I claim that it is better to ask, instead, how mathematicians came up with (1) and (2).
What happens next is that metric spaces are not enough: there are sets on which we want to consider continuous functions but which can't be metrized in such a way that metric continuity coincides with the notion of continuity we want.
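For what it's worth, the bridge between the two notions can be sketched as follows: for metric spaces, writing $B(x,\delta)$ for the open ball of radius $\delta$ around $x$, the familiar $\varepsilon$–$\delta$ condition is equivalent to the preimage condition in (2).

```latex
f \text{ continuous (} \varepsilon\text{–}\delta \text{ sense)}
\iff \forall x\; \forall \varepsilon > 0\; \exists \delta > 0:\;
      f\bigl(B(x,\delta)\bigr) \subseteq B\bigl(f(x),\varepsilon\bigr)
\iff f^{-1}(U) \text{ is open in } X \text{ for every open } U \subseteq Y .
```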
I like this book because it starts from trivial things, making them less and less trivial as we progress, until it eventually arrives at many things, including Stone duality. I taught myself topology from this book as an undergrad.
A couple of months ago, another mathematician contacted me and two of my co-authors (Green and Manners) regarding a minor mathematical misprint in one of our papers. Normally this is quite a routine occurrence, but it caused a surprising amount of existential panic on my part because I thought it involved the #PFR paper for which I had run a #Lean formalization project. As it turned out, though, the misprint involved a previous paper, in a portion that was not formalized in Lean. So all was well; we thanked the mathematician and updated the paper accordingly.
But yesterday, we received referee reports for the PFR paper that was formalized in Lean, and one of the referees did actually spot a genuine mathematical typo (specifically, the expression H[A]-H[B] appearing in (A.22) of https://arxiv.org/abs/2311.05762 should be H[A]+H[B]). So this again created a moment of panic - how could Lean have missed this?
After reviewing the Github history for the blueprint, I found that when I transcribed the proof from the paper to blueprint form, I had unwittingly corrected this typo (see Equation (9) of https://teorth.github.io/pfr/blueprint/sect0003.html in the proof of Lemma 3.23) without noticing that the typo was present in the original source paper. This lemma was then formalized by other contributors without difficulty. I don't remember my actual thought process during the transcription, but I imagine it is similar to how when reading in one's native language, one can often autocorrect spelling and grammar errors in the text without even noticing that one is doing so. Still, the experience gives me just a little pause regarding how confident one can be in the 100% correctness of a paper that was formalized...
The term 'telemetry' can raise a lot of concerns, especially within the realm of #OSS. In the latest episode of #TheHaskellInterlude, Joachim Breitner and Andres Löh interview @avi_press, the CEO of @scarf_oss. Learn more about the episode here: https://haskell.foundation/podcast/47/ #Haskell
This is the first of a two-part series of blog posts on GHC specialization, an optimization technique used by GHC to eliminate the performance overhead of ad-hoc polymorphism and enable other powerful optimizations. There will also be a Haskell Unfolder episode about this topic.
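For a flavour of the technique, a tiny hedged example (the function is made up): GHC's `SPECIALIZE` pragma asks the compiler to generate and use a monomorphic, dictionary-free copy of an overloaded function at a given type.

```haskell
-- Overloaded code pays for a Num-dictionary indirection at runtime
-- unless GHC specializes it away. The pragma requests a dedicated
-- copy of `norm` at [Int] -> Int, which can then be optimized further.
norm :: Num a => [a] -> a
norm xs = sum [ x * x | x <- xs ]
{-# SPECIALIZE norm :: [Int] -> Int #-}
```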
Twice in the past week I've read scholars who should know better repeat the urban legend that the QWERTY keyboard was designed to slow down typing, and thus, jamming, on early typewriters.
That'd be cool if it were true, but it's not, and the truth is even cooler. The QWERTY keyboard evolved over time, shaped by two forces: (1) since the early machines were used by telegraph operators, the keys were arranged to avoid common transcription errors; and (2) competing patents of the typewriter slightly rearranged the keyboard layout in order to qualify as new (and therefore patentable) designs.
Is there anything known about the efficiency implications of using HOAS for representing the (internal) syntax of a lambda calculus instead of de Bruijn syntax? Assuming one can get away with it, it seems like it could theoretically speed things up compared to looking things up in lists. OTOH functions can do arbitrary things, so I imagine much would depend on how you actually construct these.
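A hedged Haskell sketch of the trade-off (names invented here): with HOAS, binders are host-language functions, so β-reduction is just function application — no index arithmetic or environment lookup — though the cost moves into the host's closures, and the terms become harder to inspect (rendering them back needs a fresh-name supply).

```haskell
-- HOAS term representation: Lam carries a host-language function.
data Tm = Var String | Lam (Tm -> Tm) | App Tm Tm

-- Weak-head normalization: substitution is host-level application,
-- so there is no environment or de Bruijn index lookup at all.
whnf :: Tm -> Tm
whnf (App f a) = case whnf f of
  Lam b -> whnf (b a)
  f'    -> App f' a
whnf t = t

-- Render a term for inspection, inventing fresh names as we go under
-- binders (the price of using functions as binders).
render :: Int -> Tm -> String
render _ (Var x)   = x
render n (Lam b)   = let v = "x" ++ show n
                     in "\\" ++ v ++ ". " ++ render (n + 1) (b (Var v))
render n (App f a) = "(" ++ render n f ++ " " ++ render n a ++ ")"
```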
[Well-Typed Blog] Choreographing a dance with the GHC specializer (Part 1) (well-typed.com)
Oleg's gists - Core Inspection (oleg.fi)