
DiazCarrete

@DiazCarrete@hachyderm.io


diana, to random
@diana@hachyderm.io

@yvonnezlam Made famous in the NewCrafts closing keynote … was a perfect moment set up by other talks here.

dpiponi, to random
@dpiponi@mathstodon.xyz

When I first came across Voigtländer's paper on speeding up free monads [1] and some of the methods that Hinze mentions [2], I was a bit bemused about why category theory had anything to say about program optimization. But now it seems obvious. Much of optimization is a lot like algebraic manipulation, where you're rearranging while hoping to keep the value the same. In particular, a really common optimization move is to rewrite f(g(x)) as (fg)(x), where (fg) is somehow simpler (or more reusable) than just applying g and then f. I.e. associativity, which is one of the laws of category theory. I think this step also accounts for almost all of the computational reasons for using linear algebra. E.g. graphics pipelines make good use of this kind of associativity.

[1] https://janis-voigtlaender.eu/papers/AsymptoticImprovementOfComputationsOverFreeMonads.pdf
[2] https://www.cs.ox.ac.uk/ralf.hinze/Kan.pdf
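The reassociation move dpiponi describes can be made concrete in a few lines of Haskell (a minimal sketch; the function names are illustrative, not from the papers):

```haskell
-- Rewriting f(g(x)) as (f . g)(x): same value, cheaper evaluation plan.
-- Mapping (+ 1) and then (* 2) traverses the list twice; mapping the
-- composed function traverses it once.
twoPasses :: [Int] -> [Int]
twoPasses = map (* 2) . map (+ 1)

onePass :: [Int] -> [Int]
onePass = map ((* 2) . (+ 1))   -- fused via the law: map f . map g = map (f . g)
```

GHC's rewrite rules apply exactly this kind of reassociation automatically for many list functions; the free-monad and Kan-extension tricks in [1] and [2] are the same idea at the level of monadic binds.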

jonmsterling, to random
@jonmsterling@mathstodon.xyz

I'm working on figuring out how to explain the practice of mathematics to students who've never done it before...

There are too many things that we leave unsaid, and then use crudely to separate those who already know from those we won't bother to teach...

jonmsterling, to random
@jonmsterling@mathstodon.xyz

I have a lot of professional experience teaching people how to program — usually in a pairing environment. One thing I noticed almost universally is that people start off wanting to 'forward chain' all their code, meaning, they try to predict the leaves of the code and then build them up. The other thing they do is they leave their code in an inoperable (even non-parsing) state until the very last minute where it is completely done — and then are completely flummoxed when it doesn't work.

The problem with the first practice is that it is entirely hopeless, in most cases, to predict the smallest finishing touches of code that accomplishes some goal or purpose — because, naturally, ideas and rational thinking about the purpose of some code gives rise to decompositions of the problem at hand, whereas just thinking about all the things you have on hand gives rise to no useful decomposition whatsoever.

It is a real struggle to get people to (1) think from the root to the leaves, and (2) write their code in small steps after which it is always possible to take stock and see if you understand what you have written.

I learned to write code this way by being a Haskell and Agda programmer — I was lucky enough to be professionally writing Haskell around the time they introduced typed holes, which made this methodology much more systematic and applicable than it had been before.
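The root-to-leaves, typed-hole workflow can be sketched in Haskell (the function `pairs` is a made-up example, not from the post): write the type signature and the root first, leave `_` where the leaves go, and GHC reports the type each hole must have.

```haskell
-- Step 1:  pairs f xs = _          -- GHC: Found hole: _ :: [(a, b)]
-- Step 2:  pairs f xs = map _ xs   -- GHC: Found hole: _ :: a -> (a, b)
-- Step 3:  fill the last leaf. Each intermediate step still parses and
--          typechecks apart from the reported hole, so you can always
--          take stock of what you have written.
pairs :: (a -> b) -> [a] -> [(a, b)]
pairs f xs = map (\x -> (x, f x)) xs
```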

haskell, to haskell
@haskell@fosstodon.org

"We don’t have too many String types" – The ultimate guide to #Haskell Strings by Julian Ospald

https://discourse.haskell.org/t/the-ultimate-guide-to-haskell-strings/9538

jesper, to Blog
@jesper@agda.club

I wrote a blog post about Agda Core! It's a bit long and not very polished, but I'm happy to have it finished (thanks to a long and distraction-free train ride to Swansea).

jesper.cx/posts/agda-core.html

typeswitch, to random
@typeswitch@gamedev.lgbt

maybe a better way to say "judgmental equality" is "coincidence" 🤔

MartinEscardo,
@MartinEscardo@mathstodon.xyz

@typeswitch

I prefer to call it "definitional equality" rather than "judgmental equality".

There is a difference between having "0 + x = x" by definition, and "x + 0 = x" by proof.

People complain that type theory "suffers" from this "defect".

But this is the case even in Peano arithmetic. The definition gives one of the equalities, and a proof is required for the other (by induction).
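The split is directly visible in a proof assistant. A sketch in Lean 4 (note that Lean's `Nat.add` recurses on its second argument, so the definitional and proved roles are the mirror image of the `0 + x` convention above):

```lean
example (x : Nat) : x + 0 = x := rfl             -- holds by definition
example (x : Nat) : 0 + x = x := Nat.zero_add x  -- requires a proof (by induction)
```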

What is different in type theory from e.g. Peano arithmetic, and more generally first-order logic with equality, is that we can "literally" substitute along both definitional and "proved" equality in Peano arithmetic, while in type theory we can substitute "literally" along definitional equalities, but only via transport along "proved" equalities.

This may be seen as a "defect" of dependent type theory (and many people view it like that), but it may also be seen as a virtue, allowing the existence of HoTT/UF.

I am among the people who see this as a virtue.

This view is very reasonable if you see dependent type theory as "proof relevant as articulated by the Curry-Howard interpretation". Why should "proved" equality be different from other "proved" things?

Definitional equality is "literally" substitutional, while "proved" equality is substitutional "up to (proof) transport".

Definitional equality doesn't have witnesses, and this is why it can be "substituted literally".

Proved equality does have witnesses, which may not be definitionally equal themselves.

It is rather remarkable how "proof relevant" equality matches ∞-categorical thinking quite precisely. It is the kind of mathematical coincidence that makes me love mathematics.

w3c, to random
@w3c@w3c.social

W3C tech in use on the web "Printing music with CSS Grid"
"CSS Grid allows us to align other symbols inside the notation grid too. Chords and lyrics, dynamics and so on can be lined up with, and span, timed events: "
https://cruncher.ch/blog/printing-music-with-css-grid/

Jose_A_Alonso, to random
@Jose_A_Alonso@mathstodon.xyz

Guided equality saturation. ~ Thomas Kœhler, Andrés Goens, Siddharth Bhat, Tobias Grosser, Phil Trinder, Michel Steuwer. https://michel.steuwer.info/files/publications/2024/POPL-2024-2.pdf

chris__martin, to random
@chris__martin@functional.cafe

Tuples and curried functions are nice for toys, but they are industrial Haskell's worst enemy. If you're going to be able to jump into a big repo and understand stuff, you need to see record field names.
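A minimal illustration of the point (the `User` type is hypothetical):

```haskell
-- With a tuple, the call site gives the reader nothing to hold on to:
type UserT = (String, Int)   -- which Int? an age? an id? a score?

-- With a record, every use names its fields:
data User = User { userName :: String, userAge :: Int }
  deriving Show

rename :: String -> User -> User
rename n u = u { userName = n }   -- readable even deep inside a big repo
```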

pid_eins, to random
@pid_eins@mastodon.social

6️⃣ Here's the 6th installment of my series of posts highlighting key new features of the upcoming v256 release of systemd.

In the 2nd installment of this series we already discussed system and service credentials in systemd a bit. Quick recap: these are smallish blobs of data that can be passed into a service in a secure way, to parameterize and configure it, and in particular to pass secrets to it (passwords, PINs, private keys, …).

pid_eins,
@pid_eins@mastodon.social

Environment variables after all suck for passing secrets, since they are by default inherited down the process tree, even across privilege changes, are placed in swappable memory, cannot be recalled, have no access control concept (i.e. not locked to the UID/GID) and so on.

There's one particularly nice facet of systemd's credentials concept: they are not just service credentials, but also system credentials.
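A sketch of the mechanism described above, as a unit-file fragment (the service and credential names are hypothetical; `LoadCredential=` and `$CREDENTIALS_DIRECTORY` are the actual systemd interfaces):

```
[Service]
# Pass the secret in as a credential rather than an environment variable:
# it is private to this service and is not inherited down the process tree.
LoadCredential=db-password:/etc/myserver/db-password
ExecStart=/usr/bin/myserver --password-file=${CREDENTIALS_DIRECTORY}/db-password
```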

haskman, to javascript
@haskman@functional.cafe

Since the backend post (https://www.moonbitlang.com/blog/js-support) is trending, I thought I'd compare backend optimizer (https://github.com/aristanetworks/purescript-backend-optimizer) output to see how it fares. The results were pretty good!

With basically this PureScript code -

run = fromArray
  >>> flatMapF (fromArray <<< _.members)
  >>> filterF _.gender
  >>> mapF (\x -> min 100 (x.score + 5))
  >>> mapF grade
  >>> filterF (_ == 'A')
  >>> foldF (\_ x -> x + 1) 0

the benchmark results are as follows. PureScript is roughly 6x faster than plain JS, and 6x slower than the Moonbit output:

┌─────────┬──────────────┬─────────────┬────────────────────┬──────────┬─────────┐
│ (index) │ Task Name    │ ops/sec     │ Average Time (ns)  │ Margin   │ Samples │
├─────────┼──────────────┼─────────────┼────────────────────┼──────────┼─────────┤
│ 0       │ 'Moonbit'    │ '3,467,542' │ 288.38869989829305 │ '±0.06%' │ 1733772 │
│ 1       │ 'Plain Js'   │ '74,816'    │ 13365.983827421464 │ '±0.54%' │ 37409   │
│ 2       │ 'Kotlin Js'  │ '190,241'   │ 5256.474017304151  │ '±0.38%' │ 95121   │
│ 3       │ 'PureScript' │ '499,456'   │ 2002.1768597161156 │ '±0.70%' │ 249729  │
└─────────┴──────────────┴─────────────┴────────────────────┴──────────┴─────────┘

haskman,
@haskman@functional.cafe

A great example of how effective the PureScript backend optimizer is: the PureScript code makes multiple map and filter calls and also calls into two functions with complicated logic. The generated output shows that the PS code was aggressively inlined into a tiny bit of JS code.

Image: the tiny bit of JS code that was generated for the above PS code.

MartinEscardo, to random
@MartinEscardo@mathstodon.xyz

What is a topological space?

It is a mathematical device to define what a continuous function is, in a general setting.

  1. A topological space is a set X together with a collection of subsets of X, called open, such that finite intersections of open sets are open, and arbitrary unions of open sets are open.

  2. A function between topological spaces is continuous if inverse images of open sets are open.

What is the intuition behind (1) and (2)?

I claim that it is better to ask, instead, how mathematicians came up with (1) and (2).

1/

MartinEscardo,
@MartinEscardo@mathstodon.xyz

This, and much more, is beautifully explained in the book

George F. Simmons, Introduction to Topology and Modern Analysis
https://archive.org/details/introduction-to-topology-and-modern-analysis-simmons

which I highly recommend.

What happens next is that metric spaces are not enough. There are sets on which we want to consider continuous functions which can't be metrized so that metric continuity coincides with the notion of continuity we want.

I like this book because it starts from trivial things, making them less and less trivial as we progress, until it gets eventually to many things, including Stone duality. I self-learned topology from this book as an undergrad.

5/

shadeed9, to random
@shadeed9@front-end.social

The view transitions spec is one of the best (if not the best) specs out there. Full of real-life examples and demos ✨

https://www.w3.org/TR/css-view-transitions-1/

tao, to random
@tao@mathstodon.xyz

A couple months ago, another mathematician contacted me and two of my co-authors (Green and Manners) regarding a minor mathematical misprint in one of our papers. Normally this is quite a routine occurrence, but it caused a surprising amount of existential panic on my part because I thought it involved the paper that I had run a formalization project in. As it turned out, though, the misprint involved a previous paper, in a portion that was not formalized in Lean. So all was well; we thanked the mathematician and updated the paper accordingly.

But yesterday, we received referee reports for the PFR paper that was formalized in Lean, and one of the referees did actually spot a genuine mathematical typo (specifically, the expression H[A]-H[B] appearing in (A.22) of https://arxiv.org/abs/2311.05762 should be H[A]+H[B]). So this again created a moment of panic - how could Lean have missed this?

After reviewing the Github history for the blueprint, I found that when I transcribed the proof from the paper to blueprint form, I had unwittingly corrected this typo (see Equation (9) of https://teorth.github.io/pfr/blueprint/sect0003.html in the proof of Lemma 3.23) without noticing that the typo was present in the original source paper. This lemma was then formalized by other contributors without difficulty. I don't remember my actual thought process during the transcription, but I imagine it is similar to how when reading in one's native language, one can often autocorrect spelling and grammar errors in the text without even noticing that one is doing so. Still, the experience gives me just a little pause regarding how confident one can be in the 100% correctness of a paper that was formalized...

haskell_foundation, to haskell
@haskell_foundation@mastodon.social

The term 'telemetry' can raise a lot of concerns, especially within the realm of open source. In the latest episode of the Haskell Interlude, Joachim Breitner and Andres Löh interview @avi_press, the CEO of @scarf_oss. Learn more about the episode here: https://haskell.foundation/podcast/47/

[Well-Typed Blog] Choreographing a dance with the GHC specializer (Part 1) (well-typed.com)

This is the first of a two-part series of blog posts on GHC specialization, an optimization technique used by GHC to eliminate the performance overhead of ad-hoc polymorphism and enable other powerful optimizations. There will also be a Haskell Unfolder episode about this topic.
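The overhead in question, and the pragma that requests a specialization, can be sketched like this (the `sumAll` function is illustrative, not from the blog post):

```haskell
-- Ad-hoc polymorphism: calls to sumAll go through a Num dictionary at runtime.
sumAll :: Num a => [a] -> a
sumAll = foldr (+) 0

-- Ask GHC to emit an extra, dictionary-free copy at a concrete type;
-- call sites at [Int] are rewritten to use it, unlocking further inlining.
{-# SPECIALIZE sumAll :: [Int] -> Int #-}
```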

ploeh, to random
@ploeh@mastodon.social

New article: Services share schema and contract, not class. https://blog.ploeh.dk/2024/04/15/services-share-schema-and-contract-not-class

samplereality, to random
@samplereality@post.lurk.org

Twice in the past week I've read scholars who should know better repeat the urban legend that the QWERTY keyboard was designed to slow down typing, and thus, jamming, on early typewriters.

That'd be cool if it were true, but it's not, and the truth is even cooler. The QWERTY keyboard evolved over time, shaped by two forces: (1) since the early machines were used by telegraph operators, the keys were arranged to avoid common transcription errors; and (2) competing patents of the typewriter slightly rearranged the keyboard layout in order to qualify as new (and therefore patentable) designs.

Check out this research for more: https://repository.kulib.kyoto-u.ac.jp/dspace/bitstream/2433/139379/1/42_161.pdf

irene, to random
@irene@discuss.systems

Important preparation for a PC chair: assemble PC program, pick best paper awards, find session chairs and program your Chanel handbag.

Image: Chanel LED handbag with my name.

jesper, to random
@jesper@agda.club

Is there anything known about the efficiency implications of using HOAS for representing the (internal) syntax of a lambda calculus instead of de Bruijn syntax? Assuming one can get away with it, it seems like it could theoretically speed things up compared to looking up things in lists. OTOH functions can do arbitrary things, so I imagine much would depend on how you actually construct these.

Maybe @AndrasKovacs has an idea?
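The trade-off jesper is asking about can be sketched in Haskell with toy untyped terms (datatypes and names are illustrative): with HOAS, substitution is host-language function application, while with de Bruijn indices the evaluator pays a list lookup per variable.

```haskell
-- HOAS: binders are host functions; beta-reduction is just application.
data HTerm = HLit Int | HLam (HTerm -> HTerm) | HApp HTerm HTerm

hEval :: HTerm -> HTerm
hEval (HApp f a) = case hEval f of
  HLam g -> hEval (g a)          -- no environment, no index lookup
  f'     -> HApp f' a
hEval t = t

-- de Bruijn: variables are indices; lookup is O(index) in a list environment.
data DTerm = DLit Int | DVar Int | DLam DTerm | DApp DTerm DTerm
data Val   = VLit Int | VClo [Val] DTerm

dEval :: [Val] -> DTerm -> Val
dEval _   (DLit n)   = VLit n
dEval env (DVar i)   = env !! i  -- the list traversal mentioned above
dEval env (DLam b)   = VClo env b
dEval env (DApp f a) = case dEval env f of
  VClo env' b -> dEval (dEval env a : env') b
  v           -> v               -- ill-typed application; left as-is in this toy
```

The catch is the post's own caveat: the host functions inside `HLam` are opaque, so operations like printing, comparing, or serializing terms need a different representation anyway.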

wouter,
@wouter@functional.cafe

@jesper If you're interested in benchmarks, you may want to have a look at Lennart's Lambda Calculus Cooked Four Ways (https://github.com/steshaw/lennart-lambda/blob/master/top.pdf) and Stephanie's Cooked n-Ways: https://github.com/sweirich/lambda-n-ways/tree/main. It seems like HOAS is really fast...

Jose_A_Alonso, to haskell
@Jose_A_Alonso@mathstodon.xyz

Extending destination-passing style programming to arbitrary data types in Linear Haskell. ~ Thomas Bagrel. https://www.tweag.io/blog/2024-03-07-dps-haskell

lyxia, to random
@lyxia@mamot.fr

How it started

lyxia,
@lyxia@mamot.fr

This lecture at Collège de France https://www.youtube.com/watch?v=7Q4WUG-KDec
