@pervognsen@mastodon.social avatar

pervognsen

@pervognsen@mastodon.social

Performance, compilers, hardware, mathematics, computer science.

I've worked in or adjacent to the video game industry for most of my career.


pervognsen, (edited) to random

An exposition of (theoretical) derandomization of randomized algorithms. Wigderson received the Turing Award last month for work in this area. https://www.youtube.com/watch?v=mZck0N_T9Cs

pervognsen,

I think I first learned about the Impagliazzo-Wigderson theorem from Scott Aaronson's lecture notes for his wide-ranging (and fast-moving) course Quantum Computing Since Democritus, which I highly recommend. The relevant lecture is https://www.scottaaronson.com/democritus/lec7.html

pervognsen, (edited) to random

I'm cautiously in favor of further AI development when done responsibly (which isn't happening right now), but any generated value needs to be fully socialized and not restricted, captured or managed by capital, nonprofits or other private interests. Regardless of your view on maximalist socialism, the AI case should be a no-brainer or at least highly compelling given the risks and benefits and especially the source of the data they're using to train their models.
https://mastodon.social/@whitequark/112426645665489274

pervognsen, (edited)

And even if all AI training data was legally in the public domain it wouldn't change any of this for me. If anything it would make the point more strongly about why we have a notion of public domain. It's to serve and benefit the public. I think getting lost in debates on authorial consent and licensing is missing the bigger picture; that said, I do support the class action lawsuits by Butterick, etc, as necessary and possibly useful within our current framework. Just my two cents.

pervognsen,

If you currently do a search on "AI socialism" it means something very different: it's almost exclusively about AI-managed, AI-supported socialist planning. Incidentally, that does kind of have an interesting real-world history. Most of modern optimization theory was developed in the USSR to support economic state planning. A lot of the research didn't make it beyond the Iron Curtain for decades; Stephen Boyd's convex optimization lectures have a lot of jokes about Russians inventing everything.

pervognsen,

This is not exactly a ringing endorsement of modern optimization theory but I don't think the well-known shortcomings of USSR economic planning can be blamed so much on the optimization methods or the limits of computation at the time. :)

pervognsen,

@morten_skaaning Of course you can allocate resources for research and development. What do you think a company does? How does public sector research happen? Anyway, I'm not advocating for central planning; that was just an aside on the two possible meanings of "AI socialism". (I also don't agree with the premise that ML-based methods can only choose known solutions because that isn't even true for solving a linear equation. I assume you're thinking about some notion of novelty beyond that.)

pervognsen, to random

Floyd's Assigning Meaning to Programs (1967), https://people.eecs.berkeley.edu/~necula/Papers/FloydMeaning.pdf, came out two years before Hoare's An Axiomatic Basis for Computer Programming (1969), https://dl.acm.org/doi/10.1145/363235.363259. Floyd arrives at the same basic idea and applies it to two examples: a flowchart language and an ALGOL subset. He even uses the term "strongest verifiable proposition" in a way very reminiscent of Dijkstra's later "strongest postcondition/weakest precondition".

pervognsen, to random

"The algorithm uses exactly the same terminology and is presented in bottom-up form. (If you prefer top-down design, please read the rest of this section backwards.)"

pervognsen, (edited)

These classic-era structured programmers really loved their sentinels and really hated their gotos/breaks/returns. You can't tell me that it's easier to prove the correctness of this with Hoare-Dijkstra-style assertions than if you had an early return; assertions flow across control flow edges just fine, and when you have multiple control flow predecessors your precondition is the logical disjunction of those in-flowing assertions. Bob Floyd used Hoare-style triples for flow charts before Hoare.
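To make the assertion-flow point concrete, here's a minimal sketch (my own example, not code from the original post) of the same search written both ways, with the relevant assertions noted as comments:

```python
def index_of(xs, target):
    # Early-return form: each return site carries its own assertion.
    for i in range(len(xs)):
        if xs[i] == target:
            return i   # here: xs[i] == target
    return -1          # here: target does not occur in xs

def index_of_sentinel(xs, target):
    # Classic sentinel form: append the target so the scan cannot run off
    # the end, then use a single exit whose precondition is the disjunction
    # of the two in-flowing cases: found at i < len(xs), or i == len(xs)
    # and the target occurs nowhere in xs.
    ys = xs + [target]
    i = 0
    while ys[i] != target:
        i += 1
    return i if i < len(xs) else -1
```

Either way the proof obligation at the caller is the same disjunction; the early-return form just attaches each disjunct to its own exit instead of funneling both through one sentinel-guarded exit.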

pervognsen,

@regehr It's a global constant: w for width, and x is an array [0..w]; keep in mind that Pascal array ranges are inclusive, so it actually has w+1 entries.

pervognsen,

@TomF Well, in this case you'd just need a break or an early return. When you look at a lot of the old articles on GOTO vs structured programming, including Knuth's famous article, they're working in a world where neither break nor early return is a thing in structured languages, or at least not considered admissible. Almost all of Knuth's examples (as far as I remember) in favor of GOTO are solvable with single-level continue, break or early return. Labeled break/continue probably handles the rest.

pervognsen, (edited)

@TomF Most of the remaining examples that "need" GOTO rather than labeled break/continue come from irreducible control flow graphs, usually from non-trivial finite state machines where instead of representing the state as a variable and using a loop with a switch on the state variable, you implicitly encode the state via the program counter and use direct GOTOs between states. Those cases do occur but pretty infrequently. The common "GOTO the error handler" example is just labeled break.
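For reference, the loop-plus-switch encoding described above looks something like this (a toy recognizer of my own invention; a GOTO encoding would instead turn each state into a label and each transition into a direct jump):

```python
def is_number(s):
    # Finite state machine for digit strings with an optional fractional
    # part ("12", "3.14"), encoded with an explicit state variable, a loop,
    # and a dispatch on the state. The program counter stays in one place;
    # the state lives in data rather than in control flow.
    state = "start"
    for c in s:
        if state == "start":
            state = "int" if c.isdigit() else "fail"
        elif state == "int":
            if c == ".":
                state = "dot"
            elif not c.isdigit():
                state = "fail"
        elif state == "dot":
            state = "frac" if c.isdigit() else "fail"
        elif state == "frac":
            if not c.isdigit():
                state = "fail"
        if state == "fail":
            return False
    return state in ("int", "frac")
```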

pervognsen,

@TomF As an aside, modern compilers perform an optimization called jump threading that turns the above example of a loop + state switch into the equivalent machine code of the GOTO version. So from the perspective of the final machine code (which was Knuth's concern in that article), there's no advantage even in that remaining class of examples.

pervognsen, (edited)

@regehr That part isn't too objectionable to me. And I like Go's take on this with named (multiple) return values. I'm guessing that came to Go via Robert Griesemer, who went to ETH Zurich and probably was directly influenced by Wirth and his languages. As for having the same name as the function in ALGOL derivatives, well, there's a certain elegance to it compared to having to make up a generic variable name like 'result'. Though I agree it looks weird to modern eyes.

pervognsen, (edited)

@regehr How much do you hate Haskell's /=? If you're an ingrained C programmer that one is fun. At least it's defensible on the grounds of visual similarity to a struck-through equality sign in handwritten math. And it has precedent in a language (the first language!) since it's what FORTRAN uses. I've never figured out what origin or analogy ALGOL's diamond-shaped <> is supposed to come from. That one must be a sui generis product of late 1950s pipe smoke.

pervognsen,

@phairupegiont @regehr Huh, I honestly hadn't even made the connection! I always thought it was an attempt at drawing an ASCII diamond.

pervognsen, to random

Why uniform randomized testing might not be the best idea for validating that your base b = 2^64 arbitrary-precision long division implementation has properly implemented the correction step of quotient digit estimation:
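A scaled-down sketch of the effect, assuming a Knuth Algorithm D-style digit estimate (the base, sampling scheme, and names here are mine, not the implementation from the post): the branch that corrects an overestimated quotient digit fires with probability on the order of 2/b, so at b = 2^64 uniform random inputs essentially never reach it.

```python
import random

def correction_rate(b, trials, seed=1):
    # Fraction of uniformly random division steps in which the quotient
    # digit estimate overshoots, i.e. where the correction step would run.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Normalized 3-digit divisor: leading digit has its high bit set.
        v = rng.randrange(b // 2, b) * b * b + rng.randrange(b * b)
        q = rng.randrange(b)              # the true quotient digit
        u = q * v + rng.randrange(v)      # 4-digit partial remainder
        u3, u2, u1 = u // b**3, u // b**2 % b, u // b % b
        v2, v1 = v // b**2, v // b % b
        # Estimate from 3 leading dividend digits and 2 divisor digits.
        qhat, rhat = divmod(u3 * b + u2, v2)
        while qhat >= b or qhat * v1 > rhat * b + u1:
            qhat, rhat = qhat - 1, rhat + v2
            if rhat >= b:
                break
        hits += qhat != q                 # overshoot: estimate was too big
    return hits / trials
```

Already at b = 256 the overshoot rate is a fraction of a percent; at b = 2^64 a bug hiding in the correction path sits behind odds of very roughly 1 in 2^63 per random test case.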

pervognsen,

I guess the "glass half full" angle on this is that in a non-adversarial situation, you might be able to get away with it. I encourage you to build mass-market divider circuits based on randomized testing; that worked well for Intel.

pervognsen,

@TomF I was referring to the Pentium FDIV debacle. AFAIK they primarily validated the algorithm with randomized testing.

pervognsen,

@TomF Right, I was just confused by your comment. It didn't work out for them in that instance and it's usually cited as the origin story of Intel's embrace of formal verification for at least parts of the processor.

pervognsen, to random

This classic blog post resurfaced on the Rust subreddit today after someone posted that they wished ownership had been explained to them with a metaphor about pirate treasure chests.

https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/

'But now Joe goes and writes a monad tutorial called “Monads are Burritos,” under the well-intentioned but mistaken assumption that if other people read his magical insight, learning about monads will be a snap for them. “Monads are easy,” Joe writes. “Think of them as burritos.”'

pervognsen,

@steve @pkhuong @zwarich Basically I was trying to figure out how to implement long division for arbitrary-precision arithmetic but with base 2^32 so you can use machine arithmetic as an efficient basis set of operations. And that's when you realize that small bases let you kind of brush the question of quotient estimation under the rug to the point where I hadn't even noticed it was happening in decimal long division.

pervognsen, (edited)

@steve @pkhuong @zwarich And that's when I finally folded and looked it up in Knuth and of course it's quite subtle: you can get away with looking at 3 leading digits of the dividend and 2 leading digits of the divisor, if you're in a power-of-two base, if you re-scale so the divisor's leading digit's MSB is 1, if you're okay with sometimes being off by 1 in the quotient estimate (which can be detected and fixed in one step). Probably got that wrong, but in any case it's not trivial.
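In code, my reading of that estimate (step D3 of Knuth's Algorithm D; the variable names are mine, and this is a sketch of the estimation step rather than a full division routine):

```python
def qhat(u3, u2, u1, v2, v1, b):
    # Quotient-digit estimate from the top three digits of the partial
    # dividend and the top two digits of the divisor. Requires a
    # normalized divisor (v2 >= b // 2). The result is the true digit q
    # or, rarely, q + 1; Algorithm D detects the latter when the
    # multiply-and-subtract goes negative and fixes it with one add-back.
    est, rem = divmod(u3 * b + u2, v2)
    while est >= b or est * v1 > rem * b + u1:
        est, rem = est - 1, rem + v2
        if rem >= b:   # refinement test can no longer succeed; stop early
            break
    return est
```

The refinement loop (the `est * v1 > rem * b + u1` test) is what tightens the plain top-digit estimate from off-by-at-most-2 down to off-by-at-most-1.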

pervognsen, (edited)

@steve @pkhuong @zwarich "Guess (and verify) the next quotient digit" sure works a lot better in base 10. Of course you can do binary search in base 2^n to guess more efficiently but that's effectively reducing to base 2, which is the ultimate cheat move.

Much later I found this technical note by Per Brinch-Hansen, who gives a stepwise discussion and development of how to do it. I believe he recommends (for tutorial purposes) a less tight estimate that can be off by 2 or 3 in the worst case.

pervognsen,

@steve @pkhuong @zwarich Multiple-Length Division Revisited: A Tour of the Minefield (1992), Per Brinch-Hansen, https://surface.syr.edu/cgi/viewcontent.cgi?article=1162&context=eecs_techreports

"Only a handful of textbooks discuss the theory and practice of long division, and none of them do it satisfactorily. This tutorial attempts to fill this surprising gap in the literature on computer algorithms."
