There's a bit of stuff in this article phrased in terms of changes over time, e.g. compute capability has grown and we no longer need big data. But it seems closer to reality that big data was never required, and continues to not be required. (Looking forward to the same style of post happening in a few years vis-à-vis microservices.)
@dotstdy @ltratt >10 years ago, $WORK handled enough transactions to observe a couple collisions in random 63-bit ids every day (where each id represents a different [haha] transaction that exchanges a tiny but real amount of money between 2+ companies). I don't think it would have fit on a laptop back then… and SSDs weren't exactly mass market yet.
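For intuition on the volume that implies, the birthday approximation gives the rough number of ids per day needed before collisions become routine. A quick back-of-the-envelope sketch (my own numbers, not from the post):

```rust
// Birthday approximation: among n uniform random 63-bit ids, the
// expected number of colliding pairs is roughly n^2 / (2 * 2^63).
fn expected_collisions(n: f64) -> f64 {
    n * n / (2.0 * (2f64).powi(63))
}

fn main() {
    // Solving for ~2 expected collisions gives n ≈ sqrt(2 * 2 * 2^63),
    // i.e. on the order of 6 billion ids in a single day.
    let n = (2.0 * 2.0 * (2f64).powi(63)).sqrt();
    println!("ids/day for ~2 expected collisions: {:.2e}", n);
    assert!((expected_collisions(n) - 2.0).abs() < 1e-6);
}
```

So "a couple collisions a day" puts the daily transaction count in the billions, which is indeed not laptop territory for the early 2010s.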
I think the only reason we didn't have to run everything with colocated CPU and storage like Hadoop is that someone had had the foresight to negotiate a fixed-fee license for Vertica w/o any limit on the storage footprint.
@wingo Re Joe Marshall's stack hack, I'm pretty sure Common Larceny compiled to CLR. When I implemented it for delimited continuations in CL, I ended up with only 2x code blowup: one instance for no capture at all, and a fully broken up version of the ANF-ed steps as the jump target when restoring any stack frame. That gave me near-parity for performance without capture, and avoided a quadratic size blow up.
Work is both performance and liability^Wcorrectness oriented, and a common pattern I've noticed is that we'll generate commands with a fully deterministic program (i.e., a function), reify the command stream, and act on the commands. The IO monad is real!
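A minimal sketch of that pattern (all names invented for illustration): a pure, deterministic planner emits a command list, and a small interpreter is the only place effects actually happen.

```rust
// Hypothetical command type: the pure part only *describes* effects.
#[derive(Debug, PartialEq)]
enum Command {
    Debit { account: u32, cents: i64 },
    Credit { account: u32, cents: i64 },
}

// Fully deterministic: same inputs, same command stream.
// Easy to unit test, diff, replay, and audit.
fn plan_transfer(from: u32, to: u32, cents: i64) -> Vec<Command> {
    vec![
        Command::Debit { account: from, cents },
        Command::Credit { account: to, cents },
    ]
}

// The impure interpreter: the only place commands become real IO.
fn execute(commands: &[Command]) {
    for c in commands {
        println!("{:?}", c); // stand-in for the actual side effect
    }
}

fn main() {
    let cmds = plan_transfer(1, 2, 500);
    assert_eq!(cmds.len(), 2);
    execute(&cmds);
}
```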
@pervognsen Yup, auditability/testability is great. I remember Googlers trying to test against or parse logging output. I think they really wanted a command list.
@pervognsen The moment you expose a dependency graph as a service, people like me will send batches of 200K tasks because there's no other pipelining API (that's how I ended up on a call with Azure).
@pervognsen and I think you want to schedule jobs, where jobs consist of n independent tasks. The abstraction isn't perfect, but it lets you scale to millions of tasks.
For people who've been around much longer: have there been any retrospectives on Rust's decision to allow panics to unwind rather than abort? I've mostly come to terms with it in a practical sense, but it's something that really "infects" the language and library ecosystem at a deep level, e.g. fn(&mut T) isn't "the same" as fn(T) -> T, and it's especially troublesome if you're writing unsafe library code and dynamically calling code through closures or traits that could potentially panic.
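A standard illustration of how unwinding bites unsafe code, and of exactly the sense in which fn(&mut T) and fn(T) -> T differ: a hypothetical replace-with helper that moves out of the &mut T is unsound if the closure panics mid-flight (this is the well-known take_mut/replace_with hazard, sketched here, not any particular library's code).

```rust
use std::ptr;

// Hypothetical helper: update *r in place with an fn(T) -> T.
// UNSOUND if f unwinds: the unwinder still considers *r live and will
// drop a value we already moved out of, causing a double drop.
fn replace_with<T, F: FnOnce(T) -> T>(r: &mut T, f: F) {
    unsafe {
        let old = ptr::read(r);
        let new = f(old); // a panic here leaves *r logically uninitialized
        ptr::write(r, new);
    }
}

fn main() {
    let mut s = String::from("hi");
    replace_with(&mut s, |s| s + "!");
    assert_eq!(s, "hi!");
}
```

With panic=abort the sketch would be fine as written; with unwinding, a sound version has to abort on panic or otherwise guarantee *r is reinitialized before the unwinder can observe it.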
Hey software license knowledgeable friends. We recently put code out for a paper that is BSD licensed.
What would happen if some other company forked it and made a bunch of changes/improvements?
Would it still be copyright EA in the license on their fork? And it'd have to stay BSD right?
Ty, random curiosity :) https://github.com/electronicarts/fastnoise/blob/main/LICENSE.txt
@demofox No patent… yet ;) IME, big co lawyers prefer ASLv2 over BSD because the former comes with a patent grant for using the licensed software… which could be considered important given the domain.
Ok so the internet is the epitome of cache invalidation problems (F5 and DNS), and the challenge of naming things (URLs). Are there significant off-by-one errors? :P
If you want a low-level intermediate language for reducible CFGs, it boils down to labeled break/continue. You can either have one type of block and two types of branches (break and continue), or two types of blocks and one type of branch. Wasm chose the latter: the two types of blocks are 'block' and 'loop', where branching to a 'block' is a break and branching to a 'loop' is a continue. That seems like the superior choice if you want to prioritize streaming compilers, as Wasm does.
@pervognsen I see what you mean: a labeled continue is the only way to express a backward jump. Yeah, I had in mind a special "loop forever" (until break to a surrounding block) block type.
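The two Wasm block types can be sketched in Rust, which happens to have both labeled blocks and labeled loops (a rough analogy, not a literal lowering):

```rust
// Count to 3 using a labeled block ('b ≈ Wasm `block`: branches to it
// jump forward, past the end) and a labeled loop ('l ≈ Wasm `loop`:
// branches to it jump backward, to the header).
fn count_to_three() -> u32 {
    let mut i = 0;
    'b: {
        'l: loop {
            i += 1;
            if i < 3 {
                continue 'l; // like `br` targeting a 'loop': backward jump
            }
            break 'b; // like `br` targeting a 'block': forward jump out
        }
    }
    i
}

fn main() {
    assert_eq!(count_to_three(), 3);
}
```

The "loop forever until break" shape above is exactly the special block type mentioned in the post: the backward edge only exists because the label is a 'loop'.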
Does anyone have a list of references to Joe Seigh's work on proxy collection? I've seen @pkhuong mention it a few times, but the origins of his work seem even harder to track down than some of the early Thomasson and Vyukov lock-free stuff.
This classic blog post resurfaced on the Rust subreddit today after someone posted that they wished ownership had been explained to them with a metaphor about pirate treasure chests.
'But now Joe goes and writes a monad tutorial called “Monads are Burritos,” under the well-intentioned but mistaken assumption that if other people read his magical insight, learning about monads will be a snap for them. “Monads are easy,” Joe writes. “Think of them as burritos.”'
@pervognsen @zwarich I didn't get it at the time, but now I think you have to go wide and hope one example makes things click, and a couple others can serve as counterexamples.
@pervognsen @molecularmusing @dotstdy @wolfpld @foonathan Physicists have it easy because nature makes non-locality hard/weak. Out here in the man-made world, one ship stopping at an embargoed port can instantaneously cause trouble for the rest of the fleet around the globe (not that I was traumatised by liner shipping problems).
I bet linking could scale linearly if there was an inverse square law for correctness ;)
This came up in another thread today, but I figured I'd throw a brief comment onto the timeline. The concept of "grace periods", where you separate the logical and physical deletion of resources, is something you see in RCU, EBR, QSBR, etc., but it's just as useful in single-threaded code where you only have pseudo-concurrency through subroutine calls. Like the age-old pattern of deferring deletion until the end of the game loop, or autorelease pools in Objective-C which get drained by the run loop.
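The game-loop version of that pattern can be sketched like this (names invented for illustration): logical deletion just records the id, and physical deletion happens once per frame at a point where nothing is mid-iteration over the entity list, i.e. the single-threaded grace-period boundary.

```rust
// Hypothetical entity store with two-phase (deferred) deletion.
struct World {
    entities: Vec<u32>,
    dead: Vec<u32>, // logically deleted, awaiting end-of-frame cleanup
}

impl World {
    // Logical deletion: safe to call from code that is currently
    // iterating over `entities`, since nothing is removed yet.
    fn kill(&mut self, id: u32) {
        self.dead.push(id);
    }

    // Physical deletion at a quiescent point: by construction, no
    // frame code still holds a reference into `entities` here.
    fn end_of_frame(&mut self) {
        let dead = std::mem::take(&mut self.dead);
        self.entities.retain(|e| !dead.contains(e));
    }
}

fn main() {
    let mut w = World { entities: vec![1, 2, 3], dead: vec![] };
    w.kill(2); // mid-frame: entity 2 is logically gone
    w.end_of_frame(); // grace period ends: storage actually reclaimed
    assert_eq!(w.entities, vec![1, 3]);
}
```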