Idle thought: if you cache the min/max key bounds (in a pair of arrays) along the descending path when you do an ordered search tree descent, you can binary search a correlated query key against the nested bounds in O(log(log(n))) time to find a local root/LCA to descend from. In practice binary search isn't necessary for any realistic n; a fast linear search is better if you can use SIMD. There are no chained data dependencies in the search, unlike restarting the tree descent from the top.
Mostly just found this amusing because it's rare to see log log complexity outside of vEB trees, x-fast/y-fast tries, etc. There the nested logarithm is also because you're binary searching on the depth of a log-depth tree, although in their case it's O(log(log(u))) where u is the size of the universe, not the number of keys in the tree.
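A minimal sketch of the idea in Python, against a plain BST (all the names and the build-from-sorted-list construction are mine, purely for illustration):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def build(keys):
    """Balanced BST from sorted keys (stand-in for a real search tree)."""
    if not keys:
        return None
    m = len(keys) // 2
    return Node(keys[m], build(keys[:m]), build(keys[m + 1:]))

def descend(root, key):
    """Standard descent for `key`, caching (lo, hi, node) at each level.

    The open intervals (lo, hi) are nested: each level's interval is
    contained in its parent's, so they're sorted along the path."""
    lo, hi = float('-inf'), float('inf')
    path, node = [], root
    while node is not None:
        path.append((lo, hi, node))
        if key < node.key:
            hi, node = node.key, node.left
        elif key > node.key:
            lo, node = node.key, node.right
        else:
            break
    return path

def local_root(path, key):
    """Deepest level on the cached path whose interval contains `key`.

    "Interval contains key" holds for a prefix of the path (intervals only
    shrink going down), so binary search applies: O(log log n) probes for
    a balanced tree of n keys."""
    lo_i, hi_i = 0, len(path) - 1
    while lo_i < hi_i:
        mid = (lo_i + hi_i + 1) // 2
        b_lo, b_hi, _ = path[mid]
        if b_lo < key < b_hi:
            lo_i = mid
        else:
            hi_i = mid - 1
    return path[lo_i][2]

def find(node, key):
    """Ordinary BST search, restartable from any subtree root."""
    while node is not None and node.key != key:
        node = node.left if key < node.key else node.right
    return node

root = build(list(range(100)))
path = descend(root, 13)       # first query caches the bounds
start = local_root(path, 17)   # correlated second query restarts here
assert find(start, 17).key == 17
```

The local root found this way is exactly the deepest node the two search paths share, so restarting from it is always safe.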
Every once in a while there's a bug on YouTube where I get the comments for the wrong video and it's never not hilarious. Anyone else see this occasionally?
In this case I got the comments from a J Dilla track applied to a technical podcast. Sample comments:
"I listened to this on mushrooms, closed my eyes, and laid back on the couch."
"I recently found out that a vinyl can be made out of your ashes."
"His shit really have me ballin like a lil girl."
Must be one hell of a tech podcast to elicit those responses.
For people who've been around much longer, have there been any retrospectives on Rust's decision to allow panics to unwind rather than abort? I've mostly come to terms with it in a practical sense, but it's something that really "infects" the language and library ecosystem at a deep level: e.g. fn(&mut T) isn't "the same" as fn(T) -> T, and it's especially troublesome if you're writing unsafe library code and dynamically calling code through closures or traits that could potentially panic.
@dotstdy@glaebhoerl@foonathan@pkhuong I was thinking the other day about subprocess/sandbox isolation in the context of fallible allocations, too. There are at least two kinds of fallibility: the kind required to compose allocators (low-level), which doesn't infect the whole system; and the kind required to recover from allocation failure "anywhere", which does infect most of the system. The latter is really the kind of thing where you want something closer to sandboxing, IMO.
@dotstdy@glaebhoerl@foonathan@pkhuong I realize that sandboxing (I'm intentionally using the term very loosely) isn't always feasible and so approximating the properties of sandboxing with unwind recovery, best-effort side effect isolation, etc, can be a valid and necessary alternative. But it does feel like it really wants to be a form of sandboxing. And if fault isolation and functional correctness were both of the highest importance for me, I'd probably want actual sandboxing.
Do Apple-exclusive (or Apple-mostly) developers like their tech docs, etc? I always feel like I have a hard time finding anything I want. So much info is buried in WWDC presentations, their own docs are hard to navigate and often don't have what I want, and to top it off they also seem to have poor SEO juice so it's hard to use third-party search engines to find stuff (most of the time I find random blog posts instead).
@pervognsen My sense is that the Apple-exclusive devs dislike the docs intensely. I recall a community project to catalog the quality of the docs for all the different major APIs, and most were poor or nonexistent.
Hundreds of rooks have moved into some trees outside my mom's apartment since I arrived. They start cawing around 4 AM. It's now barely 5 AM and I am wide awake.
The loud, constant cawing for 18 hours a day is a bit much, but a few hours into each day you start to get used to it, and at this point I'm also feeling less paranoid about them planning a Hitchcock's The Birds-style massacre.
@pervognsen@gfxstrand good piece, i think i remember coming across it last year after nvidia released a linux driver update that made games unplayable in wayland for me (and many others, from the sound of it) with the rationale "well, to really do this right we need explicit sync, so we'll just pull the ripcord on getting that through the whole standards process". this week they finally released a beta driver that supports it and fixes wayland on their cards.
@madmoose Yeah, I found out my browser had been misconfigured (started using a new one recently) and wasn't using my normal DNS settings for some reason. Everything seems fine now.
@pervognsen Ah interesting, I wonder if that's how they do it for C++ as well. The find-usages feature in ReSharper is vastly better than the version in VAX, because it's actually accurate. But it's still extremely slow when you ask for the callers of a function called "update", and if they're using unindexed text filtering as the primary search, that would perhaps explain the slowness. (In the context of millions of lines of game engine and game code where everyone calls their method Update anyway.)
Some obvious limitations of this approach: common identifiers produce way more false positives in the initial grep stage. A saving grace is that a real implementation of this idea can apply some basic scope and reachability filtering so you're not just blindly grepping everything in the known universe. Lexical scopes can be locally analyzed. And it's easy for most sane languages to compute a reasonably precise but still conservative (hence safe) approximation of source file reachability.
@pervognsen@rovarma ... except that after the release of WinUI 3, "WinUI 2" was sorta retconned to mean the entire UWP XAML stack, parallel to WinUI 3, not just the controls library on top of it.
@thomask77 Heh, I just looked up a demo video and it still looks like an ancient Windows 95 app (not that I care about that), so it's pretty funny that Windows 11 is required. Not sure why they upped the requirement. :)
I've never fully worked out how best to articulate my dissatisfaction with the usual way people talk about pluggable allocators in systems programming. Sure, I'd like to have some standard for fallible, pluggable allocation at the lower level of a language's standard library. But the entire mindset of plugging together allocators and data structures is something I find dubious and at best it feels like a poor compromise.
I assume this isn't a problem for EEs, but for CS types who are taught logic gates, etc, in their curriculum, I wonder if timing should be included in a first course. I'm still trying to help the person I mentioned earlier in a private chat and it sounds like that's the source of almost all their confusion. They think logic gates are instant, and one of the "counterexamples" they came up with for why delays seem logically inconsistent is y = xor(x, not(x)). Which is a standard edge detector.
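For concreteness, here's roughly what that circuit does once the NOT gate gets a delay (a tiny sketch with a one-tick delay model of my own choosing):

```python
def edge_detect(xs):
    """y = xor(x, not(x)) with a one-tick delay on the NOT gate.

    In delay-free Boolean algebra this is identically 1. With the delay,
    the XOR briefly sees the *old* inverted value right after x flips, so
    the output dips to 0 for one tick on every edge of x."""
    ys = []
    not_out = 1 - xs[0]          # assume the circuit settled before t=0
    for x in xs:
        ys.append(x ^ not_out)   # XOR modeled as instantaneous
        not_out = 1 - x          # NOT's new output is visible next tick
    return ys

# steady input -> steady 1; each transition -> a one-tick 0 pulse
assert edge_detect([0, 0, 1, 1, 1, 0, 0]) == [1, 1, 0, 1, 1, 0, 1]
```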
@abecedarius Yeah, I mean, I figured this out myself eventually. But even when constructivist-style learning is the goal (that's my own preference), you can lay out a more optimal path (e.g. guided projects, exercises, examples, etc, intended to maximize opportunities for self-made discoveries and insights) than just letting people get stuck with wrong mental models. There's only a finite amount of time, and letting people figure out everything "the hard way" isn't ideal either.
@abecedarius I think programming is a uniquely good tool for stress testing your mental models across many different domains, though. You can hand-wave a lot of stuff to the point where you can fool yourself and others but once you're forced to implement something (e.g. a simulator for asynchronous digital circuits with combinational loops) you have to put it to a much harder test.
I was trying to help someone on reddit yesterday who was writing their first digital logic simulator and got confused about how to handle combinational loops when simulating the internals of an SR latch. I wish someone had explained it to me in terms of discrete-time state vectors instead of directly jumping into event-driven simulation, which is mostly just a sparse optimization of the state vector approach. https://gist.github.com/pervognsen/78b2baeac7f6ffba9fd6b41b6e6db284
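A minimal sketch of the state-vector view for the SR latch (my own toy model, not the code from the gist): the state vector is just the outputs of both NOR gates, and every step recomputes all gates from the previous vector, i.e. each gate gets one tick of delay.

```python
# NOR-based SR latch: q = nor(r, qbar), qbar = nor(s, q).

def step(state, s, r):
    """Recompute every gate from the *previous* state vector."""
    q, qbar = state
    return (int(not (r or qbar)), int(not (s or q)))

def settle(state, s, r, max_steps=10):
    """Iterate the state vector until it stops changing.

    The combinational loop is no problem: the vector either converges to
    a fixed point or oscillates, in which case we give up after max_steps
    and report the last state."""
    for _ in range(max_steps):
        nxt = step(state, s, r)
        if nxt == state:
            return state
        state = nxt
    return state

assert settle((0, 1), s=1, r=0) == (1, 0)   # set
assert settle((1, 0), s=0, r=0) == (1, 0)   # hold
assert settle((1, 0), s=0, r=1) == (0, 1)   # reset
```

The "forbidden" S=R=1 input and the oscillation when both are released at once fall out of the same iteration instead of needing special-case reasoning.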
Incidentally, if you do want to write an event-driven circuit simulator, there's an interesting analogy with event-driven IO in systems programming about readiness vs completion signaling. The readiness approach is IMHO the easiest to get right since it embraces the notion that event-driven simulation is an optimization. So you conservatively schedule pull-based node re-evaluations but you don't try to push new node values into the future.
(Event-driven simulation can also more easily handle non-discrete-time systems with discrete state changes, which is at least one sense in which it's not merely an optimization of the discrete-time state vector approach.)
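Here's roughly what the readiness-style scheduler looks like as a toy (names and structure are mine): we only ever record *which* nodes need re-evaluation, and a woken node pulls its inputs' current values, so a redundant wakeup is harmless.

```python
from collections import deque

def evaluate(gates, fanout, values):
    """Readiness-style event-driven evaluation.

    gates:  node -> function computing the node's value from `values`
    fanout: node -> nodes that read this node's output
    values: node -> current value (mutated in place)

    No speculative future values are pushed; scheduling a node just means
    "re-check this one", which is why it stays an optimization of the
    recompute-everything state-vector approach."""
    dirty = deque(gates)               # conservatively start with all gates
    queued = set(gates)
    while dirty:
        node = dirty.popleft()
        queued.discard(node)
        new = gates[node](values)      # pull-based re-evaluation
        if new != values[node]:
            values[node] = new
            for succ in fanout.get(node, ()):
                if succ in gates and succ not in queued:
                    dirty.append(succ)
                    queued.add(succ)
    return values

# The SR latch again: the combinational loop just means q and qbar keep
# waking each other until the values stop changing.
gates = {
    'q':    lambda v: int(not (v['r'] or v['qbar'])),
    'qbar': lambda v: int(not (v['s'] or v['q'])),
}
fanout = {'q': ['qbar'], 'qbar': ['q'], 's': ['qbar'], 'r': ['q']}
values = {'s': 1, 'r': 0, 'q': 0, 'qbar': 1}
assert evaluate(gates, fanout, values)['q'] == 1
```

Caveat: an oscillating loop would spin forever here; a real simulator bounds the work per timestep, just as the state-vector iteration does.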