interesting problem: progressively mapping a cosmically large number of unique strings of arbitrary length to an ordered set, so that we can assign an index to each string, extract the string from each index, and filter out strings not in the set.
evidently, this approach requires compression. the compressed result is functionally equivalent to a regular expression, or to a schema validation system.
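a toy sketch of the regular-expression case (the language, the names, everything here is made up for illustration): if the set is a regular language, counting DFA paths gives exactly that index-assign / extract / filter trio.

// Toy instance: the "set" is the regular language of binary strings of
// length n with no "11". Counting accepting DFA paths gives a bijection
// between members and indices [0, count), plus a membership filter.
#include <cstdint>
#include <cstdio>
#include <string>

// DFA states: 0 = last char was '0' (or start), 1 = last char was '1'.
// count_from(s, len) = number of valid suffixes of length len from state s.
static uint64_t count_from(int s, int len) {
    if (len == 0) return 1;
    return (s == 0) ? count_from(0, len - 1) + count_from(1, len - 1)
                    : count_from(0, len - 1); // from state 1 only '0' is legal
}

// filter: is the string in the set at all? (assumes a 0/1 string)
static bool valid(const std::string& str) {
    for (size_t i = 1; i < str.size(); ++i)
        if (str[i - 1] == '1' && str[i] == '1') return false;
    return true;
}

// string -> index (lexicographic rank among valid strings of its length;
// assumes the input is valid)
static uint64_t rank(const std::string& str) {
    uint64_t r = 0;
    for (size_t i = 0; i < str.size(); ++i)
        if (str[i] == '1') r += count_from(0, (int)(str.size() - i - 1));
    return r;
}

// index -> string (inverse of rank)
static std::string unrank(uint64_t r, int n) {
    std::string out; int s = 0;
    for (int i = 0; i < n; ++i) {
        uint64_t zeros = count_from(0, n - i - 1); // strings continuing with '0'
        if (s == 0 && r >= zeros) { out += '1'; r -= zeros; s = 1; }
        else                      { out += '0'; s = 0; }
    }
    return out;
}

int main() {
    for (uint64_t i = 0; i < count_from(0, 3); ++i)
        printf("%llu -> %s\n", (unsigned long long)i, unrank(i, 3).c_str());
    printf("rank(101)=%llu valid(110)=%d\n",
           (unsigned long long)rank("101"), valid("110")); // 4, 0
}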
On Twitter, @sebbbi mentions a cheap approximation of sin(πx) on [-1, 1]:
sin(π x)/4 ≈ x - x |x|
Normally, if we want a two-term approximation of an odd function, we're stuck with the polynomial form:
a(x) = c₁ x + c₃ x³
but if we move away from pure polynomials, another two-term form is possible:
b(x) = k₁ x + k₂ x |x|
and this generalizes to odd/even function approximations with any number of terms: fill in the otherwise-skipped powers using abs, which splits each power into two subexpressions, with and without abs (e.g. for odd functions: x, x|x|, x³, x³|x|, …).
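quick sanity check of the two-term abs form (a minimal C++ sketch, names mine):

// Compare sin(pi*x) against the two-term form 4*(x - x*|x|) on [-1, 1].
// The factor 4 just undoes the /4 in the formulation above.
#include <cmath>
#include <cstdio>

static float sin_pi_approx(float x) { return 4.0f * (x - x * std::fabs(x)); }

int main() {
    const float PI = 3.14159265358979f;
    float max_err = 0.0f;
    for (int i = -1000; i <= 1000; ++i) {
        float x = i / 1000.0f;
        max_err = std::fmax(max_err, std::fabs(sin_pi_approx(x) - std::sin(PI * x)));
    }
    printf("max abs error on [-1,1]: %f\n", max_err); // roughly 0.056
}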
Hey @mbr, I remember you did that LDS thing with integers, where you had an integer representation of Phi.
How did you calculate that integer version of Phi?
Did you just multiply phi by the maximum value the int could represent and convert to int (floor/round)?
@demofox So for [0,N) where N is now not a power of two: compute t = NΦ, then find d, the nearest integer to t that is coprime with N? d is the additive constant, and the iteration visits all the elements. Sounds right to me.
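a minimal sketch of that recipe (C++, all names mine; the coprime search is just the obvious scan outward from t):

// Additive-recurrence LDS over [0, N): d = nearest integer to N*phi that is
// coprime with N; x_{k+1} = (x_k + d) mod N then visits every element once.
#include <cstdint>
#include <cstdio>
#include <numeric> // std::gcd (C++17)

static uint32_t golden_step(uint32_t N) {
    const double phi = 1.6180339887498948; // (1 + sqrt(5)) / 2
    uint32_t t = (uint32_t)((uint64_t)(N * phi + 0.5) % N); // round, reduce mod N
    for (uint32_t delta = 0; delta < N; ++delta) { // scan outward for a coprime
        uint32_t up   = (uint32_t)(((uint64_t)t + delta) % N);
        uint32_t down = (uint32_t)(((uint64_t)t + N - delta) % N);
        if (std::gcd(N, up) == 1) return up;
        if (std::gcd(N, down) == 1) return down;
    }
    return 1; // unreachable: 1 is always coprime with N
}

int main() {
    uint32_t N = 10, d = golden_step(N), x = 0;
    for (uint32_t i = 0; i < N; ++i) { printf("%u ", x); x = (x + d) % N; }
    printf("\n"); // a permutation of 0..9 (d = 7 here)
}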
For folks who know me as "the blue noise guy": I've put together a 50-minute video covering many of the things I've learned in my ~decade-long dive into noise and related topics - up to and including our latest paper, published days ago at I3D.
I hope you enjoy it! https://www.youtube.com/watch?v=tethAU66xaA
@demofox @dougmerritt To expand: with pseudo-random (or LDS) sequences you have a set of known properties. And the fact that you can repeat exactly the same sequences (or a subset thereof) is of great value. Real randomness is a niche thing that has only limited usefulness in "security", IMHO.
I don't know if I hallucinated this but maybe someone recognises it:
I'm sure I once (~20 years ago) saw an arbitrary-precision real number library for C or C++ that picked some fixed precision and worked until it produced a result at the required precision or, if it convinced itself it couldn't achieve that precision, did some kind of backtracking so it could redo the computation at a higher fixed precision. Somewhat analogous to how transactional memory works - and I think under the hood there may have been an unusual memory model.
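(the description sounds a lot like iRRAM, for what it's worth.) a toy sketch of just the restart strategy, where "precision" is merely the number of series terms and exp_at is a made-up stand-in for a real error-tracked operation:

// Run the whole computation at a fixed working precision with an error
// bound; if the bound is too wide for the requested tolerance, throw the
// result away and redo everything at higher precision.
#include <cmath>
#include <cstdio>

struct Bounded { double value; double err; }; // result plus error bound

// exp(x) via Taylor series with `terms` terms; crude remainder bound for |x| <= 1.
static Bounded exp_at(double x, int terms) {
    double sum = 0.0, term = 1.0;
    for (int n = 0; n < terms; ++n) { sum += term; term *= x / (n + 1); }
    return { sum, std::fabs(term) * 3.0 };
}

// Backtracking driver: retry at doubled "precision" until the bound fits.
static double eval(double x, double tol) {
    for (int terms = 4;; terms *= 2) {
        Bounded r = exp_at(x, terms);
        if (r.err <= tol) return r.value; // required precision achieved
        // else: discard everything and restart at higher precision
    }
}

int main() { printf("%.12f\n", eval(1.0, 1e-10)); } // 2.718281828459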
@pervognsen @dubroy I don't know what either GCC or Clang is doing today, but converting an integer multiply by a constant to a shift/add/sub sequence can be handled as a mini VM.
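a sketch of the mini-VM idea (the encoding and op names are made up): the strength-reduced multiply becomes a tiny program over shift/add/sub ops, and a trivial interpreter checks it against x*K.

// acc starts at x; each op mutates acc. A multiply-by-constant becomes a
// short program, e.g. x*10 = ((x<<2) + x) << 1 and x*7 = (x<<3) - x.
#include <cstdint>
#include <cstdio>
#include <vector>

enum class Op { ShlAcc, AddX, SubX };
struct Insn { Op op; int k; }; // k is only used by ShlAcc

static uint32_t run(const std::vector<Insn>& prog, uint32_t x) {
    uint32_t acc = x;
    for (const Insn& i : prog) {
        switch (i.op) {
            case Op::ShlAcc: acc <<= i.k; break;
            case Op::AddX:   acc += x;   break;
            case Op::SubX:   acc -= x;   break;
        }
    }
    return acc;
}

int main() {
    std::vector<Insn> times10 = { {Op::ShlAcc, 2}, {Op::AddX, 0}, {Op::ShlAcc, 1} };
    std::vector<Insn> times7  = { {Op::ShlAcc, 3}, {Op::SubX, 0} };
    printf("%u %u\n", run(times10, 3), run(times7, 3)); // 30 21
}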
I really think my employers should be able to get compensation from Microsoft for the NUMBER OF TIMES THEY UPDATE WINDOWS, LOSING ALL MY CURRENT DEBUGGING STATE AND FUCKING MY DAY UP.
Ranty shout over, but jeez Louise, it'll take me a good hour this morning just to get all the tools set up the right way. SIGH.
If you enjoyed the visuals in the Dune 2 movie and want to try it out at grand scale, the Great Sand Dunes (CO), at 230m tall, are a worthy stop. Just be careful: the nearby towns' entire GDP comes from speeding tickets, and they will go as low as ticketing you for speeding during a lane-crossing pass.
I'm in the process of completely redesigning my blog. I used Jekyll and tbh I never liked it - I often needed to use raw html for some stuff anyway. Now transitioning to a pure html+css+js site, like some caveman :)
@thomastc Aside: the other (very large scale) option is breaking everything up into local coordinate systems. This way coordinates can remain in single precision (or fixed point for that matter).
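a minimal sketch of that layout (cell size and names are arbitrary):

// Position = integer cell + single-precision offset within the cell (shown
// for one axis). The offset stays small, so float precision never degrades
// no matter how far from the origin the cell is.
#include <cmath>
#include <cstdint>
#include <cstdio>

constexpr float CELL_SIZE = 1024.0f;

struct Position {
    int32_t cell;  // which local frame we're in
    float   local; // offset inside that frame, kept in [0, CELL_SIZE)
};

// Move whole cells out of `local` and into `cell`.
static void rebase(Position& p) {
    float shift = std::floor(p.local / CELL_SIZE);
    p.cell  += (int32_t)shift;
    p.local -= shift * CELL_SIZE;
}

// Relative offset between nearby positions is exact in float.
static float rel(const Position& a, const Position& b) {
    return (float)(a.cell - b.cell) * CELL_SIZE + (a.local - b.local);
}

int main() {
    Position p{ 100000, 0.0f }; // ~100 million units from the origin
    p.local += 1500.0f;
    rebase(p);
    printf("cell=%d local=%.3f rel=%.3f\n",
           p.cell, p.local, rel(p, Position{ 100000, 0.0f })); // 100001 476.000 1500.000
}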
I have to say, the voice acting in bg3 is among the best I've seen (well, heard)
and man this game is taking a long time to finish
how do some people finish this multiple times?!
On uint128 (sigh…): it's kind of weird for the Rust u128 ABI post[*] to suggest that 16-byte-aligned u128 is more efficient on x86-64 because of the architecture. There is no architectural support yet for memory operations on qword pairs. The benchmark counts instructions, so I guess maybe some copies were SSE'd? I suppose we should align all 2-, 4-, 8-, and 16-byte structs to their size on x86-64, because architecture… (and any real perf degradation from a narrow-to-wide store-forwarding failure wouldn't show up in an instruction count anyway: https://gist.github.com/travisdowns/bc9af3a0aaca5399824cf182f85b9e9c)
[*]: "part of the reason for an ABI to specify the alignment of a datatype is because it is more efficient on that architecture. We actually got to see that firsthand: the initial performance run with the manual alignment change showed nontrivial improvements to compiler performance (which relies heavily on 128-bit integers to work with integer literals)." https://blog.rust-lang.org/2024/03/30/i128-layout-update.html
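for reference, a sketch of the two layouts in question, assuming GCC/Clang on x86-64 with unsigned __int128 standing in for Rust's u128 (the attribute spelling is GCC/Clang-specific):

// 16-byte vs 8-byte alignment for a 128-bit integer. With natural alignment
// the compiler can copy a value with one aligned 16-byte vector move; at
// 8-byte alignment it may emit two 8-byte moves or an unaligned vector move,
// depending on the compiler. Either way it's an SSE codegen difference, not
// architectural 128-bit memory support.
#include <cstdio>

struct Aligned16 { unsigned __int128 v; }; // alignof == 16 (SysV default)
struct __attribute__((packed, aligned(8))) Aligned8 { unsigned __int128 v; }; // alignof == 8

void copy16(Aligned16* d, const Aligned16* s) { *d = *s; }
void copy8(Aligned8* d, const Aligned8* s) { *d = *s; }

int main() {
    printf("%zu %zu\n", alignof(Aligned16), alignof(Aligned8)); // 16 8
}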