@wb@RL_Dane Does JXL not have any directional prediction or shared data between blocks? How does it handle generation loss so well?
I assumed the VarDCT built on every coding technique that previous DCT variants used, although I’m not well versed in what those were & I may be off base
Yeah, early JPEG was pretty exciting. To my knowledge, it was the first GOOD lossy algorithm of any kind -- QuickTime had lossy video codecs, but they were super simple: selectively updating the screen by rather large blocks, but no complex math or DCT AFAIK.
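(For anyone curious what the "complex math" actually is: the DCT at JPEG's core is just a change of basis that concentrates a block's energy into a few low-frequency coefficients, which then quantize cheaply. A naive 1-D DCT-II sketch in Python — the function name is mine, and real codecs use fast factored versions on 8x8 blocks, not this O(n²) loop:)

```python
import math

def dct2(signal):
    """Naive 1-D DCT-II (unscaled): each output coefficient is the
    signal's correlation with a cosine of increasing frequency."""
    n = len(signal)
    return [
        sum(x * math.cos(math.pi * (i + 0.5) * k / n)
            for i, x in enumerate(signal))
        for k in range(n)
    ]

# A flat signal lands entirely in the DC (k=0) coefficient --
# that energy compaction is what makes quantization so effective.
coeffs = dct2([1.0, 1.0, 1.0, 1.0])
```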
There were lossy audio codecs for voice, but they were pretty rudimentary. Nothing good for full-spectrum audio (music) until MP3.
I remember waiting minutes to decode a small JPEG on my 8 MHz Mac SE. And of course, trying to compress 1-bit images was a miserable experience, so I didn't use it much until I finally got a color machine in '94 ;)
Ahh, that makes sense. I wouldn't have thought that encoders would be more economical (because of increased complexity of encoding vs decoding), but the point about not having to support such a wide range of possible algorithms makes a lot of sense.
That also explains why cameras record to relatively high bitrates -- not just for best possible quality, but possibly also so that the encoding can be a bit simpler/"faster."
The camera wasn't actually digital, it recorded onto a video disc, but it got digitized when you connected it to your computer for "upload." IIRC, you actually needed a video capture card to digitize the photos, but I can't remember for sure. ^___^
That definitely explains why a phone can encode H.265 in realtime while really aggressive settings on an hour-long video can take a day to transcode on even a fast CPU. :D
It's so sad.
I'm imagining what smartphones could be like today if the iToy never happened.
Physical buttons, extensive and flexible mobile OSes, non-stupefied userbase.
And most of all, no destruction of desktop UIs to make it look like a 23" iFad.