I enabled Brotli compression on the CDN which serves the main BBC websites (www.bbc.co.uk, www.bbc.com, etc.) outside the UK this morning.
Over ~4 hours, we're seeing a mean of ~20% better compression (smaller responses) via Brotli & ~95% of responses being Brotli now.
I've not had time to look at performance in detail, but there doesn't look to be a significant change (LMK if you see different!).
(The spikes are breaking news events linking to large "live" pages.) #Brotli #WebDev #BBC
A little update on our enabling of Brotli for www.bbc.co.uk, www.bbc.com etc.
We're seeing compression improvements of roughly 15-40% over gzip: 15% is for HTML alone, 40% is the overall figure. The caveat is that some clients which don't support Brotli request unusual content, so this may be skewed to some degree.
I'll cover an issue which has cropped up in the next post. #WebDev #Brotli #CDN #Compression #BBC
Our stack is: Fastly -> GTM (BBC CDN) -> Belfrage (BBC routing) -> origins for most of our modern web pages.
Currently, only Fastly supports Brotli, the others do gzip, deflate & no compression.
Fastly strips gzip/deflate from the Accept-Encoding header sent to origin, so our layers all return uncompressed content, which means they're using more egress bandwidth. It's not a huge problem for us, but something I thought might be useful for others to know. #WebDev #Brotli #CDN #Compression #BBC
Somehow, we never got round to enabling Brotli compression on www.bbc.co.uk & www.bbc.com, so I am just in the final throes of deploying that.
So far, in ~1 hour on our staging site, I'm seeing ~24% smaller files under Brotli (vs. gzip). 🤞 this (or better) also happens on live, which'll be tomorrow. #WebDev #Compression #Brotli #GZip
"The goal of the project is to develop an efficient #algorithm for compressing voxel models. As an input format, an unsorted list of 3D integer coordinates and attribute data is used. Multiple methods for encoding geometry data including Cuboid Extraction (CE), Sparse #Voxel #Octrees (SVOs) with Space-Filling Curves, and Run-Length Encoding (RLE) are explained and then compared in terms of complexity, #compression ratio, and real-life performance."
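Of the geometry encodings named in that abstract, RLE is the simplest to sketch. A minimal toy illustration (my own code, not the project's implementation), run-length encoding a flat list of voxel values:

```python
def rle_encode(values):
    """Collapse runs of equal values into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back to the original list."""
    return [v for v, n in runs for _ in range(n)]

# A sparse voxel row: long runs of empty (0) cells compress well.
row = [0, 0, 0, 0, 7, 7, 0, 0, 0, 5]
encoded = rle_encode(row)   # [(0, 4), (7, 2), (0, 3), (5, 1)]
assert rle_decode(encoded) == row
```

RLE shines exactly when voxel data is sparse (mostly empty cells), which is why it gets compared against SVOs in that kind of study.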
Apparently the #samsung Galaxy S24's "downloadable" Gallery app (not sure what this means) supports JPEG-XL compression in RAW images!
> Downloadable App
> 1. Expert RAW
> The basic resolution has been improved from 12MP to 24MP, and image quality and tone in low light have been improved through nightography technology collaboration.
> In addition, the Digital ND filter, which was supported as a beta in the previous S23, is now officially provided, and an Auto mode is provided for user convenience.
> Additionally, storage usage has been reduced while maintaining image quality by providing the JPEG XL format.
I think I have pinpointed the folded asphalt that formed in #Grindavik, close south of the small fissure. That is another sign there is some #transtension (extension with a #transform component) going on, the same as the en-echelon steps in the #fissures. Orientation of maximum horizontal #compression is in the order of 050-230. #Iceland
(This is, I think, a silly idea. But sometimes the silliest things lead to unexpected results.)
The text of Shakespeare's Romeo and Juliet is about 146,000 characters long. Thanks to the English language, each character can be represented by a single byte. So a plain Unicode text file of the play is about 142KB.
In Adventures With Compression, JamesG discusses a competition to compress text and poses an interesting thought:
Encoding the text as an image and compressing the image. I would need to use a lossless image compressor, and using RGB would increase the number of values associated with each word. Perhaps if I changed the image to greyscale? Or perhaps that is not worth exploring.
Image compression algorithms are, generally, pretty good at finding patterns in images and squashing them down. So if we convert text to an image, will image compression help?
The English language and its punctuation are not very complicated, so the play only contains 77 unique symbols. ASCII values span from 0 to 127. So let's create a greyscale image in which each pixel has the same greyness as the ASCII value of the corresponding character.
Here's what it looks like when losslessly compressed to a PNG:
That's down to 55KB! About 40% of the size of the original file. It is slightly smaller than ZIP, and about 9 bytes larger than Brotli compression.
The file can be read with the following Python:
from PIL import Image

image = Image.open("ascii_grey.png")
pixels = list(image.getdata())
ascii_text = "".join(chr(pixel) for pixel in pixels)
with open("rj.txt", "w") as file:
    file.write(ascii_text)
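The encoding direction (text → greyscale PNG) isn't shown above; here's a minimal sketch of how it could work with Pillow. The one-row image shape and the round-trip check are my assumptions, not necessarily how the original image was built:

```python
import io
from PIL import Image

text = "But soft, what light through yonder window breaks?"
data = text.encode("ascii")

# Mode "L" = 8-bit greyscale; one pixel per ASCII byte, single row.
image = Image.new("L", (len(data), 1))
image.putdata(list(data))

# Round-trip through lossless PNG to confirm nothing is altered.
buf = io.BytesIO()
image.save(buf, format="PNG", optimize=True)
decoded = Image.open(io.BytesIO(buf.getvalue()))
restored = bytes(decoded.getdata()).decode("ascii")
assert restored == text
```

Because PNG is lossless, every pixel value survives exactly, which is what makes the decode step at the top of this post work.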
But, even with the latest image compression algorithms, it is unlikely to compress much further; the image looks like random noise. Yes, you and I know there is data in there. And a statistician looking for entropy would probably determine that the file contains readable data. But image compressors work in a different realm. They look for solid blocks, or predictable gradients, or other statistical features.
But there you go! A lossless image is a pretty efficient way to compress ASCII text.
Currently I have snapshots going back to mid 2022, which are being pruned according to a schedule, but I've already exceeded 5TB of storage.
I'd like something that's perhaps slightly less convoluted, but also doesn't break the bank. I'd love to use straight ZFS #replication but that is priced out of my budget.
🆕 blog! “What's the smallest file size for a 1 pixel image?”
There are lots of new image compression formats out there. They excel at taking large, complex pictures and algorithmically reducing them to smaller file sizes. All of the comparisons I've seen show how good they are at squashing down big files. I wanted to go the other way. How good are modern codecs at …
Hi everyone! I wanted to introduce Av1an Command Generator, a relatively simple tool I wrote in Zig for generating Av1an commands for #AV1 #encoding based on a limited set of parameters. GitHub link is below:
The bar to entry for understanding effective Av1an scripting is very high simply because there are a lot of well-documented ways to incorrectly set your parameters and very few who know what is psychovisually optimal. I am not one of those few, but because I know members of the AV1 community who've poured great time and effort into researching this kind of stuff, I am able to build from what they've learned. My more advanced tool, rAV1ator CLI, is also based on this research. I hope you enjoy!
I have a process that generates a JSON document (> 1 MB, < 1 GB) once per week. These documents will be pretty similar. Some data will be modified, some will be added.
I'd like to keep all of these documents, in a compressed way, benefiting from the similarities between them, as if I'd compressed a concatenation of all of them, but without having to recompress everything each week.
Ideas? If possible, only using #Python's standard lib.
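One stdlib-only approach worth trying: zlib's preset-dictionary support, where each week's document is compressed using the previous week's as the dictionary, so shared content becomes cheap back-references. A sketch (function names are mine); note zlib only uses the last 32 KB of the dictionary, so the gains shrink as documents grow toward the GB range:

```python
import zlib

def compress_with_prev(doc: bytes, prev: bytes = b"") -> bytes:
    """Compress doc, optionally priming zlib with last week's bytes."""
    c = zlib.compressobj(level=9, zdict=prev) if prev else zlib.compressobj(level=9)
    return c.compress(doc) + c.flush()

def decompress_with_prev(blob: bytes, prev: bytes = b"") -> bytes:
    """Reverse of compress_with_prev; needs the exact same dictionary."""
    d = zlib.decompressobj(zdict=prev) if prev else zlib.decompressobj()
    return d.decompress(blob)

week1 = b'{"users": [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]}'
week2 = b'{"users": [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}, {"id": 3, "name": "carol"}]}'

blob = compress_with_prev(week2, prev=week1)
assert decompress_with_prev(blob, prev=week1) == week2
```

Each blob only needs its predecessor to decode, so you never recompress the archive. For bigger cross-version savings you'd have to step outside the stdlib (e.g. zstd's `--patch-from`), but the zdict trick keeps everything in `zlib`.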
The cost-gain balance of optimizing #shell code for size is completely trash, because maintainable and documented code (even if it's just comments) is more important long-term than saving a few bytes by omitting line breaks...
Especially when #compression like #xz does a better job at making things smaller than any coder could within e.g. #bash alone...
Also I'm not writing #malware that has to obfuscate everything to make #CodeForensics harder...
@bagder have you considered enabling compression by default in curl/libcurl? Given the large number of bots and other automation on the net that make use of it, seems like having that on by default could have a sizable impact on the amount of global network traffic. #curl #libcurl #http #compression #webperf
also #gzip is the lowest common denominator of #compression on #linux, so it does make sense given that #toybox aims to provide a good yet space-efficient userland, even if that means i.e. vi instead of neovim or ne.
And given that #mkroot is a minimum viable product of a complete toybox/linux distro that is able to reproduce itself from source, it's inevitably going to be big.
A "self-hosting" distro that fits on 1440kB is very likely impossible...
Putting this out to the Fediverse in case someone knows what this pipe is. #Plumbing #Help
Our water tank has died 😭 (and the replacement has to be energy efficient 😭 so it is $3400 😭!), but there is also now a pipe that seems to come out of the furnace and drain into the water main. Does anyone know what it's for? Is it for the A/C? We haven't had the A/C on for months, of course. It has started to drip, and I'm wondering what it is so I can tell the plumbers we have another problem when they come Thursday.
Okay. There's one pipe coming out of the furnace a few inches above the floor. That's its condensate drain line.
The other one is from your water heater, and is its condensate drain line. It's joining the other one with a "T" fitting, but it looks like it's some kind of #compression #fitting rather than a "glued" joint, which is what they usually do because it's cheaper and lasts forever.
The pipe is #PVC. It's not actually joined with glue.