My pals in BBC World Service have been doing some awesome work on "lite" versions of their news articles (other page types to follow).
They essentially skip the client-side React hydration, which means you end up with a simpler HTML+CSS page, no JS.
Page sizes drop significantly:
@tonypconway @kravietz Yep, spot on. It might be that this is a trailblazer which prompts the .co.uk stack to adopt it; the initial case is definitely compelling IMO.
Toby mentioned he'd be talking to you folks 👍🏻
@tdp_org This makes me so happy. I still shrink and compress images all the time for email, web, etc. because 99% of the time a huge multi-megabyte image just isn't needed.
PERFORMANCE HERO • per-FAWR-muhns HEER-oh • noun • A person who has made a huge contribution to the #webperf and #ux community, without whom the web would be a sadder, slower place.
Fediverse traffic is pretty bursty and sometimes there will be a large backlog of Activities to send to your server, each of which involves a POST. This can hammer your instance and overwhelm the backend's ability to keep up. Nginx provides a rate-limiting function which can accept POSTs at full speed and proxy them slowly through to your backend at whatever rate you specify.
For example, PieFed has a backend which listens on port 5000. Nginx listens on port 443 for POSTs from outside and sends them through to port 5000:
upstream app_server {
    server 127.0.0.1:5000 fail_timeout=0;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name piefed.social www.piefed.social;

    root /var/www/whatever;

    location / {
        # Proxy all requests to Gunicorn
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app_server;
        ssi off;
    }
To this basic config we need to add rate limiting, using the limit_req_zone directive. Google that for further details.
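Going by the zone name referenced in the location block below and the numbers described next, the directive should look something like this (the zone name "one" is arbitrary):

# keyed per client IP; zone "one" holds up to 100 MB of state; 10 requests/sec
limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;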
This will use up to 100 MB of RAM as a buffer and limit POSTs to 10 per second, per IP address. Adjust as needed. If the sender is using multiple IP addresses the rate limit will not be as effective. Put this directive outside your server {} block, at the http {} level.
Then after our first location / {} block, add a second one that is a copy of the first except with one additional line (and change it to apply to location /inbox or whatever the inbox URL is for your instance):
location /inbox {
    limit_req zone=one burst=300;
#   limit_req_dry_run on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://app_server;
    ssi off;
}
300 is the maximum number of POSTs it will have in the queue. You can use limit_req_dry_run to test the rate limiting without actually doing any limiting; watch the nginx logs for messages while doing a dry run.
It's been a while since I set this up, so please let me know if I missed anything crucial or said something misleading.
I find that if a POST fails to be processed I don't really want the sender to retry anyway; I want them to stop doing it. So if the sender thinks it was successful it's usually not the worst thing in the world.
It would be nice if Nginx responded with an HTTP 202 (Accepted, but not yet processed) when a POST was throttled, and it would be nice if sending fediverse software knew what to do with that info. But I expect this is an edge case that hasn't been dealt with by most.
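If you wanted to experiment with that, stock Nginx can get close: error_page catches errors Nginx generates itself (such as limit_req rejections) without touching responses proxied from the backend. A rough sketch, with an illustrative named location:

location /inbox {
    limit_req zone=one burst=300;
    error_page 503 = @accepted;   # only fires for Nginx's own 503s
    # ...same proxy_set_header / proxy_pass lines as the block above...
    proxy_pass http://app_server;
}

location @accepted {
    return 202;                   # "Accepted": received but not yet processed
}

Whether sending software would treat that 202 any differently to a 200 is another question.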
AFAIK once there are more items in the queue than the burst value (300 in my config) Nginx starts returning HTTP 503, which will cause a retry attempt on some senders (e.g. Lemmy). All other times it returns 200.
So if you wanted to be very careful you could set a tiny burst value (maybe zero??) which would return 503 as soon as the rate limit kicked in.
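In config terms that just means omitting the burst parameter, which defaults to zero. And if 503 isn't the code you want, limit_req_status lets you change it (it only accepts 400-599, so a 2xx like the 202 above still needs the error_page trick). A minimal sketch:

location /inbox {
    limit_req zone=one;        # no burst parameter = queue of zero
    limit_req_status 503;      # the default; 429 is another common choice
    # ...same proxy_set_header / proxy_pass lines as the block above...
    proxy_pass http://app_server;
}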
Another great analysis from @cliff. If your site uses a consent management platform (CMP), it's probably messing with your performance metrics.
Cliff breaks down the five most common issues (which affect all three Core Web Vitals, among other things) and provides some helpful scripting workarounds.
Cookie consent popups and banners are everywhere, and they're silently hurting the speed and UX of your pages. @cliff explains common issues (and solutions) related to measuring performance for consent popups.
Your time is the most precious thing you have. When I talk to customers, one of the best things I hear is how much time they DON'T spend using @speedcurve:
"We actually don't log in much. We've set up performance budgets and deploy testing. We just wait to get alerts and then dive in to fix things."
The best time to set up performance budgets was years ago. The next best time is today.
If you're not using budgets to fight page speed regressions, you're missing a vital tool in your #webperf / #ux toolkit. Here's everything you need to get started.
Web performance budgets help your team fight regressions and create a UX-first culture. Here's a collection of inspiring case studies from companies that use performance budgets to stay fast, including Farfetch, Leroy Merlin, GOV.UK, and Zillow: https://www.speedcurve.com/customers/tag/performance-budgets/
Every year I revisit the topic of web performance budgets. Here's my updated guide, including:
✅ What are performance budgets?
✅ Why are they a crucial tool in fighting page speed regression?
✅ Best metrics to track
✅ Determining thresholds
✅ Pro tips
One thing fast sites have in common: they use performance budgets to fight regressions and deliver a consistently fast experience to their users. Here's a VERY detailed guide that covers:
✅ Budgets vs goals
✅ Which metrics to track (including and beyond Core Web Vitals)
✅ How to set thresholds
✅ Getting stakeholder buy-in
✅ Integrating budgets with your CI/CD process