My pals at BBC World Service have been doing some awesome work on "lite" versions of their news articles (other page types to follow).
They essentially skip the client-side React hydration, which means you end up with a simpler HTML+CSS page with no JS.
Page sizes drop significantly:
Google provides a tool called PageSpeed Insights, which gives a website some metrics to assess how well it is put together and how fast it loads. There are a lot of technical details, but in general green scores are good, orange is not great, and red is bad.
I tried to ensure the tests were similar for each platform by choosing a page that shows a list of posts, like https://mastodon.social/explore.
The rest don’t seem to have prioritized performance, or they chose a software architecture that can’t be made to perform well on these metrics. It will be very interesting to see how that affects the cost of running large instances and the longevity of the platforms. Time will tell.
PERFORMANCE HERO • per-FAWR-muhns HEER-oh • noun • A person who has made a huge contribution to the #webperf and #ux community, without whom the web would be a sadder, slower place.
Fediverse traffic is pretty bursty and sometimes there will be a large backlog of Activities to send to your server, each of which involves a POST. This can hammer your instance and overwhelm the backend’s ability to keep up. Nginx provides a rate-limiting function which can accept POSTs at full speed and proxy them slowly through to your backend at whatever rate you specify.
For example, PieFed has a backend which listens on port 5000. Nginx listens on port 443 for POSTs from outside and sends them through to port 5000:
upstream app_server {
    server 127.0.0.1:5000 fail_timeout=0;
}
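For context, here is a minimal sketch of the matching server {} block — the server name and certificate paths are placeholders, and a real instance will have more directives than this:

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder — your instance's domain

    # placeholder certificate paths
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # hand everything off to the backend defined in the upstream block
        proxy_pass http://app_server;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```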
This will use up to 100 MB of RAM as a buffer and limit POSTs to 10 per second, per IP address. Adjust as needed. If the sender is using multiple IP addresses the rate limit will not be as effective. Put this directive outside your server {} block.
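The directive being described is limit_req_zone. A sketch matching the numbers above — the zone name piefed_inbox is my own invention, so use whatever name you like:

```nginx
# Goes in the http {} context, outside any server {} block.
# $binary_remote_addr keys the limit per client IP address,
# the 100m shared-memory zone holds the per-IP state,
# and 10r/s is the sustained rate allowed per IP.
limit_req_zone $binary_remote_addr zone=piefed_inbox:100m rate=10r/s;
```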
Then after our first location / {} block, add a second one that is a copy of the first except with one additional line (and change it to apply to location /inbox or whatever the inbox URL is for your instance):
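A sketch of that second location block, assuming the zone was named piefed_inbox and the inbox lives at /inbox — copy the contents of your real location / {} block rather than mine:

```nginx
location /inbox {
    # The one additional line: queue up to 300 excess POSTs
    # and release them to the backend at the zone's rate.
    limit_req zone=piefed_inbox burst=300;
    # limit_req_dry_run on;  # uncomment to log what would be limited, without limiting

    proxy_pass http://app_server;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Note there is no nodelay parameter, so queued requests are trickled through at the configured rate — which is exactly the "accept at full speed, proxy through slowly" behaviour described above.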
300 is the maximum number of POSTs it will have in the queue. You can use limit_req_dry_run to test the rate limiting without actually doing any limiting – watch the nginx logs for messages while doing a dry run.
It’s been a while since I set this up, so please let me know if I left anything crucial out or said something misleading.
Another great analysis from @cliff. If your site uses a consent management platform (CMP), it's probably messing with your performance metrics.
Cliff breaks down the five most common issues – which affect all three Core Web Vitals, among other things – and he also provides some helpful scripting workarounds.
Cookie consent popups and banners are everywhere – and they're silently hurting the speed and UX of your pages. @cliff explains common issues – and solutions – related to measuring performance for consent popups.
The best time to set up performance budgets was years ago. The next best time is today.
If you're not using budgets to fight page speed regressions, you're missing a vital tool in your #webperf / #ux toolkit. Here's everything you need to get started.
Your time is the most precious thing you have. When I talk to customers, one of the best things I hear is how much time they DON'T spend using @speedcurve:
"We actually don't log in much. We've set up performance budgets and deploy testing. We just wait to get alerts and then dive in to fix things."
Web performance budgets help your team fight regressions and create a UX-first culture. Here's a collection of inspiring case studies from companies – like Farfetch, Leroy Merlin, GOV.UK, and Zillow – that use performance budgets to stay fast: https://www.speedcurve.com/customers/tag/performance-budgets/
Every year I revisit the topic of web performance budgets. Here's my updated guide, including:
✅ What are performance budgets?
✅ Why are they a crucial tool in fighting page speed regression?
✅ Best metrics to track
✅ Determining thresholds
✅ Pro tips
One thing fast sites have in common: they use performance budgets to fight regressions and deliver a consistently fast experience to their users. Here's a VERY detailed guide that covers:
⭐ Budgets vs goals
⭐ Which metrics to track (including and beyond Core Web Vitals)
⭐ How to set thresholds
⭐ Getting stakeholder buy-in
⭐ Integrating budgets with your CI/CD process
"Putting up the appropriate guardrails to protect ourselves from regressions – then pairing that with a trail of breadcrumbs so that we can dive in and quickly identify what the source is when a regression occurs – are essential steps to ensure that when we make our sites fast, they stay that way."
Awesome post by @tkadlec, filled with high-level wisdom and ground-level best practices.
Hilarious! My colleague Fabian Krumbholz just landed 🚀 an excellent explanation of Google's new metric of responsiveness, Interaction to Next Paint (INP), and why it's important. We in the #webperformance community are super excited about the metric and have been working hard to measure it, optimize it, and help people make their sites more responsive to user interaction. https://www.inp-check.com
Keeping your site fast is a crucial and endless game. Yet it's perilously easy to lose focus and suffer from regressions. In this excellent, detailed, best-practice-filled post, @tkadlec uses the analogy of guardrails (automated testing & alerts) and breadcrumbs (deployment tracking) to make performance more visible throughout the dev process.