speedcurve, to UX
@speedcurve@webperf.social avatar

ICYMI: Our latest release includes RUM attribution and subparts for INP! @cliff explains:

🟡 Element attribution

🟢 Where time is spent within INP, leveraging subparts

🔴 How to use this information to fix INP issues

https://www.speedcurve.com/blog/rum-attribution-subparts-interaction-to-next-paint/

#webperf #ux #corewebvitals #sitespeed #pagespeed #webperformance

tdp_org, to webdev
@tdp_org@mastodon.social avatar

My pals in BBC World Service have been doing some awesome work on "lite" versions of their news articles (other page types to follow).
They essentially skip the Server-Side React hydration which means you end up with a simpler HTML+CSS page, no JS.
Page sizes drop significantly:

Screenshot of a BBC World Service Mundo "lite" page with Dev Tools open showing bytes transferred and total as stated

tdp_org,
@tdp_org@mastodon.social avatar

@tonypconway @kravietz Yep, spot on. It might be that this is a trailblazer which prompts the .co.uk stack to adopt it, the initial case is definitely compelling IMO.
Toby mentioned he'd be talking to you folks 👍🏻

rasterweb,
@rasterweb@mastodon.social avatar

@tdp_org This makes me so happy. I still shrink and compress images all the time for email, web, etc. because 99% of the time a huge multi-megabyte image just isn't needed.

tammy, (edited) to accessibility
@tammy@webperf.social avatar

There's more to web performance than page speed. A performant, usable site needs to be:

⏱️ Fast
✋ Accessible

So happy to see my Complete Guide to Performance Budgets included in the latest issue of A11y Weekly, alongside some fantastic other resources!

https://a11yweekly.com/issue/394/

speedcurve, to random
@speedcurve@webperf.social avatar

NEW: Our latest release includes RUM attribution and subparts for INP! @cliff explains:

🟡 Element attribution
🟢 Where time is spent within INP, leveraging subparts
🔴 How to use this information to fix INP issues

https://www.speedcurve.com/blog/rum-attribution-subparts-interaction-to-next-paint/

tammy, to UX
@tammy@webperf.social avatar

PERFORMANCE HERO • per-FAWR-muhns HEER-oh • noun • A person who has made a huge contribution to the web performance community, without whom the web would be a sadder, slower place.

Celebrating our inaugural @speedcurve Performance Hero, @paulcalvano!

https://www.speedcurve.com/blog/web-performance-hero-paul-calvano/

piefedadmin, to random
@piefedadmin@join.piefed.social avatar

Fediverse traffic is pretty bursty and sometimes there will be a large backlog of Activities to send to your server, each of which involves a POST. This can hammer your instance and overwhelm the backend’s ability to keep up. Nginx provides a rate-limiting function which can accept POSTs at full speed and proxy them slowly through to your backend at whatever rate you specify.

For example, PieFed has a backend which listens on port 5000. Nginx listens on port 443 for POSTs from outside and sends them through to port 5000:

upstream app_server {
    server 127.0.0.1:5000 fail_timeout=0;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name piefed.social www.piefed.social;
    root /var/www/whatever;

    location / {
        # Proxy all requests to Gunicorn
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app_server;
        ssi off;
    }
}

To this basic config we need to add rate limiting, using the ‘limit_req_zone’ directive. Google that for further details.

limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;

This will use up to 100 MB of RAM as a buffer and limit POSTs to 10 per second, per IP address. Adjust as needed. If the sender is using multiple IP addresses the rate limit will not be as effective. Put this directive outside your server {} block.
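For a rough sense of scale: per the nginx docs, a one-megabyte zone keyed on $binary_remote_addr holds about 16,000 64-byte states (IPv4 keys), so the arithmetic for the 100m zone above is simple (a back-of-the-envelope sketch, not something nginx reports):

```python
# Approximate how many distinct client IPs a limit_req_zone can track.
# Assumption (from the nginx docs): a 1 MB zone stores ~16,000 64-byte
# states when keyed on $binary_remote_addr with IPv4 addresses.
STATES_PER_MB = 16_000

def zone_capacity(zone_mb: int) -> int:
    """Approximate number of distinct client IPs the zone can track."""
    return zone_mb * STATES_PER_MB

print(zone_capacity(100))  # the 100m zone in the config above: prints 1600000
```

In practice the zone is far larger than a single instance needs, which is fine: nginx evicts the oldest states if the zone fills up.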

Then after our first location / {} block, add a second one that is a copy of the first except with one additional line (and change it to apply to location /inbox or whatever the inbox URL is for your instance):

location /inbox {
    limit_req zone=one burst=300;
    # limit_req_dry_run on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://app_server;
    ssi off;
}

300 is the maximum number of POSTs it will have in the queue. You can use limit_req_dry_run to test the rate limiting without actually doing any limiting – watch the nginx logs for messages while doing a dry run.

It’s been a while since I set this up, so please let me know if I missed anything crucial or said something misleading.

https://join.piefed.social/2024/04/17/handling-large-bursts-of-post-requests-to-your-activitypub-inbox-using-a-buffer-in-nginx/

#nginx #webPerformance

piefedadmin,
@piefedadmin@join.piefed.social avatar

Probably, yes.

I find that if a POST fails to be processed I don’t really want the sender to retry anyway, I want them to stop doing it. So if the sender thinks it was successful it’s usually not the worst thing in the world.

It would be nice if Nginx responded with an HTTP 202 (Accepted but queued) when a POST was throttled, and it would be nice if sending fediverse software knew what to do with that info. But I expect this is an edge case that hasn’t been dealt with by most.
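A partial workaround (my suggestion, not part of the original setup): nginx's limit_req_status directive changes the code returned for rejected requests. It only accepts values in the 400–599 range, so a 202 isn't possible, but 429 Too Many Requests is the closest standard signal:

```nginx
location /inbox {
    limit_req zone=one burst=300;
    # limit_req_status only accepts 400-599, so 202 is not an option;
    # 429 at least tells well-behaved senders to back off and retry.
    limit_req_status 429;
    proxy_pass http://app_server;
    # ...plus the proxy_set_header lines from the earlier blocks
}
```

Whether a given fediverse sender treats 429 as retryable varies by implementation, so test against the software your peers actually run.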

piefedadmin,
@piefedadmin@join.piefed.social avatar

AFAIK once there are more items in the queue than the burst value (300 in my config) Nginx starts returning HTTP 503, which will cause a retry attempt on some senders (e.g. Lemmy). All other times it returns 200.

So if you wanted to be very careful you could set a tiny burst value (maybe zero??) which would return 503 as soon as the rate limit kicked in.
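That stricter variant might look like this (a sketch with hypothetical values; the proxy_set_header lines from the earlier blocks are omitted for brevity):

```nginx
# Outside the server {} block, same zone as before:
limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;

# Inside server {}:
location /inbox {
    # Omitting burst= leaves it at the default of 0: any POST arriving
    # faster than 10r/s is rejected immediately with 503, so senders
    # that handle it (e.g. Lemmy) queue the Activity for a retry
    # instead of silently dropping it.
    limit_req zone=one;
    proxy_pass http://app_server;
}
```

The trade-off is more retry traffic from compliant senders in exchange for never accepting a POST you might fail to process.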

tammy, to UX
@tammy@webperf.social avatar

Another great analysis from @cliff. If your site uses a consent management platform (CMP), it's probably messing with your performance metrics.

Cliff breaks down the five most common issues – which affect all three Core Web Vitals, among other things – and he also provides some helpful scripting workarounds.

https://www.speedcurve.com/blog/web-performance-cookie-consent/

speedcurve, to UX
@speedcurve@webperf.social avatar

Cookie consent popups and banners are everywhere – and they're silently hurting the speed and UX of your pages. @cliff explains common issues – and solutions – related to measuring performance for consent popups.

https://www.speedcurve.com/blog/web-performance-cookie-consent/

tammy, to UX
@tammy@webperf.social avatar

Your time is the most precious thing you have. When I talk to customers, one of the best things I hear is how much time they DON'T spend using @speedcurve:

"We actually don't log in much. We've set up performance budgets and deploy testing. We just wait to get alerts and then dive in to fix things."

Learn more about how to efficiently fight regressions in @tkadlec's excellent post: https://www.speedcurve.com/blog/continuous-web-performance/

tammy, to UX
@tammy@webperf.social avatar

The best time to set up performance budgets was years ago. The next best time is today.

If you're not using budgets to fight page speed regressions, you're missing a vital tool in your web performance toolkit. Here's everything you need to get started.

https://www.speedcurve.com/blog/performance-budgets/

speedcurve, to UX
@speedcurve@webperf.social avatar

Web performance budgets help your team fight regressions and create a UX-first culture. Here's a collection of inspiring case studies from companies – like Farfetch, Leroy Merlin, GOV.UK, and Zillow – that use performance budgets to stay fast: https://www.speedcurve.com/customers/tag/performance-budgets/

speedcurve, to UX
@speedcurve@webperf.social avatar

Fighting regressions is more important than optimizations. Some resources to help you:

👉 @tkadlec's excellent governance post: https://www.speedcurve.com/blog/continuous-web-performance/
👉 @tammy's opinionated guide to performance budgets: https://www.speedcurve.com/blog/performance-budgets/
👉 @Joseph_Wynn's detailed CI/CD integration how-to post: https://www.speedcurve.com/blog/web-performance-test-pull-requests/

tammy, to UX
@tammy@webperf.social avatar

Every year I revisit the topic of web performance budgets. Here's my updated guide, including:

✅ What are performance budgets?
✅ Why are they a crucial tool in fighting page speed regression?
✅ Best metrics to track
✅ Determining thresholds
✅ Pro tips

https://www.speedcurve.com/blog/performance-budgets/

mrtnvh,
@mrtnvh@techhub.social avatar

@tammy Thank you for this!

tammy,
@tammy@webperf.social avatar

@mrtnvh You're most welcome!

benwis, to rust
@benwis@hachyderm.io avatar

Using Rust for full stack web development has a lot of advantages, and represents a competitive advantage for some applications.

My talk from Rust Nation UK is now a blog post!
https://benw.is/posts/full-stack-rust-with-leptos

tammy, to UX
@tammy@webperf.social avatar

One thing fast sites have in common: they use performance budgets to fight regressions and deliver a consistently fast experience to their users. Here's a VERY detailed guide that covers:

⭐ Budgets vs goals
⭐ Which metrics to track (including and beyond Core Web Vitals)
⭐ How to set thresholds
⭐ Getting stakeholder buy-in
⭐ Integrating budgets with your CI/CD process

https://www.speedcurve.com/blog/performance-budgets/
#webperf #webperformance #ux #corewebvitals #sitespeed #pagespeed
