speedcurve, to UX
@speedcurve@webperf.social avatar

ICYMI: Our latest release includes RUM attribution and subparts for INP! @cliff explains:

🟡 Element attribution

🟢 Where time is spent within INP, leveraging subparts

🔴 How to use this information to fix INP issues

https://www.speedcurve.com/blog/rum-attribution-subparts-interaction-to-next-paint/

#webperf #ux #corewebvitals #sitespeed #pagespeed #webperformance

tdp_org, to webdev
@tdp_org@mastodon.social avatar

My pals in BBC World Service have been doing some awesome work on "lite" versions of their news articles (other page types to follow).
They essentially skip the client-side hydration of the server-rendered React, which means you end up with a simpler HTML+CSS page, no JS.
Page sizes drop significantly:

Screenshot of a BBC World Service Mundo "lite" page with Dev Tools open showing bytes transferred and total as stated

speedcurve, to random
@speedcurve@webperf.social avatar

NEW: Our latest release includes RUM attribution and subparts for INP! @cliff explains:

🟡 Element attribution
🟢 Where time is spent within INP, leveraging subparts
🔴 How to use this information to fix INP issues

https://www.speedcurve.com/blog/rum-attribution-subparts-interaction-to-next-paint/

tammy, to UX
@tammy@webperf.social avatar

PERFORMANCE HERO • per-FAWR-muhns HEER-oh • noun • A person who has made a huge contribution to the web performance community, without whom the web would be a sadder, slower place.

Celebrating our inaugural @speedcurve Performance Hero, @paulcalvano!

https://www.speedcurve.com/blog/web-performance-hero-paul-calvano/

piefedadmin, to random
@piefedadmin@join.piefed.social avatar

Fediverse traffic is pretty bursty and sometimes there will be a large backlog of Activities to send to your server, each of which involves a POST. This can hammer your instance and overwhelm the backend’s ability to keep up. Nginx provides a rate-limiting function which can accept POSTs at full speed and proxy them slowly through to your backend at whatever rate you specify.

For example, PieFed has a backend which listens on port 5000. Nginx listens on port 443 for POSTs from outside and sends them through to port 5000:

upstream app_server {
    server 127.0.0.1:5000 fail_timeout=0;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name piefed.social www.piefed.social;
    root /var/www/whatever;

    location / {
        # Proxy all requests to Gunicorn
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app_server;
        ssi off;
    }
}

To this basic config we need to add rate limiting, using the ‘limit_req_zone’ directive. Google that for further details.

limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;

This allocates up to 100 MB of shared memory for tracking per-client state and limits POSTs to 10 per second, per IP address. Adjust as needed. If the sender is using multiple IP addresses the rate limit will not be as effective. Put this directive outside your server {} block.

Then after our first location / {} block, add a second one that is a copy of the first except with one additional line (and change it to apply to location /inbox or whatever the inbox URL is for your instance):

location /inbox {
    limit_req zone=one burst=300;
    # limit_req_dry_run on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://app_server;
    ssi off;
}

300 is the maximum number of POSTs it will have in the queue. You can use limit_req_dry_run to test the rate limiting without actually doing any limiting – watch the nginx logs for messages while doing a dry run.
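To generate a burst of POSTs yourself while watching the logs, a rough Python sketch like the following works. The inbox URL and JSON body are placeholders, not real signed ActivityPub activities, so your backend will probably reject them; the requests still pass through nginx, which is where the limiting happens:

import requests

INBOX_URL = "https://example.social/inbox"  # placeholder: use your instance's inbox URL

# Fire 50 quick POSTs. With the limiter active, nginx delays anything over
# 10 requests/second and rejects requests once the burst queue (300) is full.
# With limit_req_dry_run on, nothing is delayed or rejected; nginx only logs
# what it would have done.
for i in range(50):
    try:
        response = requests.post(INBOX_URL, json={"test": i}, timeout=10)
        print(i, response.status_code)
    except requests.RequestException as exc:
        print(i, exc)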

It’s been a while since I set this up so please let me know if I missed anything crucial or said something misleading.

https://join.piefed.social/2024/04/17/handling-large-bursts-of-post-requests-to-your-activitypub-inbox-using-a-buffer-in-nginx/

#nginx #webPerformance

speedcurve, to UX
@speedcurve@webperf.social avatar

Cookie consent popups and banners are everywhere – and they're silently hurting the speed and UX of your pages. @cliff explains common issues – and solutions – related to measuring performance for consent popups.

https://www.speedcurve.com/blog/web-performance-cookie-consent/

tammy, to UX
@tammy@webperf.social avatar

The best time to set up performance budgets was years ago. The next best time is today.

If you're not using budgets to fight page speed regressions, you're missing a vital tool in your performance toolkit. Here's everything you need to get started.

https://www.speedcurve.com/blog/performance-budgets/

speedcurve, to UX
@speedcurve@webperf.social avatar

Web performance budgets help your team fight regressions and create a UX-first culture. Here's a collection of inspiring case studies from companies – like Farfetch, Leroy Merlin, GOV.UK, and Zillow – that use performance budgets to stay fast: https://www.speedcurve.com/customers/tag/performance-budgets/

speedcurve, to UX
@speedcurve@webperf.social avatar

Fighting regressions is more important than optimizations. Some resources to help you:

👉 @tkadlec's excellent governance post: https://www.speedcurve.com/blog/continuous-web-performance/
👉 @tammy's opinionated guide to performance budgets: https://www.speedcurve.com/blog/performance-budgets/
👉 @Joseph_Wynn's detailed CI/CD integration how-to post: https://www.speedcurve.com/blog/web-performance-test-pull-requests/

tammy, to UX
@tammy@webperf.social avatar

Every year I revisit the topic of web performance budgets. Here's my updated guide, including:

✅ What are performance budgets?
✅ Why are they a crucial tool in fighting page speed regression?
✅ Best metrics to track
✅ Determining thresholds
✅ Pro tips

https://www.speedcurve.com/blog/performance-budgets/

benwis, to rust
@benwis@hachyderm.io avatar

Using Rust for full stack web development has a lot of advantages, and represents a competitive advantage for some applications.

My talk from Rust Nation UK is now a blog post!
https://benw.is/posts/full-stack-rust-with-leptos

speedcurve, to UX
@speedcurve@webperf.social avatar

Data Viz of the Day: Competitive benchmarks!

Showing your team (and your boss!) how fast you are compared to your competitors can be a great way to get buy-in and budget for investing in performance optimization. Here's how to create your own: https://support.speedcurve.com/docs/competitive-benchmarking

tammy, to UX
@tammy@webperf.social avatar

"Putting up the appropriate guardrails to protect ourselves from regressions – then pairing that with a trail of breadcrumbs so that we can dive in and quickly identify what the source is when a regression occurs – are essential steps to ensure that when we make our sites fast, they stay that way."

Awesome post by @tkadlec, filled with high-level wisdom and ground-level best practices.

https://www.speedcurve.com/blog/continuous-web-performance/

screenspan, to random
@screenspan@mastodon.green avatar

Hilarious! My colleague Fabian Krumbholz just landed 🚀 an excellent explanation of Google's new responsiveness metric, Interaction to Next Paint (INP), and why it's important. We in the community are super excited about the metric and have been working hard to measure it, optimize it, and help people make their sites more responsive to user interaction. https://www.inp-check.com

tammy, to UX
@tammy@webperf.social avatar

Keeping your site fast is a crucial and endless game. Yet it's perilously easy to lose focus and suffer from regressions. In this excellent, detailed, best-practice-filled post, @tkadlec uses the analogy of guardrails (automated testing & alerts) and breadcrumbs (deployment tracking) to make performance more visible throughout the dev process.

https://www.speedcurve.com/blog/continuous-web-performance/

ryantownsend, to SEO
@ryantownsend@webperf.social avatar

⚠️ Today is a monumental day for the web ⚠️

Today marks the day Google replaces one of its Core Web Vitals, a metric called "First Input Delay", with "Interaction to Next Paint" – INP for short.

But this may not be as monumental as people are making out...

Featured Links:

RUMvision's Core Web Vitals History: https://www.rumvision.com/tools/core-web-vitals-history/

SpeedCurve's research on INP to Conversion Rate Correlation:
https://www.speedcurve.com/blog/INP-user-experience-correlation/


speedcurve, to UX
@speedcurve@webperf.social avatar

Debugging INP can be tricky! You're in great hands with @andydavies. In this post, Andy walks through:

🔵 How he helps people identify the causes of poor INP
🟡 Examples of the most common issues
🟢 How to actually improve INP

https://www.speedcurve.com/blog/debugging-interaction-to-next-paint-inp/

monospace, to SEO
@monospace@floss.social avatar

You don't need a CDN! With Varnish, the open source cache proxy, you can shield your website from traffic spikes and speed up its content delivery by orders of magnitude. In my Varnish Master Course, you'll learn everything you need to set it up and configure it for optimal performance. https://www.monospacementor.com/courses/varnish-master-course/

piefedadmin, to fediverse
@piefedadmin@join.piefed.social avatar

For a very small instance with only a couple of concurrent users a CDN might not make much difference. But if you take a look at your web server logs you’ll quickly notice that every post / like / vote triggers a storm of requests from other instances to yours, looking up lots of different things. It’s easy to imagine how quickly this would overwhelm an instance once it gets even a little busy.

One of the first web performance tools people reach for is a CDN, like Cloudflare. But how much difference will it make? In this video I show you my web server logs before and after, and compare them.

The short answer is – before CDN: 720 requests. After CDN: 100 requests.

Usually just turning on a CDN with default settings will not help very much; you'll need to configure some caching rules or settings. By watching your server logs for a while you'll get a sense of what needs to be cached, but check out mine for a starting point:

https://join.piefed.social/wp-content/uploads/2024/02/caching_activity1-1024x577.png

All these are frequently requested on my instance. Depending on the fediverse platform you have installed, you'll probably see different patterns and so need different caching settings.

Beware of caching by URI Path, because often fediverse software will return different data depending on the Accept header that the requester sets. For example, on PieFed and Lemmy instances a request by a web browser to /post/123 will return HTML to show the post to someone. But when that same URL is requested with the Accept: application/ld+json header set, the response will be an ActivityPub representation of the post! You don't want people getting ActivityPub data in their browser, and you don't want to be serving HTML to other instances. Once you spot a URL you want to cache, use a tool like Postman to set the Accept header, make a fake ActivityPub request to your instance, and see if you get back HTML or JSON.
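If Postman isn't handy, a couple of lines of Python (using the requests library) do the same check. The post URL below is just an example of the kind of path mentioned above; swap in a real post on your own instance:

import requests

url = "https://piefed.social/post/123"  # example path; use a real post on your instance

browser_like = requests.get(url, timeout=10)
activitypub_like = requests.get(url, headers={"Accept": "application/ld+json"}, timeout=10)

# Expect HTML for the first request and ActivityPub JSON for the second.
# If both come back the same, caching by URI Path would serve the wrong
# representation to one of the two audiences.
print("browser-style:    ", browser_like.headers.get("Content-Type"))
print("activitypub-style:", activitypub_like.headers.get("Content-Type"))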

Another problem that can happen is that often a response will vary depending on whether the viewer is logged in, or who is logged in. If you can figure out how to configure the CDN to pay attention to cookies or whatever headers are used for Authentication by your platform then you might be able to cache things like /post/*… I couldn’t.

The things I’ve chosen to cache by URI Path above are ones that I know don’t vary by HTTP header or by authentication.

Although we can’t use URI Path a lot of the time, we can cache ActivityPub requests by detecting the Accept: application/ld+json header:

https://join.piefed.social/wp-content/uploads/2024/02/caching_activity2-1024x811.png

This will cache all ActivityPub requests, regardless of URL. People browsing the same URLs as those used by ActivityPub will be unaffected as their requests won't have the special HTTP header. I used a short TTL to avoid serving stale data when someone quickly edits a post straight after creating it.

There seems to be a deep vein of optimization here which I've only just started to dig into. These changes have made a huge difference already, and my instance is under very little load, so I'll leave it there for now…

https://join.piefed.social/2024/02/20/how-much-difference-does-a-cdn-make-to-a-fediverse-instance/

piefedadmin, to kbin
@piefedadmin@join.piefed.social avatar

Google provides a tool called PageSpeed Insights which gives a website some metrics to assess how well it is put together and how fast it loads. There are a lot of technical details but in general green scores are good, orange not great and red is bad.

I tried to ensure the tests were similar for each platform by choosing a page that shows a list of posts, like https://mastodon.social/explore.

PageSpeed Insights results for each platform:

Mastodon: https://join.piefed.social/?attachment_id=308
Peertube: https://join.piefed.social/?attachment_id=307
Misskey: https://join.piefed.social/?attachment_id=311
Lemmy: https://join.piefed.social/?attachment_id=309
kbin: https://join.piefed.social/?attachment_id=313
Akkoma: https://join.piefed.social/?attachment_id=315
PieFed: https://join.piefed.social/?attachment_id=310
pixelfed: https://join.piefed.social/?attachment_id=314
Pleroma: https://join.piefed.social/?attachment_id=312

PieFed and kbin do very well. pixelfed is pretty good, especially considering the image-heavy nature of the content.

The rest don’t seem to have prioritized performance, or they chose a software architecture that cannot be made to perform well on these metrics. It will be very interesting to see how that affects the cost of running large instances and the longevity of the platforms. Time will tell.

https://join.piefed.social/2024/02/13/technical-performance-of-each-fediverse-platform/

piefedadmin, (edited) to random
@piefedadmin@join.piefed.social avatar

Those of us sitting here with our fiber internet and recent-model phones have it pretty good. But the "i" in iPhone stands for "inequality". Most people in the world still have pretty bad internet and old/slow phones. For a platform to be widely adopted and to serve the needs of those who often miss out, it needs to be frugal in network and CPU usage.

                 Lemmy     Kbin               PieFed
Home page        4.5 MB    1.65 MB            700 KB – 930 KB
Viewing a post   360 KB    826 KB (varies)    29 KB

Home pages

Due to Lemmy’s JavaScript-heavy software architecture, visiting a Lemmy home page involves downloading around 4.5 MB. And this only gets you 20 posts! Also, community thumbnails, even if displayed as 22px by 22px icons, are served directly from their home instances, unresized, and can often be multiple megabytes in size. The home page of lemmy.nz currently weighs over 9 MB.

Kbin’s home page comes in at a respectable 1.65 MB due to relying less on JavaScript. However, it is let down by not using loading="lazy" on images, so they all have to be loaded immediately, and by generating post thumbnails that are twice as big as they need to be.

The PieFed home page, showing 5x more posts than Lemmy, weighs between 700 and 930 KB, depending on which posts are shown. In low bandwidth mode, the home page is only 220 KB due to not having any thumbnails.

Viewing posts

When viewing a post, we can assume various assets (CSS, JS and some images) are cached due to loading the home page first.

The picture looks similar when viewing a post, which is a bit surprising. One of the usual benefits of the JS-heavy SPA architecture used by Lemmy is that once the 'app' is loaded into the browser, subsequent pages only involve a small API call. However, going to a page in Lemmy involves two API calls (one for the page and one for the comments), both of which return quite a bit of data. If you look at the 'get the comments on this post' JSON response you can see the developers have fallen into the classic SPA pitfall of "over-fetching". They're retrieving a whole haystack from the backend and then using JavaScript to find the needle they want, which involves transferring the haystack over the internet. Ideally the backend would find the needle and just send that to the frontend.
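As a toy illustration of over-fetching (these are invented response shapes, not Lemmy's actual API), the backend ships every field it has and the client immediately throws most of it away:

import json

# Hypothetical "all the comments" response: lots of nested metadata per comment.
haystack = {
    "comments": [
        {
            "comment": {"id": i, "content": f"comment {i}", "path": f"0.{i}", "deleted": False},
            "creator": {"id": i, "name": f"user{i}", "avatar": None, "banned": False},
            "counts": {"score": 1, "upvotes": 1, "downvotes": 0, "child_count": 0},
            "my_vote": None,
            "saved": False,
        }
        for i in range(50)
    ]
}

# The "needle": the handful of fields the comment view actually renders.
needle = [
    {
        "id": c["comment"]["id"],
        "author": c["creator"]["name"],
        "text": c["comment"]["content"],
        "score": c["counts"]["score"],
    }
    for c in haystack["comments"]
]

print(len(json.dumps(haystack)), "bytes sent over the wire")
print(len(json.dumps(needle)), "bytes the page actually uses")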

Kbin sends more data than it needs to when viewing a post, again because of not using loading="lazy", which causes every profile picture of the commenters to be loaded at once. Making this simple fix would bring the weight down from ~800 KB to around 50 KB.

PieFed only sends 10 KB – 30 KB to show a post, but it varies depending on the number and length of comments. This could be reduced even more by minifying the HTML response but with PieFed under active development I prefer the source to be as readable as possible to aid in debugging.

This is no accident. It is the result of choices made very early on in the development process, well before any code was written. These choices were made based on certain priorities and values which will continue to shape PieFed in the future as it grows. In a world where digital access remains unequal, prioritizing accessible and fast-loading websites isn’t just about technology; it’s a step towards a more inclusive and equitable society.

https://join.piefed.social/2024/02/09/comparing-network-utilization-of-lemmy-kbin-and-piefed/

speedcurve, to UX
@speedcurve@webperf.social avatar

"If you don't consider time a crucial usability factor, you're missing a fundamental aspect of the user experience." This insightful post from @tammy covers topics like:

⚡ How fast we need pages to be
⚡ How delays hurt productivity
⚡ Measuring "web stress"
⚡ How slowness affects your brand

https://www.speedcurve.com/blog/psychology-site-speed/

tammy, to UX
@tammy@webperf.social avatar

More important INP insights from @cliff, including:

🔵 Only 2/3 of mobile sites have "good" INP
🟢 Mobile INP = Android INP
🟡 Mobile INP has an even stronger correlation with bounce rate and conversions than desktop INP

https://www.speedcurve.com/blog/core-web-vitals-inp-mobile/

calibre, to random
@calibre@webperf.social avatar

📬 Performance Newsletter №144:

🔹 Measuring soft navigation experiment by @tunetheweb & @Yoav
🔹 LCP & FCP on single page application on soft navigations @dwsmart
🔹 INP meets Puppeteer by Tsvetan Stoychev

and more!

https://mailchi.mp/perf.email/144

speedcurve, to UX
@speedcurve@webperf.social avatar

Performance budgets are your best tool for fighting page speed regressions. This guide covers how to:

✅ Differentiate between budgets and goals
✅ Track the right metrics
✅ Determine thresholds
✅ Integrate with your CI/CD process
✅ Get stakeholder buy-in

https://www.speedcurve.com/blog/performance-budgets/
