[ARCH] [EASYAF] Use RAM for Firefox to boost performance and reduce drive wear

Might be late to the party, but I just discovered you can do this. Super simple and easy to do.

After having a read of the linked page, I backed up and just used this option:

wiki.archlinux.org/title/Profile-sync-daemon

Installed it, created the config, and enabled the services:

systemctl --user enable psd.service
systemctl --user enable psd-resync.service
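
For reference, a minimal ~/.config/psd/psd.conf might look something like this (psd generates the file with commented defaults on first run, so check its comments for the exact syntax; running psd p should preview the parsed setup):

# ~/.config/psd/psd.conf - illustrative only
BROWSERS="firefox"      # limit syncing to Firefox (assumed space-delimited list)
USE_OVERLAYFS="yes"     # lower RAM use and faster syncs, needs overlayfs support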

I definitely notice an increase in speed, and the reduced SSD writes should hopefully extend its lifespan.

I’m sure there are options for other distros; anything using systemd should be able to use the daemon.

Helix,

thanks for reminding me. Didn’t activate this on my new install since I got 64G of RAM :)


<span style="color:#323232;">systemctl --user enable psd-resync.service
</span>

I think this is not needed since psd.service has the following in it:


<span style="color:#323232;">[Unit]
</span><span style="color:#323232;">…
</span><span style="color:#323232;">Wants=psd-resync.service
</span>
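
You can confirm that pull-in without opening the unit file:

systemctl --user show psd.service -p Wants
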
SuperIce,

I used to use this when I still had a hard drive, but this does nothing for performance if you’re on an SSD, and browser profile writes are so few that they don’t significantly affect drive wear. In the end, all this does is make it more likely that something will break.

lemonuri,

Are you implying the tab backups are not written into the profile folder? Because I think 10-20 GB a day is still something to be concerned about. servethehome.com/firefox-is-eating-your-ssd-here-…

I have used Firefox in ram for a couple of months now without problems and am pretty happy with it.

Valmond, (edited )

Can someone back up my claim that 10-20GB writes per day is nothing for a modern SSD?

Edit: with a 256 TBW rating and 20 GB of writes per day, that gives some 13,000 days (about 35 years), so the SSD’s overall lifespan, not write wear, will largely be the limiting factor.
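
Back-of-the-envelope in a shell, using those numbers:

# 256 TBW ≈ 256,000 GB of rated writes; at 20 GB/day:
echo $(( 256 * 1000 / 20 ))    # = 12800 days, roughly 35 years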

MonkderZweite, (edited )

Wrong. Using inotifywait (from inotify-tools), you can see that FF does a bunch of reads and writes on every page load (mostly in <profile>/storage). This is with the about:config option to use RAM as cache enabled.
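
For example (the profile directory is a placeholder; adjust to your own):

# watch file events under the Firefox profile while loading pages
# (-m = keep monitoring, -r = recursive; inotifywait is part of inotify-tools)
inotifywait -m -r ~/.mozilla/firefox/<profile>/storage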

Every single web browser is one giant clusterfuck.

NegativeLookBehind,

Is there a specific package I can install to increase my RAM?

bigkahuna1986,

No, I believe you have to download more RAM, actually. But what would I know, I’m just a proctologist.

just_another_person,

No, this is wrong. I saw this documentary, ‘Johnny Mnemonic’ I think, and it specifically showed a computer scientist increasing his storage and RAM through software, but you need a special device to plug in to do it. I’m sure Best Buy sells it.

Crashumbc,

Yes! They also showed the amount of RAM was just a guideline and it’s possible to “overfill” your RAM!

just_another_person,

No, you are also wrong and need to rewatch that documentary. Sheer will and determination will allow you to double your RAM.

Wodge,

Also befriending a drugged up DARPA dolphin will be a massive boon too.

just_another_person,

No, you are also wrong.

bigkahuna1986,

As a proctologist I recommend against this.

MonkderZweite,

Mount ~/.cache as tmpfs. Rarely needs a workaround, just for the odd program that violates the XDG spec.
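
A minimal sketch of the fstab route (user name and size are placeholders):

# /etc/fstab: keep ~/.cache in RAM; contents are lost on reboot
tmpfs  /home/<user>/.cache  tmpfs  rw,nosuid,nodev,size=2G,mode=0700  0  0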

AmbroisindeMontaigu,

Have a look at https://wiki.archlinux.org/title/Zram - a compressed block device in RAM that can be formatted as swap. There are various tools to set it up, maybe your distro already includes one of them. And htop has a meter for it, so you can see how effective the compression is (besides its own zramctl tool).
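
A quick manual setup sketch (sizes are examples; a zram-generator config is the more permanent route on systemd distros):

sudo modprobe zram                                # load the zram module
sudo zramctl --find --size 8G --algorithm zstd    # sets up e.g. /dev/zram0
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0
zramctl                                           # shows compressed vs. raw usage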

FreudianCafe,

Nah, I think the right way to do it is to go to some site (you can Google some) and download some RAM. They even make the link flash so it’s easy to find. If you need more RAM, just download some more.

Pantherina,

Btw, private browsing mode is also RAM-only, which is a hard requirement for the Tor Browser (“no disk policy”).

ulterno,

Finally, a way to use the loads of RAM I have other than Compiling and Blendering.
Well, I guess we also have RAM drives

luthis,

Just reconfigured /etc/makepkg.conf to use extra cores and tmpfs… I’ve been compiling on the SSD with one core for so long it’s embarrassing.
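
For anyone following along, these are the makepkg.conf knobs in question (values are examples):

# /etc/makepkg.conf
MAKEFLAGS="-j$(nproc)"     # parallel build jobs across all cores
BUILDDIR=/tmp/makepkg      # build in /tmp, which is tmpfs by default on Arch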

d3Xt3r,

While you’re still in your makepkg.conf, don’t forget to set march=native (and remove mtune) in your CFLAGS! (unless you’re sharing your compiled packages with other systems)
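
A sketch of the relevant lines (Arch ships more flags by default, so swap the march part into your existing CFLAGS rather than copying this verbatim):

# /etc/makepkg.conf
CFLAGS="-march=native -O2 -pipe -fno-plt"
CXXFLAGS="${CFLAGS}"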

Helix,

What’s the difference between march=native and march=x86-64 in that case?

d3Xt3r, (edited )

A ton of difference! march selects a microarchitecture level (or feature level). “x86-64” is the baseline feature set, targeting the common x86_64 instructions found in early 64-bit CPUs, circa 2003. Since 2003 there have obviously been several advancements in CPUs and the x86_64 arch, and these have been further classified as:

  • x86-64-v2 (2008; includes the SSE3, SSE4 instructions and more)
  • x86-64-v3 (2013; includes AVX, AVX2 and more)
  • x86-64-v4 (2017; includes AVX512 mainly)

So if you’re still on x86-64, you’re missing out on some decent performance gains by not making use of all the newer instructions/optimisations made in the past two decades(!).

If you’re on a recent CPU (2017+), ideally you’d want to be on at least x86-64-v3 (v4 has seemingly negligible gains, at least on Intel). There’s also CPU-family specific marches such as znver4 for AMD Zen 4 CPUs, which would be an even better choice than x86-64-v4.

But the best march to use is of course native - this makes available all the instructions and compiler optimisations specific to your particular CPU, for the best performance you can possibly get. The disadvantage of native is that any binaries compiled with it can run only on your CPU (or a very similar one) - but that’s only an issue for those who need to distribute binaries (like software developers), or if you’re sharing your pkg cache with other machines.

Since the flags defined in makepkg.conf only affect AUR/manual source builds (and not the default core/extra packages), I’d recommend also reinstalling all your main packages from either the ALHP or CachyOS repos, in order to completely switch over to x86-64-v3 / v4.
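
To check what your own machine supports (assuming glibc 2.33+ and GCC are installed):

# glibc's loader reports which feature levels it considers supported
/lib/ld-linux-x86-64.so.2 --help | grep supported
# and GCC can show what -march=native resolves to
gcc -march=native -Q --help=target | grep -- '-march='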

Further reading on microarchitectures:

Benchmarks:

cc: @luthis

Helix,

Can I also easily compile a selection of packages from the repositories fresh? E.g. Firefox? Or do I have to download their PKGBUILDs and run makepkg?

d3Xt3r,

Those repositories already contain pre-compiled packages. To use them, just add the repository above the stock Arch repos in pacman.conf, then reinstall your packages to pull in the optimised versions.
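
For ALHP that looks roughly like this (repo names and the mirrorlist path follow ALHP’s README and may change, so double-check there before copying):

# /etc/pacman.conf - must come *above* the stock [core]/[extra] sections
[core-x86-64-v3]
Include = /etc/pacman.d/alhp-mirrorlist

[extra-x86-64-v3]
Include = /etc/pacman.d/alhp-mirrorlist

After that, a regular system upgrade (pacman -Syu) should start pulling in the optimised builds.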

Helix,

How can I trust them? At least with Arch there’s the “many eyes” principle.

d3Xt3r,

It’s the same principle. Both CachyOS and ALHP are reasonably popular, and all their stuff is open for anyone to review - Cachy’s stuff is all on GitHub and ALHP is on SomeGit.

luthis,

Oh boy…

Total Download Size:   3390.65 MiB
Total Installed Size:  13052.08 MiB
Net Upgrade Size:      291.24 MiB

I wonder if I’m going to notice any better performance…

luthis,

holy shit!!! I’m definitely doing that!

