capt_kafei,
@capt_kafei@lemmy.ca avatar

Damn, it is actually scary that they managed to pull this off. The backdoor came from the second-largest contributor to xz too, not some random drive-by.

possiblylinux127,

It would be nice if we could press formal charges

sim642,

Assuming that it’s just that person, that it’s their actual name and that they’re in the US…

ugjka, (edited )
@ugjka@lemmy.world avatar

there will be a federal investigation (just speculation) if the culprit is a foreign actor

possiblylinux127,

Do you have a source for this?

ugjka,
@ugjka@lemmy.world avatar

updated my post, it was just some speculation I misread

Virulent,

I don’t have a source, but I think it’s safe to say given the large corporations and government institutions that rely on xz Utils. I’m sure Microsoft, Amazon, Red Hat, etc. are in talks with the federal government about this.

worsedoughnut,
@worsedoughnut@lemdro.id avatar

Source: they made it up

stsquad, (edited )

Time to audit all their contributions, although it looks like they mostly contributed to xz. I guess we’ll have to wait for comments from the rest of the team, or to see whether the whole org needs to be considered compromised.

AmbiguousProps,

They’ve been contributing to xz for two years, and committed various “test” binary files.

stsquad,

It’s looking more like a long game to compromise an upstream.

cjk,

Either that or the attacker was very good at choosing their puppet…

stsquad,

Well, the account is focused on one particular project, which makes sense if you expect to get burned at some point and don’t want all your other exploits to be detected. It looks like there was a second sock puppet account involved in the original attack vector support code.

We should certainly audit other projects for similar changes from other pseudonymous accounts.

cjk,

Yeah, and the 700 commits should be reverted… just in case we missed something.

possiblylinux127,

That should be standard procedure.

Doombot1,

ELI5 what does this mean for the average Linux user? I run a few Ubuntu 22.04 systems (yeah yeah, I know, canonical schmanonical) - but they aren’t bleeding edge, so they shouldn’t exhibit this vulnerability, right?

kbal,
@kbal@fedia.io avatar

The average user? Nothing. Mostly it just affects those who get the newest versions of everything.

flying_sheep,
@flying_sheep@lemmy.ml avatar

In this case I think that’s just Fedora and Debian Sid users or so.

The backdoor only activates during DEB or RPM builds, and was quickly discovered so only rolling release distros using either package format were affected.

subtext, (edited )

Not regular Fedora, though, it was only in Fedora Rawhide and Fedora 41, so very very early, bleeding edge distributions. Nothing that a regular Fedora user would be using.

access.redhat.com/security/cve/cve-2024-3094

E: and Fedora 40 beta which some regular users could conceivably be using

redhat.com/…/urgent-security-alert-fedora-41-and-…

0xtero,

It mostly affects/targets the build systems of binary distros - infecting their build machines would result in complete compromise of the released distros down the line.

rotopenguin,
@rotopenguin@infosec.pub avatar

apt info xz-utils

Your version is old as balls. Even if you were on Mantic, it would still be old as balls.

trk,
@trk@aussie.zone avatar

Security through antiquity

atzanteol,

They noticed that some ssh sessions took 0.5 seconds too long under certain circumstances. 😲

Holy hell that’s good QA.

ijhoo,

Microsoft employee.

burrito,

Definitely not from the team working on search on Windows then.

krash,

Don’t see why you’re being downvoted, the person in question who discovered this is a postgres maintainer employed by Microsoft.

ijhoo,

Probably people think this is a troll or something.

I wrote it because I was surprised, especially since I’m not a fan of Microsoft and their policies. Lately, I have the feeling Microsoft is better than Google (in relative terms) when it comes to OSS.

What’s additionally surprising is the number of breaches of Microsoft services in the last year; there’s been one every few weeks or so. And then they pick up a backdoor because a login took 0.5 s instead of 0.1 s.

Anyway, his findings are amazing.

Wes_Dev,

This isn’t the same thing, but I’m reminded of Minecraft.

Minecraft is a massively popular game. Notch once said he planned to make it open source when its popularity died down. But now Microsoft owns it.

Not only that, but Mojang accounts don’t work anymore. You have to have a Microsoft account to play it now. Even trying to download and play an older version of the game offline requires Microsoft to approve it. Microsoft is actively tightening the leash on the game because it makes them money. Open sourcing the game will likely never happen now. The best we can hope for is for versions to fall into the public domain after 70-ish years.

That’s how I see Microsoft. They only care about what is beneficial for them to drive profits. Working on open source projects, and open sourcing a few of their tools to get the benefits of community adoption and code review, is great, sure. But they’d sooner try to incorporate Linux into Windows to keep people in their surveillance ecosystem than open source Windows.

Remember when Windows 10 was the last version, until they changed their minds? Remember when they floated the idea of charging a recurring subscription to use Windows, before they silently dropped the idea? Remember when there was credible talk about the next version of Windows being cloud-based where they controlled all your data and you had no privacy? Hell, you have basically no privacy on Windows 10. Trying to reclaim some involves registry edits, special third party tools, and a constant battle with automatic updates reverting your changes.

I’ll say it again. Microsoft doesn’t care about OSS. It’s just currently beneficial for them to pretend they do.

Google seemed to care a lot about OSS, then started making everything in Android depend on their proprietary ecosystem to function. Now Google is using the dominant position they got by taking advantage of OSS adoption, and has been pushing privacy-invading standards and trying to get rid of ad blockers online, among many other things.

For these huge companies, OSS is just a tool to get more control and power. The moment it’s no longer useful, they’ll find ways to work around the license and enshittify everything again.

It keeps happening. I refuse to keep trusting bad actors every time they dangle a shiny trinket over our heads.

I do appreciate the work this person did in finding the bug. It’s not all doom and gloom.

ijhoo,

I agree with your sentiment here. That’s why I wrote ‘relative terms’ in my comment.

Since Nadella took over, Microsoft has done some open things that benefited the community. So Microsoft has opened up somewhat.

Over the same period, under Pichai, Google went the other way: they focus more on monetization and try to control things the Apple way. Manifest V3? Google also hasn’t done anything really worth mentioning in the last 10 years in terms of products. Well, except the ‘Attention Is All You Need’ paper. And even that they didn’t believe in, and they can’t deliver a decent product. I just tried Google’s advanced Gemini and it’s, to put it politely, shit. Google has also had some positive actions, like mainlining a lot of stuff in the Linux kernel to make upgrading Android easier.

So, while google is closing down and making mistakes, Microsoft is opening a bit up.

If you compare the state a year ago with the state now: Microsoft improved, Google went the other way.

Microsoft doesn’t care about open source; they care about the money that cloud services using open source bring them. I don’t think Google cares either. For the reasoning, read this: www.joelonsoftware.com/2002/…/strategy-letter-v/

30p87,

Well, half a second of delay is pretty noticeable when you ssh into a machine sitting right next to you. It should be instant, and if it isn’t, something’s off.
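A rough way to sanity-check this yourself, sketched below. This is an assumption-laden example: “myhost” is a placeholder for a machine on your LAN, and `date +%s%N` needs GNU coreutils.

```shell
#!/bin/sh
# Time only the connect/auth phase by running a no-op remote command.
# "myhost" is a placeholder; on a local network this should normally
# complete well under 0.5 s, so a consistent extra half second is a red flag.
start=$(date +%s%N)
ssh -o BatchMode=yes -o ConnectTimeout=5 myhost true 2>/dev/null
end=$(date +%s%N)
echo "ssh round trip: $(( (end - start) / 1000000 )) ms"
```

Run it a few times and compare against a baseline; a single slow run proves nothing, but a stable regression is worth investigating.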

lemmyreader,

Thank you for sharing.

The problem is in upstream and has only entered Debian Sid/unstable. Does this mean that, for example, bleeding-edge Arch (btw) sshd users are compromised already?

SMillerNL,

Homebrew rolled back the release after finding out

cantankerous_cashew,

Here’s a link to the PR for anyone who’s interested

Atemu,
@Atemu@lemmy.ml avatar

Arch is on 5.6.1 as of now: archlinux.org/packages/core/x86_64/xz/

We at Nixpkgs barely avoided having it reach a channel used by users, and we don’t seem to be affected by the backdoor.

Static_Rocket, (edited )
@Static_Rocket@lemmy.world avatar

Arch had a patch rolled out yesterday [1][2][3] that switches to the git repo. On top of that the logic in the runtime shim and build script modifier was orchestrated to target Debian and RPM build systems and environments [4].

[1] …archlinux.org/…/881385757abdc39d3cfea1c3e34ec09f…

[2] gitlab.archlinux.org/archlinux/packaging/…/2

[3] security.archlinux.org/CVE-2024-3094

[4] www.openwall.com/lists/oss-security/2024/03/29/4

blaise,

The link mentions that it is only run as part of a Debian or RPM package build. Not to mention that on Arch, sshd is not linked against liblzma anyway.

LastoftheDinosaurs,

It was also on Gentoo. I had this version installed for a day or two.

flying_sheep,
@flying_sheep@lemmy.ml avatar

Since you didn’t build an RPM or DEB package, however, you didn’t compile in the backdoor.

LastoftheDinosaurs,

Yeah, it’s probably fine. I also don’t use systemd. I was just pointing out that another rolling release distribution had the affected version.

Dima,

Looks like the 5.6.1-2 release on Arch moved from using the published GitHub releases to just using the git repository directly, which as I understand avoids the exploit (because the obfuscated script to inject the exploit is only present in the packaged tarballs and not the git repo itself)

…archlinux.org/…/881385757abdc39d3cfea1c3e34ec09f…
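The tarball-vs-repo gap can be made concrete with a toy example (all file names below are placeholders, not the real xz layout): a release archive containing one extra file the repo never had, which `diff -r` surfaces immediately.

```shell
#!/bin/sh
# Toy illustration of auditing a release tarball against the repo it claims
# to come from. In the real attack, the extra material was obfuscated build
# machinery present only in the published tarballs, not in git.
tmp=$(mktemp -d)
mkdir -p "$tmp/repo" "$tmp/tarball"
echo 'int main(void){return 0;}' | tee "$tmp/repo/main.c" > "$tmp/tarball/main.c"
echo 'injected payload loader' > "$tmp/tarball/extra.m4"  # stands in for the malicious script
# diff exits nonzero when the trees differ, so don't treat that as an error:
diff -r "$tmp/repo" "$tmp/tarball" || true
```

The output flags `extra.m4` as present only in the “tarball” side; the same comparison against a `git archive` of the tagged release is the audit the commenters are describing.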

festus, (edited )

They also believe we (Arch users) are unaffected because this backdoor targeted Debian and Redhat type packaging specifically and also relied on a certain SSH configuration Arch doesn’t use. To be honest while it’s nice to know we’re unaffected, it’s not at all comforting that had the exploiter targeted Arch they would have succeeded. Just yesterday I was talking to someone about how much I love rolling release distros and now I’m feeling insecure about it.

More details here: gitlab.archlinux.org/archlinux/packaging/…/2

tal,
@tal@lemmy.today avatar

Someone always has to be the guinea pig.

That being said, maybe there’s an argument for distros that do rolling releases to have an “intentionally delayed rolling release” that just trails the regular rolling release by a fixed amount of time to provide more time for guinea pigs to run into things. If you want rolling, but can live with the delay, just use that.

Gobbel2000,

OpenSuse Slowroll does pretty much that, a slightly delayed rolling release.

redcalcium,

Arch has pushed the patched xz just a few hours ago: archlinux.org/…/the-xz-package-has-been-backdoore…

lemmyreader,

Thanks a bunch.

redcalcium,

If you’re using xz version 5.6.0 or 5.6.1, please upgrade asap, especially if you’re using a rolling-release distro like Arch or its derivatives. Arch has rolled out the patched version a few hours ago.

UnfortunateShort,

Dang, Arch never sleeps, does it? That’s a 24/7 incident response squad level of support.

mumblerfish,

Gentoo just reverted to the last tar signed by an author other than the one seemingly responsible for the backdoor. That person has been on the project for years, so one should keep up to date and possibly revert even further back than just 5.6.*. Gentoo reverted to 5.4.2.

namingthingsiseasy,

Just updated on void and saw the same thing

flying_sheep,
@flying_sheep@lemmy.ml avatar

Backdoor only gets inserted when building RPM or DEB. So while updating frequently is a good idea, it won’t change anything for Arch users today.

SavvyBeardedFish,

Arch Linux’s xz was compromised as well.

News post

Git change for not using tarballs from source

progandy, (edited )

I think that was a precaution. The malicious build script ran during the build, but the backdoor itself was most likely not included in the resulting package, as it checked for specific packaging systems.

www.openwall.com/lists/oss-security/2024/03/…/22

flying_sheep,
@flying_sheep@lemmy.ml avatar

No, read the link you posted:

Arch does not directly link openssh to liblzma, and thus this attack vector is not possible. You can confirm this by issuing the following command:


ldd "$(command -v sshd)"

However, out of an abundance of caution, we advise users to remove the malicious code from their system by upgrading either way.

corsicanguppy,

when building RPM or DEB.

Which ones? Everything I run seems to be clear.

access.redhat.com/security/cve/CVE-2024-3094

Products / Services Components State
Enterprise Linux 6 xz Not affected
Enterprise Linux 7 xz Not affected
Enterprise Linux 8 xz Not affected
Enterprise Linux 9 xz Not affected

(and thus all the bug-for-bug clones)

Laser,

Fedora 41, Fedora Rawhide, Debian Sid are the currently known affected ones AFAIK.

progandy,

Those getting the most recent software versions, so nothing that should be running on a server.

flying_sheep,
@flying_sheep@lemmy.ml avatar

I think it needs to be

  • rolling release (because it was caught so quickly that it hasn’t made its way into any cadence-based distro yet)
  • using the upstream Makefile task to build an RPM or DEB (because the compromised build script directly checks for that and therefore doesn’t trigger for a destdir build like Gentoo’s or Arch’s)
  • using the upstream-provided tarball, as opposed to the one GitHub generates or a git clone (because only the former contains the compromised Makefile; running autotools yourself is safe)

Points 1 and 2 mean that only rolling release RPM and DEB distros like Debian Sid and Fedora are candidates. I didn’t check if they use the Makefile and the compromised tarballs.
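A quick local check along those lines, assuming (as reported in this thread) that only the 5.6.0 and 5.6.1 upstream releases carry the backdoor; `check_xz_ver` is a hypothetical helper name, not an existing tool.

```shell
#!/bin/sh
# Flag the two known-backdoored xz releases. Anything else is reported as
# "not a known-backdoored release", which is not the same as a guarantee
# of safety (distros reverted further back out of caution).
check_xz_ver() {
  case "$1" in
    5.6.0|5.6.1) echo "affected" ;;
    *)           echo "not a known-backdoored release" ;;
  esac
}

# xz prints its version as the last field of the first line of `xz --version`.
check_xz_ver "$(xz --version 2>/dev/null | awk 'NR==1 {print $NF}')"
```

Note this only checks the version string; it says nothing about whether the backdoored object code was actually compiled in, which also depended on the RPM/DEB build conditions above.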

fireshell,
@fireshell@lemmy.world avatar

Some no-name came along and, without any trouble, asked to become a maintainer of a project used in almost every distro, took it over, put a backdoor in, and no one had any questions? In this case everything came out thanks to pure chance. The no-name screwed up his backdoor, which attracted the attention of a guy from Microsoft, who out of boredom dug into what was what. And if he hadn’t messed up, or if that guy from Microsoft had decided to go drink beer instead of poking around in the xz code, then no one would have discovered anything. It’s scary to imagine how many of these no-names are sitting in all these thousands of open source projects, waiting in the wings to roll out a malicious patch.

umami_wasbi,

It seems like an RCE rather than an auth bypass, though. bsky.app/profile/…/3kowjkx2njy2b

tal,
@tal@lemmy.today avatar

Apparently the backdoor reverts back to regular operation if the payload is malformed or the signature from the attacker’s key doesn’t verify. Unfortunately, this means that unless a bug is found, we can’t write a reliable/reusable over-the-network scanner.

Maybe not. But it does mean that you can write a crawler that slams the door shut for the attacker on any vulnerable systems.

EDIT: Oh, maybe he just means that it reverts for that single invocation.

OsrsNeedsF2P,

Time to bring back the reproducible build hype

chameleon,
chameleon avatar

Won't help here; this backdoor is entirely reproducible. That's one of the scary parts.

OsrsNeedsF2P,

The backdoor wasn’t in the source code, only in the distributed binary. So reproducible builds would have flagged the tar as not coming from what was in Git

Virulent,

Not exactly: it was in the source tarball available for download from the releases page, but not in the git source tree.

chameleon,
chameleon avatar

Reproducible builds generally work from the published source tarballs, as those tend to be easier to mirror and archive than a Git repository is. The GPG-signed source tarball includes all of the code to build the exploit.

The Git repository does not include the code to build the backdoor (though it does include the actual backdoor itself, the binary "test file", it's simply disused).

Verifying that the tarball and Git repository match would be neat, but is not a focus of any existing reproducible build project that I know of. It probably should be, but quite a number of projects have legitimate differences in their tarballs, often pre-compiling things like autotools-based configure scripts and man pages so that you can have a relaxed ./configure && make && make install build without having to hunt down all of the necessary generators.

flying_sheep,
@flying_sheep@lemmy.ml avatar

Time to change that tarball thing. Git repos come with built in checksums, that should be the way to go.
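Git’s content addressing is what makes this attractive: every blob is identified by a hash of its bytes, so a file swapped into a tarball can’t keep the object ID recorded in the repo. A minimal demonstration:

```shell
#!/bin/sh
# git hash-object is deterministic: identical bytes always yield the same
# object ID on every machine, so any tampering with a tracked file is
# detectable by comparing IDs against the repository.
tmp=$(mktemp -d)
cd "$tmp"
printf 'hello\n' > file
git hash-object file
# Change even one byte and the ID changes completely:
printf 'hellp\n' > file
git hash-object file
```

This is why auditing a release means comparing the tarball’s files against the repo’s recorded object IDs, rather than trusting a detached checksum published alongside the same tarball.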

tal,
@tal@lemmy.today avatar

Honestly, while the way they deployed the exploit helped hide it, I’m not sure that they couldn’t have figured out some similar way to hide it in autoconf stuff and commit it.

Remember that the attacker had commit privileges to the repository, was a co-maintainer, and the primary maintainer was apparently away on a month-long vacation. How many parties other than the maintainer are going to go review a lot of complicated autoconf stuff?

I’m not saying that your point’s invalid. Making sure that what comes out of the git repository is what goes to upstream is probably a good security practice. But I’m not sure that it really avoids this.

Probably a lot of good lessons that could be learned.

  • It sounds like social engineering, including maybe use of sockpuppets, was used to target the maintainer, to get him to cede maintainer status.
  • Social engineering was used to pressure package maintainers to commit.
  • Apparently automated software testing did trip on the changes, like some fuzz-testing software at Google, but the attacker managed to get changes committed to avoid it. This was one point where a light really did get aimed at the changes. That being said, the attacker here was also a maintainer, and I don’t think that the fuzzer guys consider themselves responsible for identifying security holes. And while it did highlight the use of ifunc, it sounds like it was legitimately a bug. But, still, it might be possible to have some kind of security examination take place when fuzzing software trips, especially if the fuzzing software isn’t under the control of a project’s maintainer (as it was not, here).
  • The changes were apparently aimed at getting in shortly before the Ubuntu freeze; the attacker was apparently recorded asking and ensuring that Ubuntu fed off Debian testing. Maybe more attention needs to be paid to things that go in shortly before a freeze.
  • Part of the attack was hidden in autoconf scripts. Autoconf, especially with generated data going out the door, is hard to audit.
  • As you point out, using a chain that ensures that a backdoor that goes into downstream also goes into git would be a good idea.
  • Distros should probably be more careful about linking stuff to security-critical binaries like sshd. Apparently this was very much not necessary to achieve what they wanted to do in this case; it was possible to have a very small amount of code that performed the functionality that was actually needed.
  • Unless the systemd-notifier changes themselves were done by an attacker, it’s a good bet that the Jia Tan group and similar are monitoring software, looking for dependencies like the systemd-notifier introduction. Looking for similar problems that might affect similar remotely-accessible servers might be a good idea.
  • It might be a good idea to have servers run their auth component as an isolated module. I’d guess that it’d be possible to have the portion of sshd that accepts incoming connections (and is exposed to the outside, unauthenticated world) run as an isolated process; that’d be kind of inetd-like functionality. The portion that performs authentication (also exposed to the outside) would run as another isolated process, and the code that runs only after authentication succeeds would run separately, with only the latter bringing in most libraries.
  • I’ve seen some arguments that systemd itself is large and complicated enough that it lends itself to attacks like this. I think that maybe there’s an argument that some sort of distinction should be made between more- or less-security-critical software, and different policies applied. Systemd alone is a pretty important piece of software to be able to compromise. Maybe there are ways to rearchitect things to be somewhat more-resilient and auditable.
  • I’m not familiar with the ifunc mechanism, but it sounds like attackers consider it to be a useful route to hide injected code. Maybe have some kind of auditing system to look for that.
  • The attacker modified the “in the event of an identified security hole” directions to discourage disclosure to anyone except the project for a 90-day embargo period, and made himself the contact point. That would have provided time to continue to use the exploit. In practice, perhaps software projects should not be the only contact point – perhaps it should be the norm to both notify software projects and a separate, unrelated-to-a-project security point. That increases the risk of the exploit leaking, but protects against compromise of the project maintainership.
flying_sheep,
@flying_sheep@lemmy.ml avatar

You’re right, there’s more to it, especially social engineering. Maybe there are other ways to hide a payload, but there aren’t many avenues. You have to hide the payload in a binary artefact, which is pretty suspicious anywhere except a (well-scrutinized) cryptography or compression lib.

Then that payload has to be executed for some reason, which means you need a really good reason to embed it (e.g. something like widevine), or have to modify the build script.

friend_of_satan,

The back door is not in the source code though, so it’s not reproducible from source.

Static_Rocket,
@Static_Rocket@lemmy.world avatar

Part of the payload was in the tarball, but there was still a malicious shim in the upstream repo.

fruitycoder,

Probably more SLSA, which reproducible builds are a part of.

danielquinn,
@danielquinn@lemmy.ca avatar

Why didn’t this become a thing? Surely in 2024, we should be able to build packages from source and sign releases with a private key.

Natanael,

It’s becoming more of a thing but a lot of projects are so old that they haven’t been able to fix their entire build process yet

umami_wasbi,

Did this only happen with SSH, or does it affect other network-facing services using liblzma too?

Atemu,
@Atemu@lemmy.ml avatar

We know that sshd is targeted but we don’t know the full extent of the attack yet.

tal,
@tal@lemmy.today avatar

Also, even aside from the attack code here having unknown implications, the attacker made extensive commits to liblzma over quite a period of time, and added a lot of binary test files to the xz repo that were similar to the one that hid the exploit code here. He was also signing releases for some time prior to this, and could have released a signed tarball that differed from the git repository, as he did here. The 5.6.0 and 5.6.1 releases were confined to this backdoor aimed at sshd, but it’s not impossible that he could have added vulnerabilities prior to this. Xz is used during the Debian packaging process, so code he could change is active at some kind of sensitive points on a lot of systems.

It is entirely possible that this is the first vulnerability that the attacker added, and that all the prior work was to build trust. But…it’s not impossible that there were prior attacks.

dan,
@dan@upvote.au avatar

The malicious code attempts to hook in to libcrypto, so potentially other services that use libcrypto could be affected too. I don’t think extensive research has been done on this yet.

SSH doesn’t even use liblzma. It’s pulling in the malicious code via libsystemd, which does use liblzma.

Edit: “crypto” meaning cryptography of course, not cryptocurrency.
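A way to see that linkage chain on your own machine, as a hedged sketch: paths and results vary by distro, and on many setups (Arch, for example, per the advisory quoted above) the grep simply finds nothing.

```shell
#!/bin/sh
# Does sshd pull in liblzma or libsystemd? A direct liblzma link would be
# unusual; on affected distros the backdoor rode in through the patched
# sshd's libsystemd dependency, which in turn links liblzma.
sshd_path=$(command -v sshd || echo /usr/sbin/sshd)
ldd "$sshd_path" 2>/dev/null | grep -E 'liblzma|libsystemd' \
  || echo "no lzma/systemd linkage found (or sshd not present)"
```

If libsystemd shows up, a second `ldd` on that library shows whether liblzma enters transitively.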

bloodfart,

Lzma balls

Wes_Dev,

Damn fine work all around.

I know this is an issue fraught with potential legal and political BS, and it’s impossible to check everything without automation these days, but is there an organization that trains and pays people to work as security researchers or QA for open source projects?

Basically, a watchdog group that finds exploitable security vulnerabilities, and works with individuals or vendors to patch them? Maybe make it a publicly owned and operated group with mandatory reporting of some kind. An international project funded by multiple governments, where it’s harder for a single point of influence to hide exploits, abuse secrets, or interfere with the researchers? They don’t own or control any code, just find security issues and advise.

I don’t know.

Just thinking that modern security is getting pretty complicated, with so many moving parts and all.

dan,
@dan@upvote.au avatar

This is the best post I’ve read about it so far: boehs.org/…/everything-i-know-about-the-xz-backdo…

SpaceCadet,
@SpaceCadet@feddit.nl avatar

In the fallout, we learn a little bit about mental health in open source.

Reminded me of this, relevant as always, xkcd:

Image

lemmyreader,

Yes, exactly.

And looking at you, npm: npm

worsedoughnut,
@worsedoughnut@lemdro.id avatar

That whole timeline is insane, and the fact that anyone even found this in the totally coincidental way they did is very lucky for the rest of us.

chameleon,
chameleon avatar

This is a fun one we're gonna be hearing about for a while...

It's fortunate it was discovered before any major releases of non-rolling-release distros were cut, but damn.

rolaulten,

That’s the scary thing. It looks like this narrowly missed getting into Debian and Red Hat. Downstream of downstream, that is… everything.

fireshell, (edited )
@fireshell@lemmy.world avatar

Since the actual operation of the liblzma SSH backdoor payload is still unknown, there’s a protocol for securing your impacted systems:

• Consider all data, including key material and secrets on the impacted system as compromised. Expand the impact to other systems, as needed (for example: if a local SSH key is used to access a remote system then the remote system must be considered impacted as well, within the scope the key provides).

• Wipe the impacted host and reinstall it from scratch. Use a known-good install that does not contain the malicious payload. Generate new keys and passwords. Do not reuse any from the impacted systems.

• Restore configuration and data from backups, but from before the time the malicious liblzma package was installed. However, be careful not to allow potentially leaked credentials or keys to have access to the newly installed system (for example via $HOME/.ssh/authorized_keys).

This handles the systems themselves. Unfortunately any passwords and other credentials stored, accessed or processed with the impacted systems must be considered compromised as well. Change passwords on web sites and other services as needed. Consider the fact that the attacker may have accessed the services and added ways to restore access via for example email address or phone number in their control. Check all information stored on the services for correctness.

This is a lot of work, certainly much more than just upgrading the liblzma package. This is the price you have to pay to stay safe. Just upgrading your liblzma package and hoping for the best is always an option, too. It’s up to you to decide if this is a risk worth taking.

This recovery protocol might change somewhat once the actual operation of the payload is figured out. There might be situations where the impact could be more limited.

As an example: If it turns out that the payload is fully contained and only allows unauthorized remote access via the tampered sshd, and the host is not directly accessible from the internet (the SSH port is not open to internet) this would mean that it might be possible to clean up the system locally without full reinstall.

However, do note that the information stored on the system might have still been leaked to outside world. For example leaked ssh keys without a passphrase could still afford the attacker access to remote systems.

This is a long con, and honestly the only people at fault are the bad actors themselves. Assuming Jia Tan’s GitHub identity and pgp key weren’t compromised by someone else, this backdoor appears to be the culmination of three years of work.
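One concrete step from the protocol above, sketched in shell: enumerating `authorized_keys` entries so unfamiliar keys stand out during post-incident review. The paths are the OpenSSH defaults; adjust if your `sshd_config` sets `AuthorizedKeysFile` elsewhere.

```shell
#!/bin/sh
# List key type and trailing comment for every default-location
# authorized_keys file, so an attacker-added entry is easy to spot.
for f in /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys; do
  if [ -f "$f" ]; then
    printf '== %s ==\n' "$f"
    awk '{print $1, $NF}' "$f"   # e.g. "ssh-ed25519 user@host"
  fi
done
```

This only covers persistence via SSH keys; the protocol’s broader point stands, since leaked credentials can grant access through entirely different doors.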

gwendolencopper,

could a Flatpak contain one of the backdoored builds of xz or liblzma? Is there a way to check? Would such a thing be exploitable, or does this backdoor only affect ssh servers?

chameleon,
chameleon avatar

The base runtime pretty much every Flatpak uses includes xz/liblzma, but none of the affected versions are included. You can poke around in a base runtime shell with flatpak run --command=sh org.freedesktop.Platform//23.08 or similar, and check your installed runtimes with flatpak list --runtime.

23.08 is the current latest version used by most apps on Flathub and includes xz 5.4.6. 22.08 is an older version you might also still have installed and includes xz 5.2.12. They're both pre-backdoor.

It seems there's an issue open on the freedesktop-sdk repo to revert xz to an even earlier version predating the backdoorer's significant involvement in xz, which some other distros are also doing out of an abundance of caution.

So, as far as we know: nothing uses the backdoored version, even if it did use that version it wouldn't be compiled in (since org.freedesktop.Platform isn't built using Deb or RPM packaging and that's one of the conditions), even if it was compiled in it would to our current knowledge only affect sshd, the runtime doesn't include an sshd at all, and they're still being extra cautious anyway.

One caveat: There is an unstable version of the runtime that does have the backdoored version, but that's not used anywhere (I don't believe it's allowed on Flathub since it entirely defeats the point of it).
