expr,

This whole situation just emphasizes the fact that rebasing >>>>>>>>>> merge squashing.

girlfreddy,
@girlfreddy@lemmy.ca avatar

A small blurb from The Guardian on why Andres Freund went looking in the first place.

So how was it spotted? A single Microsoft developer was annoyed that a system was running slowly. That’s it. The developer, Andres Freund, was trying to uncover why a system running a beta version of Debian, a Linux distribution, was lagging when making encrypted connections. That lag was all of half a second, for logins. That’s it: before, it took Freund 0.3s to login, and after, it took 0.8s. That annoyance was enough to cause him to break out the metaphorical spanner and pull his system apart to find the cause of the problem.
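(For context, the kind of measurement being described is easy to reproduce at home. A rough sketch, with a placeholder host name, is just to time a few logins in a row:)

    # Time a handful of SSH logins to spot a consistent slowdown.
    # "testhost" is a placeholder; absolute numbers depend on your setup.
    for i in 1 2 3 4 5; do
        /usr/bin/time -f "%e s" ssh testhost true
    done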

possiblylinux127,

The post on the oss-security mailing list is more detailed and informative

uis,
EmperorHenry,
@EmperorHenry@discuss.tchncs.de avatar

At least microsoft is honest enough to admit their software needs protection, unlike apple and unlike most of the people who have made distros of linux. (edit: microsoft is still dishonest about what kind of protection it needs though)

Even though apple lost a class action lawsuit for false advertising over the claim “mac can’t get viruses” they still heavily imply that it doesn’t need an antivirus.

Any OS can get infected; it’s just a matter of writing the code and finding a way to deliver it to the system… Now you might be thinking “I’m very careful about what I click on”. That’s a good practice to have, but most malware gets delivered through means that don’t require the user to click on anything.

You need an antivirus on every computer you have, linux, android, mac, windows, iOS, all of them. There’s loads of videos on youtube showing off how well or not so well different antivirus programs work for windows and android.

possiblylinux127,

An “antivirus” tends to be a proprietary black box. Such “antivirus” programs could not have detected the XZ backdoor

EmperorHenry,
@EmperorHenry@discuss.tchncs.de avatar

But a good whitelisting antivirus could’ve stopped it.

possiblylinux127,

What?

EmperorHenry,
@EmperorHenry@discuss.tchncs.de avatar

Prevention and detection

Most of the time, detection also means prevention, but with a whitelisting antivirus, prevention often means the threat isn’t detected at all; it is simply prevented from running.

A whitelisting application has a list of what it knows is bad AND what it knows in advance to be good.

Anything it can’t identify on the spot is treated as unknown and not allowed to run: not deleted, not quarantined, just blocked from running until the user can upload it to services like VirusTotal to figure out if it’s safe.

Upload it to VirusTotal; if it wasn’t already known, do a re-scan a few hours later to see if it’s malicious. If it was already known, do a re-scan to see if anything has since figured out that it’s malicious.

which is why I think it’s borderline criminal that most antivirus programs don’t work that way.
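To make the idea concrete, here is a minimal sketch of that allow-list logic as a shell wrapper. The hash-list files are hypothetical, and real products enforce this at the kernel level with signed policies rather than like this, but the decision flow is the same:

    #!/bin/sh
    # run-if-known.sh: only run a program if its hash is on the known-good list.
    # known_good.txt / known_bad.txt are hypothetical, locally maintained lists
    # of SHA-256 hashes, one per line.
    prog="$1"; shift
    hash=$(sha256sum "$prog" | cut -d' ' -f1)
    if grep -qx "$hash" known_bad.txt; then
        echo "blocked: known bad" >&2; exit 1
    elif grep -qx "$hash" known_good.txt; then
        exec "$prog" "$@"
    else
        echo "blocked: unknown binary; submit it for analysis first" >&2; exit 1
    fi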

Portable4775,

A whitelisting application has a list of what it knows is bad AND what it knows in advance to be good.

How would it know this? Is this defined by a person/people? If so, that wouldn’t have mattered. liblzma was known in advance to be good, then the malicious update was added, and people still presumed that it was good.

This wasn’t a case of some random package/program wreaking havoc. It was trusted malicious code.

Also, you’re asking for an antivirus that uploads and uses a sandbox to analyze ALL packages. Good luck with that. (AVs would probably have a hard time detecting malicious build actions, anyways).

EmperorHenry,
@EmperorHenry@discuss.tchncs.de avatar

Also, you’re asking for an antivirus that uploads and uses a sandbox to analyze ALL packages. Good luck with that. (AVs would probably have a hard time detecting malicious build actions, anyways).

Three different antivirus programs already do that. Comodo, for example, has a built-in sandbox for exactly this.

Portable4775,

It places unknown/new software in a sandbox. You want an AV that tests all pre-existing packages in a sandbox.

possiblylinux127,

That would do nothing for liblzma as it was trusted.

EmperorHenry,
@EmperorHenry@discuss.tchncs.de avatar

Who was it trusted by? There are also whitelisting applications that indiscriminately block everything that isn’t already installed.

possiblylinux127, (edited )

The developer of XZ. What you’re describing is package verification, which already happens

z00s,

All it took was one set of nerd eyeballs

KillingTimeItself,

this was one hell of an april fools joke i tell you what.

possiblylinux127,

Imagine

KillingTimeItself,

i mean, to some degree, it is.

index,

Give this guy a medal and a mastodon account

possiblylinux127,

Hopefully the latter

CosmicCleric,
@CosmicCleric@lemmy.world avatar

Why not both.

abogical,

He already has a mastodon account : infosec.exchange/

trk,
@trk@aussie.zone avatar

Give him another one!

Pantherina,
noddy,

The scary thing about this is thinking about potential undetected backdoors similar to this existing in the wild. Hopefully the lessons learned from the xz backdoor will help us to prevent similar backdoors in the future.

possiblylinux127,

I think we need to focus on zero trust when it comes to upstream software

jackpot,
@jackpot@lemmy.ml avatar

exactly, stop depending on esoteric libraries

possiblylinux127,

It is fine to use them; just know how they work and check the commit log.

That of course requires you to pull from git instead of a tarball
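One cheap check in that spirit (and the one that happens to matter here) is to diff the release tarball against the git tag and look at the files that only exist in the tarball. A rough sketch, with a made-up project name, URL and version:

    # Compare an upstream release tarball against the corresponding git tag.
    # Project name, URL and version are made up for illustration.
    git clone https://example.org/upstream/project.git
    git -C project checkout v1.2.3
    tar xf project-1.2.3.tar.gz
    diff -r project project-1.2.3 | less
    # In the xz case, the malicious m4/build-to-host.m4 existed only in the
    # tarball, so a diff like this would have surfaced it (alongside the
    # usual generated autotools files you'd expect in a release tarball).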

billgamesh,

this was well hidden. not sure anyone would have spotted it by checking the commit log

possiblylinux127,

It was hidden in the tarball

billgamesh,

i’m not an expert, but my reading was that it was hidden in a binary used for testing. EDIT: oh yeah, i see what you mean

merthyr1831,

I know this is being treated as a social engineering attack, but having unreadable binary blobs as part of your build/dev pipeline is fucking insane.

ipkpjersi,

Yep, I consider it a failure of the build/dev pipeline.

xlash123,
@xlash123@sh.itjust.works avatar

As mentioned, binary test files make sense for this utility. In the future though, there should be an expectation to demonstrate how and why the binary files were constructed the way they were, kinda like how encryption algorithms explain how they derived any arbitrary or magic numbers. This would bring more trust and transparency to these files without having to eliminate them.

noli,

You mean that instead of having a binary blob you have a generator for the data?

nxdefiant,

It’s not uncommon to keep example bad data around for regression tests to run against, and I imagine that’s not the only example in a compression library, but I’d definitely consider that a level of testing above unit tests, and would not include it in the main repo. Tests that verify behavior at run time, either when interacting with the user, integrating with other software or services, or after being packaged, belong elsewhere. In summary, this is lazy.

inetknght,

and would not include it in the main repo

Tests that verify behavior at run time belong elsewhere

The test blobs belong in whatever repository they’re used in.

It’s comically dumb to think that a repository won’t include tests. So binary blobs like this absolutely do belong in the repository.

nxdefiant, (edited )

A repo dedicated to non-unit-test tests would be the best way to go. No need to pollute your main code repo with orders of magnitude more code and junk than the actual application.

That said, from what I understand of the exploit, it could have been avoided by having packaging and testing run in different environments (I could be wrong here, I’ve only given the explanation a cursory look). The tests modified the code that got released. Tests rightly shouldn’t be constrained by other demands (like specific versions of libraries that may be shared between the test and build steps, for example), and the deploy/build step shouldn’t have to work around whatever side effects the tests might create. Containers are easy to spin up.

Keeping them separate helps. Sure, you could do folders on the same repo, but test repos are usually huge compared to code repos (in my experience) and it’s nicer to work with a repo that keeps its focus tight.

It’s comically dumb to assume all tests are equal and should absolutely live in the same repo as the code they test, when writing tests that span multiple codebases is trivial, necessary, and ubiquitous.

balp,

It’s also easier to work with if one simple git command can get everything you need. There is a good case for a bigger mono-repo. It should be easy to debug tests at all levels, or else it’s hard to fix the issues that the bigger tests find. Many recent changes in git make the downsides of a bigger repo less hurtful, and the gains now start to outweigh the losses of a bigger repo.

Hexarei,
@Hexarei@programming.dev avatar

A single git command can get everything for split repos if you use submodules
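For example (standard git, nothing exotic; the URL is a placeholder):

    # Clone a repo and all of its submodules in one go.
    git clone --recurse-submodules https://example.org/main-repo.git
    # Or, in an existing checkout:
    git submodule update --init --recursive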

balp,

I would say yes and no. Yes, the clone command can do it, but branching and CI get a bit more complicated, and pushing and reviewing changes makes it harder to keep an overview. If the functionality, and especially the release cycle, is different, submodules still have great value. As always, your product and repo structure is a mix of different considerations and always a compromise. I think the additions to git over the last few years have made the previously really bad pain points with bigger repos less annoying, so I now see more situations where it works well.

I always recommend keeping all testing in the same repo as the code that affects the tests. It makes tracking changes in functionality easier; needing to coordinate commits, merges, and branches across more than one repo is a bigger cognitive load.

balp,

I agree that in most cases it’s more of an E2E or integration test. I’m not sure of the need to split it into a different repo, and in the end I’m not sure that would have offered much protection anyhow.

suy,

Is it, really? If the whole point of the library is dealing with binary files, how are you even going to have automated tests of the library?

The scary thing is that there are people still using autotools, or any other hyper-complicated build system in which this is easy to hide, because who the hell wants to learn Makefiles, autoconf, automake, M4 and shell scripting all at once just to compile a few C files? I think hiding this in any other build system would have been definitely harder. Check this mess:


    dnl Define somedir_c_make.
    [$1]_c_make=`printf '%s\n' "$[$1]_c" | sed -e "$gl_sed_escape_for_make_1" -e "$gl_sed_escape_for_make_2" | tr -d "$gl_tr_cr"`
    dnl Use the substituted somedir variable, when possible, so that the user
    dnl may adjust somedir a posteriori when there are no special characters.
    if test "$[$1]_c_make" = '"'"${gl_final_[$1]}"'"'; then
      [$1]_c_make='"$([$1])"'
    fi
    if test "x$gl_am_configmake" != "x"; then
      gl_[$1]_config='sed "r\n" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2>/dev/null'
    else
      gl_[$1]_config=''
    fi
Farnsworth,

The tukaani github repos are gone, is there a mirror somewhere?

fluxion,

Tukaani main website

TheFadingOne, (edited )

Though unfortunately (or I guess for most use-cases fortunately) you can’t find the malicious m4/build-to-host.m4 file on there afaik. The best way to find that now, should you really want to, is by looking through the commit history of the salsa.debian.org/debian/xz-utils repository which is, as far as I understand it, the repository that the debian packages are built from and consequently also what the compromised packages were built from.
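If you actually want to dig it up, a plain git history query on that repo is enough. These are just the generic commands, not an official procedure:

    # Inspect the Debian packaging history for the file in question.
    git clone https://salsa.debian.org/debian/xz-utils.git
    cd xz-utils
    git log --all --oneline -- m4/build-to-host.m4
    # Then view any version of it (commit hash taken from the log above):
    git show <commit>:m4/build-to-host.m4 | less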

JoeKrogan,
@JoeKrogan@lemmy.world avatar

I think going forward we need to look at packages with a single or few maintainers as target candidates. Especially if they are as widespread as this one was.

In addition I think security needs to be a higher priority too, no more patching fuzzers to allow that one program to compile. Fix the program.

I’d also love to see systems hardened by default.

amju_wolf,
@amju_wolf@pawb.social avatar

Packages or dependencies with only one maintainer that are this popular have always been an issue, and not just a security one.

What happens when that person can’t afford to or doesn’t want to run the project anymore? What if they become malicious? What if they sell out? Etc.

slazer2au,

What if the repository becomes stupid and takes a package away from a developer and said developer deletes his other packages. See leftpad.

JasonDJ,
fruitycoder,

There’s gotta be a better way to verify programs than just what the devs do. Take patching the fuzzer, for example: that should be seen as a clear separation-of-duties problem.

That constant issue of low-dev/high-use dependencies is awful, and no one I’ve met on the business end seems to figure out the need to support those kinds of people, or to accept what should frankly be legal liability for what goes wrong. This isn’t news, it’s just a cover song. And it’s not an open source problem, it’s just a software problem.

Potatos_are_not_friends,

In the words of the devs in that security email, and I’m paraphrasing -

“Lots of people giving next steps, not a lot people lending a hand.”

I say this as a person not lending a hand. This stuff is over my head and outside my industry knowledge and experience, even after I spent the whole weekend piecing everything together.

JoeKrogan,
@JoeKrogan@lemmy.world avatar

You are right; as you note, this requires a set of skills that many don’t possess.

I have been looking for ways I can help going forward too where time permits. I was just thinking having a list of possible targets would be helpful as we could crowdsource the effort on gitlab or something.

I know the folks in the lists are up to their necks going through this and they will communicate to us in good time when the investigations have concluded.

suy,

no more patching fuzzers to allow that one program to compile. Fix the program

Agreed.

Remember Debian’s OpenSSL fiasco? The one that affected all the other derivatives as well, including Ubuntu.

It all started because OpenSSL added to the entropy pool a bunch of uninitialized memory and the PID. Who the hell relies on uninitialized memory, ever? The Debian maintainer wanted to fix Valgrind errors and submitted a patch. It wasn’t properly reviewed, nor accepted into OpenSSL. The maintainer added it to the Debian package patches, and everything after that is history.

Everyone blamed Debian “because it only happened there”, and mistakes were definitely made on that side, but I place much more blame on the OpenSSL developers.

dan,
@dan@upvote.au avatar

OpenSSL added to the entropy pool a bunch of uninitialized memory and the PID.

Did they have a comment above the code explaining why it was doing it that way? If not, I’d blame OpenSSL for it.

The OpenSSL codebase has a bunch of issues, which is why somewhat-API-compatible forks like LibreSSL and BoringSSL exist.

suy,

I’d have to dig it up, but I think it said that it added the PID and the uninitialized memory to add a bit more data to the entropy pool in a cheap way. I honestly don’t get how that additional data can be helpful. To me it’s the very opposite: the PID and the undefined memory are not as good quality as real randomness. So, even without Debian’s intervention, it was a bad idea. The undefined memory triggered Valgrind, and after Debian’s patch, if it weren’t for the PID, all keys would have been reduced to zero randomness, which would probably have raised the alarm much sooner.

Socsa,

This has always been the case. Maybe I work in a unique field but we spend a lot of time duplicating functionality from open source and not linking to it directly for specifically this reason, at least in some cases. It’s a good compromise between rolling your own software and doing a formal security audit. Plus you develop institutional knowledge for that area.

And yes, we always contribute code back where we can.

datelmd5sum,

We run our forks not because of security, but because pretty much nothing seems to work for production use without some source code level mods.

UnityDevice,

If this was done by multiple people, I’m sure the person that designed this delivery mechanism is really annoyed with the person that made the sloppy payload, since that made it all get detected right away.

fluxion,

I hope they are all extremely annoyed and frustrated

acockworkorange,

Inconvenienced, even.

Hupf,

Inconceivable!

possiblylinux127,
bobburger,

I like to imagine this was thought up by some ambitious product manager who enthusiastically pitched this idea during their first week on the job.

Then they carefully and meticulously implemented their plan over 3 years, always promising the executives it would be a huge pay off. Then the product manager saw the writing on the wall that this project was gonna fail. Then they bailed while they could and got a better position at a different company.

The new product manager overseeing this project didn't care about it at all. New PM said fuck it and shipped the exploit before it was ready so the team could focus their work on a new project that would make new PM look good.

The new project will be ready in just 6-12 months, and it is totally going to disrupt the industry!

nxdefiant,

I see a dark room of shady, hoody-wearing, code-projected-on-their-faces, typing-on-two-keyboards-at-once 90’s movie style hackers. The tables are littered with empty energy drink cans and empty pill bottles.

A man walks in. Smoking a thin cigarette, covered in tattoos and dressed in the flashiest interpretation of “Yakuza Gangster” imaginable, he grunts with disgust and mutters something in Japanese as he throws the cigarette to the floor, grinding it into the carpet with his thousand dollar shoes.

Flipping on the lights with an angry flourish, he yells at the room to gather for standup.

MonkeMischief,

Cigarette is stomped.

Stickies fall from kanban board.

Backdoor dishonor.

umbrella,
@umbrella@lemmy.ml avatar

did we find out who that guy was and why he was doing it?

intrepid,

We probably never will.

drwho,
@drwho@beehaw.org avatar

If we ever do, it’ll be 40 or 50 years from now.

Unyieldingly,

The CIA will know, we will most likely not.

fluxion,

It was Spez trying to collect more user data to make Reddit profitable

possiblylinux127,

Probably a state actor

FatTony,

In a nutshell you say…

alphafalcon,

Coconut at least…

tourist, (edited )
@tourist@lemmy.world avatar

I’m going to read it later, but if I don’t find a little red Saddam Hussein hidden in there I’ll be disappointed

edit: eh my day wasn’t good anyway

d3Xt3r,

This is informative, but unfortunately it doesn’t explain how the actual payload works - how does it compromise SSH exactly?

possiblylinux127, (edited )

I do believe it does

Aatube,

It allows a patched SSH client to bypass SSH authentication and gain access to a compromised computer

d3Xt3r, (edited )

From what I’ve heard so far, it’s NOT an authentication bypass, but a gated remote code execution.

There’s some discussion on that here: bsky.app/profile/…/3kowjkx2njy2b

But it would be nice to have a diagram like OP’s to understand exactly how it does the RCE and implements the SSH backdoor. If we understand how, maybe we can take measures to prevent similar exploits in the future.

underisk,
@underisk@lemmy.ml avatar

I think ideas about prevention should be more concerned with the social engineering aspect of this attack. The code itself is certainly cleverly hidden, but any bad actor who gains the kind of access as Jia did could likely pull off something similar without duplicating their specific method or technique.

whereisk,

Ideally you need a double-blind checking mechanism definitionally impervious to social engineering.

That may be possible in larger projects, but I doubt you can do much where you have very few maintainers.

I bet the lesson here for future attackers is: do not affect start-up time.

underisk,
@underisk@lemmy.ml avatar

I imagine if this attacker wasn’t in a rush to get the backdoor into the upcoming Debian and Fedora stable releases he would have been able to notice and correct the increased CPU usage tell and remain undetected.

Aatube,

Under the right circumstances this interference could potentially enable a malicious actor to break sshd authentication and gain unauthorized access to the entire system remotely.
—Wikipedia, sourced to RedHat

Of course, the authentication bypass allows remote code execution.

drwho,
@drwho@beehaw.org avatar

Somebody wrote a PoC for it: github.com/amlweems/xzbot#backdoor-demo

Basically, if you have a patched SSH client with the right ED448 key, you can have the backdoored sshd on the other side run whatever commands you want. The demo just does id > /tmp/.xz, but it could be any command.

baseless_discourse, (edited )

I am not a security expert, but the scenario they describe sounds exactly like authentication bypass to a layman like me.

According to www.youtube.com/watch?v=jqjtNDtbDNI the software installs a malicious library that overwrites the signature-verification function of ssh.

I was wondering: if the bypass function had been designed to be slightly less resource-intensive, it probably wouldn’t have been discovered and would have shipped to production.

Also, I have mixed feelings about dynamic linking: on the one hand, it allows projects like hardened malloc to easily integrate into the system; on the other hand, it also enables an attacker to hijack a library in a similar fashion.

EDIT: This is a remote code execution exploit, not authentication bypass. The payload is sent as an authentication message and will be executed by the compromised authentication function.

This means:

  • the payload will be executed as root, since sshd runs as root.
  • the payload will leave no trace in the login log.

So this is much worse than ssh authentication bypass.

Aatube,

5.6.1 in fact made it less resource-intensive, but the distro happened to not have updated yet when Freund discovered the backdoor.

Cochise,

Authentication bypass should give you interactive access, “I’m in”-style. Remote code execution only lets you run a command, without permanent access. You can use some RCE vulnerabilities to bypass authentication, but not all.

baseless_discourse, (edited )

Yeah, but the malicious code replaces the ssh signature-verification function so that it accepts a specific signature. Hence an attacker with the key can ssh into any compromised system without proper authentication.

That kind of describes authentication bypass, not just remote code execution…

EDIT: it is remote code execution, see the edit of parent comment.

uis,

There is Red Hat’s patch for OpenSSH that adds systemd notification support, which adds libsystemd as a dependency, which in turn has liblzma as its own dependency.
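Which is why the quick check going around was simply to look at what the running sshd links against (the exact path and output differ per distro; plain upstream OpenSSH links against neither library):

    # See whether sshd pulls in libsystemd and, through it, liblzma.
    # The sshd path may differ on your distro.
    ldd /usr/sbin/sshd | grep -E 'libsystemd|liblzma'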
