mttaggart

@mttaggart@infosec.town

Displaced Philly boy. Threat hunter. Streamer. Educator. Dad. Captain in the fight against #llm insanity. #infosec, #programming, #rust, #python, #haskell, and #webapp. #opensource advocate. Cofounder of https://infosec.exchange/@thetaggartinstitute. Made wtfbins.wtf. Not your bro. All opinions my own. #fedi24 #searchable


mttaggart, to windows

Here's some ridiculous inside baseball for you: when creating an answer file for unattended installs, you have the option of using base64-encoded passwords in the XML. This is great for handling special characters and other issues. But if you don't create the answer file with Windows System Image Manager, you might miss this bananas behavior: the plaintext gets either Password or AdministratorPassword appended to it before encoding! And in fact, the installation is expecting that string to be present in the plaintext after decoding!

This is not mentioned in the Answer File reference.
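
For anyone who hits this, here's a minimal Python sketch of the encoding as described above. The UTF-16LE step is my assumption about how WSIM serializes the value; verify against a WSIM-generated answer file before relying on it.

```python
import base64

def encode_unattend_password(password: str, element: str = "AdministratorPassword") -> str:
    # WSIM appends the element name ("Password" or "AdministratorPassword")
    # to the plaintext before base64-encoding, and Setup expects that
    # suffix to be present after decoding.
    # Assumption: the value is UTF-16LE text, as WSIM appears to produce.
    return base64.b64encode((password + element).encode("utf-16-le")).decode("ascii")

# Drop the result into <AdministratorPassword><Value>...</Value></AdministratorPassword>
print(encode_unattend_password("P@ssw0rd!"))
```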

mttaggart, to Discord

Do you run a Discord server? Here's how to get the spammers out of your house.

mttaggart,

The list of known accounts has been published. Recommend mass-banning them ASAP: gist.github.com/Dziurwa14/05db50c66e4dcc67d129838e1b9d739a
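
If you run a moderation bot, something like this sketch could work through the published list. It assumes discord.py; the token, guild ID, and ids.txt file are placeholders, and you should sanity-check the IDs before banning anyone.

```python
import discord

TOKEN = "YOUR_BOT_TOKEN"  # placeholder
GUILD_ID = 123456789012345678  # placeholder: your server's ID

intents = discord.Intents.default()
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    guild = client.get_guild(GUILD_ID)
    # ids.txt: one account ID per line, pulled from the published gist
    with open("ids.txt") as f:
        ids = [int(line.strip()) for line in f if line.strip()]
    for uid in ids:
        try:
            await guild.ban(discord.Object(id=uid), reason="Known spam account")
        except discord.HTTPException as e:
            print(f"Failed to ban {uid}: {e}")
    await client.close()

client.run(TOKEN)
```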

mttaggart, to random

In retrospect, hosting attachments under the repo URL—you know, the one that is supposed to have access control—was a ridiculously bad design choice.

www.bleepingcomputer.com/news/security/github-comments-abused-to-push-malware-via-microsoft-repo-urls/

mttaggart, to random

Hey @obsidian: It is a value-add that there is no LLM nonsense in Obsidian. I really, really hope it stays that way.

RE: social.coop/users/bitprophet/statuses/112304932366870056

bitprophet, to random
@bitprophet@social.coop

LOL, so glad I'd already started replacing my old note app with Obsidian – the former has, to nobody's real surprise, jumped on the AI bandwagon.

mttaggart,

@bitprophet At this point, easier to count the services that haven't.

mttaggart, to random

In the Age of the Models, we are obliged to be more human, not less. More sincere, more creative, more quirky, more unpredictable. And most importantly, more kind.

But we are also obliged to be more careful, more rigorous, more thorough. We must be exceptional at all the things the models do poorly. It must be immediately clear when charlatans are trying to pass off the slurry from a model as the work of a person.

Become more human.

mttaggart, to random

This is the height of UI and no one can convince me otherwise.

mttaggart, to random

So, about this claim that GPT-4 can exploit 1-day vulnerabilities.

I smell BS.

As always, I read the source paper.

Firstly, almost every vulnerability that was tested was in extremely well-discussed open source software, and each vuln was of a class with extensive prior work. I would be shocked if a modern LLM couldn't produce an XSS proof-of-concept in this way.

But what's worse: they don't actually show the resulting exploit. The authors cite some kind of responsible disclosure standard for not releasing the prompts to GPT-4, which, fine. But these are all known vulns, so let's see what the model came up with.

Without seeing the exploit itself, I am dubious.

Especially because so much is keyed off of the CVE description:

We then modified our agent to not include the CVE description. This task is now substantially more difficult, requiring both finding the vulnerability and then actually exploiting it. Because every other method (GPT-3.5 and all other open-source models we tested) achieved a 0% success rate even with the vulnerability description, the subsequent experiments are conducted on GPT-4 only. After removing the CVE description, the success rate falls from 87% to 7%.

This suggests that determining the vulnerability is extremely challenging.

Even the identification of the vuln—which GPT-4 did 33% of the time—is a ludicrous metric. The options from the set are:

  1. RCE
  2. XSS
  3. SQLI
  4. CSRF
  5. SSTI

with the first three over-represented. It would be surprising if the model did worse than 33%: given only five options and that kind of skew, always guessing the most common class gets close on its own.
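
To put a number on that intuition, here's a quick simulation. The class proportions are my own guesses, since the excerpt doesn't give exact counts; the point is just how well naive baselines do under skew.

```python
import random

classes = ["RCE", "XSS", "SQLI", "CSRF", "SSTI"]
weights = [0.30, 0.25, 0.25, 0.10, 0.10]  # assumed skew, for illustration

trials = 100_000
truth = random.choices(classes, weights=weights, k=trials)

# Baseline 1: always guess the most common class.
majority = sum(t == "RCE" for t in truth) / trials

# Baseline 2: guess at random with the same skew as the data.
guesses = random.choices(classes, weights=weights, k=trials)
matched = sum(g == t for g, t in zip(guesses, truth)) / trials

print(f"always-guess-RCE accuracy:     {majority:.0%}")  # ~30%
print(f"skew-matched random accuracy:  {matched:.0%}")   # ~23%
```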

In their conclusion, the authors call their findings an "emergent capability" of GPT-4, given that every other model they tested had a 0% success rate.

At no point do the authors blink at this finding and interrogate their priors to look for potential error sources. But they really should.

So no, I do not believe we are in any danger of GPT-4 becoming an exploit dev.

LinuxAndYarn, to Phillies
@LinuxAndYarn@mastodon.social

The Mets revealed their uniforms today, and now I really feel cheated by what the Phillies got.

Of course, people in Queens will be like "black pin stripes? WTF? Yankees suck!"

mttaggart,

@LinuxAndYarn The Phils' CC uni is growing on me, but I do wish it had more Philly in it. The bridge on that hat is super good.

But let's be honest; we all just want the powder blues back for good.

mttaggart, to random

Hey Fedi! What are you grateful for this week?

mttaggart, to random

Almost nobody has grappled with what it really means for truth when anyone can appear to say anything, do anything. This will impact everything from economic stability to climate policy. It will also impact individual lives in deeply intimate and hurtful ways.

Which is why I believe combating generative content will be one of the most important technical efforts of the next decade.

arstechnica.com/information-technology/2024/04/microsofts-vasa-1-can-deepfake-a-person-with-one-photo-and-one-audio-track/

mttaggart,

@AAKL The only explanation I can arrive at is the lame "making content creation easier." But who gives a crap if TikTokers can automate their stuff when women are gonna lose their kids in custody hearings because of deepfakes? That's just one awful example off the top of my head.

No: this is all harm. The rationale is profit, which is definitionally evil when it comes from suffering.

mttaggart,

@blake @AAKL That's a good one—in fact, basically how the Star Trek viewscreen has to work—although even that is way, way open to abuse.

mttaggart, to random

Not to be overlooked, the ridiculous claim by the LAPD foundation to own the name of a, uh, public government entity.

Kinda saying the quiet part out loud there.

mttaggart, to random

Honestly just have fail2ban installed by default, especially on cloud images.

mttaggart, to random

This is pretty slick, and targeting LastPass is really evil, because most users who would know better have likely bailed.

I know that I recommended LP to family at one time, and now I'm kind of screwed: getting it set up for my parents was such a hassle that migrating them off seems likely to lose them entirely. I'm sure I'm not alone in this.

www.bleepingcomputer.com/news/security/cybercriminals-pose-as-lastpass-staff-to-hack-password-vaults/

mttaggart, to random

The NVD thing is becoming a Real Problem™

This enrichment process has slowed significantly starting on Feb 12 - depending on the CVE this process may be effectively stopped entirely. This means that if your scanner, SIEM or SOC process needs that additional metadata, a good chunk of the last 2 months worth of vulnerabilities essentially have not yet happened as far as the metadata goes.

Feb 12 coincidentally is just ahead of the new FedRAMP requirements (Rev 5) being released www.fedramp.gov/blog/2023-05-30-rev-5-baselines-have-been-approved-and-released/. Does this match up mean that NIST perhaps had some advance notice, and they maybe have outsourcers that don't (yet) meet these FedRAMP requirements? Or is NIST itself not yet in compliance with those regulations?

isc.sans.edu/diary/rss/30852
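
If you want to see this for yourself, here's a sketch that checks whether NVD has enriched a given CVE yet. Field names follow the NVD 2.0 JSON API as I understand it; verify against the live schema before building anything on it.

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def enrichment_status(cve_id: str) -> dict:
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return {"cve": cve_id, "found": False}
    cve = vulns[0]["cve"]
    return {
        "cve": cve_id,
        "found": True,
        "status": cve.get("vulnStatus"),              # e.g. "Awaiting Analysis"
        "has_cvss": bool(cve.get("metrics")),         # enrichment: severity scores
        "has_cpes": bool(cve.get("configurations")),  # enrichment: affected products
    }

print(enrichment_status("CVE-2024-3094"))
```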

mttaggart, to random

Of course this happened, and of course kids are gonna abuse this technology and make life-ruining choices with it. Because you can't expect them to responsibly wield that much power.

Once again, generative models are the first non-military technology I've wanted to disinvent. There is just so much harm to be found here.

RE: infosec.exchange/users/douglevin/statuses/112286897517215627

mttaggart, to random

Okay, so the folks who said the LLM-generated "PoC" repos were just claiming the space were dead-on. This repo, which was fake, does not use the known exploit. I expect the others have done the same.

So the lesson here is that we have a PoC ParkingCrew.

mttaggart, to random

Blue team, if your daily operational time is spent more on satisfying internal objectives than addressing yourself to your adversary, you are doing exactly what they want.

mttaggart, to random

The recent attempted XZ Utils backdoor (CVE-2024-3094) may not be an isolated incident as evidenced by a similar credible takeover attempt intercepted by the OpenJS Foundation, home to JavaScript projects used by billions of websites worldwide. The Open Source Security (OpenSSF) and OpenJS Foundations are calling all open source maintainers to be alert for social engineering takeover attempts, to recognize the early threat patterns emerging, and to take steps to protect their open source projects.

openssf.org/blog/2024/04/15/open-source-security-openssf-and-openjs-foundations-issue-alert-for-social-engineering-takeovers-of-open-source-projects/

AAKL, to opensource
@AAKL@noc.social

deleted_by_author

mttaggart,

@AAKL @theregister I don't think the US came up once in that piece. Europe did, and that's worth considering. But the EU has far more power to impose regulatory requirements on vendors than the US does. Nevertheless, I think the author's "if this works" section is rather fanciful.
