MisuseCase,
@MisuseCase@twit.social

Predictably, #Microsoft started injecting ads into #AI-powered conversations…and just as predictably, there is now a huge malvertising problem in Bing Chat.

It’s actually worse than poisoned advertisements showing up in search engine results for a couple of reasons.

https://www.malwarebytes.com/blog/threat-intelligence/2023/09/malicious-ad-served-inside-bing-ai-chatbot

/1

MisuseCase,
@MisuseCase@twit.social

One is that the “ad” links in Bing Chat aren’t as clearly distinguished from regular search results as they are in a non-chat search engine format. (There’s a label, but it’s small and hard to see.)

Another is that Microsoft didn’t have ads in Bing Chat for the first six months and just sort of quietly snuck them in with these barely visible labels. People who have been using Bing Chat for a while may not realize that they’re there.

/2

MisuseCase,
@MisuseCase@twit.social

But also…the conversational and personal format of #bingchat, in contrast to the rather impersonal nature of traditional search results, lulls users and gets them to let their guard down.

I am not blaming the users here, I am totally blaming #Microsoft for 1) tricking people and 2) not checking what anyone puts on its ad delivery platform as long as they fork over money, like anyone else who runs an ad delivery platform.

/3

MisuseCase,
@MisuseCase@twit.social

We could be using #AI tools to help with things like searching for malware or developing and tailoring security controls. A company like #Microsoft could use it for some of these things (they have a widely used threat modeling methodology and associated tool). Instead they are using it to trick people with malvertising and make #cybersecurity worse.

/end

trochee,
@trochee@dair-community.social

@MisuseCase

I find "searching for malware" to be a plausible use case for some of these "transformer" neural architectures.

But I don't really see how in this use case there's a need for massive pretraining/extractive-data-processing on most of the Internet's text and images, which is what's been dubbed "AI" in the popular mind.

The threat modeling tool... maybe I don't understand its users, but I'm having a hard time imagining how using AI there would make the tool more effective

MisuseCase,
@MisuseCase@twit.social

@trochee If a threat modeling tool is trained on and updated with known sources of threat and vulnerability information, such as MITRE’s ATT&CK knowledge base, vulnerability databases (there are lots), and malware signature tools, it would be very useful and take a lot of grunt work out.

This is very different from an LLM or what have you and much more narrowly focused.
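As a sketch of what that kind of narrowly focused tool might do, here is a tiny join of a curated technique catalog against a vulnerability feed. All entries and names below are invented for illustration; they are not real ATT&CK or CVE data.

```python
# Illustrative sketch: a tiny, curated knowledge base in the spirit of
# ATT&CK techniques and CVE records (entries are made up, not real data).
ATTACK_TECHNIQUES = {
    "T1059": {"name": "Command and Scripting Interpreter", "tactic": "Execution"},
    "T1190": {"name": "Exploit Public-Facing Application", "tactic": "Initial Access"},
}

CVE_FEED = [
    {"id": "CVE-0000-0001", "technique": "T1190", "product": "ExampleServer"},
    {"id": "CVE-0000-0002", "technique": "T1059", "product": "ExampleShell"},
]

def findings_for_product(product: str) -> list[dict]:
    """Join the CVE feed against the technique catalog for one product."""
    results = []
    for cve in CVE_FEED:
        if cve["product"] == product:
            tech = ATTACK_TECHNIQUES[cve["technique"]]
            results.append({
                "cve": cve["id"],
                "technique": tech["name"],
                "tactic": tech["tactic"],
            })
    return results
```

A real tool would pull from the live knowledge bases instead of hard-coded dicts, but the point stands: the lookup is deterministic and the sources are curated, which is the opposite of scraping the open web.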

trochee,
@trochee@dair-community.social

@MisuseCase indeed, specialized search and similarity recommendations ("this malware profile looks like a combination of vector P and RCE exploit Q, maybe check that out")

I think I'm just observing that generality is what the "AI" hype is betting it all on (never mind that the "G" in GPT actually stands for Generative), and this kind of specialized search is explicitly not generalized
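The "looks like a combination of P and Q" idea above is essentially a nearest-neighbor search over behavior features. A minimal sketch, with made-up feature names and profile vectors, using plain cosine similarity:

```python
from math import sqrt

# Illustrative sketch: each profile is a vector over curated behavior
# features (names and values are invented for the example).
FEATURES = ["phishing_lure", "macro_dropper", "rce_exploit", "c2_beacon"]

KNOWN_PROFILES = {
    "vector_P": [1, 1, 0, 0],   # delivery-side behaviors
    "exploit_Q": [0, 0, 1, 1],  # exploitation and callback behaviors
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest_profiles(sample, top=2):
    """Rank known profiles by similarity to an observed sample."""
    ranked = sorted(KNOWN_PROFILES.items(),
                    key=lambda kv: cosine(sample, kv[1]), reverse=True)
    return [(name, round(cosine(sample, vec), 3)) for name, vec in ranked[:top]]
```

A sample like `[1, 0, 1, 1]` scores against both known profiles, which is exactly the "combination of P and Q" observation, and none of it requires pretraining on the open Internet.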

MisuseCase,
@MisuseCase@twit.social

@trochee Yeah, I think there’s a gold rush for it and it actually has limited utility for…most things. Really useful AI tools are going to be much more tailored to purpose and their knowledge bases will be carefully curated.

The issue is, building an AI tool like that requires time and money! Nobody wants to invest time and money in things anymore if they can help it. But for a lot of cybersecurity stuff the curated knowledge bases already exist.

MisuseCase,
@MisuseCase@twit.social

@trochee When I envision AI-powered cybersecurity tools I am thinking about tools that use data from specific, carefully curated sources - some of which already have APIs and are used as feeders for existing scanning tools, human-readable interfaces, etc. Scraping the whole Internet is not really useful for my purposes but something that combines all these knowledge bases and automates a lot of what is usually repetitive work would be super useful.
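Much of that repetitive work is just correlating overlapping records from different feeds. A hedged sketch of the merge step, with invented scanner output (real tools would pull from the feeds' APIs):

```python
# Illustrative sketch: merge overlapping findings from two curated feeds,
# keyed by (CVE, host). Entries are invented, not real scanner output.
SCANNER_A = [{"cve": "CVE-0000-0001", "host": "web01"},
             {"cve": "CVE-0000-0002", "host": "db01"}]
SCANNER_B = [{"cve": "CVE-0000-0001", "host": "web01", "severity": "high"}]

def merge_findings(*feeds):
    """Deduplicate findings across feeds; later feeds enrich earlier ones."""
    merged = {}
    for feed in feeds:
        for rec in feed:
            key = (rec["cve"], rec["host"])
            merged.setdefault(key, {}).update(rec)
    return sorted(merged.values(), key=lambda r: (r["cve"], r["host"]))
```

Here the duplicate `CVE-0000-0001` record collapses into one enriched finding, which is the kind of grunt work an analyst currently does by hand.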

inthehands,
@inthehands@hachyderm.io

@MisuseCase @trochee
Without having any informed opinion on this specific notion, I do think carefully curated and documented training datasets are a necessary condition for many LLM use cases (for trust, for copyright concerns, for garbage / toxic content, etc).
