jonny,
@jonny@neuromatch.social avatar

Seeing people praise #copilot for finally getting rid of hallucinations through simple RAG techniques, e.g. checking citations against reality. This moment, where many of the trivial claims against #LLMs stop being true but the deeper harms of surveillance and information monopoly remain, was inevitable, and it is the chief danger of dismissing them as "fancy autocomplete." That is why I wrote this almost a year ago, as a warning of what comes next and what we can do about it: https://jon-e.net/surveillance-graphs/
#SurveillanceGraphs

datarama,
@datarama@hachyderm.io avatar

@jonny A footnote: RAG mitigates hallucinations, but it doesn't eliminate them.

I've had one RAG system claim that encrypting my hard drive would protect against data loss if the power gets cut. It even gave a citation (which said nothing of the sort). Another invented a bunch of non-existent functionality in a piece of software (referencing a manual that didn't support its claims).

WordPress, Amazon, and MDN have deployed RAG systems that also still made shit up.
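A minimal sketch of a generic RAG loop (hypothetical helper names and a toy corpus, not any specific deployed system) shows why retrieved citations don't guarantee grounded answers: retrieval only decides which passages get pasted into the prompt, and nothing in the mechanism constrains the generated text to stay within those sources.

```python
# Minimal sketch of a generic RAG loop (hypothetical names, not any vendor's
# actual pipeline). The point: retrieval only changes what goes into the
# prompt; nothing forces the generated answer to stay within the sources.
from typing import Callable, List

DOCS = {
    "disk-encryption.html": "Full-disk encryption protects data confidentiality "
                            "if the device is lost or stolen.",
    "backup-guide.html": "Regular backups protect against data loss from power "
                         "failure or disk corruption.",
}

def retrieve(query: str, k: int = 2) -> List[str]:
    """Naive keyword-overlap retrieval standing in for a vector search."""
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(set(query.lower().split()) & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

def answer(query: str, generate: Callable[[str], str]) -> str:
    """Stuff retrieved passages into the prompt, then let the model generate freely."""
    sources = retrieve(query)
    context = "\n".join(f"[{name}] {DOCS[name]}" for name in sources)
    prompt = f"Sources:\n{context}\n\nQuestion: {query}\nAnswer with citations:"
    # The model can still assert things the sources never said and attach a
    # real-looking citation to them -- the failure mode described above.
    return generate(prompt)

if __name__ == "__main__":
    # Stand-in for an LLM call; a real model is just as unconstrained.
    fake_llm = lambda prompt: ("Encrypting your drive protects against data loss "
                               "from power cuts [disk-encryption.html].")
    print(answer("Does encrypting my hard drive protect against power loss?", fake_llm))
```

Running this prints an answer that cites a real retrieved source for a claim that source never makes, which is exactly the citation-that-says-nothing-of-the-sort behaviour described above.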

jonny,
@jonny@neuromatch.social avatar

@datarama
Yes, of course. I am not suggesting it actually works: https://neuromatch.social/@jonny/111850288640838937

jonny,
@jonny@neuromatch.social avatar

Criticizing arguments you agree with is hard, but rewind to last March, when all the criticism was focused on "are these things sentient or not" and the dominant counterargument to LLMs was that they were Just Text Generators. That was true, and it will remain true that they aren't "AI," but that argument doesn't address their intended use as surveillance vectors disguised as tools. The risk of that strategy was that it would evaporate once the models were grounded in knowledge graphs, leaving the vast majority of non-ideologically-aligned tool users with nothing but LGTM on a cool new set of platforms. Hopefully the assistant and enterprise platform thrust is no longer unexpected, but that's just the normalization phase of the new cloud enclosure era.

To be very clear: I am not arguing that just because the tech conglomerates are promising magic that they will deliver it, almost precisely the opposite. I am not taking the claims made in research and public communications from these companies at face value and projecting theoretical risks. My argument is that these technologies won’t work and that’s worse. As with search, the fuzziness and uninspectable failure of these systems is a feature not a bug. The harms I will describe are not theoretical future apocalypses, but deepen existing patterns of harm. Most of them don’t require mass gullibility or even particularly sophisticated technologies, but are impacts of a particular ideological mode of infrastructure development that includes bypassing much of the agency individual people might otherwise have to avoid them. Two prominent forms of the combined knowledge graph + LLM infrastructure that are in focus are their use in “personal assistants” and tailored enterprise platforms.

jonny,
@jonny@neuromatch.social avatar

That part about "bypassing much of the agency individual people might otherwise have to avoid them" is reflected in #Google integrating #Bard into the Android messenger. While academics worry about AI sentience and toy with these systems as objects of study, this is capitalism, baby, and the information conglomerates have come to kill each other and capture the market now that investors are calling due on stagnant ad money. It doesn't matter if the LLMs work. That's about as effective an argument as saying politicians lie. They're just the pretense for a new modality of surveillance spanning individuals, groups, businesses, and governments.
