I'm as anti-"AI" as the next person, but I think it's important to keep in mind the larger strategic picture of "AI" w.r.t. #search when it comes to #DuckDuckGo - both share the problems of inaccurate information, mining the commons, etc. But Google's use of LLMs in search is specifically a bid to cut the rest of the internet out of information retrieval and treat it merely as a source of training data - replacing traditional search with #LLM search. That includes a whole ecosystem of surveillance and enclosure of information systems, including assistants, chrome, android, google drive/docs/et al., and other vectors.
DuckDuckGo simply doesn't have the same market position to do that, and their system is set up as just an allegedly privacy-preserving proxy. So while I think more new search engines are good and healthy, and LLM search is bad and doesn't work, I think we should keep the bigger picture in mind to avoid being reactionary, and I don't think the mere presence of LLM search is a good reason to stop using it.
@plaidtron3000
Totally. With this I assume it's just trying to keep rough feature parity for people who think AI search is good, and with this and other recent moves I also wonder how much their hand is forced by their relationship with Microsoft via Bing. Like I said, I think more search engines are good, but as usual it's good to have something you can recommend to someone who is not a tech person at all - just a normal website they can use everywhere they use google.
@plaidtron3000
TBC, I am not a DDG diehard; I just think it's the least bad option in the general-use search engine space ATM. Though I also love searXNG and think distributed social bookmarking and indexing is the way forward.
Seeing people praise #copilot for finally getting rid of hallucinations through simple RAG techniques of checking for reality in e.g. citations. This moment - where many of the trivial claims against #LLMs stop being true while the deeper harms of surveillance and information monopoly remain - was inevitable, and it is the chief danger of dismissing them as "fancy autocomplete." That is why I wrote this almost a year ago, as a warning of what comes next and what we can do about it: https://jon-e.net/surveillance-graphs/ #SurveillanceGraphs
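For anyone unfamiliar, the "checking for reality in citations" step being praised is roughly the toy sketch below: retrieve passages, make the model cite them, then drop any claim its cited passage doesn't actually support. This is my illustration of the general technique, not copilot's actual pipeline; `retrieve`, `generate`, and `entails` are hypothetical stand-ins for whatever search index, language model, and verification check a real system uses.

```python
# Toy sketch of citation-checked RAG (not any product's actual pipeline).
# retrieve/generate/entails are placeholders for a search index, a language
# model call, and a claim-verification check.

def retrieve(query: str, k: int = 5) -> list[dict]:
    """Return passages like {"id": "1", "text": "..."} from a search index."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Call the underlying language model."""
    raise NotImplementedError

def entails(passage: str, claim: str) -> bool:
    """Check whether the passage actually supports the claim."""
    raise NotImplementedError

def answer_with_citations(query: str) -> str:
    passages = retrieve(query)
    context = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    draft = generate(
        "Answer using only the passages below, citing them as [id].\n"
        f"{context}\n\nQuestion: {query}"
    )
    by_id = {str(p["id"]): p["text"] for p in passages}
    kept = []
    for sentence in draft.split(". "):
        cited = [pid for pid in by_id if f"[{pid}]" in sentence]
        # keep only sentences whose cited passage actually supports them
        if cited and any(entails(by_id[pid], sentence) for pid in cited):
            kept.append(sentence)
    return ". ".join(kept) or "No supported answer found."
```

It mitigates the most visible failure mode without touching any of the structural problems, which is exactly the point.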
@jonny A footnote: RAG mitigates hallucinations, but it doesn't eliminate them.
I've had one RAG system claim that encrypting my hard drive would protect against data loss if the power gets cut. It even gave a citation (which said nothing of the sort). Another invented a bunch of non-existent functionality in a piece of software (referencing a manual that didn't support its claims).
Wordpress, Amazon and MDN have deployed RAG systems that also still made shit up.
Molly White is right as usual: "We’ve already tried out having a tech industry led by a bunch of techno-utopianists and those who think they can reduce everything to markets and equations. Let’s try something new, and not just give new names to the old."
trying to articulate new ideologies for computing is where my mind has been at the last few years too. i joke about the 'anti-perf manifesto,' but forging imaginaries that can run on computers that are actively antagonistic to the techno-utopians is all about killing myths of heroism where we are the someone else who goes out and "brings home the spoils." how do we reach a computing that isn't foundationally based on asymmetric power - we serfs at the mercy of the lord of the platform, and vice versa, we altruistic platform providers building things the commoners couldn't possibly understand? The language of "scale," where one or a few services need to expand to provide for millions, hides futures where we can provide for each other horizontally in overlapping quilts of dozens, hundreds. You could shorthand the "#AI" boom as the continuation of the information conglomerates trying to provide the everything platform, and if our dreams are to meaningfully challenge theirs we can't also aspire to simply "do what they're doing, except it's us doing it."
I tried to articulate this as the cloud orthodoxy vs. a still-nebulous idea i've landed on as vulgarity in computing, but i'll probably be orbiting this idea for as long as i am on line.
@mauve yeah agreed. I think most wiki engines went the route of "let's be the everything app platform" (xwiki is explicitly this) rather than "let's distill something core about the wiki model and make it wildly interoperable" - most of what I have reused from that package is the interface to mediawiki, whose API is mysteriously godawful. I think if they looked more like ways to link a bunch of subpage chunks from different mediums together they would be a lot more interesting, and a way of bridging interfaces like discord and fb by writing adapters that represent their slots and verbs (roughly sketched below).
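Roughly what I mean by "slots and verbs," as a toy sketch - none of these names belong to any existing wiki or bridge package, they're just an illustration of adapters exposing where content can live and what you can do to it:

```python
# Toy sketch of the "slots and verbs" adapter idea: each medium exposes the
# places content can live (slots) and the actions it supports (verbs), so
# chunks from different mediums can be linked together. All names here are
# hypothetical, not any existing package's API.

from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Slot:
    medium: str        # e.g. "mediawiki", "discord", "fb"
    address: str       # page/section, channel/message id, etc.

@dataclass
class Chunk:
    slot: Slot
    body: str
    links: list[Slot] = field(default_factory=list)  # cross-medium links

class Adapter(Protocol):
    def slots(self) -> list[Slot]: ...                    # what's addressable here
    def read(self, slot: Slot) -> Chunk: ...              # verb: fetch a chunk
    def write(self, slot: Slot, body: str) -> None: ...   # verb: post or edit
    def link(self, src: Slot, dst: Slot) -> None: ...     # verb: cross-link chunks
```

The point being that anything implementing that tiny surface could be quilted into the same graph of chunks, rather than every engine growing its own everything-platform.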
@mauve it's a shame the matrix bridges are so promising and yet so limiting. The double puppeting stuff is cool but extremely underused, just doing 1:1 mirrors of channels across mediums - it was easier to abandon matrix bridges altogether than it was to do an all-to-all bridge between multiple channels across slack and discord.
#Amazon releases details on its Alexa #LLM, which will use its constant surveillance data to "personalize" the model. Like #Google, they're moving away from wakewords towards being able to trigger Alexa contextually - when the assistant "thinks" it should be responding, which of course requires continual processing of speech for content, not just a word.
The consumer page suggests user data is "training" the model, but the developer page describes exactly the kind of augmented LLM - an iterative generation process grounded in a personal knowledge graph - that Microsoft, Facebook, and Google all describe as the next step in LLM tech.
We can no longer think of LLMs on their own when we consider these technologies; that era was brief and has passed. I've been waving my arms up and down about this since ChatGPT was released - criticisms of LLMs that stop short at their current form, arguing about whether the language models themselves can "understand" language, miss the bigger picture of what they are intended for. These are surveillance technologies that act as interfaces to knowledge graphs and external services, putting a human voice on whole-life surveillance.
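To make the pattern concrete, the developer-facing description amounts to something like the sketch below: the language model plans, queries a personal knowledge graph built from accumulated surveillance data, calls out to external services, and then phrases a reply. This is my paraphrase of the pattern, not Amazon's (or anyone's) actual API; every name here is a placeholder.

```python
# Sketch of the "augmented LLM" pattern: the language model is just the
# interface; the substance comes from a personal knowledge graph and external
# services. All names are placeholders, not any vendor's actual API.

from typing import Any, Callable

def llm(prompt: str) -> str:
    """Call the underlying language model."""
    raise NotImplementedError

def assistant_turn(
    utterance: str,
    knowledge_graph: dict[str, Any],            # accumulated whole-life data
    services: dict[str, Callable[[str], str]],  # e.g. shopping, smart home
) -> str:
    # 1. the model decides what context it needs
    plan = llm(f"List the lookups and actions needed to answer: {utterance}")
    context: list[str] = []
    for step in plan.splitlines():
        if step.startswith("lookup:"):
            key = step.removeprefix("lookup:").strip()
            # 2. ground the response in the user's personal graph
            context.append(str(knowledge_graph.get(key, "")))
        elif step.startswith("action:"):
            name, _, arg = step.removeprefix("action:").strip().partition(" ")
            if name in services:
                # 3. call out to an external service
                context.append(services[name](arg))
    # 4. iterate: phrase a reply grounded in the gathered context
    return llm(f"Reply to {utterance!r} using this context: {context}")
```

The language model is the smallest part of that loop; the knowledge graph and the services it reaches into are where the surveillance and the lock-in live.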
@jonny @ewhac The patents on beamforming were all about steering the mic arrays. Our brains and ears do a lot of directional filtering that computers just can't do out of the box.
And yep, manual annotation is the only way to build a golden set that you trust, especially on such a wide-ranging data set!
And my apologies if I came off heavy-handed; I forgot I'd moved Alexa out of my profile and robbed you of that context.
So Alexa (and others, surely) went from Always Listening, hoping to hear a magic word by interpreting every sound it hears at HQ, to Always Listening, planning to insert itself into the conversation when commercial opportunities avail themselves.
When you talk to a family member about this pain you've been having, will the underpaid contract Amazon driver show up with aspirin or do you have to opt out?
The NYTimes story on AI writing the news is a story about the repackaging of the knowledge graph. The language model is just an interface. Repackaging as an assistant, the examples of broken factboxes, the sale as a labor-saving device, "we don't intend to replace your writers, we want to give you more convenient access to factual information" - here's a piece that should help make sense of that. #SurveillanceGraphs https://jon-e.net/surveillance-graphs/#the-lens-of-search-re-centers-our-focus-away-from-the-generative
The rewriting-titles idea is perfectly in line with what they discuss in their investor calls in the context of advertising. It's a natural move if you see the LLMs as scope-limited enterprise tools that are intended to hook companies into dependence on their information access systems (consolidation of power) and hook people into them as a means of interacting with an ecosystem of apps, commerce, etc. (intimacy of surveillance).
The debate about whether the LLMs are sentient is not serving us well. It's true, of course they aren't sentient, but at this point that framing is obscuring more of the truth of the strategy than it is inoculating us against it. Whether the LLMs are sentient is irrelevant because the plan was never to just continue to use the LLMs on their own. They are interfaces to other systems, and can be presented as tools conditioned by "factual information."
They won't work as advertised, of course, but we have to be very clear about the threat: the threat is not that LLMs will write the news. That's already happening - do any search. The threat is that the LLMs will be used to leverage greater control over our access to information by destabilizing our already fragile information ecosystem while presenting themselves as precisely not sentient, but handy assistants for interacting with trusted databases - the last trustable sources of information left.
The addition of context-optimized clickbait headers for those willing to pay to be the brand beneath them is just an especially cynical product to sell to whichever suckers are desperate enough to buy it.
A bit of an overview and then I'll get into some of the more specific arguments in a thread:
This piece is in three parts:
First I trace the mutation of the liberatory ambitions of the #SemanticWeb into #KnowledgeGraphs, an underappreciated component in the architecture of #SurveillanceCapitalism. This mutation plays out against the backdrop of the broader platform capture of the web, rendering us as consumer-users of information services rather than empowered people communicating over informational protocols.
I then show how this platform logic influences two contemporary public information infrastructure projects: the NIH's Biomedical Data Translator and the NSF's Open Knowledge Network. I argue that projects like these, while well intentioned, demonstrate the fundamental limitations of platformatized public infrastructure and create new capacities for harm through their enmeshment in, and inevitable capture by, information conglomerates. The dream of a seamless "knowledge graph of everything" is unlikely to deliver on the utopian promises made by techno-solutionists, but these projects do create new opportunities for algorithmic oppression - automated conversion therapy, predictive policing, abuse of bureaucracy in "smart cities," etc. Given the framing of corporate knowledge graphs, these projects are poised to create facilitating technologies (which the info conglomerates write about needing themselves) for a new kind of interoperable corporate data infrastructure, where a gradient of public to private information is traded between "open" and quasi-proprietary knowledge graphs to power derivative platforms and services.
When approaching "AI" from the perspective of the semantic web and knowledge graphs, it becomes apparent that the new generation of #LLMs are intended to serve as interfaces to knowledge graphs. These "augmented language models" are joint systems that combine a language model as a means of interacting with some underlying knowledge graph, integrated in multiple places in the computing ecosystem: eg. mobile apps, assistants, search, and enterprise platforms. I concretize and extend prior criticism about the capacity for LLMs to concentrate power by capturing access to information in increasingly isolated platforms and expand surveillance by creating the demand for extended personalized data graphs across multiple systems from home surveillance to your workplace, medical, and governmental data.
I pose Vulgar Linked Data as an alternative to the infrastructural pattern I call the Cloud Orthodoxy: rather than platforms operated by an informational priesthood, reorienting our public infrastructure efforts to support vernacular expression across heterogeneous #p2p mediums. This piece extends a prior work of mine, Decentralized Infrastructure for (Neuro)science, which has a more complete draft of what that might look like.
(I don't think you can pre-write threads on masto, so I'll post some thoughts as I write them under this) /1
Though the aims of the projects themselves dip into the colonial dream of the great graph of everything, the true harms for both of these projects come from what happens with the technologies after they end. Many information conglomerates are poised to pounce on the infrastructures built by the NIH and NSF projects, stepping in to integrate their work or buy the startups that spin off from them.
The NSF's Open Knowledge Network is much more explicitly bound to the national security and economic interests of the US federal government, intended to provide the infrastructure to power an "AI-driven future." That project is at a much earlier stage, but in its early sketches it promises to take the same patterns of knowledge-graphs plus algorithmic platforms and apply them to government, law enforcement, and a broad range of other domains.
This pattern of public graphs for private profits is well underway at existing companies like Google, and I assume the academics and engineers in both of these projects are operating with the best of intentions and perhaps playing a role they are unaware of.
@tkuhn @bengo @photocyte @jonny @knowledgepixels
Love this thread synthesizing so many cool directions! Adding our own perspective to the mix (along with @InferenceActive on birdsite): https://osf.io/preprints/metaarxiv/9nb3u/
TL;DR
Trying to make the case that attention/sensemaking data (e.g. what researchers are attending to and their assessments of content) are an important kind of nano-scientific knowledge that gets extracted by platforms instead of helping to power content curation and discovery networks.
ok we might not make it to an arXiv submission today, but the document is all prepped and ready to go except for the abstract, so we will definitely make it tomorrow. phew. finally. #SurveillanceGraphs
sometimes big data solutionism jumps the shark and is just very funny
"harnessing the vast amounts of data generated in every sphere of life and transforming them into useful, actionable information and knowledge is crucial to the efficient functioning of a modern society"