jonny, @jonny@neuromatch.social

The NYTimes story on AI writing the news is a story about the repackaging of the knowledge graph. The language model is just an interface. Repackaging as an assistant, the examples of broken factboxes, the sale as a labor-saving device, "we don't intend to replace your writers, we want to give you more convenient access to factual information" - here's a piece that should help make sense of that.

https://jon-e.net/surveillance-graphs/#the-lens-of-search-re-centers-our-focus-away-from-the-generative

The lens of search re-centers our focus away from the generative capabilities of LLMs towards parsing natural language: one of the foundations of contemporary search and what information giants like Google have spent the last 20 years building. The context of knowledge graphs that span public “factual” information and private “personal” information gives further form to their future. The Microsoft Copilot model above is one high-level example of the intended architecture: LLMs parse natural language queries, conditioned by factual and personal information within a knowledge graph, into computer-readable commands like API calls or other interactions with external applications, whose output can then be translated back into natural language generated by the LLM. Facebook AI researchers describe another “reason first, then respond” system that is more specifically designed to tune answers to questions with factual knowledge graphs [189]. Whether the LLM can “understand” the query is irrelevant; it merely serves as a natural language interface to other systems.
Historically, these personal assistants have worked badly [83] and are rightly distrusted [84] by many due to the obvious privacy violation represented by a device constantly recording ambient audio [85]. Impacts from shifts in assistants might then be limited by people simply continuing not to use them. Knowledge graph-powered LLMs appear to be a catalyst in shifting the form of these assistants to make them more difficult to avoid. There is already a clear push to merge assistants with search — e.g. Bing Search powered by ChatGPT, and Google has merged its Assistant team with the team working on its LLM search, Bard [199]. Microsoft’s Copilot 365 demo also shows an LLM prompt modeled as an assistant integrated as a first-class interface feature in its Office products. Google’s 2022 I/O Keynote switches fluidly between a search-like, document-like, and voice interface with its assistant. Combined with the restructuring of app ecosystems to integrate more tightly with assistants, their emerging form appears to look less like a traditional voice assistant and more like a combined search, app launcher, and assistant underlay that is continuous across devices. The intention is to make the assistant the primary means of interacting with apps and other digital systems. As with many stretches of the enclosure of the web, UX design is used as a mechanism to coerce patterns of expectation and behavior.
Regardless of how well this new iteration of assistants works, the intention of their design is to dramatically deepen the intimacy and intensity of surveillance and further consolidate the means of information access.
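To make the architecture in that excerpt concrete, here is a minimal sketch of the loop it describes: the LLM translates a natural-language query, conditioned on a knowledge graph, into a structured command; an external system executes it; the LLM renders the result back as prose. Every name here (llm_parse, execute, KNOWLEDGE_GRAPH, etc.) is a hypothetical stand-in, not any vendor's actual API, and the "LLM" steps are faked with keyword matching.

```python
from dataclasses import dataclass


@dataclass
class StructuredCommand:
    """A machine-readable action the "LLM" emits, e.g. an API call."""
    action: str
    arguments: dict


# Stand-in for the merged public/private knowledge graph the excerpt
# describes: "factual" entries alongside "personal" ones.
KNOWLEDGE_GRAPH = {
    "user:calendar": {"next_meeting": "Tuesday 10:00"},
    "fact:capital_of_france": {"value": "Paris"},
}


def llm_parse(query: str) -> StructuredCommand:
    """LLM role 1: natural language in, structured command out.
    Faked with keyword matching; a real system would prompt a model,
    conditioned on graph context."""
    if "meeting" in query.lower():
        return StructuredCommand("graph.lookup", {"key": "user:calendar"})
    return StructuredCommand("graph.lookup", {"key": "fact:capital_of_france"})


def execute(cmd: StructuredCommand) -> dict:
    """The external application layer: the command runs against the
    graph (or any other backend), never against the LLM itself."""
    return KNOWLEDGE_GRAPH.get(cmd.arguments["key"], {})


def llm_render(query: str, result: dict) -> str:
    """LLM role 2: structured output back into natural language."""
    return f"{query!r} -> {result}"


if __name__ == "__main__":
    for q in ["When is my next meeting?", "What is the capital of France?"]:
        print(llm_render(q, execute(llm_parse(q))))
```

The shape is the point the excerpt makes: the model's "understanding" never matters, because it only translates at the two edges of the loop, while the graph in the middle is where the factual and personal information actually lives.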

jonny, @jonny@neuromatch.social

The rewriting-titles idea is perfectly in line with what they discuss in their investor calls in the context of advertising. It's a natural move if you see the LLMs as scope-limited enterprise tools that are intended to hook companies into dependence on their information access systems (consolidation of power) and hook people into them as a means of interacting with an ecosystem of apps, commerce, etc. (intimacy of surveillance).

The debate about whether the LLMs are sentient is not serving us well. It's true, of course they aren't sentient, but at this point that framing is obscuring more of the truth of the strategy than it is inoculating us against it. Whether the LLMs are sentient is irrelevant because the plan was never to just continue to use the LLMs on their own. They are interfaces to other systems, presented as tools that can be conditioned by "factual information."

They won't work as advertised, of course, but we have to be very clear about the threat:
The threat is not that LLMs will write the news. That's already happening; run any search.
The threat is that the LLMs will be used to leverage greater control over our access to information by destabilizing our already fragile information ecosystem and presenting themselves as precisely not sentient, but as handy assistants for interacting with trusted databases - the last trustable sources of information left.

The addition of context-optimized clickbait headlines for those willing to pay to be the brand beneath them is just an especially cynical product to sell to whichever suckers are desperate enough to buy it.

https://jon-e.net/surveillance-graphs/#the-most-obvious-power-grab-from-pushing-kg-llms-in-place-of-sea
