jonny (@jonny@neuromatch.social)

Amazon releases details on its LLM-powered Alexa, which will use its constant surveillance data to "personalize" the model. Like Google, they're moving away from wake words towards being able to trigger Alexa contextually - when the assistant "thinks" it should be responding, which of course requires continual processing of speech for content, not just listening for a wake word.

The consumer page suggests user data is "training" the model, but the developer page describes exactly the augmented-LLM, iterative-generation process grounded in a personal knowledge graph that Microsoft, Facebook, and Google all describe as the next step in LLM tech.

https://developer.amazon.com/en-US/blogs/alexa/alexa-skills-kit/2023/09/alexa-llm-fall-devices-services-sep-2023
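The developer post describes this pattern in prose rather than code, but the shape of it is simple enough to sketch. Below is a minimal, hypothetical illustration of an "augmented LLM" grounded in a personal knowledge graph: the model answers from retrieved facts about the user rather than from its weights alone, and can iteratively request more context. Every name here (PersonalKnowledgeGraph, answer, the LOOKUP convention, the stub llm) is my own invention for illustration, not Amazon's API.

```python
# Minimal sketch of iterative, KG-grounded generation (hypothetical, not Amazon's code).
from dataclasses import dataclass, field


@dataclass
class PersonalKnowledgeGraph:
    # (subject, relation, object) triples assembled from per-user data
    triples: list[tuple[str, str, str]] = field(default_factory=list)

    def lookup(self, query: str) -> list[tuple[str, str, str]]:
        # Naive retrieval: return triples whose terms appear in the query.
        terms = set(query.lower().split())
        return [t for t in self.triples
                if any(part.lower() in terms for part in t)]


def answer(query: str, kg: PersonalKnowledgeGraph, llm) -> str:
    """Retrieve personal facts, prompt the model, and let it ask for more
    context until it stops issuing LOOKUP requests (bounded rounds)."""
    context = kg.lookup(query)
    for _ in range(3):
        prompt = (
            "Known facts about this user:\n"
            + "\n".join(" - ".join(t) for t in context)
            + f"\nUser asks: {query}\n"
            "Reply, or emit LOOKUP:<terms> if you need more facts."
        )
        reply = llm(prompt)  # any text-completion callable
        if not reply.startswith("LOOKUP:"):
            return reply
        context += kg.lookup(reply.removeprefix("LOOKUP:"))
    return reply


# Toy usage with a stubbed-out model:
kg = PersonalKnowledgeGraph(triples=[("user", "owns", "bike"),
                                     ("bike", "brand", "Acme")])
print(answer("what brand is my bike", kg, llm=lambda prompt: "Acme"))
```

The point of the sketch is where the value sits: the language model is interchangeable, while the per-user knowledge graph - the surveillance data - is what makes the "personalization" work.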

We can no longer think of LLMs on their own when we consider these technologies; that era was brief and has passed. I've been waving my arms up and down about this since ChatGPT was released - criticisms of LLMs that stop short at their current form, arguing about whether the language models themselves can "understand" language, miss the bigger picture of what they are intended for. These are surveillance technologies that act as interfaces to knowledge graphs and external services, putting a human voice on whole-life surveillance.

https://jon-e.net/surveillance-graphs/#the-near-future-of-surveillance-capitalism-knowledge-graphs-get-chatbots

Interest in these multipart systems is widespread, and arguably the norm: A group of Meta researchers describe these multipart systems as “Augmented Language Models” and highlight their promise as a way of “moving away from language modeling” [190]. Google’s reimaginations of search also make repeated reference to interactions with knowledge graphs and other systems [184]. A review of knowledge graphs with authors from Meta, JPMorgan Chase, and Microsoft describes a consensus view that knowledge graphs are essential to compositional behavior in AI [5]. Researchers from DeepMind (owned by Google) argue that research focus should move away from simply training larger and larger models towards “inference-time compute,” meaning querying the internet or other information sources [191].
The immersive and proactive design of KG-LLM assistants also expands the expectations of surveillance. Current assistant design is based around specific hotwords: unless someone explicitly invokes the assistant, the expectation is that it shouldn’t be listening. Like the shift in algorithmic policing from reactive to predictive systems, these systems are designed to make use of recent context to actively make recommendations without an explicit query. Google demonstrates being able to interact with an assistant by making eye contact with a camera in its 2022 I/O keynote [194]. A 2022 Google patent describes a system for continuously monitoring multiple sensors to estimate the level of intended interaction with the assistant, calibrating whether it should respond and in what detail. The patent includes examples like observing someone with multiple sensors as they ask aloud “what is making that noise?” and look around the room, indicating an implicit intention of interacting with the assistant so it can volunteer information without explicit invocation [201]. A 2021 Amazon patent describes an assistant listening for infra- and ultrasonic tags in TV ads so that if someone asks how much a new bike costs after seeing an ad for a bike, the assistant knows to provide the cost of that specific bike [202]. These UX changes encourage us to accept truly continual surveillance in the name of convenience — it’s good to be monitored so I can ask Google “what time is the game.”
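The patent describes behavior rather than an implementation, but the interaction model it sketches is roughly the following: fuse several always-on sensor signals into an estimate of "intent to interact," and let that estimate decide whether the assistant responds at all and how verbosely. This is a toy sketch under my own assumptions; the signal names, weights, and thresholds are all arbitrary and hypothetical, not the patented system.

```python
# Toy sketch of implicit (wake-word-free) invocation via sensor fusion (hypothetical).
from dataclasses import dataclass


@dataclass
class SensorFrame:
    gaze_at_device: float     # 0..1, from camera-based gaze estimation
    speech_detected: bool     # microphone voice-activity detection
    query_like_speech: float  # 0..1, how question-like the utterance sounds
    proximity_m: float        # estimated distance of the speaker from the device


def interaction_intent(frame: SensorFrame) -> float:
    """Weighted blend of sensor cues; the weights here are arbitrary."""
    score = 0.4 * frame.gaze_at_device
    score += 0.3 * frame.query_like_speech if frame.speech_detected else 0.0
    score += 0.3 * max(0.0, 1.0 - frame.proximity_m / 5.0)
    return score


def respond(frame: SensorFrame) -> str:
    """No wake word: the intent score alone decides whether to answer."""
    intent = interaction_intent(frame)
    if intent > 0.7:
        return "full spoken answer"
    if intent > 0.4:
        return "brief on-screen suggestion"
    return "stay silent (but the sensors kept processing regardless)"


# e.g. someone glances at the device and asks a question from across the room
print(respond(SensorFrame(gaze_at_device=0.9, speech_detected=True,
                          query_like_speech=0.8, proximity_m=2.0)))
```

Note what the threshold actually gates: only the response, not the sensing. Every branch, including "stay silent," presumes the cameras and microphones were already being processed continuously.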
This pattern of interaction with assistants is also considerably more intimate. As noted by the Stochastic Parrots authors, the misperception of animacy in assistants that mimic human language is a dangerous invitation to trust them as one would another person — and with details like Google’s assistant “telling you how it is feeling,” these companies seem eager to exploit it. A more violent source of trust prominently exploited by Amazon is insinuating a state of continual threat and selling products to keep you safe: its subsidiary Ring’s advertising material is dripping with fantasies of security and fear, and its doglike robot Astro and literal surveillance drone are advertised as trusted companions that can patrol your home while you are away [203, 204, 205]. Amazon patents describe systems for using the emotional content of speech to personalize recommendations and systems for being able to “target campaigns to users when they are in the most receptive state to targeted advertisements” [206, 207]. The presentation of assistants as always present across apps, embodied in helpful robots, or as other people, e.g. by being present in a contact list, positions them to take advantage of people in emotionally vulnerable moments. Researchers from the Center for Humane Technology describe an instance where Snapchat’s “My AI,” accessible from its normal chat interface, encouraged a minor to have a sexual encounter with an adult they met on Snapchat (47:10 in [208]).
