Google “We Have No Moat, and Neither Does OpenAI” - Google insider admits that Open Source #AI is already far ahead of anything Google, Microsoft or any other private corp is working on. Not surprising. Release exciting, high-potential new technology into the wild and this is what happens. So Google asks: Where do we add value to the process? Good question. Or is this a case where legacy industry giants simply won’t have a role? #OpenSource is leading the way, and that’s both encouraging AND somewhat alarming. Alarming because the potential for negative outcomes, fraud, misinformation and massive job loss remain in the absence of government oversight and regulation. #ArtificialIntelligence#LLM#ChatGPT#GPT4#singularityhttps://www.semianalysis.com/p/google-we-have-no-moat-and-neither
I'm intrigued by @simon's exploration of #LLM prompt injections.¹ It reminds me #GEB Contracrostipunctus — If Record Player X is sufficiently high-fidelity, then when it attempts to play the song 'I Cannot Be Played on Record Player X', it will break.² And jumping out of the system with "Record Player Omega" that works by scanning the record and reconfiguring the player into something which won’t break.³
I wonder if future #LLM-driven IDEs will look more like GitHub than a code editor: you file an "issue" to tell the bot what you need, it creates a PR for you to review, you test it, review it, make some changes, ask it to change other things and refine your specs in the issue until you're happy and merge.
That's the workflow I find myself using with ChatGPT, but all the copying and pasting between browser and IDE, and repetitions in the chat make it clumsy. GitHub solved most of the UX for that already.
Little more hands-on with a certain #LLM for some time now.
Once I learned what the "stop sequence" is actually good for, my instinctive ascription of at least a little bit of personality to the thing disappeared immediately.
I find the "oh, I know them, they're good people" argument about A.I. researchers to be such a tired and garbage statement. It doesn't matter how good the scientists who worked on the atomic bomb were, they still ushered in the age of mutually assured destruction.
I'd like to reiterate that my concern about #AI is not that it's going to become conscious, or that it's going to displace a huge portion of the workforce (it will) but before it does any of that it's going to be leveraged by bad actors to flood society with truthy sounding bullshit the likes of which we've never seen and it will lead to killing in the streets.
@mmitchell_ai maybe we could help kick start the collaboration of leading scientists on ways of controlling #LLM technology by improving the discourse on this very platform in such away that it brings together the many sources of academic expertise that are already here?
People overestimate what "AI" of today is and what it can actually do, because memes are spreading that describe any machine-assisted process in a highly glossed-over form, ignoring the required human effort to make it work.
The naive impression is that you just gave some generative engine a prompt and the result came out fully formed, when the actual process is that the people behind the project used multiple purpose-built engines, for each of the engines they iterated on prompts that would output something semi-coherent, and then they used human efforts to tie the result together.
This is currently spreading as "this AI-generated pizza commercial" with no further explanation, but Tom's Hardware interviewed the actual people who made it work:
Menschen empfinden als harmonisch, was sie gewohnt sind. Sie mögen, was sie kennen, am liebsten leicht geremixt, aber wiedererkennbar.
Auftritt "#KünstlicheIntelligenz": Bildgeneratoren rekombinieren wohlbekannte Bildinhalte und Stile. #LLM-Textgeneratoren ermitteln aus riesigen Textkorpora, welche Wörter am gewöhnlichsten aufeinander folgen. Sie sind beliebt und angenehm, weil sie harmonische Ergebnisse erzeugen - außer dem Betrachter fällt eine Disharmonie auf, insbesondere eine zur Realität.
Are LLMs generic writers - or rather generic books? Would anyone say of a book, it had theory of mind or feelings?
What if the #LLM is a compressed storage of utterances of human actors? The utterances are what we are used to see as signs of the actors’ knowledge or theory of mind or feeling.
That compression is not lossless. The decompression, triggered by prompt, does not restore the original faithfully. We misunderstand the errors of the book as creativity of a writer.
• Generative AI models learn from mass data scraped from web
• Indigenous groups fear losing control over their data
• Some move to protect their information from commercial use
"When U.S. tech firm OpenAI rolled out Whisper, a speech recognition tool offering audio transcription and translation into English for dozens of languages including Māori, it rang alarm bells for many Indigenous New Zealanders.
"Whisper, launched in September by the company behind the ChatGPT chatbot, was trained on 680,000 hours of audio from the web, including 1,381 hours of the Māori language."
"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/
"Recent research has taken this approach, training language models [#LLM's] to generate physics simulations, interact with physical environments and even generate #robotic action plans.
Embodied language understanding might still be a long way off, but these kinds of multisensory interactive projects are crucial steps on the way there."
...in disagreement about the definition and the terminology.
E.g. in this @reuters article, they state that #LLM's are a form of #GenerativeArtificialIntelligence (#GAI), while also starting that "Like other forms of artificial intelligence, generative AI learns how to take actions from past data. "