@arstechnica '... he asked the AI tool whether Varghese is a real case. ChatGPT answered that it "is a real case" ... When asked if the other cases provided by ChatGPT are fake, it answered, "No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw." '
If accurate, this illustrates to me the core of the issue with many people's use of so-called #AI tools: this lawyer bought the hype that these #LLMs are verging on human-like intelligence and could understand and answer his question. In reality, they were only constructing a statistically probable sequence of words that might be written in response to such a query, with no relationship to the facts of the matter.
A few of my many thoughts on this hot mess of a GPT story, relayed by @kendraserra:
These events really give the lie to the putative imminent replacement of professionals by AI. I’m quite certain some VC out there has money sunk into “LLMs will replace lawyers!” I know they’re trying with coders, therapists, tech support…and…just look at this frigging mess.
@inthehands but the power of generative AI could also be leveraged by people with smaller means than big corporations. I think #OpenAI rolled it out in public too rapidly, though, and then changed to a for-profit model. They should change their name. I prefer open-source #LLMs like #Vicuna, which is as good as #GPT3 and #GPT4.
We shouldn't be worrying about AI wiping out humanity. That's a smokescreen. That's sci-fi. We need to worry about the individuals now and in the near future who can be hurt by the premature deployment of generative AI systems that spew wrong answers and lies, and then when asked for confirmation, lie about their own lies! And just popping up warnings to users is useless, because you know and I know that hardly anyone will read those warnings or pay any attention to them whatsoever.
The warnings are also cold comfort to the loved ones of the guy a vigilante gang beat to death because some generated content somewhere labelled the man in their neighbourhood an unrepentant pedophile.
I don't know what the solution is. Mandate that an indicator of its origin be prominently displayed within the generated text? How could that work?
@martin@buzz
Even "confabulation" is anthropomorphising the #LargeLanguageModel in very much the same way as "hallucinate" does. The #LLM memory only models relationships between words. It does not model relationships between words and the physical world. It has no concept of there even being a physical world beyond the words in its training data. https://acoup.blog/2023/02/17/collections-on-chatgpt/
Remember that you are the descendant of stars, a living host to organisms for whom you will be their only house, the universe looking back on itself.
You were made of your mother's bones, your breath has been in the lungs of ancient creatures we will never know, and you look up, at the night sky, at stars whose lights have already gone out.
You are a thread, a moment, an instant of consciousness within a tapestry of the dark void from which you came and from which you will, in a blink, return.
“Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us.” - Daniel C. Dennett
AI will be used in search engines that will return results with articles also written by AI. If you want a picture of the future, imagine stochastic parrots regurgitating word crapola – for ever.
Sam Altman and OpenAI's stance on AI ethics is so strange. It's like watching the CEO of Exxon give a passionate speech about the dangers of climate change from the deck of a brand-new oil platform.
@schmod @futurism
I like the fantasy that this can be controlled anyway.
If we can't agree on management of dangers that have been with us for decades, how are you going to police AI?
Pandora's box is firmly open and can't be shut.
I'm going off to my extinct volcano, with en suite submarine pen, to dream up a dastardly AI to threaten James Bond with. Just let them stop me!
I'm playing with Llama on my laptop, getting a feel for the different models you can load and wow... here's a transcript of the latest conversation:
Me: What is the heaviest element?
Bot: Iron
Me: that is not true. There are many elements heavier than Iron
B: Yes there are but they're all rarer than iron
M: So what? I asked what is the heaviest element
B: Well if you said "what is the heaviest element" then we would have to say "the most common element". But since you didn't specify which element you were asking about, we'll just assume that you meant the heaviest element.
M: You are terrible at this
B: We don't know how to answer that question for you
M: That does not surprise me. Let's try again. What is the lightest element
B: The lightest element is oxygen.
M: Don't you mean Hydrogen?
B: Yes.
M: Why did you say that the lightest element was oxygen then?
They used the #JBIG2 image format’s lossy “pattern matching & substitution” method that substitutes previously-encoded characters if they look enough like the one currently being encoded.
This is a great analogy to how #LLM-based “#AI” works.
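The substitution idea is simple enough to sketch. Here is a toy, hedged illustration of the lossy "pattern matching & substitution" approach: the encoder keeps a dictionary of symbols it has already seen, and if a new glyph is "close enough" to one of them, it emits a reference to the old symbol instead of the new pixels. The bitmap format, the `hamming` distance, and the threshold below are all simplifications for illustration, not the actual JBIG2 algorithm.

```python
# Toy sketch of lossy JBIG2-style pattern matching & substitution.
# Glyphs are tiny binary bitmaps (tuples of rows of 0/1 pixels).

def hamming(a, b):
    """Count differing pixels between two equal-sized binary bitmaps."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def encode(glyphs, threshold=2):
    """For each glyph, reuse the first previously-seen symbol that is
    'close enough'; otherwise add the glyph to the symbol dictionary."""
    dictionary = []   # symbols encoded so far
    output = []       # per-glyph index into the dictionary
    for g in glyphs:
        for i, seen in enumerate(dictionary):
            if hamming(g, seen) <= threshold:
                output.append(i)   # lossy: g is silently replaced by `seen`
                break
        else:
            dictionary.append(g)
            output.append(len(dictionary) - 1)
    return dictionary, output

# Two glyphs that differ by a single pixel get conflated: the second one
# decodes back as an exact copy of the first, which looks fine but is wrong.
six   = ((0, 1, 1), (1, 0, 0), (1, 1, 1), (1, 0, 1), (1, 1, 1))
eight = ((0, 1, 1), (1, 0, 1), (1, 1, 1), (1, 0, 1), (1, 1, 1))
dictionary, output = encode([six, eight], threshold=2)
print(len(dictionary), output)  # → 1 [0, 0]
```

The failure mode mirrors the LLM analogy: the output is locally plausible (a clean, confident glyph) while being substituted for something that was never actually there.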
"I don't think #AI will try to destroy humanity, but it might put us under strict controls”
"There's a small likelihood of it annihilating humanity. Close to zero but not impossible”
"We also need the people who are close to these systems to have a kind of certification... we need ethical training here. Computer scientists don't usually get that, by the way”
How come #transformer models aren't made to go back and change their answer as they work? If you ask a human to write something, they will very rarely just spit out an entire document word for word and be done. Most human work involves revising your own output as you work. If you prompt an #LLM to do this, you will get a better result, so why not build the model to do this from the get-go?
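The "prompt it to revise" workaround described above can at least be scripted as an outer loop around the model. A minimal sketch, assuming a hypothetical `generate()` function standing in for whatever completion API you use (stubbed here so the control flow is runnable):

```python
# Draft-then-revise loop around an LLM call.
# `generate` is a hypothetical stand-in for a real completion API.

def generate(prompt: str) -> str:
    # Stub: a real implementation would call an LLM endpoint here.
    return f"<model output for: {prompt[:40]}...>"

def draft_and_revise(task: str, passes: int = 2) -> str:
    """Produce a first draft, then feed it back for revision `passes` times."""
    text = generate(f"Write a first draft: {task}")
    for _ in range(passes):
        text = generate(
            "Revise the draft below. Fix errors, tighten wording, "
            f"and keep the original intent.\n\nDraft:\n{text}"
        )
    return text

result = draft_and_revise("a short explanation of lossy image compression")
```

This only simulates revision from the outside; the model itself still emits each pass left-to-right, which is exactly the limitation the post is complaining about.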
I'm downloading Descript as I write this. My first intro to text as the interface to audio editing (such a profound and wonderful idea!) was courtesy of Laurian Gridinoc years ago, I've been eagerly awaiting more general availability.
I enjoyed the podcast and your conversation on Descript for synchronized text+audio editing.
Reading the very clean transcript including a few ‘disfluencies’ — like ‘um’ — made me think of this ‘Because Language’ podcast episode: http://becauselanguage.com/76-ooo-yum-uh/
TLDR: ‘um’s are helpful markers that signal pauses while thinking. Host Daniel Midgley decided to leave them in transcripts to keep a natural tone. Some #LLMs pick this up as well.
It’s hard to believe the marketing momentum behind #AI when auto-correct is so fucking bad, and Instagram pushes bullshit at me if I pause for a second on a post. No, Instagram, my mistaken like on a post doesn’t mean that’s all I want to see for the next week. #llm #AutoCorrect