...source community's democratizing approach to #LLMs is now outpacing even #OpenAI's development leaps. (See first link) This is the evolutionary booster predicted at the end of the last century as "humanity has evolved into a socio-technological complex."
@arstechnica '... he asked the AI tool whether Varghese is a real case. ChatGPT answered that it "is a real case" ... When asked if the other cases provided by ChatGPT are fake, it answered, "No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw." '
If accurate, to me this illustrates the core of the issue with many people's use of so-called #AI tools: this lawyer bought the hype that these #LLMs are verging on human-like intelligence and could understand and answer his question, while in reality they were only constructing a statistically probable sequence of words that might be written in response to such a query, with no relationship to the facts of the matter.
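To make that point concrete, here's a deliberately tiny sketch (my illustration, not anything from the article): a "model" that only samples the statistically likely next word from made-up probabilities, with no notion of truth anywhere in the process.

```python
import random

# Toy next-word distribution. The probabilities are invented for
# illustration -- the point is that "real" wins most samples purely
# because it is the more probable continuation, not because the
# system checked any facts.
next_word_probs = {
    "the case is": {"real": 0.6, "fake": 0.4},
}

def continue_text(prefix: str) -> str:
    """Sample the next word according to its probability only."""
    dist = next_word_probs[prefix]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]
```

Whatever `continue_text("the case is")` returns, "real" or "fake", nothing in the sampling step ever consulted LexisNexis or Westlaw.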
I'm downloading Descript as I write this. My first intro to text as the interface to audio editing (such a profound and wonderful idea!) was courtesy of Laurian Gridinoc years ago; I've been eagerly awaiting more general availability.
I enjoyed the podcast and your conversation on Descript for synchronized text+audio editing.
Reading the very clean transcript including a few ‘disfluencies’ — like ‘um’ — made me think of this ‘Because Language’ podcast: http://becauselanguage.com/76-ooo-yum-uh/
TL;DR: ‘Um’s are helpful markers to signal pauses while thinking. Host Daniel Midgley decided to leave them in transcripts to keep a natural tone. Some #LLMs pick this up as well.
We shouldn't be worrying about AI wiping out humanity. That's a smokescreen. That's sci-fi. We need to worry about the individuals now and in the near future who can be hurt by the premature deployment of generative AI systems that spew wrong answers and lies, and then when asked for confirmation, lie about their own lies! And just popping up warnings to users is useless, because you know and I know that hardly anyone will read those warnings or pay any attention to them whatsoever.
The warnings are also cold comfort to the loved ones of the guy the vigilante gang beat to death because some generated content somewhere labelled him an unrepentant pedophile in their neighbourhood.
I don't know what the solution is. Mandate that an indicator of its origin be prominently displayed within the generated text? How could that work?
I just got the weirdest CV cover letter ever for a #python developer role, mixing Adrian Newey (the Adrian Newey of F1 fame), Hitler (that Hitler) and epigenetics (wtf does epigenetics have to do with software engineering??). Quickest rejection ever
Sam Altman and OpenAI's stance on AI ethics is so strange. It's like watching the CEO of Exxon give a passionate speech about the dangers of climate change from the deck of a brand-new oil platform.
@schmod @futurism
I like the fantasy that this can be controlled anyway.
If we can't agree on management of dangers that have been with us for decades, how are you going to police AI?
Pandora's box is firmly open and can't be shut.
I'm going off to my extinct volcano, with en suite submarine pen, to dream up a dastardly AI to threaten James Bond with. Just let them stop me!
“Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us.” - Daniel C. Dennett
"Do I gamble that the flood of language model texts will put a premium on thoughtful writing? That I’m not just improving the models by putting more writing out in the world?
Do I figure out ways of putting more of my writing behind some sort of pay- or login wall, even though that would be counterproductive for my career? Wouldn’t that also just disconnect me from my friends and the online community in general?"
➥ @baldur
"If you google around enough, you’ll find people talking about using LangChain to chain together LLM calls and get better outputs. However, chaining calls to an LLM just makes the latency problem with LLMs worse, which is a nonstarter for us. But even if it wasn’t, we have the potential to get bitten by compound probabilities."
➥ @honeycombio
It’s hard to believe the marketing momentum behind #AI when auto-correct is so fucking bad, and Instagram pushes bullshit at me if I pause for a second on a post. No, Instagram, my mistaken like on a post doesn’t mean that’s all I want to see for the next week. #LLM #AutoCorrect
Whenever you think that something spat out by #chatgpt or another #LLM makes sense, remember that you are the one making sense out of it and not the LLM itself.
Explaining tasks to chatbots is not unlike making complicated restaurant orders. Ordering custom food for three people in restaurants is something I am very used to.
There are always mistakes. If you want something without coriander, you're occasionally going to get everything without coriander.
You'll get the wrong number of utensils for the starters unless you go for the most common option of "share all starters with everyone".
In restaurants, like in code, people learn to "keep it simple, stupid" — the KISS principle. With #LLM #chatbots people generally don't perceive the complexity of what they are asking, because the request is clear to them, and they aren't reading what they wrote from the point of view of the one who has to follow the instructions.
Just now I got Tom Yum with prawn in front of me even though I ordered one with chicken.
Waiters, like chatbots, get confused by complex orders. The way to handle this is to keep orders #simple.
"We've trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome supervision”)."
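A minimal sketch of the distinction that quote draws, with function names and reward values of my own invention (this is not OpenAI's implementation, just the shape of the idea): outcome supervision scores only the final answer, while process supervision gives partial credit for each correct intermediate step.

```python
def outcome_reward(final_correct: bool) -> float:
    # Outcome supervision: a single reward for the final answer,
    # regardless of how the reasoning got there.
    return 1.0 if final_correct else 0.0

def process_reward(step_labels: list[bool]) -> float:
    # Process supervision: reward each individually correct
    # reasoning step, so a mostly-right chain still earns signal.
    return sum(1.0 for ok in step_labels if ok) / len(step_labels)
```

Under outcome supervision, a solution with nine good steps and one fatal slip scores the same as pure nonsense; under process supervision it still earns most of the reward, which is the training signal the quote is describing.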