I get that it’s hot right now, but man, the user experience of LLMs being this bot you type text to seems like a huge step backwards compared to just integrating these AI features natively into products.
#ChatGPT is powered by a hidden army of contractors making $15 per hour. For a technology that is supposedly threatening many jobs, #AI tools require a large workforce to ensure accuracy and trust through a human feedback loop, something #Google has failed to do with #Bard.
“There is a world in which generative #AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own” - Naomi Klein
“The concern is that machine-generated content has to be balanced with a lot of human review and would overwhelm lesser-known wikis with bad content. While #AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist.”
Q&A with Vint Cerf, chief internet evangelist at #Google and recipient of IEEE's Medal of Honor, on how Google has changed since 2005, the hazards of #LLMs, #accessibility for disabled people, and more.
“It’s not just the public. Some of your friends at your newspaper have been a bit credulous. In my book, “Rebooting A.I.,” we talked about the Eliza effect — we called it the “gullibility gap.” In the mid-1960s, Joseph Weizenbaum wrote this primitive piece of software called Eliza, and some people started spilling their guts to it. It was set up as a psychotherapist, and it was doing keyword matching. It didn’t know what it was talking about, but it wrote text, and people didn’t understand that a machine could write text and not know what it was talking about. The same thing is happening right now. It is very easy for human beings to attribute awareness to things that don’t have it. The cleverest thing that OpenAI did was to have GPT type its answers out one character at a time — made it look like a person was doing it. That adds to the illusion. It is sucking people in and making them believe that there’s a there there that isn’t there. That’s dangerous. We saw the Jonathan Turley incident, when it made up sexual harassment charges. You have to remember, these systems don’t understand what they’re reading. They’re collecting statistics about the relations between words. If everybody looked at these systems and said, “It’s kind of a neat party trick, but haha, it’s not real,” it wouldn’t be so disconcerting. But people believe it because it’s a search engine. It’s from Microsoft. We trust Microsoft. Combine that human overattribution with the reality that these systems don’t know what they’re talking about and are error-prone, and you have a problem.”
Solving causal #reasoning tasks is a hallmark of intelligence. One recent study [1] categorizes these tasks into covariance-based and logic-based reasoning (screenshot) and examines how #GPT models perform on causal discovery, actual causality, and causal judgments.
#Video #TV #Streaming #Hollywood #Screenwriters #AI #LLMs: "Outside of the Netflix headquarters in New York City on Wednesday, hundreds of members of the Writers Guild of America (WGA) marched for a better contract on the second day of the writers' strike. They were there to communicate a clear message: Writers refuse to be replaced by AI.
Signs showcased slogans such as “Writers Generate All of it,” “Don’t Let ChatGPT Write ‘Yellowstone’,” “I Told ChatGPT To Make A Sign and It Sucked,” and “Don’t Uber Writing.” These signs referred to the unprecedented “AI” category in the guild’s proposal in which they asked to regulate the use of AI on union projects but were met with refusal from studios. Writers are seeking pay for episodes on streaming platforms, and to not have their work devalued and turned into gig labor due to the use of text-generating AI programs to write dialog."
In the chaos around #NLP, I went back and re-read the beautiful article by Lawrence Barsalou on the function of language in human cognition.
Barsalou argues that language evolved in humans to support coordinated action. The archival function of language is secondary. He highlights that #CognitiveScience #Linguistics has largely studied the secondary function and made minimal advances on the primary.
If we focus on the primary function of language, communication, we have to study how language production is tied to reasoning and inference in an #AI #ML system.
Human agents use language to provide information to their partner about the state of their beliefs, knowledge, and reasoning.
If reasoning and language generation are independent (like in #LLMs), tokens cannot be considered 'information'.
@shiwali 2/n You say that "If reasoning and language generation are independent (like in #LLMs), tokens cannot be considered 'information'." But they are not independent. Language is used to do many things, including sharing information, but also to even be able to think in abstract, systematic terms. So LLMs do have knowledge; they are able to reason over the semantics from language. The article says... [see next]
The only way that technology can boost the standard of living is if there are economic policies in place to distribute the benefits of technology appropriately. #AI will certainly reduce labor costs and increase profits for corporations, but that is entirely different from improving our standard of living.
I've noticed that there's a lot of fruitful #AI development around purely treating #LLMs as black boxes and focusing on prompt engineering + the ReAct pattern. Simply forcing the LLM to draw out its thoughts over more text increases its accuracy, and if you also interleave that with input from the user or calls to external services, e.g. Google, you can achieve very interesting results.
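To make the pattern concrete, here's a minimal sketch of that ReAct-style loop: the model alternates Thought / Action / Observation steps, and tool calls are interleaved into the growing transcript. Everything here is hypothetical scaffolding — `stub_llm` and `stub_search` are hard-coded stand-ins, not real APIs; in practice you'd call an actual model endpoint and a real external service.

```python
def stub_llm(transcript: str) -> str:
    """Hypothetical stand-in for a real LLM call.

    A real implementation would send the full transcript (prompt + prior
    thoughts, actions, and observations) to a model API.
    """
    if "Observation: Paris" in transcript:
        return "Thought: I now know the answer.\nFinal Answer: Paris"
    return "Thought: I should look this up.\nAction: search[capital of France]"


def stub_search(query: str) -> str:
    """Hypothetical stand-in for an external service (e.g. a web search)."""
    return "Paris"


def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = stub_llm(transcript)      # model "draws out its thoughts" as text
        transcript += step + "\n"
        if "Final Answer:" in step:      # the model decided it is done
            return step.split("Final Answer:")[1].strip()
        if "Action: search[" in step:    # interleave a call to an external tool
            query = step.split("Action: search[")[1].rstrip("]")
            transcript += f"Observation: {stub_search(query)}\n"
    return "(no answer within step budget)"


print(react("What is the capital of France?"))  # → Paris
```

The point of the loop is that the transcript itself is the black-box interface: nothing about the model's internals is touched, and accuracy comes from the extra intermediate text plus the injected observations.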