#ML#AI#GenerativeAI#LLMs#FoundationModels#PoliticalEconomy: "A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning text-based systems such as OpenAI's ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some of the main critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, deep user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech's dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response." https://journals.sagepub.com/doi/full/10.1177/20539517241247839
#AI#GenerativeAI#AITraining#Copyright#IP#Amazon: "A lawsuit is alleging Amazon was so desperate to keep up with the competition in generative AI it was willing to breach its own copyright rules.…
The allegation emerges from a complaint [PDF] accusing the tech and retail mega-corp of demoting, and then dismissing, a former high-flying AI scientist after it discovered she was pregnant.
The lawsuit was filed last week in a Los Angeles state court by Dr Viviane Ghaderi, an AI researcher who says she worked successfully in Amazon's Alexa and LLM teams, and achieved a string of promotions, but claims she was later suddenly demoted and fired following her return to work after giving birth. She is alleging discrimination, retaliation, harassment and wrongful termination, among other claims.
Montana MacLachlan, an Amazon spokesperson, said of the suit: "We do not tolerate discrimination, harassment, or retaliation in our workplace. We investigate any reports of such conduct and take appropriate action against anyone found to have violated our policies.""
#TheMetalDogArticleList #MetalInjection
The Threat Of AI Art: An Eye-Opening Interview With An Artist, A Musician & A College Professor
Why AI is an existential threat to artists.
#AI#GenerativeAI#HumanInTheLoop#GhostWork#Fauxtomation: "The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":"
I was listing something on eBay, and they encourage starting with an existing listing—presumably to increase the amount of detail and decrease the amount of work.
When I selected the same model, I got a default description that was extremely robotic and wordy while just repeating the spec sheet. I thought it sounded LLM-generated; sure enough when I went to edit it, there is a big shiny “write with AI” button.
It makes EVERY listing sound identical, lifeless, and lacking critical context like the SPECIFIC condition of the item, why it’s being sold, etc. You get an online marketplace with descriptions masquerading as human-authored all sporting the same useless regurgitation of the structured spec sheet, in a less digestible format.
Companies, don’t do this.
I don’t actually mind some of the “summarize/distill customer reviews” type generative AI stuff!
But this is worse as it mixes machine-written nonsense with the corpus of human-written text. And from poking at a few other listings, everyone is just using this feature and its output as-is without actually adding anything. It’s not being used to improve the experience, it’s being used to replace the one critical human part of the experience.
🌖NEXT WEEK🌗
Tues April 30 18:30 (BST)
with @ana_valdi
LIVE @UCLanthropology and on ZOOM
'The supply chain capitalism of AI: a call to (re)think algorithmic harms and resistance'
Everybody welcome FREE, LIVE and online! Just turn up!
Ana Valdivia, Lecturer at Oxford Internet Institute, will be speaking LIVE in the Daryll Forde Room, 2nd Floor of the UCL Anthropology Dept, 14 Taviton St, London WC1H 0BW
**NB We can now use the front door in Taviton St again **
You can also join us on ZOOM (ID 384 186 2174 passcode Wawilak)
@RadicalAnthro@ana_valdi Comprehension question: is this about #AI (e.g. for #science — climate crisis calculations, forest damage surveys, astronomy, etc.), about #generativeAI / #LLM (this picture-text pest), or both?
Consider how long it took society to realize the net negative that #SocialMedia is. How long before we saw the scam of the #gigeconomy for what it was? After a decade of breathless promises, people are no longer seriously waiting on #AutonomousVehicles and #Cryptocurrency is finally seen by the wider public as the toxic libertarian fraud it always was.
Consider all this and that #GenerativeAI has only been in the public consciousness for about a year and ALREADY non-tech-oriented outlets are lampooning it for the bald-faced nonsense that it is as well as the #techbros who shill it as the robber barons they are. I am not going to say that we have yet learned our lesson, but this is to me a very promising sign. #aiisascam #goodnews https://youtu.be/20TAkcy3aBY
Just a random thought.
As Generative AI is being used for creating a lot of content these days, what happens when the next generation of AI models are trained using that content.
When AI models are trained on AI generated content, we'll officially enter the 8th circle of hell #ai#generativeai#AItransparency#discussion
#AI#GenerativeAI#AIEthics#ResponsibleAI#Hype: "We have been here before. Other overhyped new technologies have been accompanied by parables of doom. In 2000, Bill Joy warned in a Wired cover article that “the future doesn’t need us” and that nanotechnology would inevitably lead to “knowledge-enabled mass destruction”. John Seely Brown and Paul Duguid’s criticism at the time was that “Joy can see the juggernaut clearly. What he can’t see—which is precisely what makes his vision so scary—are any controls.” Existential risks tell us more about their purveyors’ lack of faith in human institutions than about the actual hazards we face. As Divya Siddarth explained to me, a belief that “the technology is smart, people are terrible, and no one’s going to save us” will tend towards catastrophizing.
Geoffrey Hinton is hopeful that, at a time of political polarization, existential risks offer a way of building consensus. He told me, “It’s something we should be able to collaborate on because we all have the same payoff”. But it is a counsel of despair. Real policy collaboration is impossible if a technology and its problems are imagined in ways that disempower policymakers. The risk is that, if we build regulations around a future fantasy, we lose sight of where the real power lies and give up on the hard work of governing the technology in front of us."
#Journalism#Media#News#AI#GenerativeAI#PressFreedom: "Some journalists fear AI regulation can be used to stifle press freedom. A panel moderated by researcher Felix Simon looked at how governments should (and shouldn’t) regulate AI, and how those rules might impact journalism in the years ahead. Indian editor Ritu Kapur said she understood the need for legislation but expressed her concerns about governments using this as an excuse to increase their control over the public sphere.
“I’m wary about regulation from governments. During this year’s Indian elections, politicians are using AI for campaign purposes, so any regulation coming from entities using AI for their own agenda will be very skewed,” said Kapur, who mentioned a recent incident in which a user asked Google Gemini whether Prime Minister Narendra Modi is a fascist, prompting a response from the government and some backlash online." https://reutersinstitute.politics.ox.ac.uk/news/international-journalism-festival-2024-what-we-learnt-perugia-about-future-news
#AI#GenerativeAI#ChatBots#Healthcare#WHO: "The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.
SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.
#Copyright#AI#GenerativeAI#GeneratedMusic#Music#Udio: "It’s not my intention in this article to highlight examples of regurgitation of copyrighted material, with a view to suggesting this regurgitation constitutes copyright infringement. The question that is much more important than regurgitation, in my view, is what models like Udio’s are trained on.
There are many people, myself included, who think that training generative AI models on copyrighted work without permission at all constitutes copyright infringement, whether material from the training set is regurgitated in the output or not.
When likenesses to copyrighted music show up in the outputs of AI music generation systems, there are generally three possibilities: it is chance (which is of course not impossible), the systems were trained on copyrighted music under licenses permitting it, or they were trained on copyrighted music without licenses in place.
If the models used in Udio’s product are trained on copyrighted work, it is possible they have licenses in place with rightsholders that permit them to train on the copyrighted work whose likenesses are found in these examples.
#AI#GenerativeAI#Energy: "AI’s voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Holdings Plc Chief Executive Officer Rene Haas.
By 2030, the world’s data centers are on course to use more electricity than India, the world’s most populous country, Haas said. Finding ways to head off that projected tripling of energy use is paramount if artificial intelligence is going to achieve its promise, he said.
“We are still incredibly in the early days in terms of the capabilities,” Haas said in an interview. For AI systems to get better, they will need more training — a stage that involves bombarding the software with data — and that’s going to run up against the limits of energy capacity, he said.
Haas joins a growing number of people raising alarms about the toll AI could take on the world’s infrastructure. But he also has an interest in the industry shifting more to Arm chips designs, which are gaining a bigger foothold in data centers. The company’s technology — already prevalent in smartphones — was developed to use energy more efficiently than traditional server chips." https://www.bloomberg.com/news/articles/2024-04-17/ai-computing-is-on-pace-to-consume-more-energy-than-india-arm-says
#AI#GenerativeAI#Hype#Blockchain: "When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs." https://www.citationneeded.news/ai-isnt-useless/