remixtures, to ML Portuguese

: "A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning text-based systems such as OpenAI's ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some of the main critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, deep user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech's dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response." https://journals.sagepub.com/doi/full/10.1177/20539517241247839

remixtures, to ai Portuguese

: "A lawsuit is alleging Amazon was so desperate to keep up with the competition in generative AI it was willing to breach its own copyright rules.…

The allegation emerges from a complaint [PDF] accusing the tech and retail mega-corp of demoting, and then dismissing, a former high-flying AI scientist after it discovered she was pregnant.

The lawsuit was filed last week in a Los Angeles state court by Dr Viviane Ghaderi, an AI researcher who says she worked successfully in Amazon's Alexa and LLM teams, and achieved a string of promotions, but claims she was later suddenly demoted and fired following her return to work after giving birth. She is alleging discrimination, retaliation, harassment and wrongful termination, among other claims.

Montana MacLachlan, an Amazon spokesperson, said of the suit: "We do not tolerate discrimination, harassment, or retaliation in our workplace. We investigate any reports of such conduct and take appropriate action against anyone found to have violated our policies.""

https://www.msn.com/en-us/news/crime/ex-amazon-exec-claims-she-was-asked-to-break-copyright-law-in-race-to-ai/ar-AA1nrNEG

remixtures, to ai Portuguese

: "The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.

This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.

This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.

However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":"

https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs

cassidy, (edited) to ai

I was listing something on eBay, and they encourage starting with an existing listing—presumably to increase the amount of detail and decrease the amount of work.

When I selected the same model, I got a default description that was extremely robotic and wordy while just repeating the spec sheet. I thought it sounded LLM-generated; sure enough when I went to edit it, there is a big shiny “write with AI” button.

🤢

This is not actually helping anyone.

cassidy,

It makes EVERY listing sound identical, lifeless, and lacking critical context like the SPECIFIC condition of the item, why it’s being sold, etc. You get an online marketplace with descriptions masquerading as human-authored all sporting the same useless regurgitation of the structured spec sheet, in a less digestible format.

Companies, don’t do this.

I don’t actually mind some of the “summarize/distill customer reviews” type generative AI stuff!

cassidy,

But this is worse as it mixes machine-written nonsense with the corpus of human-written text. And from poking at a few other listings, everyone is just using this feature and its output as-is without actually adding anything. It’s not being used to improve the experience, it’s being used to replace the one critical human part of the experience.

I hate this.

RadicalAnthro, to ai

FREE community please BOOST!

🌖NEXT WEEK🌗
Tues April 30 18:30 (BST)
with @ana_valdi
LIVE @UCLanthropology and on ZOOM

'The supply chain capitalism of AI: a call to (re)think algorithmic harms and resistance'

Everybody welcome FREE, LIVE and online! Just turn up!

Ana Valdivia, Lecturer at Oxford Internet Institute, will be speaking LIVE in the Daryll Forde Room, 2nd Floor of the UCL Anthropology Dept, 14 Taviton St, London WC1H 0BW
**NB We can now use the front door in Taviton St again **

You can also join us on ZOOM (ID 384 186 2174 passcode Wawilak)

NatureMC,

@RadicalAnthro @ana_valdi Comprehension question: Is this about (e.g. for creating climate crisis calculations, forest damage surveys, astronomy, etc.) or (this picture-text-pest), or both?

ErikJonker, to ai

As noted in this video from AI Explained ( https://youtu.be/pal-dMJFU6Q?si=Y2qE8S35zvO_i742 ): if you look at what a company like Hippocratic AI is doing ( https://www.hippocraticai.com/linda )
and combine it with the human-style interaction that VASA-1 from Microsoft will make possible, interaction with real human-like assistants may not be far away, and that could really change things, even without a dramatic increase in the capabilities of current LLMs.

redork, to internet

Consider how long it took society to realize the net negative that is. How long before we saw the scam of the #gigeconomy for what it was? After a decade of breathless promises, people are no longer seriously waiting on and is finally seen by the wider public as the toxic libertarian fraud it always was.

Consider all this and that has only been in the public consciousness for about a year and ALREADY non-tech-oriented outlets are lampooning it for the bald-faced nonsense that it is as well as the who shill it as the robber barons they are. I am not going to say that we have yet learned our lesson, but this is to me a very promising sign. https://youtu.be/20TAkcy3aBY

mush42, to ai

Just a random thought.
As generative AI is being used to create a lot of content these days, what happens when the next generation of AI models is trained on that content?
When AI models are trained on AI-generated content, we'll officially enter the 8th circle of hell.

remixtures, to ai Portuguese

: "We have been here before. Other overhyped new technologies have been accompanied by parables of doom. In 2000, Bill Joy warned in a Wired cover article that “the future doesn’t need us” and that nanotechnology would inevitably lead to “knowledge-enabled mass destruction”. John Seely Brown and Paul Duguid’s criticism at the time was that “Joy can see the juggernaut clearly. What he can’t see—which is precisely what makes his vision so scary—are any controls.” Existential risks tell us more about their purveyors’ lack of faith in human institutions than about the actual hazards we face. As Divya Siddarth explained to me, a belief that “the technology is smart, people are terrible, and no one’s going to save us” will tend towards catastrophizing.

Geoffrey Hinton is hopeful that, at a time of political polarization, existential risks offer a way of building consensus. He told me, “It’s something we should be able to collaborate on because we all have the same payoff”. But it is a counsel of despair. Real policy collaboration is impossible if a technology and its problems are imagined in ways that disempower policymakers. The risk is that, if we build regulations around a future fantasy, we lose sight of where the real power lies and give up on the hard work of governing the technology in front of us."

https://www.science.org/doi/10.1126/science.adp1175

remixtures, to journalism Portuguese

: "Some journalists fear AI regulation can be used to stifle press freedom. A panel moderated by researcher Felix Simon looked at how governments should (and shouldn’t) regulate AI, and how those rules might impact journalism in the years ahead. Indian editor Ritu Kapur said she understood the need for legislation but expressed her concerns about governments using this as an excuse to increase their control over the public sphere.

“I’m wary about regulation from governments. During this year’s Indian elections, politicians are using AI for campaign purposes, so any regulation coming from entities using AI for their own agenda will be very skewed,” said Kapur, who mentioned a recent incident in which a user asked Google Gemini whether Prime Minister Narendra Modi is a fascist, prompting a response from the government and some backlash online." https://reutersinstitute.politics.ox.ac.uk/news/international-journalism-festival-2024-what-we-learnt-perugia-about-future-news

remixtures, to ai Portuguese

: "The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.

SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.

WHO warns on its website that this early prototype, introduced on April 2, provides responses that “may not always be accurate.” Some of SARAH’s AI training is years behind the latest data. And the bot occasionally provides bizarre answers, known as hallucinations in AI models, that can spread misinformation about public health." https://www.bloomberg.com/news/articles/2024-04-18/who-s-new-ai-health-chatbot-sarah-gets-many-medical-questions-wrong

br00t4c, to ai

Adobe's new AI feature lets you make some really random photos and videos

https://qz.com/adobe-express-ai-ios-android-generate-photos-videos-1851422407

remixtures, to ai Portuguese

: "It’s not my intention in this article to highlight examples of regurgitation of copyrighted material, with a view to suggesting this regurgitation constitutes copyright infringement. The question that is much more important than regurgitation, in my view, is what models like Udio’s are trained on.

There are many people, myself included, who think that training generative AI models on copyrighted work without permission at all constitutes copyright infringement, whether material from the training set is regurgitated in the output or not.

When likenesses to copyrighted music show up in the outputs of AI music generation systems, there are generally three possibilities: either it is chance (this is of course not impossible), or the systems were trained on copyrighted music with licenses to do so, or they were trained on copyrighted music without licenses in place.

If the models used in Udio’s product are trained on copyrighted work, it is possible they have licenses in place with rightsholders that permit them to train on the copyrighted work whose likenesses are found in these examples.

However, if Udio doesn’t have licenses in place for training, many may see this as an issue." https://www.musicbusinessworldwide.com/yes-udios-output-resembles-copyrighted-music-too/

br00t4c, to android

Adobe Express Lets You Generate Some Truly Random Photos and Videos on Android and iOS

https://gizmodo.com/adobe-express-ios-android-ai-generate-1851420614

EricCarroll, to generativeAI

$115B for a new HPC supercomputer for "AI" named Stargate.

Is it just me or does the current #GenerativeAI tech business model look like:

  1. Hoover up the public internet
  2. Build massive billion-parameter transformer models (aka #LLM)
  3. ?
  4. Massive Profit!!!

Microsoft & OpenAI Will Spend $100 Billion to Wean Themselves Off Nvidia GPUs

https://www.extremetech.com/computing/microsoft-and-openai-will-spend-100-billion-to-wean-themselves-off-nvidia

> The companies are working on an audacious data center for AI that's expected to be operational in 2028.

#GenerativeAiIsGoingGreat

remixtures, to ai Portuguese

: "AI’s voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Holdings Plc Chief Executive Officer Rene Haas.

By 2030, the world’s data centers are on course to use more electricity than India, the world’s most populous country, Haas said. Finding ways to head off that projected tripling of energy use is paramount if artificial intelligence is going to achieve its promise, he said.

“We are still incredibly in the early days in terms of the capabilities,” Haas said in an interview. For AI systems to get better, they will need more training — a stage that involves bombarding the software with data — and that’s going to run up against the limits of energy capacity, he said.

Haas joins a growing number of people raising alarms about the toll AI could take on the world’s infrastructure. But he also has an interest in the industry shifting more to Arm chips designs, which are gaining a bigger foothold in data centers. The company’s technology — already prevalent in smartphones — was developed to use energy more efficiently than traditional server chips." https://www.bloomberg.com/news/articles/2024-04-17/ai-computing-is-on-pace-to-consume-more-energy-than-india-arm-says

remixtures, to ai Portuguese

: "When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs." https://www.citationneeded.news/ai-isnt-useless/

argv_minus_one, to ai

The test is based on the assumption that humans are difficult to fool.

If you'll study the histories of commerce, politics, or religion, you'll find that this assumption is thoroughly unsound.
