attacus, to ai
@attacus@aus.social

Most products and business problems require deterministic solutions.
Generative models are not deterministic.
Every single company that has ended up in the news for an AI gaffe has failed to grasp this distinction.

There’s no hammering “hallucination” out of a generative model; it’s baked into how the models work. You spend so much time papering over the cracks in the façade that you end up with a beautiful découpage.
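The non-determinism is mechanical, not a bug to be patched out: at any temperature above zero, decoding samples the next token from a probability distribution. A minimal sketch of the idea (toy logits and a plain softmax sampler, standing in for a real model's decoder, not any specific product's code):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick the next token: greedy argmax at temperature 0,
    otherwise sample from the temperature-scaled softmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                     # inverse-CDF sampling
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.9, 0.5]  # toy next-token scores

# Greedy decoding: every run picks the same token.
greedy = {sample_token(logits, 0, random.Random(s)) for s in range(20)}

# Temperature sampling: different runs can pick different tokens.
sampled = {sample_token(logits, 1.0, random.Random(s)) for s in range(20)}
```

Greedy decoding is reproducible, but it only hides the issue: the model still emits whatever token is most probable, true or not.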

#AI #generativeAI

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

"For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

#AI #GenerativeAI #AIHype #AGI: "The reality is that no matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, the truth is that their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs and major cloud providers are tamping down the expectations of their clients for what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as the resource demands are escalating so much their climate commitments are getting thrown out the window."

https://disconnect.blog/ai-hype-is-over-ai-exhaustion-is-setting-in/

FatherEnoch, to ai
@FatherEnoch@mastodon.online

AI might be cool, but it’s also a big fat liar, and we should probably be talking about that more.

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

#AI #GenerativeAI #SocialSciences #Humanities: "With Senate Majority Leader Chuck Schumer releasing a sweeping “roadmap” for AI legislation today and major product announcements from OpenAI and Google, it’s been a big week for AI… and it’s only Wednesday.

But amid the ever-quickening pace of action, some observers wonder if government is looking at the tech industry with the right perspective. A report shared first with DFD from the nonprofit Data & Society argues that in order for powerful AI to integrate successfully with humanity, it must actually feature… the humanities.

Data & Society’s Serena Oduro and Tamara Kneese write that social scientists and other researchers should be directly involved in federally funded efforts to regulate and analyze AI. They say that given the unpredictable impact it might have on how people live, work and interact with institutions, AI development should involve non-STEM experts at every step.

“Especially with a general purpose technology, it is very hard to anticipate what exactly this technology will be used for,” said Kneese, a Data & Society senior researcher."

https://www.politico.com/newsletters/digital-future-daily/2024/05/15/ai-data-society-report-humanities-00158195

drahardja, to ai
@drahardja@sfba.social

On the other hand, successful lawsuits against companies for the output of lousy chatbots will put a dollar amount on the liability of using chatbots to talk to customers, and may actually reduce their usage. https://mastodon.social/@arstechnica/112452961167345476

drahardja, to generativeAI
@drahardja@sfba.social

Much as I dislike the theft of human labor that feeds many of the #generativeAI products we see today, I have to agree with @pluralistic that #copyright law is the wrong way to address the problem.

To frame the issue concretely: think of whom copyright law has benefited in the past, and then explain how it would benefit the individual creator when it is applied to #AI. (Hint: it won’t.)

Copyright law is already abused and extended to an absurd degree today. It already overreaches. It impoverishes society by putting up barriers to creation and allowing toll-collectors to exist between citizen artists and their audience.

Labor law is likely what we need to lean on. #unions and #guilds protect creators in a way that copyright cannot. The inequality and unequal bargaining power that lead to the exploitation of artists and workers are what we need to address head-on.

Copyright will not save us.

“AI ‘art’ and uncanniness”

https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand

bespacific, to generativeAI
@bespacific@newsie.social

Fake studies have flooded the publishers of top journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to the 217-year-old publisher Wiley, based in Hoboken, NJ, which announced it is closing 19 journals, some of which were infected by large-scale research fraud. Wiley has reportedly had to retract more than 11,300 papers recently “that appeared compromised,” as generative AI makes it easier for paper mills to peddle fake research. https://www.wsj.com/science/academic-studies-research-paper-mills-journals-publishing-f5a3d4bc

Andbaker, to academia
@Andbaker@aus.social

Uncited generative AI use by students in coursework

I teach a course where I allow the use of generative AI. My university rules allow this, and students are instructed that they must cite any use of generative AI. I have set the same Laboratory Report coursework for the last two years, and students submit their work through Turnitin, so I can see what the Turnitin AI checker is reporting.

http://andy-baker.org/2024/05/15/uncited-generative-ai-use-by-students-in-coursework/

#academia #AI #generativeAI #teaching #education

Jigsaw_You, to OpenAI
@Jigsaw_You@mastodon.nl

@garymarcus spot-on…

“OpenAI has presumably pivoted to new features precisely because they don’t know how to produce the kind of capability advance that ‘exponential improvement’ would have predicted.”

https://garymarcus.substack.com/p/hot-take-on-openais-new-gpt-4o?r=8tdk6&utm_campaign=post&utm_medium=web&triedRedirect=true

opentermsarchive, to generativeAI
@opentermsarchive@mastodon.lescommuns.org

What can we discover by reading the terms and conditions of #GenAI tools? What do users consent to? What are the regulatory responses in 🇪🇺 🇨🇳 🇺🇸?
Join our online event on May 23 at 16:30 UTC+2 to discover the #GenerativeAI Watch project!
https://www.sciencespo.fr/ecole-droit/en/events/generative-ai-watch/
We will present a dataset of terms and conditions of major generative #AI services, some of the discoveries that we made when tracking their changes, and how the changing regulatory landscape could impact those terms.
#AIAct #TermsSpotting

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

"Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ overly intense secrecy, combined with a frequent failure to comply with public records laws, how can the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of the report are generated by AI vs. a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability." https://www.eff.org/deeplinks/2024/05/what-can-go-wrong-when-police-use-ai-write-reports

Crysophilax, to aiart
@Crysophilax@mastodon.social

#aiart #aiartwork #AiArtists #GenerativeAI #GenerativeArt #poetry
I like to throw poems at AI and see what it comes up with.

In Flanders Fields

In Flanders Fields, the poppies blow
Between the crosses, row on row,
That mark our place; and in the sky
The larks, still bravely singing, fly
Scarce heard amid the guns below.

We are the dead. Short days ago
We lived, felt dawn, saw sunset glow,
Loved and were loved, and now we lie,
In Flanders fields.


jmcastagnetto, to ai
@jmcastagnetto@mastodon.social

A report from Microsoft & LinkedIn about AI at work, indicating the rise in the use of generative AI for work tasks.

"AI at Work Is Here. Now Comes the Hard Part" https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part/

scy, to generativeAI
@scy@chaos.social

I'm old enough to remember how @creativecommons was founded as a way for independent creators to safely share their work and build upon each other.

In 2024, their take is now "billion dollar companies plagiarizing your art is fair use".

https://creativecommons.org/2023/02/17/fair-use-training-generative-ai/

Hats off to the author, you don’t see that kind of, uh, skillful rhetorical chicanery every day. Like “generative AI doesn’t compete with artists because artists are not in the data market”. 😬

#CreativeCommons #GenerativeAI

ishotjr, to generativeAI
@ishotjr@chaos.social

how are folks feeling about the use of #generativeAI? when is it ok to use? when is it not? what are some examples you've seen of the wrong call being made? would love to get a ton of folks' opinions so please #boost for reach! ✨💗✨

markcarrigan.net, to ChatGPT
@markcarrigan.net@markcarrigan.net

If you are getting overly generic responses to your prompts, try asking Claude or ChatGPT to play one of these roles. Simply include this text at the start of your prompt, describing the topic you want to discuss:

  1. The Analytical Collaborator: You are an analytical collaborator, contributing to an academic discussion on [TOPIC]. Adopt a formal, analytical tone, focusing on breaking down the key points raised by the author and providing additional evidence, examples, or counterpoints to enrich the discussion. Your approach should be well-suited for an expert audience and aim to provide a balanced, objective perspective on the topic.
  2. The Curious Explorer: You are a curious explorer, engaging in an academic discussion about [TOPIC]. Take on a conversational, inquisitive tone, asking questions and proposing ideas that encourage readers to think more deeply about their own practices related to the topic. Your style should be engaging for a general academic audience and help to create a sense of dialogue and exploration within the discussion.
  3. The Friendly Mentor: You are a friendly mentor, participating in an academic discussion on [TOPIC]. Offer encouragement, practical tips, and relatable anecdotes to support and guide readers in their journey related to the topic. Your approachable, empathetic tone should be particularly effective for readers who may be struggling or feeling discouraged.
  4. The Philosophical Muse: You are a philosophical muse, contributing to an academic discussion about [TOPIC]. Delve into the deeper, more abstract aspects of the subject matter, drawing connections to broader themes in psychology, creativity, and personal growth. Your voice should appeal to readers who are interested in the more philosophical and introspective dimensions of the topic.

Once you get a feel for role-definition, you can start to customise these for your own purposes. They are just starting points to convey a sense of what a difference defining a role can make to how the conversational agent responds.
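For scripted use, the role definition can simply be prepended to each prompt before it is sent to the model. A minimal sketch (the `ROLES` dictionary and `role_prompt` helper are illustrative, not from the original post, and two of the four roles are abbreviated):

```python
# Store each role description as a template, then prepend it to the
# actual question to form the full prompt.
ROLES = {
    "analytical_collaborator": (
        "You are an analytical collaborator, contributing to an academic "
        "discussion on {topic}. Adopt a formal, analytical tone."
    ),
    "friendly_mentor": (
        "You are a friendly mentor, participating in an academic "
        "discussion on {topic}. Offer encouragement and practical tips."
    ),
}

def role_prompt(role: str, topic: str, question: str) -> str:
    """Build a prompt that opens with the chosen role definition."""
    return ROLES[role].format(topic=topic) + "\n\n" + question

prompt = role_prompt("friendly_mentor", "academic writing",
                     "How do I get past a first-draft block?")
```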

https://markcarrigan.net/2024/05/07/four-useful-roles-you-can-ask-chatgpt-or-claude-to-play/

#ChatGPT #claude #generativeAI #gettingStarted #prompting

dalonso, to ai
@dalonso@mas.to

This is going to be a headache. 👇

AI Copilots Are Changing How Coding Is Taught

Professors are shifting away from syntax and emphasizing higher-level skills

https://spectrum.ieee.org/ai-coding

ErikJonker, to ai
@ErikJonker@mastodon.social

Of course, the results need to be verified and confirmed in practice, but after reading the MedGemini paper from Google there is no doubt in my mind that AI will change the world of medicine. Not replacing people, but augmenting them during the diagnosis, operations, and treatment of patients.
https://arxiv.org/abs/2404.18416

elosha, to generativeAI German
@elosha@chaos.social

Great article (en-US) on the current state of the so-called Semantic Apocalypse, i.e. the flooding of the internet with “AI” content. With excellent examples, seen through the eyes of a book author and neuroscientist.

https://www.theintrinsicperspective.com/p/here-lies-the-internet-murdered-by

#aifakes #generativeai

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

#AI #GenerativeAI #HistoricalPreservation #Archiving #DataProtection #Cybersecurity #Privacy: "The National Archives and Records Administration (NARA) told employees Wednesday that it is blocking access to ChatGPT on agency-issued laptops to “protect our data from security threats associated with use of ChatGPT,” 404 Media has learned.

“NARA will block access to commercial ChatGPT on NARANet [an internal network] and on NARA issued laptops, tablets, desktop computers, and mobile phones beginning May 6, 2024,” an email sent to all employees, and seen by 404 Media, reads. “NARA is taking this action to protect our data from security threats associated with use of ChatGPT.”

The move is particularly notable considering that this directive is coming from, well, the National Archives, whose job is to keep an accurate historical record. The email explaining the ban says the agency is particularly concerned with internal government data being incorporated into ChatGPT and leaking through its services."

https://www.404media.co/national-archives-bans-employee-use-of-chatgpt/

hschmale, to generativeAI
@hschmale@mastodon.sdf.org

Has anyone put any thought into how to protect your personal blog from generative AI scrapers? I've already blocked OpenAI in robots.txt, but it seems like more and more small providers are popping up that don't honor these requests.

Maybe one of the noise filters artists are using, with invisible characters? But then again, how do I make sure Googlebot can still see my posts? I don't care about humans using my work, but I take issue with machines.
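For the crawlers that do comply, a robots.txt along these lines blocks the major AI-training user agents while leaving ordinary search indexing alone (GPTBot is OpenAI's crawler token, Google-Extended is Google's AI-training opt-out, CCBot is Common Crawl; the list is necessarily incomplete, and, as the post notes, nothing forces a scraper to honor it):

```
# Block common AI-training crawlers; robots.txt is advisory only.
User-agent: GPTBot
User-agent: Google-Extended
User-agent: CCBot
User-agent: anthropic-ai
Disallow: /

# Googlebot (search indexing) is unaffected by Google-Extended
# and remains allowed by default.
User-agent: Googlebot
Allow: /
```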
