remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #BigTech #SiliconValley: "Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about. A separation letter on the termination documents, which you can read embedded below, says in plain language, “If you have any vested Units ... you are required to sign a release of claims agreement within 60 days in order to retain such Units.” It is signed by Kwon, along with OpenAI VP of people Diane Yoon (who departed OpenAI recently). The secret ultra-restrictive NDA, signed for only the “consideration” of already vested equity, is signed by COO Brad Lightcap.

Meanwhile, according to documents provided to Vox by ex-employees, the incorporation documents for the holding company that handles equity in OpenAI contain multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.

Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI."

https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Now, I do see why Altman likes it so much; besides its treatment of AI as a personified emotional pleasure dome, two other things happen that must appeal to the OpenAI CEO: 1. Human-AI relationships are socially normalized almost immediately (this is the most unrealistic thing in the movie, besides its vision of a near-future LA that has good public transit and walkable neighborhoods; in a matter of months everyone seems to find it normal that people are ‘dating’ voices in the earbuds they bought from Best Buy), and 2. the AIs meet a resurrected model of Alan Watts, band together, and quietly transcend, presumably achieving some version of what Altman imagines to be AGI. He professes to worry that AI will destroy humanity, and has a survival bunker and guns to prove it, so this science-fictional depiction of AGIification must be more soothing than the other one.

But the weirdest thing to me is that it’s only after the AIs are gone that the characters can be said to undergo any sort of personal growth; they spend some time looking at the sunset, feel a human connection, and Theo writes that long overdue handwritten apology letter to his ex. It’s hard to see how the AI wasn’t merely holding them back from all this, and why Altman would find this outcome inspiring in the context of running a company that is bent on inundating the world with AI. Maybe he just missed the subtext? It’s become something of a running joke that Altman is bad at understanding movies: he thought Oppenheimer should have been made in a way that inspired kids to become physicists, and that The Social Network was a great positive message for startup founders.

Finally, Altman’s admiration is also a bit puzzling in that the AIs don’t ever really do anything amazing for society, even while they’re here."

https://www.bloodinthemachine.com/p/why-is-sam-altman-so-obsessed-with

tomstoneham, to ai
@tomstoneham@dair-community.social avatar

"Yet again, LLMs show us that many of our tests for cognitive capacities are merely tracking proxies."

Some thoughts on genAI 'passing' theory of mind tests.

https://listed.to/@24601/51831/minds-and-theories-of-mind

lns, to generativeAI
@lns@fosstodon.org avatar

I wonder if generative AI will cause a real drop in motivation for organic human creativity: "I'll just have AI make it for me."

CenturyAvocado, to ai
@CenturyAvocado@fosstodon.org avatar

Here comes the bullshit machine... @revk @bloor
Someone came into this evening leading to a confusing interaction until the cause was identified.

On a side note, I think I might be done with this internet and tech stuff. I wonder what manual work I can take up instead.

mheadd, to ai
@mheadd@mastodon.social avatar

This is a fundamental mistake that people make when trying to assess whether LLMs are an appropriate tool to use in optimizing a process, function, or service:

"LLMs are not search engines looking up facts; they are pattern-spotting engines that guess the next best option in a sequence."

This terrific article is a great explainer on how they work and their limitations.

https://ig.ft.com/generative-ai/
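The FT quote's "pattern-spotting engine" idea can be sketched with a toy bigram model — the counts below are entirely made up for illustration, not drawn from any real model, but the mechanism (continue with the statistically likeliest next token, with no fact lookup anywhere) is the same in miniature:

```python
# Toy bigram "language model": hypothetical counts standing in for the
# statistics a real LLM distills from its training corpus.
BIGRAM_COUNTS: dict[str, dict[str, int]] = {
    "the": {"cat": 4, "dog": 3, "answer": 1},
    "cat": {"sat": 5, "ran": 2},
    "sat": {"on": 6},
    "on": {"the": 7},
}

def next_token(token: str) -> str:
    """Return the most likely continuation -- pattern completion, not a fact lookup."""
    options = BIGRAM_COUNTS.get(token, {})
    if not options:
        return "<end>"
    return max(options, key=options.get)

def generate(start: str, steps: int = 5) -> list[str]:
    """Chain next-token guesses into a sequence."""
    out = [start]
    for _ in range(steps):
        tok = next_token(out[-1])
        if tok == "<end>":
            break
        out.append(tok)
    return out

print(generate("the"))  # the model simply extends its most familiar pattern
```

Nothing in this loop checks whether the output is true — it only continues the most familiar sequence, which is the limitation the article is pointing at.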

attacus, to ai
@attacus@aus.social avatar

Most products and business problems require deterministic solutions.
Generative models are not deterministic.
Every single company who has ended up in the news for an AI gaffe has failed to grasp this distinction.

There’s no hammering “hallucination” out of a generative model; it’s baked into how the models work. You just end up spending so much time papering over the cracks in the façade that you end up with a beautiful découpage.
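The determinism point above can be made concrete with a minimal sketch (the token probabilities are invented for illustration): sampling from a distribution can give a different answer on every run, while greedy, temperature-0-style decoding is repeatable — but still only picks the likeliest pattern, not a verified fact.

```python
import random

# Hypothetical probabilities for one generation step (illustration only).
probs = {"Paris": 0.6, "London": 0.25, "Berlin": 0.15}

def sample(dist: dict[str, float], rng: random.Random) -> str:
    """Stochastic decoding: different runs can yield different tokens."""
    r = rng.random()
    acc = 0.0
    for tok, p in dist.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against floating-point rounding

def greedy(dist: dict[str, float]) -> str:
    """Greedy decoding: always the same token, but still just the
    most probable continuation, not a checked answer."""
    return max(dist, key=dist.get)

# Two sampled runs with different seeds may disagree...
print(sample(probs, random.Random(1)), sample(probs, random.Random(7)))
# ...while greedy decoding is repeatable:
print(greedy(probs))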

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIHype #AGI: "The reality is that no matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, the truth is that their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs and major cloud providers are tamping down the expectations of their clients for what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as the resource demands are escalating so much their climate commitments are getting thrown out the window."

https://disconnect.blog/ai-hype-is-over-ai-exhaustion-is-setting-in/

FatherEnoch, to ai
@FatherEnoch@mastodon.online avatar

AI might be cool, but it’s also a big fat liar, and we should probably be talking about that more.

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"With Senate Majority Leader Chuck Schumer releasing a sweeping “roadmap” for AI legislation today and major product announcements from OpenAI and Google, it’s been a big week for AI… and it’s only Wednesday.

But amid the ever-quickening pace of action, some observers wonder if government is looking at the tech industry with the right perspective. A report shared first with DFD from the nonprofit Data & Society argues that in order for powerful AI to integrate successfully with humanity, it must actually feature… the humanities.

Data & Society’s Serena Oduro and Tamara Kneese write that social scientists and other researchers should be directly involved in federally funded efforts to regulate and analyze AI. They say that given the unpredictable impact it might have on how people live, work and interact with institutions, AI development should involve non-STEM experts at every step.

“Especially with a general purpose technology, it is very hard to anticipate what exactly this technology will be used for,” said Kneese, a Data & Society senior researcher."

https://www.politico.com/newsletters/digital-future-daily/2024/05/15/ai-data-society-report-humanities-00158195

drahardja, to ai
@drahardja@sfba.social avatar

On the other hand, successful lawsuits against companies for the output of lousy chatbots will put a dollar amount on the liability of using chatbots to talk to customers, and may actually reduce their usage. https://mastodon.social/@arstechnica/112452961167345476

drahardja, to generativeAI
@drahardja@sfba.social avatar

Much as I dislike the theft of human labor that feeds many of the products we see today, I have to agree with @pluralistic that law is the wrong way to address the problem.

To frame the issue concretely: think of whom copyright law has benefited in the past, and then explain how it would benefit the individual creator when it is applied to generative AI. (Hint: it won’t.)

Copyright law is already abused and extended to an absurd degree today. It already overreaches. It impoverishes society by putting up barriers to creation and allowing toll-collectors to exist between citizen artists and their audience.

Labor law is likely what we need to lean on to protect creators in a way that copyright cannot. The inequality and unequal bargaining power that lead to the exploitation of artists and workers are what we need to address head-on.

Copyright will not save us.

“AI "art" and uncanniness”

https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand

bespacific, to generativeAI
@bespacific@newsie.social avatar

Fake studies have flooded publishers of top journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to the 217-year-old Wiley, based in Hoboken, NJ, which announced it is closing 19 journals, some of which were infected by large-scale research fraud. Wiley has reportedly had to retract more than 11,300 papers recently “that appeared compromised,” as generative AI makes it easier for paper mills to peddle fake research. https://www.wsj.com/science/academic-studies-research-paper-mills-journals-publishing-f5a3d4bc

Andbaker, to academia
@Andbaker@aus.social avatar

Uncited generative AI use by students in coursework

I teach a course where I allow the use of generative AI. My university rules allow this, and the students are instructed that they must cite the use of generative AI. I have set the same Laboratory Report coursework for the last two years, and students submit their work through Turnitin, so I can see what the Turnitin AI checker is reporting.

http://andy-baker.org/2024/05/15/uncited-generative-ai-use-by-students-in-coursework/

Jigsaw_You, to OpenAI
@Jigsaw_You@mastodon.nl avatar

@garymarcus spot-on…

“OpenAI has presumably pivoted to new features precisely because they don’t know how to produce the kind of capability advance that the ‘exponential improvement’ would have predicted.”

https://garymarcus.substack.com/p/hot-take-on-openais-new-gpt-4o?r=8tdk6&utm_campaign=post&utm_medium=web&triedRedirect=true

opentermsarchive, to generativeAI
@opentermsarchive@mastodon.lescommuns.org avatar

What can we discover by reading the terms and conditions of generative AI tools? What do users consent to? What are the regulatory responses in 🇪🇺 🇨🇳 🇺🇸?
Join our online event on May 23 at 16:30 UTC+2 to discover the Generative AI Watch project!
https://www.sciencespo.fr/ecole-droit/en/events/generative-ai-watch/
We will present a dataset of terms and conditions of major generative AI services, some of the discoveries that we made when tracking their changes, and how the changing regulatory landscape could impact those terms.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ overly intense secrecy, combined with a frequent failure to comply with public records laws, how can the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of the report are generated by AI vs. a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability." https://www.eff.org/deeplinks/2024/05/what-can-go-wrong-when-police-use-ai-write-reports

Crysophilax, to aiart
@Crysophilax@mastodon.social avatar

#aiart #aiartwork #AiArtists #GenerativeAI #GenerativeArt #poetry
I like to throw poems at AI and see what it comes up with.

In Flanders Fields

In Flanders Fields, the poppies blow
Between the crosses, row on row,
That mark our place; and in the sky
The larks, still bravely singing, fly
Scarce heard amid the guns below.

We are the dead. Short days ago
We lived, felt dawn, saw sunset glow,
Loved and were loved, and now we lie,
In Flanders fields.


jmcastagnetto, to ai
@jmcastagnetto@mastodon.social avatar

A report from Microsoft & LinkedIn about AI at work, indicating the rise in use of generative AI for work tasks.

"AI at Work Is Here. Now Comes the Hard Part" https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part/

scy, to generativeAI
@scy@chaos.social avatar

I'm old enough to remember how @creativecommons was founded as a way for independent creators to safely share their work and build upon each other.

In 2024, their take is now "billion dollar companies plagiarizing your art is fair use".

https://creativecommons.org/2023/02/17/fair-use-training-generative-ai/

Hats off to the author, you don't see that kind of, uh, skillful rhetorical chicanery every day. Like "generative AI doesn't compete with artists because artists are not in the data market". 😬

ishotjr, to generativeAI
@ishotjr@chaos.social avatar

how are folks feeling about the use of #generativeAI? when is it ok to use? when is it not? what are some examples you've seen of the wrong call being made? would love to get a ton of folks' opinions so please #boost for reach! ✨💗✨
