smeg, to OpenAI
@smeg@assortedflotsam.com

OpenAI putting ‘shiny products’ above safety, says departing researcher | Artificial intelligence (AI) | The Guardian
https://www.theguardian.com/technology/article/2024/may/18/openai-putting-shiny-products-above-safety-says-departing-researcher #openai #genai #ai #aisafety

Sevoris, to OpenAI

So OpenAI is worried about "unaligned" AI.

Then they copy a woman's voice against her explicit wishes, because the CEO loves her performance in a movie.

Yeah, this is going great. The abusive ethics, the sexism and disrespect are inside the fucking house. These people couldn't train a responsible being even if they made an intelligent one.

This is a litmus test for the entire academic AI safety bubble. And I can guess how many will respond to this, as well. They won't.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

#AI #GenerativeAI #OpenAI #AISafety #AIEthics: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

ai6yr, to ai
@ai6yr@m.ai6yr.org

Axios: OpenAI CEO Sam Altman is one of a select group of AI leaders handpicked by Homeland Security Secretary Alejandro Mayorkas to join a new federal Artificial Intelligence Safety and Security Board. https://www.axios.com/2024/04/26/altman-mayorkas-dhs-ai-safety-board?utm_source=mastodon&utm_medium=social&utm_campaign=editorial

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

: "With a few exceptions, AI safety questions cannot be asked and answered at the levels of models alone. Safety depends to a large extent on the context and the environment in which the AI model or AI system is deployed. We have to specify a particular context before we can even meaningfully ask an AI safety question.

As a corollary, fixing AI safety at the model level alone is unlikely to be fruitful. Even if models themselves can somehow be made “safe”, they can easily be used for malicious purposes. That’s because an adversary can deploy a model without giving it access to the details of the context in which it is deployed. Therefore we cannot delegate safety questions to models — especially questions about misuse. The model will lack information that is necessary to make a correct decision.

Based on this perspective, we make four recommendations for safety and red teaming that would represent a major change to how things are done today." https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
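A minimal illustrative sketch of the argument above (not from the article; the function names, the DeploymentContext fields, and the dosage example are all hypothetical): the same prompt can be legitimate in one deployment and misuse in another, so a check that sees only the prompt cannot decide, while a system-level check that knows who is asking and where the model runs can.

from dataclasses import dataclass

@dataclass
class DeploymentContext:
    # Facts the deployer knows but the model alone is never given.
    user_is_verified_professional: bool
    application_domain: str  # e.g. "clinical-decision-support" or "public-chatbot"

def model_level_check(prompt: str) -> bool:
    # Model-only filter: sees just the text, so it must treat the request
    # the same way in every deployment (here it simply blocks the topic).
    return "dosage" not in prompt.lower()

def system_level_check(prompt: str, ctx: DeploymentContext) -> bool:
    # System-level filter: can use deployment context to decide.
    if "dosage" in prompt.lower():
        return (ctx.user_is_verified_professional
                and ctx.application_domain == "clinical-decision-support")
    return True

prompt = "What is the maximum recommended dosage of drug X for an adult patient?"
clinical = DeploymentContext(True, "clinical-decision-support")
public_bot = DeploymentContext(False, "public-chatbot")

print(model_level_check(prompt))               # False: blocked everywhere, context-blind
print(system_level_check(prompt, clinical))    # True: acceptable in a clinical tool
print(system_level_check(prompt, public_bot))  # False: refused in an anonymous chatbot

The point is not the toy keyword filter; it is only that the second check needs information the model itself never sees.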

itnewsbot, to machinelearning

AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating

Wikipedia... - https://arstechnica.com/?p=2007059 #largelanguagemodels #techpublications #machinelearning #aijournalism #aipublishing #aiarticles #journalism #wikipedia #aiethics #aisafety #chatgpt #chatgtp #biz #cnet #ai

itnewsbot, to machinelearning

Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora

In an in... - https://arstechnica.com/?p=2005529

itnewsbot, to machinelearning

Deepfake scammer walks off with $25 million in first-of-its-kind AI heist

On Sunday, a rep... - https://arstechnica.com/?p=2000988

itnewsbot, to machinelearning

OpenAI and Common Sense Media partner to protect teens from AI harms and misuse

On Monday, OpenAI announced a p... - https://arstechnica.com/?p=1999788 #largelanguagemodels #commonsensemedia #machinelearning #textsynthesis #aireviews #samaltman #aiethics #aisafety #chatgpt #chatgtp #biz #openai #ai

strypey, to ai
@strypey@mastodon.nzoss.nz

"AI risks are exploits on pools of technological power. Guarding those pools prevents disasters from exploitation by hostile people or institutions as well. That makes the effort well-spent even if Scary AI never happens. This may be more appealing to publics, or governments, if they are skeptical of AI doom."

https://betterwithout.ai/pragmatic-AI-safety

I've posted a quote along these lines before, but I think it's a key point, worth reiterating.

itnewsbot, to machinelearning

Zuckerberg’s AGI remarks follow trend of downplaying AI dangers - https://arstechnica.com/?p=1997158 #largelanguagemodels #machinelearning #markzuckerberg #instagramreel #opensourceai #opensource #instagram #samaltman #aiethics #aisafety #facebook #chatgpt #chatgtp #biz #aihype #openai #meta #agi #ai

itnewsbot, to machinelearning

OpenAI opens the door for military uses but maintains AI weapons ban

On Tues... - https://arstechnica.com/?p=1996787 #largelanguagemodels #usdefensedepartment #suicideprevention #machinelearning #cybersecurity #aiweapons #microsoft #aiethics #aisafety #military #pentagon #chatgpt #chatgtp #biz #openai #ai

strypey, to ai
@strypey@mastodon.nzoss.nz

I just discovered that one of my favourite philosophical writers, former AI researcher David Chapman, has published a book on the current state and future risks of AI;

https://betterwithout.ai/

I've read standalone essays David wrote years ago exploring the philosophical underbelly of AI development, and I'm confident this timely book will be just as insightful.

David seems to be in the 'verse here;

@Meaningness

strypey,
@strypey@mastodon.nzoss.nz

"This is the domain of , where systems are often imagined as moral... agents... AIs should align to human values, ideally by understanding and acting according to them, or at minimum by reliably recognizing and intending to respect them.

Attempts to specify what abstract values we want an to respect fail because we don’t have those. That’s not how human motivation works, nor are “values” a workable basis for an accurate ethical framework."

https://betterwithout.ai/AI-motivation

isomeme, to Meme
@isomeme@mastodon.sdf.org

A fond hope for the new year.

davidaugust, to ai
@davidaugust@mastodon.online

“…a deep truth about AI: that the story of AI being managed by a ‘human in the loop’ is a fantasy, because humans are neurologically incapable of maintaining vigilance in watching for rare occurrences.”

https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop

chrisoffner3d, to ai

“Me flaunting my insane wealth is good for AI safety, bro.” – Sam Altman


chrisoffner3d, to llm

The goal of the LVE project is to create a hub for the community to document, track, and discuss language model vulnerabilities and exposures (LVEs).

https://lve.pages.dev/

williamgunn, to ai
@williamgunn@mastodon.social

I'm a PhD biologist and I read @OpenAI's threat preparedness assessment plan for CBRN threats. It appears to be total nonsense designed without any input from a scientist. Here's why:

gmusser, to Futurology

When people fret that A.I.s will achieve superhuman general intelligence and take over the planet, they neglect the physical limits on these systems. This essay by Dan Roberts is a useful reality check. A.I. models are already resource-intensive and will probably top out at GPT-7. Roberts is one of the physicists I feature in my new book about physics, A.I., and neuroscience. #AIrisk #AIsafety #Singularity @danintheory https://www.sequoiacap.com/article/black-holes-perspective/

itnewsbot, to machinelearning

Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier

In an editorial ... - https://arstechnica.com/?p=1988745

chrisoffner3d, to ai

Using ChatGPT’s knowledge cutoff date against it.

AI safety standards are such a joke, it’s like we’re back in the 90s of software security.

(via https://x.com/venturetwins/status/1710321733184667985)

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org

: "Innovative companies like Be My Eyes, a Danish startup which leveraged GPT-4 to build an app helping the visually impaired navigate the world, rely on general-purpose AI models.

It is crucial that they know those models are safe and that they are not exposing themselves to unacceptable levels of regulatory and liability risk.

If a European startup has to meet safety standards of general-purpose AI models under the AI Act, they will only want to buy models from companies that can assure them that the final product will be safe.

But the information and guarantees that they need are not being offered.

All of this means that European startups have unsafe services that they will be asked to make safe under the AI Act, with limited resources to do so."

https://sifted.eu/articles/mistral-aleph-alpha-and-big-techs-lobbying-on-ai-safety-will-hurt-startups

analyticus, to Logic

More than argument, logic is the very structure of reality

The patterns of reality

Some have thought that logic will one day be completed and all its problems solved. Now we know it is an endless task

https://aeon.co/essays/more-than-argument-logic-is-the-very-structure-of-reality

#logic #reality #argument #arguments @philosophy #philosophy @philosophie @philosophyofmind

chrisoffner3d, to ai

> The fixation on speculative harms is “almost like a caricature of the reality that we’re experiencing,” said Deborah Raji, an AI researcher at the University of California, Berkeley. She worries that the focus on existential dangers will steer lawmakers away from risks that AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy.

https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362
