gimulnautti, to ai
@gimulnautti@mastodon.green avatar

When a bot speaks, the screen needs to display “AI GENERATED”.

At all times.

gimulnautti, to ai
@gimulnautti@mastodon.green avatar

I repeat:

It must be made ILLEGAL for a computer system to pass itself off as human.

ILLEGAL.

Otherwise we’ll be #%!$d by 2028

https://openai.com/index/hello-gpt-4o/

williamgunn, to ai
@williamgunn@mastodon.social avatar

I'm a PhD biologist and I read @OpenAI's threat preparedness assessment plan for CBRN threats. It appears to be total nonsense designed without any input from a scientist. Here's why:

gmusser, to Futurology

When people fret that A.I.s will achieve superhuman general intelligence and take over the planet, they neglect the physical limits on these systems. This essay by Dan Roberts is a useful reality check. A.I. models are already resource-intensive and will probably top out at GPT-7. Roberts is one of the physicists I feature in my new book about physics, A.I., and neuroscience. #AIrisk #AIsafety #Singularity @danintheory https://www.sequoiacap.com/article/black-holes-perspective/

williamgunn, to ai
@williamgunn@mastodon.social avatar

Car manufacturers generally aren't responsible for drivers running over people, yet they're still asked to design cars that minimize harm. There's a lesson in here somewhere.
https://mastodon.social/@davidzipper/111409863226153578

kdkorte, to ArtificialIntelligence
@kdkorte@fosstodon.org avatar

The Guardian has a fantastic interview with Fei-Fei Li, one of the pioneers in AI. I especially like the discussion about the risk of AI and the fact that we should focus on the current problems and the people behind the risk.
My favorite quote: 'AI is “promising” nothing. It is people who are promising – or not promising. AI is a piece of software. It is made by people, deployed by people and governed by people.'
Check it out at: https://www.theguardian.com/technology/2023/nov/05/ai-pioneer-fei-fei-li-im-more-concerned-about-the-risks-that-are-here-and-now

williamgunn, to ai
@williamgunn@mastodon.social avatar

The US, UK, and China all signed an agreement. That it doesn't say much isn't the point. Many people predicted that no agreement was possible and that an arms race was inevitable; now they have to update. It's no longer obviously true that we have to build it first or China will beat us.
https://techhub.social/@Techmeme/111358871909216452

williamgunn, to ai
@williamgunn@mastodon.social avatar

Re: the debate about releasing model weights for potentially dangerous models, never underestimate the ability of our species to cut off its nose to spite its face. https://www.latimes.com/archives/la-xpm-1989-12-03-me-577-story.html

williamgunn, to ai
@williamgunn@mastodon.social avatar

Wherein an AI trained on economics suggests that could lead to a revival of the Malthusian trap: https://econgoat.ai/en/chat

williamgunn, to ai
@williamgunn@mastodon.social avatar

What would a superintelligent AI think about creating an intelligence greater than it?

williamgunn, to journalism
@williamgunn@mastodon.social avatar

There are some causes that had a surge in awareness over the past few years but have seen attention wane recently with the wars & greater salience of . Expect said cause advocates to try to grab the mic again soon. If you're doing anything that might get you a bit of attention, be on alert for tactics to make your thing about them & their cause.

williamgunn, to ai
@williamgunn@mastodon.social avatar

Normally I would block out the name if I'm sharing something to comment negatively on it, but if you're going to unironically declare yourself a terrorist...

williamgunn, (edited ) to ai
@williamgunn@mastodon.social avatar

A taxonomy of positions against being an existential risk: https://www.lesswrong.com/posts/BvFJnyqsJzBCybDSD/taxonomy-of-ai-risk-counterarguments
What do you think? Please reshare!

gimulnautti, to writing
@gimulnautti@mastodon.green avatar

Hi all. I expanded my blogging to @medium ! If that's your favourite platform for , you can follow me there.

I’m starting out with the article I wrote about involved with the automatisation of , and how that alone is a serious threat to .

The article then goes into deeper technical details on how to pre-proof our social discourse platforms against the attack of intelligent bots, which surely is just a question of time.

https://medium.com/@toni.k.aittoniemi/humans-have-freedom-of-expression-bots-dont-da2d2931c2b6

KathyReid, to ai
@KathyReid@aus.social avatar

A group of prominent and scientists signed a very simple statement on giving the possibilities of global catastrophe caused by AI more prominence.

https://www.safe.ai/statement-on-ai-risk

This is part of a broader movement of or . I don't disagree with everything this movement has to say; there are real and tangible consequences to the unfettered development of AI systems.

But the focus of this work is on possible futures. Right now, there are people who experience discrimination, poorer outcomes, impeded life chances, and real, material harms because of the technologies we already have in place.

And I wonder if this focus on possible futures is because the people warning about them don't feel the real and material harms already being caused? Because they're predominantly male-identifying. Or white. Or socio-economically advantaged. Or well educated. Or articulate. Or powerful. Or, intersectionally, many of these qualities.

It's hard to worry about a possible future when you're living a life of a thousand machine learning-triggered paper cuts in the one that exists already.
