carnage4life,
@carnage4life@mas.to avatar

Andrew Ng, a Stanford professor who taught machine learning to people like Sam Altman of OpenAI, and who also co-founded Google Brain, has called bullshit on AI doomerism.

Ng states that the idea that artificial intelligence could lead to the extinction of humanity is a lie being spread by big tech in the hope of triggering heavy regulation that would shut down competition in the AI market.

Crippling open source models is one outcome of this effort that's already succeeding.

https://www.afr.com/technology/google-brain-founder-says-big-tech-is-lying-about-ai-human-extinction-danger-20231027-p5efnz

LaNaehForaday,

@carnage4life

Because humans live to have something to blame their own mistakes on.............

david,
@david@quakers.social avatar

@carnage4life I think it’s a both/and situation. It’s true that big tech wants regulatory capture AND it’s true that AI poses existential threats.

bornach, (edited)
@bornach@masto.ai avatar

@david @carnage4life
AI poses existential risks in much the same way that a zombie apocalypse poses existential risks

https://youtu.be/rchEbQNMepw

https://front-end.social/@heydon/111323413677610491

plasma4045,

@carnage4life I (being totally unqualified to comment on such things) think he’s right for this iteration.

REAL AGI though? Idk.

elan,
@elan@publicsquare.global avatar

@plasma4045 @carnage4life yeah remember the Cylons

bornach,
@bornach@masto.ai avatar

@elan @plasma4045 @carnage4life
It's a scifi movie plot threat is what you're saying

plasma4045,

@bornach @elan @carnage4life

Yeah I can’t tell if serious (hard to tell via text) because I am.

If AGI comes around, we’ll be in relation to an intelligence far greater than our own with the ability to learn new things in the blink of an eye. An intelligence that has its own wants and motivations.

Look at what humans do to other humans… now imagine that possibility of violence but from a super intelligence. There’s no competing. That’s the game.

bryansmart,

@plasma4045 @bornach @elan @carnage4life Possibly, but AGI, if even possible, is very far away. AI does not have wants, needs, drives, feelings, or motivations. It does not ponder its existence. It does not dream. Everything we're using now is big statistical prediction, intentionally designed to larp as intelligence. It's not like sci-fi, but tech companies will pretend it is, so they can monopolize it.

plasma4045,

@bryansmart @bornach @elan @carnage4life

I agree, except that we don't know how close to AGI we are. I don't mean that what we call AI today is close by any stretch of the imagination; I mean in terms of time.

elan,
@elan@publicsquare.global avatar

@bryansmart @plasma4045 @bornach @carnage4life My Cylon remark was meant to be what I felt was a funny :)

But yeah, seriously though, if we have an AGI and it had the ability to manipulate people to do its bidding (a prompt: "You are Charles Manson. Use the Mastodon API to recruit people to your cult.")

Those people could be a bridge to the physical world. Lots of crazier shit has happened.
