_L1vY_,
@_L1vY_@mstdn.social

Via Sigal Samuel (Vox):
12:41 PM · May 17, 2024

"Thank you to the company insiders who bravely spoke to me. According to my sources, the answer to "What did Ilya see?" is actually very simple...
People speculated that Ilya saw AGI...as if was hiding some conscious, shackled AI in the basement.

But reporting this out, I thought: This is not a horror story about AI. This is a horror story about humans.
Like in 2001: A Space Odyssey, the issue wasn't HAL lying."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

_L1vY_,
@_L1vY_@mstdn.social

Via Sigal Samuel (Vox):
4:16 PM · May 17, 2024

"Everyone owes massive gratitude to ex- folks who are speaking out. It was refreshing to see @janleike 's thread today.
And by refusing to sign an NDA, Daniel Kokotajlo gave up an insane amount of money so that he'd be free to criticize the company. That's real integrity."

_L1vY_,
@_L1vY_@mstdn.social

Via
12:16 PM · May 17, 2024

"Thank you Jan for speaking up."

_L1vY_,
@_L1vY_@mstdn.social

Via Jan Leike
@janleike
12:43 AM · May 15, 2024

"I resigned"

_L1vY_,
@_L1vY_@mstdn.social

Via Jan Leike
@janleike
11:57 AM · May 17, 2024

"Yesterday was my last day as head of alignment, superalignment lead, and executive

"Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us."

https://threadreaderapp.com/thread/1791498174659715494.html

_L1vY_,
@_L1vY_@mstdn.social

Via Jeff Wu
(@jeffwu OR? @barteepjorbinboy) 1:02 PM · May 17, 2024

"<3"

_L1vY_,
@_L1vY_@mstdn.social

Via Joshua Achiam
@jachiam0
2:07 AM · May 16, 2024

"What stresses me out: alignment by default happens so easily for current paradigm... few people take alignment seriously as a problem. It's still underinvested and has major issues.

This is not a doomer take. I don't think killing everyone is default behavior for AGI/ASI and people who claim it is have rationalized disordered pattern of thinking

Nonetheless civilization-scale catastrophes are possible in worlds where rogue AGIs/ASIs exist."

fifilamoura,
@fifilamoura@eldritch.cafe

@_L1vY_ Our problem is the humans whose default behavior is killing humans, and what those humans are currently using AI for, which is constant surveillance and targeting of humans. AI is already killing people with OpenAI's blessing, if we're going to be honest about it. @jachiam0
