Evinceo, Nobody tell these guys that the control problem is just the halting problem and first year CS students already know the answer.
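(For anyone who skipped that first-year course: the "answer" is Turing's diagonalization argument that no general halting decider can exist. A minimal sketch, where `halts` is hypothetical by construction and the names are mine:)

```python
def halts(func, arg):
    """Hypothetical total halting decider -- assumed for contradiction.

    No such function can exist, which is the whole point."""
    raise NotImplementedError("no total halting decider exists")

def diagonal(func):
    # If the decider claims func(func) halts, loop forever;
    # otherwise, halt immediately.
    if halts(func, func):
        while True:
            pass
    return "halted"

# Feeding diagonal to itself is a contradiction either way:
# if diagonal(diagonal) halts, halts() must have said it loops,
# and if it loops, halts() must have said it halts.
```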
kuna, On a similar note, Yud’s decision theory that hinges on an AI (presumably a Turing Machine) predicting what a human (Turing-Complete at the least) does with 100% accuracy.
dgerard, remembering how Thiel paid Buterin to drop out of his comp sci course so he spent all of 2018 trying to implement plans for Ethereum that only required that P=NP
dgerard, :chefkiss:
Soyweiser, @sue_me_please Don't think this reply will properly show up on awful.systems, but I can't resist sneering.
It amuses me that for a while the LW people saw Musk as a great example, and then he just went 'I would solve the control problem by making them human-friendly and giving the robots low grip strength. Easy peasy.' Amazed that wasn't a wake-up moment for a lot of them.
sailor_sega_saturn, I remember role playing cops and robbers as a kid. I could point my finger and shout “bang bang I got you” but if my friend didn’t pretend to be mortally wounded and instead just kept running around there’s really nothing I could do.
bitofhope, td;lr
No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem?
While I agree this article is TL and I DR it, this is not an abstract. This is a redundant lede and attempted clickbait at that.
Oh wait I just noticed the L and D are swapped. Feel free not to tell me whether that’s a typo or some smarmy lesswrongism.
kuna, too dong, leedn’t read?
carlitoscohones, Word Count needs to be added to the crackpot index.
skillissuer, 2 points for every statement that is clearly vacuous.
3 points for every statement that is logically inconsistent.
this could be enough
gerikson, I didn’t read this but I’m confident it can be summarized as “how many hostile AGIs can we confine to the head of a pin?”