The pursuit of the most advanced AI—human-like artificial general intelligence—has prompted concerns among experts about potential dangers if it runs amok.
No, we can just fuck around with the component parts and get lucky.
We know enough about how a neuron works to simulate one, but not enough to describe neurons' emergent properties. Since the brain is made of relatively simple parts combined into a complex pattern, you can build a sort of brain without understanding how the whole works; then you just have to get lucky.
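The "simple parts, complex whole" point can be sketched in a few lines of Python. Each unit below is a crude artificial neuron (weighted sum plus a sigmoid squash); no single one can compute XOR, but three of them wired together can. The weights are hand-picked for illustration, not learned:

```python
import math

def neuron(inputs, weights, bias):
    """One simple part: weighted sum of inputs plus bias,
    squashed through a sigmoid. Nothing clever happens here."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def xor(x1, x2):
    """Three simple parts in a pattern. A single neuron cannot
    compute XOR; this small network can. The weights below are
    hand-chosen (an OR-like unit, a NAND-like unit, then an AND),
    purely to illustrate composition."""
    h1 = neuron([x1, x2], [20, 20], -10)    # roughly OR
    h2 = neuron([x1, x2], [-20, -20], 30)   # roughly NAND
    return neuron([h1, h2], [20, 20], -30)  # roughly AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor(a, b)))
```

The interesting behavior lives in the wiring, not in any one unit, which is the sense in which you can assemble the parts without understanding the whole.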
Evidence? The best I’ve seen so far is neural networks and LLMs. We still don’t understand why an LLM picks the next token it does, but the results speak for themselves.