"We’ve seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of the risks of current AI technology, and can lead to safer technologies."
"People, societies, and cultures produce intelligence, not brains. Brains are involved, as are (for example) stories. A brain would not be sufficient to produce intelligence, if one could somehow be disentangled from the person, society, and culture."
"Two dangerous falsehoods afflict decisions about artificial intelligence:
First, that neural networks are impossible to understand. Therefore, there is no point in trying.
Second, that neural networks are the only and inevitable method for achieving advanced AI. Therefore, there is no reason to develop better alternatives."
"These myths contribute to the unreliability of AI systems, which both technical workers and powerful decision-makers shrug off as unavoidable and therefore acceptable."
One of the biggest problems with the phrase "artificial intelligence" is that decades of criti-hyping sci-fi have endowed it with the meaning "simulated mind". But human technology is no closer to creating that than it was in the 1950s. As AI experts like #DavidChapman tirelessly point out, humans haven't even developed a philosophy of mind accurate enough to tell us what a simulated mind would be simulating.
"So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities."
"'What sort of society and culture do we want, and how do we get that' is the topic of the AI-driven culture war. The culture war prevents us from thinking clearly about the future.
Mooglebook recommender AI does not hate you, but you are made out of emotionally-charged memes it can use for something else."
"Cargo cult science means conforming to misaligned incentives. For academics, it optimizes the proxy objective “publish journal articles,” which has increasingly diverged from the actual objective, understanding natural phenomena."
What's responsible for this? The managerialism that now grips most universities and other #research institutions, in which job security and promotion depend - at least in part - on the quantity of journal articles published.
"The Open Science and Replicability/Credibility movements, led by scientist-activists, have succeeded in changing some government, university, and academic journals’ policies.
...Alternatively, impediments may be so entrenched in academia that adequate improvement has become infeasible there. Accordingly, creating alternative, better scientific institutions—funding mechanisms, workplaces, communication channels, social norms—is now important and urgent."
"There’s a popular image of scientific geniuses figuring things out by thinking about math in an armchair. Newton and Einstein did that, but the kind of science they did is extremely atypical, and they are misleading as prototypes."
"The most powerful people, and notably the most monstrous, are not conspicuously intelligent, at least not in the sense measured by IQ...
Success in gaining power seems to depend instead on extreme Dark Tetrad traits (psychopathy, narcissism, Machiavellianism, and sadism). That’s moral idiocy, not any sort of intelligence. Maybe we should be more concerned with AI developing superhuman Dark Tetrad traits than superintelligence."
"Anyway, superintelligent general AI would probably be bad, so let’s not go there, not if we can avoid it. Probably a 'narrow AI,' meaning one that just did science stuff—or, more likely, many narrow AIs with different specializations—would suffice. That seems safer. Narrow science AIs need not be similar to human scientists, engineers, or mathematicians. Mind-likeness seems unnecessary (and scary)."
"The seeming ability of text generators to perform multi-step commonsense reasoning is currently the only plausible stepping stone toward Scary AI. I do find it somewhat worrying. So far, there have been no published investigations of either the mechanism for this ability or its ultimate limits. To the extent that apparent reasoning seems worrying, that project seems urgent."
Maybe for-profit companies should be banned from owning or otherwise controlling AI deployments? For the same reason we don't let them own or control nuclear weapon deployments?
"It is wise to especially mistrust AI systems, because they are extremely expensive to develop and are mainly owned and operated by unaccountable companies and government agencies. It is best to assume by default that they will act against you."
"Raji et al.’s 'The Fallacy of AI Functionality' points out that whether an AI system works reliably is ethically prior to the desirability of its intended purpose. They give dozens of examples of AI systems causing frequent, serious harms to specific people by acting in ways contrary to their designers’ goals."
"Most apocalyptic scenarios feature systems that are deceptive, incomprehensible, error-prone, enormously powerful, and which behave differently (and worse) after they are loosed on the world."
"AI risks are exploits on pools of technological power. Guarding those pools prevents disasters from exploitation by hostile people or institutions as well. That makes the effort well-spent even if Scary AI never happens. This may be more appealing to publics, or governments, if they are skeptical of AI doom."
An example most people in the 'verse will identify with:
"Pervasive digital surveillance and inadequate cybersecurity feature both in extreme AI doom scenarios and in the medium-sized catastrophes I discussed in the previous chapter. They also empower bad human actors right now."
"There are compelling and urgent reasons to end internet surveillance that have nothing to do with AI...
Foreign adversaries have access to extensive personal information databases compiled by US corporations, which could help target military, political, and business leaders with individualized propaganda or blackmail; plus real-time location data that could be used for intimidation or assassination."
"Possessed by an ideology, you may feel that your speech is divinely inspired, or that the wisdom of the ancients is speaking through you. Or at minimum you can be absolutely certain of its truth without checking, because it unambiguously aligns with—it authentically expresses—the Higher Truth of the system itself."
"As a whatever-ist, you speak for the whatever system. You look for opportunities to preach, and to argue with unbelievers. That may leave you bruised. It may not be to your advantage; you sacrifice yourself for the honor of the system. You take any insult to it as an attack on your self and your community, and might fight even unto death."