strypey, to ai

"We’ve seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of the risks of current AI technology, and can lead to safer technologies."

https://betterwithout.ai/science-engineering-vs-AI

strypey, to Futurology

"People, societies, and cultures produce intelligence, not brains. Brains are involved, as are (for example) stories. A brain would not be sufficient to produce intelligence, if one could somehow be disentangled from the person, society, and culture."

https://betterwithout.ai/backpropaganda#fn_people_not_brains

strypey, to ai

"Two dangerous falsehoods afflict decisions about artificial intelligence:

  • First, that neural networks are impossible to understand. Therefore, there is no point in trying.

  • Second, that neural networks are the only and inevitable method for achieving advanced AI. Therefore, there is no reason to develop better alternatives."

https://betterwithout.ai/backpropaganda

strypey,

"These myths contribute to the unreliability of AI systems, which both technical workers and powerful decision-makers shrug off as unavoidable and therefore acceptable."

https://betterwithout.ai/backpropaganda

strypey, to ai

One of the biggest problems with the phrase "artificial intelligence" is that decades of criti-hyping sci-fi have endowed it with the meaning "simulated mind". But human technology is no closer to creating that than it was in the 1950s. As AI experts tirelessly point out, humans haven't even developed a philosophy of mind accurate enough to tell us what a simulated mind would be simulating.

(1/2)

strypey, to ai

"So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities.

In short: they are bad."

#DavidChapman, Gradient Dissent

strypey, to ai

"AI is about power and control. The technical details are interesting for some of us, but they’re a sideshow.

Superintelligence is a fantasy of power, not intelligence. Intelligence is just a technical detail."

https://betterwithout.ai/one-bit-future

I've already posted quotes from this book that make this point, but I think it's worth reiterating. Plus I just really like this quote.

strypey, to ai

"'What sort of society and culture do we want, and how do we get that' is the topic of the AI-driven culture war. The culture war prevents us from thinking clearly about the future.

Mooglebook recommender AI does not hate you, but you are made out of emotionally-charged memes it can use for something else."

https://betterwithout.ai/AI-destroyed-the-future

strypey, (edited) to Futurology

"Cargo cult science means conforming to misaligned incentives. For academics, it optimizes the proxy objective “publish journal articles,” which has increasingly diverged from the actual objective, understanding natural phenomena."

https://betterwithout.ai/stop-obstructing-science

What's responsible for this? The managerialism that now grips most universities and other institutions, in which job security and promotion depend - at least in part - on the quantity of journal articles published.

strypey,

"The Open Science and Replicability/Credibility movements, led by scientist-activists, have succeeded in changing some government, university, and academic journals’ policies.

...Alternatively, impediments may be so entrenched in academia that adequate improvement has become infeasible there. Accordingly, creating alternative, better scientific institutions—funding mechanisms, workplaces, communication channels, social norms—is now important and urgent."

https://betterwithout.ai/stop-obstructing-science

strypey, to science

"There’s a popular image of scientific geniuses figuring things out by thinking about math in an armchair. Newton and Einstein did that, but the kind of science they did is extremely atypical, and they are misleading as prototypes."

#DavidChapman

https://betterwithout.ai/limits-to-AI-simulation

#science #genius

strypey, to random

"The most powerful people, and notably the most monstrous, are not conspicuously intelligent, at least not in the sense measured by IQ...

Success in gaining power seems to depend instead on extreme Dark Tetrad traits (psychopathy, narcissism, machiavellianism, and sadism). That’s moral idiocy, not any sort of intelligence. Maybe we should be more concerned with AI developing superhuman dark tetrad traits than superintelligence."

https://betterwithout.ai/what-intelligent-people-do

strypey, to ai

"Anyway, superintelligent general AI would probably be bad, so let’s not go there, not if we can avoid it. Probably a 'narrow AI,' meaning one that just did science stuff—or, more likely, many narrow AIs with different specializations—would suffice. That seems safer. Narrow science AIs need not be similar to human scientists, engineers, or mathematicians. Mind-likeness seems unnecessary (and scary)."

https://betterwithout.ai/what-AI-for-progress

strypey, to ai

"The seeming ability of text generators to perform multi-step commonsense reasoning is currently the only plausible stepping stone toward Scary AI. I do find it somewhat worrying. So far, there have been no published investigations of either the mechanism for this ability or its ultimate limits. To the extent that apparent reasoning seems worrying, that project seems urgent."

https://betterwithout.ai/fight-unsafe-AI

strypey, to ai

Maybe for-profit companies should be banned from owning or otherwise controlling AI deployments? For the same reason we don't let them own or control nuclear weapon deployments?

strypey,

"It is wise to especially mistrust AI systems, because they are extremely expensive to develop and are mainly owned and operated by unaccountable companies and government agencies. It is best to assume by default that they will act against you."

https://betterwithout.ai/mistrust-machine-learning

strypey, to ai

"Raji et al.’s 'The Fallacy of AI Functionality' points out that whether an AI system works reliably is ethically prior to the desirability of its intended purpose. They give dozens of examples of AI systems causing frequent, serious harms to specific people by acting in ways contrary to their designers’ goals."

https://betterwithout.ai/mistrust-machine-learning

strypey,

"Most apocalyptic scenarios feature systems that are deceptive, incomprehensible, error-prone, enormously powerful, and which behave differently (and worse) after they are loosed on the world.

That is the kind of AI we’ve got now."

https://betterwithout.ai/mistrust-machine-learning

strypey, to security

"AI risks are exploits on pools of technological power. Guarding those pools prevents disasters from exploitation by hostile people or institutions as well. That makes the effort well-spent even if Scary AI never happens. This may be more appealing to publics, or governments, if they are skeptical of AI doom."

https://betterwithout.ai/pragmatic-AI-safety

I've posted a quote along these lines before, but I think it's a key point, worth reiterating.

strypey,

An example most people in the verse will identify with:

"Pervasive digital surveillance and inadequate cybersecurity feature both in extreme AI doom scenarios and in the medium-sized catastrophes I discussed in the previous chapter. They also empower bad human actors right now."

https://betterwithout.ai/pragmatic-AI-safety

strypey,

"There are compelling and urgent reasons to end internet surveillance that have nothing to do with AI...

Foreign adversaries have access to extensive personal information databases compiled by US corporations, which could help target military, political, and business leaders with individualized propaganda or blackmail; plus real-time location data that could be used for intimidation or assassination."

https://betterwithout.ai/end-digital-surveillance


strypey, to random

"Possessed by an ideology, you may feel that your speech is divinely inspired, or that the wisdom of the ancients is speaking through you. Or at minimum you can be absolutely certain of its truth without checking, because it unambiguously aligns with—it authentically expresses—the Higher Truth of the system itself."

https://meaningness.com/vaster-than-ideology

strypey,

"As a whatever-ist, you speak for the whatever system. You look for opportunities to preach, and to argue with unbelievers. That may leave you bruised. It may not be to your advantage; you sacrifice yourself for the honor of the system. You take any insult to it as an attack on your self and your community, and might fight even unto death."

https://meaningness.com/vaster-than-ideology
