skarthik, to philosophy
@skarthik@neuromatch.social avatar

Good riddance to what was a colossal waste of money, energy, resources, and any sane person's time, intellect, and attention. To even call these exploratory projects is a disservice to human endeavor.

"Future of humanity", it seems. These guys can't even predict their next bowel movement, but somehow prognosticate about the long term future of humanity, singularity blah blah. This is what "philosophy" has come to with silicon valley and its money power: demented behavior is incentivized, douchery is rationalized, while reason is jettisoned.

https://www.theguardian.com/technology/2024/apr/28/nick-bostrom-controversial-future-of-humanity-institute-closure-longtermism-affective-altruism

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ExistentialRisk #AGI: "Far-future, speculative concerns often articulated in calls to mitigate “existential risk” are typically focused on the extinction of humanity. If you believe there is even a small chance of that happening, it makes sense to focus some attention and resources on preventing that possibility. However, I am deeply sceptical about narratives that exclusively centre speculative rather than actual harm, and the ways these narratives occupy such an outsized place in our public imagination.

We need a more nuanced understanding of existential risk – one that sees present-day harms as their own type of catastrophe worthy of urgent intervention and sees today’s interventions as directly connected to bigger, more complex interventions that may be needed in the future.

Rather than treating these perspectives as though they are in opposition with one another, I hope we can accelerate a research agenda that rejects harm as an inevitable byproduct of technological progress. This gets us closer to a best-case scenario, in which powerful AI systems are developed and deployed in safe, ethical and transparent ways in the service of maximum public benefit – or else not at all."

https://www.theguardian.com/technology/2023/jul/07/five-ways-ai-might-destroy-the-world-everyone-on-earth-could-fall-over-dead-in-the-same-second?CMP=Share_AndroidApp_Other

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #ExistentialRisk #Nuclear #NuclearEnergy #NuclearPower: "At the moment, there is no hard scientific evidence of an existential and catastrophic risk posed by AI.

Many of the concerns remain hypothetical and are derailing public attention from the already-pressing ethical and legal risks stemming from AI and their subsequent harms.

This is not to say that AI risks do not exist: they do. A growing body of evidence documents the harm these technologies can pose, especially on those most at risk such as ethnic minorities, populations in developing countries, and other vulnerable groups.

Over-dependency on AI, especially for critical national infrastructure (CNI), could be a source of significant vulnerability – but this would not be catastrophic for the species.

Concerns over wider, existential AI risks do need to be considered, carefully step-by-step, as the evidence is gathered and analysed. But moving too fast to control could also do harm."

https://www.chathamhouse.org/2023/06/nuclear-governance-model-wont-work-ai

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AGI #AIDoomsterism #Tescreal #ExistentialRisk: "The problem, though, is that there’s no plausible account of how an AGI could realistically accomplish this, and claiming that it would employ “magic” that we just can’t understand essentially renders the whole conversation vacuous, since once we’ve entered the world of magic, anything goes. To repurpose a famous line from Ludwig Wittgenstein: “What we cannot speak about we must pass over in silence.”

This is why I’ve become very critical of the whole “AGI existential risk” debate, and why I find it unfortunate that computer scientists like Geoffrey Hinton and Yoshua Bengio have jumped on the “AI doomer” bandwagon. We should be very skeptical of the public conversation surrounding AGI “existential risks.” Even more, we should be critical of how these warnings have been picked up and propagated by the news, as they distract from the very real harms that AI companies are causing right now, especially to marginalized communities.

If anything poses a direct and immediate threat to humanity, it’s the TESCREAL bundle of ideologies that’s driving the race to build AGI, while simultaneously inspiring the backlash of AI doomers who, like Yudkowsky, claim that AGI must be stopped at all costs — even at the risk of triggering a thermonuclear war."

https://www.truthdig.com/articles/does-agi-really-threaten-the-survival-of-the-species/

CharlieMcHenry, to ai
@CharlieMcHenry@connectop.us avatar

For my non-technical mutuals who are struggling to understand how #ArtificialIntelligence can pose an existential threat; for those who wonder how a ‘program’ and algorithms might ultimately threaten their very lives, this video provides a simplistic but plausible scenario I’m sure you’ll find useful. It’s based on, of all things, ‘stamp collecting’, and the actual story starts about 3 minutes in, so stick with it. #AI #explanation #existentialrisk #aiethics #singularity
https://youtube.com/watch?v=tcdVC4e6EV4&feature=share

futurebird, to random
@futurebird@sauropods.win avatar

Despite my best efforts I overheard some conservatives on a show talking about… how “Gaia Theory and environmentalism” are “suppressing ambition” and somehow this destroys manliness and leads to the end of Civilization. And I just—

I really wish people would make an effort to understand, with even a tiny bit of good faith, concepts they don’t agree with.

What could be more ambitious (or manly) than taking responsibility for the climate? I don’t even know if we have what it takes tbh.

FeralRobots,
@FeralRobots@mastodon.social avatar

@futurebird
'Terraforming earth' would get a lot more people interested but it would also be like shit for flies to the / / / crowd. You'd almost immediately see it co-opted to advocate for building a bunch of fusion-fueled domed cities populated by white people designing a brave new future filled with computronium-maximizing nanoassemblers & simulated humans living simulations of lives optimized for maximum neo-Utilitarian 'happiness.'

pluralistic, to random
@pluralistic@mamot.fr avatar

If you've followed my work for a long time, you've watched me transition from a "linkblogger" who posts 5-15 short hits every day to an "essay-blogger" who posts 5-7 long articles/week. I'm loving the new mode of working, but returning to linkblogging is also intensely, unexpectedly gratifying:

https://pluralistic.net/2023/05/02/wunderkammer/#jubillee

--

If you'd like an essay-formatted version to read/share, here's a link to pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2023/05/13/four-bar-linkage/#linkspittle

1/

pluralistic,
@pluralistic@mamot.fr avatar

Hinton joins a group of powerful, wealthy people who have made a lot of noise about the #ExistentialRisk of AI, while saying little or nothing about the ongoing risks of AI to people with disabilities, poor people, prisoners, workers, and other groups who are already being abused by automated decision-making and oversight systems.

22/

johnwehrle, to random

When I was a philosophy grad student, longtermism hadn't been invented yet. Even now, long after I left the field, it is apparently a fringe area of research. But what I am now reading about it is frankly alarming.

This is an article from a recovered longtermist philosopher. I'll add some quotes and comments below.

https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

#longtermism #philosophy #eugenics

dredmorbius,

@johnwehrle I'm defending the notion of effective and fact-based criticism here, not longtermism ...

... but note that the term "existential risk" LONG predates the emergence of "longtermism", and through 2000 is also far more prevalent. See screenshot, and note that "longtermism" is multiplied 3x to scale equivalently to "existential risk".

I've strong concerns with any argument which leans heavily on such readily-refuted claims. The viewpoint may well be justified, but a bit less hyperventilating hyperbole and poor scholarship would greatly help the case.

The notion of "existential risk" was originally applied in a religious context (by Paul Tillich) and to nuclear weapons.

See:

#longtermism #ExistentialRisk #GoogleNgramViewer #Ngrams #WeakArguments #EmilePTorres

dredmorbius,

@johnwehrle Existential risk, global catastrophic risk, and doomsday prediction share some interesting characteristics.

Richard Posner fingered one aspect of it:

By definition, all but the last doomsday prediction is false. Yet it does not follow, as many seem to think, that all doomsday predictions must be false; what follows is only that all such predictions but one are false.

-Richard A. Posner, Catastrophe: Risk and Response, p. 13. https://archive.org/details/catastropheriskr00posn/page/13/mode/2up

If we're going to be more specific as to existential risk, what's being discussed is the existence of some thing. And how you define that thing matters one hell of a lot. Yes, the multiplier is "infinity", but it's an infinity of that one thing.

And the probabilities attached to those risks, as well as the agency and influence humans have on them, also matter. Geological and astronomical events are all but entirely outside our capacity to influence, mitigate, or enhance them. Other risks of technological, social, political, economic, financial, and meteorological systems ... we exert some agency, though how much and in what ways ... varies.

At the personal level, life is an existential risk, and in the end, fate is certain. You, I, and everyone now living, or who has lived in the past, eventually dies. And in all likelihood, so will all who live in the future. One major concern of life is when we die, and how.

Then there are collective levels: a community or culture, a political entity, "Western Civilisation"[1], technological society, sentient beings, species, life on Earth, etc., etc. All of these too are probably bounded.

The scope at which risk is considered also matters. Personally, you're roughly as likely to die in an auto accident as by a meteor impact --- the first are fairly common and frequent, though of localised effect, the latter can be a Really Bad Day for many beings over a very wide region of Earth, possibly all of it.

But the risk to all of humanity is quite different. It is exceedingly unlikely that everyone on Earth will be involved in an auto accident at the same time. But a sufficiently large meteor would have a global impact, so to speak.
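
A toy sketch of that distinction, with placeholder numbers rather than anything estimated in this thread: however common an uncorrelated personal risk is, the chance it removes everyone in the same year is effectively nil, while a single correlated event carries that chance directly.

```python
import math

# Illustrative contrast between an uncorrelated personal risk and a fully
# correlated global one. Every number here is a placeholder, not an estimate.
population = 8e9
p_person_auto = 1.5e-4     # placeholder: per-person annual chance of a fatal crash
p_global_impact = 1e-7     # placeholder: annual chance of a civilisation-ending impact

# Probability that every person independently dies in a crash in the same year
# (expressed as a log because it underflows any float):
log10_p_all_auto = population * math.log10(p_person_auto)
print(f"log10 P(everyone dies in a crash, same year) ~ {log10_p_all_auto:.3g}")

# Probability that the single correlated event removes everyone this year:
print(f"P(everyone lost to the impact, same year)    = {p_global_impact}")
```

The particular values don't matter; what matters is that the first risk is independent across individuals while the second is shared, which is what makes it "existential" at the species scope.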

That's one angle I'd take on the "existential risk" question.

Another is to look at both past history of life on Earth, with various extinctions (mass or otherwise), and to take a look at the Drake Equation. Both can be conceived of as putting bounds on species survival.

What extinctions tell us is that most biological species go extinct. Sometimes alone, sometimes in small groups, sometimes in mass global events. There are other factors as well, genetic drift, relative competitiveness, and the like. Humans' potential as a species is probably on the order of a few millions of years, based on evolutionary history alone.[2]

And another characteristic is that life on Earth tends to be bounded by environmental constraints, grossly estimated by biomass and carbon conversion rates. By both measures, humans (and our extended plant and animal symbiotes) are truly exceptional on a historical basis. Likelihood is that at least so long as we are confined to Earth, and most probably the Solar System and even galactic neighbourhood itself, the total potential human population is highly limited. Long-term carrying-capacity estimates for Earth range from a few hundreds of millions to perhaps a few trillions (this is the range of estimates, plausibility of any of these is a whole 'nother discussion). That is, within about three orders of magnitude of present values. And at higher values, overall survival risks increase tremendously.

The Drake Equation is constructed as a frame for considering the potential for sentient life in the Universe, but it can also be viewed through the lens of "what are the failure modes on the path to multi-stellar civilisations". It's pretty clear that a number of the failure modes are in our past, so-called "great filters", but others are likely in our future.
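
As a rough illustration of that framing (all factor values below are placeholders, not drawn from this post or any survey), the Drake product can be read as a chain of filters in which any single small factor dominates the outcome:

```python
# Drake-style product read as a chain of "filters": N is the expected number
# of civilisations that survive every stage. All factor values below are
# purely illustrative placeholders.

factors = {
    "R_star (stars formed per year)":                 1.5,
    "f_p (fraction of stars with planets)":           0.9,
    "n_e (habitable planets per such star)":          0.5,
    "f_l (fraction that develop life)":               0.1,
    "f_i (fraction that develop intelligence)":       0.01,
    "f_c (fraction that become detectable)":          0.1,
    "L (longevity of a detectable civilisation, yr)": 1000,
}

N = 1.0
for name, value in factors.items():
    N *= value
    print(f"{name:<50} running product = {N:.6g}")

# Any single small factor -- a "great filter", past or future -- dominates the
# final product, which is the sense in which the same equation bounds long-run
# survival prospects rather than just counting radio-visible neighbours.
```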

From that perspective, even accepting the Longtermist's overall argument, a principal concern should be in surviving the near future, either for as long as human existence on Earth is itself viable (on the order of millions of years, and likely capped at 800 million years for the forms of life we know), or until interplanetary and interstellar travel of some sort become possible. With that latter being a huge "IF".

That is, so long as present modalities of existence dominate, even by the Longtermist argument, the argument should be on risk minimisation rather than quantitative maximisation. In the event humans (or our biological, mechanical, or cyborg successors) do reach a point of being able to grow maximally without over-reaching accessible resources, odds are very high we'll be able to do so with very few limitations. Humans are exceptionally fecund, particularly for higher-order mammals, and we'd likely have technological multipliers on base capacity as well. At astronomical timescales, this is not a limiting factor.
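
A back-of-envelope check on that last point, using a placeholder growth rate and a target near the top of the carrying-capacity range quoted above: even modest sustained growth closes the gap in roughly a thousand years, which is negligible on the timescales discussed here.

```python
import math

# Back-of-envelope check that fecundity is not the binding constraint on long
# timescales. Growth rate and target are placeholders, not figures from the post.
current_population = 8e9
target_population = 3e12      # placeholder near the top of the quoted carrying-capacity range
annual_growth_rate = 0.005    # placeholder: 0.5% sustained growth per year

years_needed = math.log(target_population / current_population) / math.log(1 + annual_growth_rate)
print(f"~{years_needed:,.0f} years to grow from 8e9 to 3e12 at 0.5%/yr")
# On the order of a thousand years -- negligible against the millions-of-years
# horizons above, which is why the bottleneck is surviving, not multiplying.
```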

Again: I don't think the vision of humans expanding much beyond Earth is actually viable. But even if it is, avoiding any of the many Stupid Ways to Die is the only way we'll get there.

Or simply survive and enjoy living on this beautiful, rare, gorgeous world.

Notes:

  1. Q: What do you think of W.C.? Gandhi: I think it would be a good idea.
  2. And for technological civilisations ... to date human history suggests even shorter, as the most technologically complex civilisations seem to have lifespans on the order of a few centuries to millennia. It's one thing to argue what might be possible, but history tells us what has occurred, to date.

#Risk #GlobalCatastrophicRisk #RichardPosner #ExistentialRisk #LongTermism #criticism #mortality

dredmorbius,

@johnwehrle Here's my earlier discussion of E.R. / GCR, and the Posner quote:

https://toot.cat/@dredmorbius/109712403433781694

I'm finding it holds up well.

That's actually an argument against another form of idiocy: panglossianism, particularly of the Steven Pinker variety. But the underlying mechanism is more or less the same: getting confounded with exhibited probabilities (nil, in the case of GCRs Not Yet Realised), or potential impacts (infinite, in the case of GCRs generally), and failing to consider other dimensions of the question:

  • Chains of causality in arriving at some specific risk.
  • Scope over which a given risk is "existential". (The "existence of what exactly?" question.)
  • Potential exogenous bounds on growth, capacity, and/or attainment regardless of the risks in question.

If humans are bad at addressing risk generally, we are globally, catastrophically, existentially BAD at dealing with global catastrophic and/or existential risks.

#Risk #GlobalCatastrophicRisk #ExistentialRisk #Pangloss #Panglossianism #StevenPinker #StevenPinkerIsAnIdiot #MyInfinityIsBiggerThanYours
