remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

: "According to the FHI itself, its closure was a result of growing administrative tensions with Oxford’s faculty of philosophy. “Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed,” the final report stated.

But both Bostrom and the institute, which brought together philosophers, computer scientists, mathematicians and economists, have been subject to a number of controversies in recent years. Fifteen months ago Bostrom was forced to issue an apology for comments he’d made in a group email back in 1996, when he was a 23-year-old postgraduate student at the London School of Economics. In the retrieved message Bostrom used the N-word and argued that white people were more intelligent than black people.

The apology did little to placate Bostrom’s critics, not least because he conspicuously failed to withdraw his central contention regarding race and intelligence, and seemed to make a partial defence of eugenics. Although, after an investigation, Oxford University did accept that Bostrom was not a racist, the whole episode left a stain on the institute’s reputation at a time when issues of anti-racism and decolonisation have become critically important to many university departments." https://www.theguardian.com/technology/2024/apr/28/nick-bostrom-controversial-future-of-humanity-institute-closure-longtermism-affective-altruism

skarthik, to philosophy
@skarthik@neuromatch.social avatar

Good riddance to what was a colossal waste of money, energy, resources, and any sane person's time, intellect, and attention. To even call these exploratory projects is a disservice to human endeavor.

"Future of humanity", it seems. These guys can't even predict their next bowel movement, but somehow prognosticate about the long term future of humanity, singularity blah blah. This is what "philosophy" has come to with silicon valley and its money power: demented behavior is incentivized, douchery is rationalized, while reason is jettisoned.

https://www.theguardian.com/technology/2024/apr/28/nick-bostrom-controversial-future-of-humanity-institute-closure-longtermism-affective-altruism

RL_Dane, to random
@RL_Dane@fosstodon.org avatar

Longtermism is tragically, ironically the best argument for human extinction.

If we value the welfare of our theoretical progeny over the human being next to us, humanity has nothing to offer the universe.

remixtures, to Futurology Portuguese
@remixtures@tldr.nettime.org avatar

: "The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636

drahardja, to random
@drahardja@sfba.social avatar

Any day when a longtermism proponent loses their funding and platform is a good day.

“Oxford shuts down institute run by Elon Musk-backed philosopher”

https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

: "Oxford University this week shut down an academic institute run by one of Elon Musk’s favorite philosophers. The Future of Humanity Institute, dedicated to the long-termism movement and other Silicon Valley-endorsed ideas such as effective altruism, closed this week after 19 years of operation. Musk had donated £1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence. He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter.

The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers. Sam Altman of OpenAI, Bill Gates of Microsoft and Musk all wrote blurbs for his 2014 bestselling book Superintelligence.

“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” Musk tweeted in 2014.

Bostrom resigned from Oxford following the institute’s closure, he said." https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

jackofalltrades, to climate
@jackofalltrades@mas.to avatar

https://journals.sagepub.com/doi/10.1177/00368504231201372

Interesting paper. It correctly identifies the source of our problems: behavioral patterns, culture, and power structures glorifying consumption and pronatalism. The authors recognize that targeting only symptoms of overshoot (like climate change) with incremental technological interventions is a losing strategy.

pvonhellermannn,
@pvonhellermannn@mastodon.green avatar

@mojala @jackofalltrades @fluffykittycat

Yes! It is pretty telling that Musk himself has 10 children, and other tech bros/billionaires also tend to have many (at least one other has 10 himself, I think). With all their pronatalism stuff, it is literally them breeding their own lovely white offspring for the future whilst everyone else can die off.

But I will read the piece! I work on palm oil, so I'm aware of global food needs etc. It's just that, for the above reasons, I'm super wary of any overpopulation talk.

raymondpert, to australia
@raymondpert@mastodon.cloud avatar

Australia's Great Barrier Reef hit by record bleaching

> Australia's spectacular Great Barrier Reef is experiencing its worst bleaching event on record, the country's reef authority reported on Wednesday (Apr 17).

> Often dubbed the world's largest living structure, the Great Barrier Reef is a 2,300km-long expanse, home to a stunning array of biodiversity including more than 600 types of coral and 1,625 fish species.
https://www.channelnewsasia.com/world/australias-great-barrier-reef-hit-record-bleaching-4270881

HistoPol, (edited )
@HistoPol@mastodon.social avatar

@Syulang @Seruko @raymondpert

(2/2)

...was a species on earth less worthy of becoming a "multiplanetary species," as Musk and the other acolytes of longtermism aspire to become, it is the human race.

/s: In fact, if ecocide were a crime punishable by the death penalty, homo sapiens would deserve an extinction-level event.

R.I.P. homo sapiens. /s

//

HistoPol,
@HistoPol@mastodon.social avatar

@Syulang

I do know where to begin (alas) regarding TESCREAL --the "L" standing for longtermism:

https://mastodon.social/@HistoPol/110565890923413442

@Seruko @raymondpert

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

: "Most of us would say that human extinction would be rather bad, for one reason or another. But not everyone would agree.

What kind of person prefers human extinction over continued existence? There are a few obvious suspects. One is the “philosophical pessimist” who argues that the world is full of so much human suffering that the nonexistence of our species is better than continued existence. Another is a certain stripe of radical environmentalist who claims that humanity is so destructive to the biosphere, that only our extinction can save what remains of the natural world.

Then there is a third group of people who aren’t bothered by the possibility of human extinction, and indeed some hope to actively bring it about in the coming decades. They represent a more dangerous and extreme form of pro-extinctionist ideology that is fairly widespread within Silicon Valley. In fact, some of the most powerful people in the tech world are members of this group, such as the co-founder of Google, Larry Page." https://www.truthdig.com/articles/team-human-vs-team-posthuman-which-side-are-you-on/

mixmistressalice, to Podcast
@mixmistressalice@kolektiva.social avatar

"Everyone's talking about AI, how it will change the world, and even suggesting it might end humanity as we know it. Dave Troy is joined by Dr. Timnit Gebru and Émile Torres, two prominent critics of AI doomerism, to cut through the noise, and look at where these ideas really came from, and offer suggestions on how we might look at these problems differently. And they also offer a picture of the darker side of these ideas and how they connect to Eugenics and other ideologies historically.

Together Émile and Timnit coined an acronym called TESCREAL, which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism — and yeah, that's a lot of -isms. But it ties into other topics that we have covered in this series, including Russian Cosmism and Longtermism.

Dr. Gebru came to prominence in 2020 after she was fired from Google for speaking up about the company's lack of ethical guardrails in its AI development work. Émile Torres studies existential risk and has been a critic of the "longtermist" movement for several years."—Dave Troy >

https://pod.co/dave-troy/understanding-tescreal-with-dr-timnit-gebru-and-mile-torres

gimulnautti, to random
@gimulnautti@mastodon.green avatar

I can just imagine Elon Musk’s face when he’s reading about longtermism, like:

”Wait, so if the future contains so many more potential human lives than the present, that basically means I don’t have to care jack about anyone currently alive and I can still be the hero & saviour of humanity?

😈🤠🤑 YASSSS!!!!”

Peternimmo, to cryptocurrency
@Peternimmo@mastodon.scot avatar

Emily F. Gorcenski sums it up beautifully. Just part of an amazing essay on the perverse religiosity of the AI industry. I think she's mostly correct on the parallels. A long read, but highly recommended.

@emilygorcenski
https://emilygorcenski.com/post/making-god/

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

: "The problem with removing the messy, squishy, human part of decisionmaking is you can end up with an ideology like effective altruism: one that allows a person to justify almost any course of action in the supposed pursuit of maximizing their effectiveness.

Take, for example, the widely held belief among EAs that it is more effective for a person to take an extremely high-paying job than to work for a non-profit, because the impact of donating lots of money is far higher than the impact of one individual’s work. (The hypothetical person described in this belief, I will note, tends to be a student at an elite university rather than an average person on the street — a detail I think is illuminating about effective altruism’s demographic makeup.) This is a useful way to justify working for a company that many others might view as ethically dubious: say, a defense contractor developing weapons, a technology firm building surveillance tools, or a company known to use child labor. It’s also an easy way to justify life’s luxuries: if every hour of my time is so precious that I must maximize the amount of it spent earning so I may later give, then it’s only logical to hire help to do my housework, or order takeout every night, or hire a car service instead of using public transit.

The philosophy has also justified other not-so-altruistic things: one of effective altruism’s ideological originators, William MacAskill, has urged people not to boycott sweatshops (“there is no question that sweatshops benefit those in poor countries”, he says). Taken to the extreme, someone could feasibly justify committing massive fraud or other types of wrongdoing in order to obtain billions of dollars that they could, maybe someday, donate to worthy causes. You know, hypothetically."

https://newsletter.mollywhite.net/p/effective-obfuscation
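
(A minimal sketch of the opportunity-cost arithmetic that underwrites the "earning to give" logic above, with purely illustrative numbers that are not from the essay: let w be the hourly wage and m the impact credited per donated dollar, so an hour of paid work "yields" w·m impact units.

    w = $200/hour, m = 1 unit per $  ->  an hour of housework "costs" 200 units
    hiring help at $30/hour          ->  a net "gain" of 170 units per hour outsourced

By this accounting, almost any luxury that buys back an earning hour pencils out, which is exactly the failure mode described above.)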

2ndStar, (edited ) to random German
@2ndStar@astronomy.social avatar

deleted_by_author

    HistoPol,
    @HistoPol@mastodon.social avatar

    @danimo @human_with_piano @2ndStar @mina

    (3/3)

    ...is of course also part of it; for many, perhaps the main driving force.
    But evidently "without regard for losses" - much like with Musk (e.g. safety at Tesla and X). For him, however, it is about realizing the fascistoid philosophy of his sect (TESCREAL).

    //

    pluralistic, to random
    @pluralistic@mamot.fr avatar

    "Silicon Valley ideology says safeguarding intelligence in the future is more important than its systems systematically crushing and killing black and brown people right now. Long-termism grabs attention back from people being harmed, who were beginning to make too much noise."

    brad262run, to tech
    @brad262run@mastodon.online avatar

    This article barely scratches the surface of the threat from Silicon Valley's billionaires.

    Titans of tech helped create Donald Trump. Now they're alienated from politics and searching for allies https://www.washingtonpost.com/technology/2023/11/12/silicon-valley-billionaire-donors-presidential-candidates/

    Read these two pieces by @anildash for more:

    The tycoon martyrdom charade https://www.anildash.com/2023/02/27/tycoon-martyrdom-charade/

    " qanon" and the of the tycoons https://www.anildash.com/2023/07/07/vc-qanon/

    brad262run,
    @brad262run@mastodon.online avatar

    "Did you ever wonder why the 21st century feels like we're living in a bad novel from the 1980s?” http://www.antipope.org/charlie/blog-static/2023/11/dont-create-the-torment-nexus.html by @cstross

    Silicon Valley ideology quietly admits that (its) future is not compatible with (our) present.
    "Long-termism grabs attention back from people being harmed, who were beginning to make too much noise" https://crookedtimber.org/2023/11/15/silicon-valleys-worldview-is-not-just-an-ideology-its-a-personality-disorder/ by @mariafarrell

    rticks, to solarpunk
    @rticks@mastodon.social avatar

    For what it's worth, the future of humanity is very, very bright long term. Granted, the road there is so dark that it seems impossible, so I don't blame you at all for despair, or for choking idiots who think that means we have no obligation to the future.

    FuckElon, to twitter
    @FuckElon@mastodon.social avatar

    The Long Now Foundation, established in 1996.

    https://longnow.org/

    #X

    mina, to random
    @mina@berlin.social avatar

    Day 1

    I’ve been invited by @mikako6 to share one image (no posters, no titles, no explanations) from 10 films that impacted me.

    The premise is that every day a new person will be added: 10 days, 10 movie images, 10 friends.

    I'll tag @HistoPol (feel free to ignore if not interested or already tagged)

    #TenDaysTenFilmsTenFriends

    HistoPol, (edited )
    @HistoPol@mastodon.social avatar

    @mina @si_irini

    You two are 100% correct.
    Musk and the other tech mogul nerds are actually trying to realize parts of these novels from their high-school and college days within their framework of beliefs, e.g. longtermism.
    (If you listen to the two interviews/podcasts* in my TL, it becomes clear.)

    https://mastodon.social/@HistoPol/110565890923413442

    @Yeekasoose @mikako6 @pmj

    remixtures, to random Portuguese
    @remixtures@tldr.nettime.org avatar

    : "The manifesto is grounded in some eyebrow-raising associations, including fascists and reactionaries. Andreesen lists the "patron saints" of techno-optimism, and they include Nick Land, one of the chief architects of modern "accelerationism" who is better known as championing the anti-democratic Dark Enlightenment movement that casts liberal-multicultural-democratic thinking as embodying a nefarious "Cathedral." Andreessen also calls out Filippo Tommaso Marinetti as one of his patron saints. Marinetti is not only the author of the technology- and destruction-worshipping Futurist Manifesto from 1909, but also one of the architects of Italian fascism. Marinetti co-authored the Fascist Manifesto in 1919 and founded a futurist political party that merged with Mussolini's fascists. Other futurist thinkers and artists exist. To call Marinetti in particular a "saint" is a choice.

    None of this is new or shocking in itself. What is notable is that the rant is being posted unabridged on the blog of a major Silicon Valley venture capital firm, and parroted obediently and uncritically by the mainstream tech press. It gives further weight to viewing effective accelerationism—and its counterparts, “effective altruism” and “longtermism”—as the official ideology of Silicon Valley."

    https://www.vice.com/en/article/93kg5d/major-tech-investor-calls-architect-of-fascism-a-saint-in-unhinged-manifesto

    2ndStar, to random German
    @2ndStar@astronomy.social avatar

    deleted_by_author

    HistoPol, (edited )
    @HistoPol@mastodon.social avatar

    @2ndStar

    Nobody in Germany needs #X. - Even at the height of Twitter, it was the SMALLEST of the top six social media platforms in Germany.

    Musk is not fascistoid; with his TESCREAL ideology (more precisely: longtermism) he is a racist and a fascist.

    ANYONE who keeps supplying content, or even just "likes" posts on X, shares the guilt for the return of global fascism.

    It's that simple. There are no excuses.

    Companies that have been leaving X lately are noticing...

    maxleibman, (edited ) to random
    @maxleibman@mastodon.social avatar

    I don’t know who needs to hear this, but the happiness—or even the existence—of a trillion trillion trillion simulated beings some time in the future is not worth the life of even a single real, flesh-and-blood person who is alive today.

    Hell, square the number of future beings a trillion times, for that matter. There’s no number where that math pencils out.
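
    (The post's own numbers make the rejected arithmetic easy to state. A minimal, purely illustrative sketch, not any particular philosopher's model: take the "trillion trillion trillion" simulated beings as N and today's population as n.

        N = 10^36,  n ≈ 8 × 10^9
        eps·N > n  <=>  eps > n/N ≈ 8 × 10^-27

    Under naive total utilitarianism, any act claimed to raise the odds of that future by more than about 10^-26 "outweighs" everyone alive today, and squaring N only makes the ratio more lopsided. The post rejects the premise, not the arithmetic.)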

    maxleibman,
    @maxleibman@mastodon.social avatar

    People who aren’t actively working to sustain and renew the one planet we have do not get to lecture the rest of us on “what we owe the future.”

    HistoPol, to TeslaMotors
    @HistoPol@mastodon.social avatar

    (1/n)

    via Prof. Timothy Snyder, Yale University, in TheGuardian@mstdn.social:

    "...’s actions have increased the chances of nuclear war.

    There is always some risk, which Russia increased by initiating a major conflict. Ukraine then decreased the probability by ignoring Russian nuclear blackmail. If Ukraine had surrendered, then the lesson for the rest of the world would have been clear: you must have nuclear weapons, either to blackmail...

    https://www.theguardian.com/commentisfree/2023/sep/17/elon-musk-likes-to-think-he-saved-us-from-armageddon-hes-just-brought-it-closer

    HistoPol,
    @HistoPol@mastodon.social avatar

    (3/n)

    ...grip on global communications and his wealth that dwarfs that of many countries.
    No, I am talking about his mental health and about his crazy beliefs based in longtermism, part of TESCREAL.

    In order to achieve "humanity's destiny" between the stars, some followers of these cults would do almost anything.
    An AI singularity, which Musk and his followers fear would prevent such a destiny, would even warrant a nuclear war, even if it meant...
