hankg, to random

I just realized that Vox's Future Perfect wrapped itself in the language of "Effective Altruism" (EA) when it launched six years ago and continues to do so. Now that EA has proven to be just narcissistic billionaires' favorite retconning apologetics tool for whatever new sociopathic behaviors they engage in, maybe it'd be better to ditch that language. Just a thought.

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

"According to the FHI itself, its closure was a result of growing administrative tensions with Oxford’s faculty of philosophy. “Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed,” the final report stated.

But both Bostrom and the institute, which brought together philosophers, computer scientists, mathematicians and economists, have been subject to a number of controversies in recent years. Fifteen months ago Bostrom was forced to issue an apology for comments he’d made in a group email back in 1996, when he was a 23-year-old postgraduate student at the London School of Economics. In the retrieved message Bostrom used the N-word and argued that white people were more intelligent than black people.

The apology did little to placate Bostrom’s critics, not least because he conspicuously failed to withdraw his central contention regarding race and intelligence, and seemed to make a partial defence of eugenics. Although, after an investigation, Oxford University did accept that Bostrom was not a racist, the whole episode left a stain on the institute’s reputation at a time when issues of anti-racism and decolonisation have become critically important to many university departments." https://www.theguardian.com/technology/2024/apr/28/nick-bostrom-controversial-future-of-humanity-institute-closure-longtermism-affective-altruism

skarthik, to philosophy
@skarthik@neuromatch.social avatar

Good riddance to what was a colossal waste of money, energy, resources, and any sane person's time, intellect, and attention. To even call these exploratory projects is a disservice to human endeavor.

"Future of humanity", it seems. These guys can't even predict their next bowel movement, but somehow prognosticate about the long term future of humanity, singularity blah blah. This is what "philosophy" has come to with silicon valley and its money power: demented behavior is incentivized, douchery is rationalized, while reason is jettisoned.

https://www.theguardian.com/technology/2024/apr/28/nick-bostrom-controversial-future-of-humanity-institute-closure-longtermism-affective-altruism

remixtures, to Futurology Portuguese
@remixtures@tldr.nettime.org avatar

"The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636

drahardja, to random
@drahardja@sfba.social avatar

Any day when a longtermism proponent loses their funding and platform is a good day.

“Oxford shuts down institute run by Elon Musk-backed philosopher”

https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

"Oxford University this week shut down an academic institute run by one of Elon Musk’s favorite philosophers. The Future of Humanity Institute, dedicated to the long-termism movement and other Silicon Valley-endorsed ideas such as effective altruism, closed this week after 19 years of operation. Musk had donated £1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence. He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter.

The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers. Sam Altman of OpenAI, Bill Gates of Microsoft and Musk all wrote blurbs for his 2014 bestselling book Superintelligence.

“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” Musk tweeted in 2014.

Bostrom resigned from Oxford following the institute’s closure, he said." https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

wjmaggos, to random
@wjmaggos@liberal.city avatar

What stops longtermists from arguing that almost any horrible act today could have a very positive effect long in the future? Maybe you're killing a baby Hitler. It's so dumb.

mpjgregoire,
@mpjgregoire@cosocial.ca avatar

@wjmaggos Consequentialist ethics seem to make a lot of sense in the abstract, but it's not hard to find problem cases. The focus on the many humans of the distant future made it particularly easy to justify sketchy actions in the here and now.

Some good criticisms here: https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game

RPBook, to random
@RPBook@historians.social avatar

This is a long, but well reasoned and thought out, takedown of effective altruism and longtermism.

> "Many people in Silicon Valley and around the world now call themselves effective altruists. Is there any way they might become Responsible Adults?"

https://www.wired.com/story/deaths-of-effective-altruism/

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

"C: So it sounds like the EA work was not just an excuse to sit around, but an excuse not to do laundry.

E: Pretty much. There’s something so strange about people taking themselves and their time so seriously that they think about things like replaceability: is it better to have someone else doing this thing because my time is better spent doing that? Should I pursue this particular job if there’s someone else who could do it just as well or better than I could have? That really intense emphasis on how you use your time, on that level, is kind of weird and unhealthy.

C: Yeah, it sounds like it’s very economic talk, like very much competitive advantage, as in you want to optimize your competitive advantage versus optimizing your quality of life or your happiness or how much love you have.

E: Exactly. And yeah, I think there’s this intense emphasis on rationality and logic, which I think appealed to me at first, because I’m a very logical person, and I was a philosophy student. I mean, in some ways, a lot of the EA thinking is great. Like, for example, when you’re thinking about where to donate, I think it’s great that you put the emphasis on logic and on research, right? Like, let’s actually give money to the organizations that are cost-effective and where the money is gonna make the most difference instead of just the ones that have the biggest brand-new budget, so you’ve heard of them. And in that way, I think that the emphasis of logic is great, but then I think in a lot of ways they take it way too far to the point where you’re losing things like emotion and empathy and passion."

https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/

wjmaggos, to random
@wjmaggos@liberal.city avatar

Have the effective altruism crowd done the math on whether poor people might do more good by not working their asses off for greedy bosses, and instead joining together to reallocate the wealthy's riches by force, in order to ensure more people have a better life right now and to reduce the influence of elite assholes on our politics?

cc @robertwrighter

ljrk, to random
@ljrk@todon.eu avatar

Okay, the American Dialect Society is on fire:

> effective altruism: movement ostensibly to benefit humanity, used as an excuse for spending other people’s money

https://americandialect.org/wp-content/uploads/2024/01/2023-Word-of-the-Year-PRESS-RELEASE.pdf

The best definition of effective altruism I've seen.

treyhunner, (edited ) to random
@treyhunner@mastodon.social avatar

I'm sometimes troubled that all the news and posts I see about effective altruism recently are about how it's misguided (AI-obsessed, far-future-obsessed, heartless, etc.).

If you're curious about some EA topics I've thought about over the last year, listen to this 2023 recap episode of the 80,000 Hours podcast.

https://80000hours.org/2023/12/best-of-2023-podcast-highlights/

Feel free to skip the AI segments if you like.

80,000 Hours has inspired my thinking on animal welfare, the unequally distributed value of money, and more.

treyhunner,
@treyhunner@mastodon.social avatar

Some highlights:

• "punctuated equilibrium"
• "demand elasticity in the presence of complementarity"
• banning AI for electioneering
• objections to universal basic income
• "registered reports" in scientific journals to discourage "P hacking"
• universal versus means-tested wealth redistribution
• bringing down the cost curve for green tech
• encouraging political advocacy versus lifestyle changes
• moral intuitions versus moral actions in eating

https://80000hours.org/2023/12/best-of-2023-podcast-highlights/

gimulnautti, to Magic
@gimulnautti@mastodon.green avatar

Well worth a read: As our interconnectivity proliferates and technological prowess multiplies, so does belief in magic.

It’s not just new-age hippies & faith-healers anymore; it’s Silicon Valley going along with it, up to even rooting for Julius Evola’s fascist magic (a favourite of Russia’s Aleksandr Dugin)

Sure, if you know it’s just self-hypnosis. But the record shows that awareness slips away easily 😓

https://aeon.co/essays/how-the-internet-became-the-modern-purveyor-of-ancient-magic

frugalinch, to random
@frugalinch@swiss.social avatar

Maybe you’re thinking of making a donation to charity before the end of the year? May I suggest checking out the recommendations by The Life You Can Save, GiveWell, or Giving What We Can? The recommended charities have been vetted for effectiveness, so you get the most good for your buck. BTW, I can also recommend the book The Life You Can Save by Peter Singer.

anubis2814, to random en-us

Longtermism is the belief that doing harm now, in the quest for improvements that will cause greater good later, evens itself out. It's not new: Stalin believed that if he ran the workers into the ground, in a few decades they would have the fully automated utopia that Marx had talked about as the outcome of industrialization, and that included sending nearly 3% of the nation to the gulags. It's not effective altruism; it doesn't save future lives in the long term. It just creates a cycle of trauma that ensures no one is mentally healthy, and that's assuming the effective altruists actually do anything to save future lives.

juergen_hubert, (edited ) to random
@juergen_hubert@thefolklore.cafe avatar

And this is really the black, rotting heart of the longtermism movement: it is nothing more than an excuse to plunder the present for a highly hypothetical distant future.

Look, I am not saying that humanity won't ever spread out among the stars, assuming we survive the present crisis, though I bet it won't happen during these bastards' lifetimes (since the needed tech is nowhere near available). However, if the actual goal is "more Mozarts and more Einsteins", then they could do far more for that right now.

Both Mozart and Einstein grew up in relatively privileged families who made it possible for their scions to follow their passions. In contrast, the Capitalist system that both Bezos and Musk encourage grinds most of the people working in it - especially at the lower ends - into the dust, creating a vast overworked and underpaid underclass.

How is human ingenuity supposed to prosper under these conditions? How can a new Mozart or Einstein arise if their parents struggle to make ends meet as part of the vast supply chains Bezos and Musk command?

If they don't take care of the people living right now, then why should we trust the ideology of these bastards to encourage the growth of the human spirit in the future?

They are NOT humanity's saviors - but merely an expression of the sickness of our times.


https://finance.yahoo.com/news/jeff-bezos-elon-musk-human-170119555.html

Selena, to random Dutch
@Selena@ivoor.eu avatar

I guess I missed the memo about scumbags all rebranding as
Trying to get WW3 started for no better reason than they think they can win, which seems to be enough reason for bullies. 😰😰😰
https://nebula.tv/videos/joescott-why-some-billionaires-are-actively-trying-to-destroy-the-world/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive:"

https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space

dangillmor, to random
@dangillmor@mastodon.social avatar

The brilliant Molly White (@molly0xfff) has given us the best coverage of the scam-ridden cryptocurrency "marketplace" -- a public service in a time when tech journalism resembles fanzines.

In a new piece, she turns her attention to deeply unattractive but central tenets of Silicon Valley's current mania, AI.

https://newsletter.mollywhite.net/p/effective-obfuscation

Savor every word.

Lazarou,
@Lazarou@mastodon.social avatar

@dangillmor @molly0xfff "Same as it always was"

Spot on, it's just rich people justifying their wealth, Objectivism for the 21st Century.

KFuentesGeorge, to random

Effective altruists are neither effective nor altruistic. Discuss.

pluralistic, to OpenAI
@pluralistic@mamot.fr avatar

Last week's spectacular OpenAI soap-opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between effective altruism (EA) and effective accelerationism (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.

1/
