alper, to random
@alper@rls.social avatar

Whoever could have predicted that AI was going to be the technology of white supremacy?

With an advisory group like this it seems like bad outcomes are more or less guaranteed.

susankayequinn, to random
@susankayequinn@wandering.shop avatar

I'm truly, deeply alarmed at how the tech industry is trying to insert itself in every human interaction, getting between humans in every possible relationship, and they think that's "better" while absolutely destroying everything that makes society work.

The answer is MORE human-to-human interaction not LESS. FFS.

(screenshot from a substack that landed in my inbox, but you can see this same ethos everywhere, including strained attempts to portray chatbots with "theories of the mind")

NatureMC,
@NatureMC@mastodon.online avatar

@susankayequinn I'd co-sign that! (Some time ago I read about the disaster of a caregiving AI robot: the results were that bad.)

It's interesting to look at the "philosophy", or rather the non-ethics, behind it. Often shortened as TESCREAL: it's deeply fascist eugenics thinking, and therefore always anti-life, anti-humanity. https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/

jeffjarvis, (edited) to random
@jeffjarvis@mastodon.social avatar

Damnit. I wish journalists would do their homework on AI. OpenAI people are all believers in the BS of AGI and so-called x-risk; the "safety" people are just more fervent believers. They're all full of it. They are the danger.
A Safety Check for OpenAI https://www.nytimes.com/2024/05/20/business/dealbook/openai-leike-safety-superalignment.html?smid=tw-share

paninid, to random
@paninid@mastodon.world avatar

There is a lot of alignment between the Dominionists and this crowd.

jeffjarvis, to random
@jeffjarvis@mastodon.social avatar

It wasn't the safety team. It was the doom team. AI is a hall of mirrors...
OpenAI Reportedly Dissolves Its Existential AI Risk Team
https://gizmodo.com/openai-reportedly-dissolves-its-existential-ai-risk-tea-1851484827

jeffjarvis,
@jeffjarvis@mastodon.social avatar

The "safety" team were the more fanatical doomsters but the rest of OpenAI is still a cult building their BS god, AGI. Reporters aren't reading up on and so they are missing the real story here. At least Axios links to AGI skeptic Gary Marcus.

OpenAI's safety dance

https://www.axios.com/2024/05/20/openai-safety-jan-leike-sam-altman

urlyman, to random
@urlyman@mastodon.social avatar

Cloud seeding:

The rise of storage on the World Wide Web.

To fuel the training of AI.

To make it rain technofeudalism,
with the growing likelihood of hailstorms of eugenics

https://overcast.fm/+nh1Av4Rz4

urlyman,
@urlyman@mastodon.social avatar

@NatureMC It is the same as in that video.

Wild covers quite a few angles, but the ones that really struck me were the affinities those pursuing AGI (Artificial General Intelligence) apparently have with a particular set of ideas.

This is part of the #TESCREAL bundle which @timnitGebru and @xriskology have written about https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "So there's this long tradition of consulting people who use technologies to find out what they need, and to find out why technology does or doesn't work for them. And the big message there was that technologists are probably more ill-equipped to understand that than average people, and to see the industry swing back towards tech authority and tech expertise as making decisions about everything, from how technology is built to what future is the best for all of us, is alarming in that sense.

So we can draw from things like user-centered research. This is how I concluded the paper, is just pointing to all the processes and practices we could start using. There's user-centered research, there's participatory processes, there's... Policy gets made often through consulting with groups that are affected by systems, by policies. There are ways of designing technology so that people can feed back straight into it, or we can just set in some regulations that say, in certain cases, it's not acceptable for technology to make a decision.

I think some of what we have to do is get outside of the United States, because some of the more human rights oriented or user-centered policymaking is happening elsewhere, especially in Europe."

https://www.techpolicy.press/podcast-resisting-ai-and-the-consolidation-of-power/

knittingknots2, to random
@knittingknots2@mstdn.social avatar

Who believes the most "taboo" conspiracy theories? It might not be who you think | Salon.com

https://www.salon.com/2024/05/05/believes-the-most-taboo-conspiracy-theories-it-might-not-be-you-think/

NatureMC,
@NatureMC@mastodon.online avatar

@knittingknots2 When these people call themselves "extremely liberal" it's interesting to look into the research on https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/

And I'm missing one point here: how much of this do these 'cheerleader' types in the photo really believe, and how much are they just faking to push their ideology to the masses?

I'll read the study to find out more. Thanks for the link!

shawnmjones, to Humanism
@shawnmjones@hachyderm.io avatar

I'm working through my thoughts on #TESCREAL, very concerned about the loss of #Humanism.

I've read "What We Owe the Future" by William MacAskill.

I subscribe to the following podcasts:

  • "Tech Won't Save Us"
  • "Mystery AI Hype Theater 3000"

I follow the (sometimes disturbing) subreddits:

  • r/artificial
  • r/ArtificialIntelligence
  • r/Futurology
  • r/singularity

I'm reading “God, Human, Animal, Machine” by Meghan O’Gieblyn.

Does anyone have any other reading/listening suggestions?

jbzfn, to ai
@jbzfn@mastodon.social avatar

🧠 The Babelian Tower Of AI Alignment
➥ NOEMA

「 A more imminent threat, he told the Times, is the one posed by American AI giants to cultures around the globe. “These models are producing content and shaping our cultural understanding of the world,” Mensch said. “And as it turns out, the values of France and the values of the United States differ in subtle but important ways.” 」

https://www.noemamag.com/the-babelian-tower-of-ai-alignment/

emilymbender, to random
@emilymbender@dair-community.social avatar
NatureMC,
@NatureMC@mastodon.online avatar

@emilymbender I can only read the second headline, and what I've read about this decision feels dystopian. It feels so wrong on so many levels.

xynthia, to technologie French
@xynthia@mastodon.tedomum.net avatar

Transhumanism, longtermism… how the "TESCREAL" currents influence the development of AI

@technologie
https://piaille.fr/@mart1oeil/112336068030361562

https://next.ink/135681/transhumanisme-long-termisme-comment-les-courants-tescreal-influent-le-developpement-de-lia/

a second article on the subject by @mathildesaliou

parismarx, to tech
@parismarx@mastodon.online avatar

Transhumanism is all the rage with tech billionaires pushing mind uploading, AGI, and more. But where do those ideas come from?

On the podcast, I spoke with Meghan O’Gieblyn to discuss the religious roots of transhumanist visions of the future.

https://techwontsave.us/episode/218_the_religious_foundations_of_transhumanism_w_meghan_ogieblyn

CultureDesk, (edited) to books
@CultureDesk@flipboard.social avatar

Does sci-fi shape the future? Tech billionaires from Bill Gates to Elon Musk have often talked about the impact of novels they read as teens, from Neal Stephenson's "Snow Crash" to Iain M. Banks' "Culture" series. Big Think's Namir Khaliq spoke to authors including Andy Weir, Lois McMaster Bujold, @cstross and @pluralistic about how much impact they think science fiction has had, or can have.

https://flip.it/DmHzd2

@bookstodon

NatureMC,
@NatureMC@mastodon.online avatar

@CultureDesk It's a very topical question regarding these sub-genres; some of them experiment with positive changes.

But it becomes extremely creepy when you take a closer look at how some mix sci-fi with radical right-wing eugenics ideas and knit an anti-democratic ideology out of it, a sort of tech fascism: https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/ and here: https://www.theatlantic.com/magazine/archive/2024/03/facebook-meta-silicon-valley-politics/677168/?gift=pNhm6V1nG5ZO8R8GWle1H01Kw4OvqWH8-6RE146aONg&utm_source=copy-link&utm_medium=social&utm_campaign=share

@pluralistic @bookstodon

yoginho, to Futurology
@yoginho@spore.social avatar

Wow. Mike Levin has finally come out in full: https://noemamag.com/ai-could-be-a-bridge-toward-diverse-intelligence

I'm not sure "most of us think this way about the world we want for our kids"... at least I don't. Not at all. I find this toxic optimist "vision" utterly naive & disgusting. /1

remixtures, to Futurology Portuguese
@remixtures@tldr.nettime.org avatar

: "The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636

drahardja, to random
@drahardja@sfba.social avatar

Any day when a proponent of these ideas loses their funding and platform is a good day.

“Oxford shuts down institute run by Elon Musk-backed philosopher”

https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

bojacobs, to random
@bojacobs@hcommons.social avatar

Good news for humans!

"Oxford shuts down institute run by Elon Musk-backed philosopher. Nick Bostrom’s Future of Humanity Institute closed this week"

https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes?CMP=oth_b-aplnews_d-1

NatureMC, (edited) to ai
@NatureMC@mastodon.online avatar

Laws, ethical debates? It's all too late. The cat's out of the bag. Women over 60, 70 from our village share things like this on WhatsApp and laugh their heads off: https://mastodon.online/@asol@mastodon.social/112291529251522267
Can I take a photo of you? - Why? - I can do that with you too! - Oops!

The next scammer or dictator can do it, too. Perfect propaganda tools.
context: https://www.microsoft.com/en-us/research/project/vasa-1/

NatureMC,
@NatureMC@mastodon.online avatar

@pjakobs Why 19th century? I couldn't help but think of #TESCREAL when I read that article.

raymondpert, to australia
@raymondpert@mastodon.cloud avatar

Australia's Great Barrier Reef hit by record bleaching

> Australia's spectacular Great Barrier Reef is experiencing its worst bleaching event on record, the country's reef authority reported on Wednesday (Apr 17).

> Often dubbed the world's largest living structure, the Great Barrier Reef is a 2,300km-long expanse, home to a stunning array of biodiversity including more than 600 types of coral and 1,625 fish species.
https://www.channelnewsasia.com/world/australias-great-barrier-reef-hit-record-bleaching-4270881

HistoPol, (edited)
@HistoPol@mastodon.social avatar

@Syulang @Seruko @raymondpert

(2/2)

...if ever there was a species on earth less worthy of becoming what these acolytes aspire to become, it is the human race.

/s In fact, if that were a crime punishable by the death penalty, homo sapiens would deserve an extinction-level event.

R.I.P. homo sapiens. /s

kcarruthers, to random
@kcarruthers@mastodon.social avatar

The TESCREAL Bundle: including some other articles on, e.g., eugenics and statistics, via Timnit Gebru

https://www.dair-institute.org/tescreal/

dominique, to tech French
@dominique@mastodon.zaclys.com avatar

Excellent paper by Maya Kandel in @mediapart. It extends @oliviertesquet's pieces in Télérama and Jen Schradie's essay on the right ... which is a real threat to our democracies.

👉 [€€] https://www.mediapart.fr/journal/international/170324/la-droite-tech-contre-la-democratie-comment-la-silicon-valley-s-est-radicalisee

abucci, to ai
@abucci@buc.ci avatar

NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute

The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.

https://venturebeat.com/ai/nist-staffers-revolt-against-potential-appointment-of-effective-altruist-ai-researcher-to-us-ai-safety-institute/

Good for them! These people are cultists and have no place in government. They're obsessed with fantasies that are disconnected from reality and that distract from the actual harms AI is already causing here on Earth. It's precisely the same phenomenon as holding endless discussions about how many angels can dance on the head of a pin while ignoring that people are suffering. It sounds like Secretary of Commerce Gina Raimondo might be a Kool-Aid drinker herself, or is sympathetic to the viewpoints of the Kool-Aid drinkers.

From her Wikipedia entry:

Gina Marie Raimondo...an American businesswoman, lawyer, politician, and venture capitalist

Emphasis mine.

It's alarming that this is even happening, and you know the fix is in because they tried to rush the appointment without informing staffers ahead of time. I hope staffers prevail.

cc: @timnitGebru @xriskology

DEDGirl, to random
@DEDGirl@mastodon.world avatar

Saw my first Cybertruck out in the wild. 🤣 Holy Hell is it ridiculous. It’s exactly the opposite of what someone would envision a future car would be. It looks like a primitive Mars rover. 🤦‍♀️ I’d rather drive the Pope’s glass box.

paninid,
@paninid@mastodon.world avatar

@DEDGirl @patmadigan

I’m convinced it’s designed for a desired #MadMax future: rusted-out heaps, pieced together from other Cybertrucks.

That is the vision the #TESCREAL #AntiNormie bros want to manifest.
