I'm truly, deeply alarmed at how the tech industry is trying to insert itself in every human interaction, getting between humans in every possible relationship, and they think that's "better" while absolutely destroying everything that makes society work.
The answer is MORE human-to-human interaction not LESS. FFS.
(screenshot from a Substack that landed in my inbox, but you can see this same ethos everywhere, including strained attempts to portray chatbots as having a "theory of mind")
The "safety" team were the more fanatical doomsters but the rest of OpenAI is still a cult building their BS god, AGI. Reporters aren't reading up on #TESCREAL and so they are missing the real story here. At least Axios links to AGI skeptic Gary Marcus.
Wild covers quite a few angles but the ones that really struck me were the affinities those pursuing AGI (Artificial General Intelligence) apparently have with the ideas of:
#AI #TESCREAL #SiliconValley #BigTech: "So there's this long tradition of consulting people who use technologies to find out what they need, and to find out why technology does or doesn't work for them. And the big message there was that technologists are probably more ill-equipped to understand that than average people, and to see the industry swing back towards tech authority and tech expertise as making decisions about everything, from how technology is built to what future is the best for all of us, is alarming in that sense.
So we can draw from things like user-centered research. This is how I concluded the paper, is just pointing to all the processes and practices we could start using. There's user-centered research, there's participatory processes, there's... Policy gets made often through consulting with groups that are affected by systems, by policies. There are ways of designing technology so that people can feed back straight into it, or we can just set in some regulations that say, in certain cases, it's not acceptable for technology to make a decision.
I think some of what we have to do is get outside of the United States, because some of the more human rights oriented or user-centered policymaking is happening elsewhere, especially in Europe."
And I'm missing that point a bit here: how much of this do these 'cheerleader' types in the photo really believe, and how much are they just faking to push their ideology to the masses?
I'll read the study to find out more. Thanks for the link!
「 A more imminent threat, he told the Times, is the one posed by American AI giants to cultures around the globe. “These models are producing content and shaping our cultural understanding of the world,” Mensch said. “And as it turns out, the values of France and the values of the United States differ in subtle but important ways.” 」
Does sci-fi shape the future? Tech billionaires from Bill Gates to Elon Musk have often talked about the impact of novels they read as teens, from Neal Stephenson's "Snow Crash" to Iain M. Banks' "Culture" series. Big Think's Namir Khaliq spoke to authors including Andy Weir, Lois McMaster Bujold, @cstross and @pluralistic about how much impact they think science fiction has had, or can have.
I'm not sure "most of us think this way about the world we want for our kids"... at least I don't. Not at all. I find this toxic optimist "vision" utterly naive & disgusting. /1
#AGI #LongTermism #EffectiveAltruism #TESCREAL #Eugenics: "The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636
Laws, ethical debates? It's all too late. The cat's out of the bag. Women over 60, 70 from our village share things like this on WhatsApp and laugh their heads off: https://mastodon.online/@asol@mastodon.social/112291529251522267
Can I take a photo of you? - Why? - I can do that with you too! - Oops!
Australia's Great Barrier Reef hit by record bleaching
> Australia's spectacular Great Barrier Reef is experiencing its worst #bleaching event on record, the country's reef authority reported on Wednesday (Apr 17).
Excellent piece by Maya Kandel in @mediapart. It extends @oliviertesquet's pieces in Télérama and Jen Schradie's essay on the #tech right... which is a real threat to our democracies.
NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute
The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.
Good for them! #longtermist / #EffectiveAltruist / #TESCREAL people are cultists and have no place in government. They're obsessed with fantasies like #xrisk that are disconnected from reality and distract from the actual harms #AI is already causing here on Earth. It's precisely the same phenomenon as holding endless discussions about how many angels can dance on the head of a pin while ignoring that people are suffering. It sounds like Secretary of Commerce Gina Raimondo might be a Kool-Aid drinker herself, or is at least sympathetic to the viewpoints of the Kool-Aid drinkers.
From her Wikipedia entry:
Gina Marie Raimondo...an American businesswoman, lawyer, politician, and venture capitalist
Emphasis mine.
It's alarming that this is even happening, and you know the fix is in because they tried to rush the appointment without informing staffers ahead of time. I hope #NIST staffers prevail.
Saw my first Cybertruck out in the wild. 🤣 Holy Hell is it ridiculous. It's exactly the opposite of what someone would envision a future car would be. It looks like a primitive Mars rover. 🤦‍♀️ I'd rather drive the Pope's glass box.