JamesGleick,
@JamesGleick@zirk.us avatar

Once again Ted Chiang has it exactly right. The immediate danger from AI is not that it will become sentient and do whatever it wants. The danger is that it will do what it’s being designed to do: help rich corporations destroy the working class in pursuit of ever-greater profits and thus concentrate wealth in fewer and fewer hands.

https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

gimulnautti,
@gimulnautti@mastodon.green avatar

@JamesGleick with the recent advances, AI will be available to everyone. And if I know human nature at all, it will first be applied by malicious actors, capital being only one of them, as specified here.

The steps taken by society to mitigate this should be verification of humans and verification of accurate reporting.

AI for every kind of social and political mischief imaginable will be available by 2025.

Governments need to act and stop looking to business.

ellent,

@gimulnautti @JamesGleick @ellent

Verifying that you're a human is bound to generate a whole lot of business for vendors again

jdmcg,
@jdmcg@mastodon.nz avatar

@JamesGleick he's so good at cutting through the noise and finding the heart of the problem.

The comparison to "capital's willing executioner" was particularly apt.

rowat_c,
@rowat_c@mastodon.social avatar

@JamesGleick a somewhat orthogonal question, but connected in its motivation: why should we not apply the same ethical protocols that we do to animal & human research to 'large' AI research?

Qbitzerre,
@Qbitzerre@unbound.social avatar

@JamesGleick as tech always does. It might be refreshing and helpful for it to wrest control from our masters.

oldoldcojote,

@JamesGleick EXACTLY

avantgeared,

@JamesGleick
China will do it anyway and not cooperate with the rest of the world; Russia will do the same. Single-purpose AI, for control and expansion, is just what they want.

scottmatter,
@scottmatter@aus.social avatar

@JamesGleick

This is why we should all be Luddites. It’s not about hating or fearing technology. It’s about resisting the use of technology to concentrate power and wealth in fewer hands at our expense.

delcatti,

@JamesGleick To add to your point, AI physically CAN'T become sentient, because AI is meant to imitate humans.

paulschoe,
@paulschoe@mastodon.world avatar

Interesting metaphor in the New Yorker article by Ted Chiang:
'AI is not so much a genie you can ask questions of as a management-consulting firm, along the lines of McKinsey & Company.' With all the consequences that entails.

Thanks for the post.
https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

@JamesGleick

TrevP,

@JamesGleick Will AI force us all to spend our money how it decides? Are you saying that we are all unthinking people who have no free will? I spend/donate my money on what I want, not on what I'm told to spend it on, what is trending, or what will impress others.

Threearrows78,

@JamesGleick I hope politicians are going to consider some serious legislation on AI. For safety it should be subject to the same laws as humans.

soerenheim,

@JamesGleick
Though one should really keep both risks in mind. A killer AI is, in my opinion, a rather small risk, but an important one if it could actually occur. At the moment, though, it mostly serves as an act of tech-bro denial in the face of more realistic doomsday scenarios (climate, etc.).

jandevries12,

@JamesGleick And the problem with that way of thinking is: if there are fewer people able to buy stuff, the profits of big corporations will become very small. The secret behind today's relative wealth is the spread of wealth over a large number of people. But it's a question whether those big, greedy corporations can remind themselves of that…

kentbrew,
@kentbrew@xoxo.zone avatar

@JamesGleick “Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.” In just a little while this definition will also be true for AI.

cgervasi,
@cgervasi@fosstodon.org avatar

@JamesGleick I think this was a fear about every new technology, everything that increased production per unit of human effort.

lobstered,

@JamesGleick

Exactly so.

It is not AI but data-sorting IT. Whoever controls the Sorting Hat controls Hogwarts.

The inevitable danger is not only corporations, state interests, or other agenda pushers, but also the next generation of freelancers, funded by untraceable finances: dark-money interests, for example, or ideological cults.

Fortunately, those not needing or using technology will not be blinded by nerd and herd 'necessity' …

🏴‍☠️🏴🏳️📵

C0ppert0p,

@JamesGleick computers would have to become intelligent before they become sentient.
Intelligence is an emergent property of some biological systems. ChatGPT and other systems are not intelligent. They are not even close to being intelligent.

IshmaelASoledad,

@JamesGleick I don't think there's any danger or risk ... like all technological advances, this is exactly what will happen: it will increase the rich-poor gap and redistribute wealth. The only question is the degree to which it will happen, and AI looks like a biggie

branden1,

@JamesGleick 😱😤

ZZiggy,

@JamesGleick You’re just NOW realizing that owners are going to automate everything they can?

DeanBaker13,
@DeanBaker13@econtwitter.net avatar

@JamesGleick Sorry, Ted Chiang has it completely wrong. It would be wonderful if AI led to a huge surge in productivity -- we need not worry about inflation for many decades -- but there is little reason to believe that will be the case. But this is the sort of stuff that excites New Yorker readers even if it has no basis in reality.

RustyRing,

@JamesGleick It's truly ironic that everything dystopian sci fi predicted when I was a kid is happening now, exactly as described, and nobody cares. It's an example of how we ask permission to care about things, and carefully ignore those that our cultural authorities tell us are impolite to care about.

GlennMG,
@GlennMG@mas.to avatar

@JamesGleick
Naive question.
Is it possible for AI systems to cycle themselves into a downward information quality spiral as they consume their own output in a refresh training cycle?
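This feedback loop is sometimes called "model collapse" in the research literature. A toy sketch of the idea, under the simplifying assumption that the "model" is just a one-dimensional Gaussian fit: each generation trains only on samples drawn from the previous generation's fit, so estimation noise compounds and the learned distribution drifts away from the original data.

```python
import random
import statistics

random.seed(0)

# "Real data" distribution the first model is trained on: mean 0, stdev 1.
mean, stdev = 0.0, 1.0
N = 50  # synthetic points sampled per generation

history = [stdev]
for generation in range(200):
    # Each generation trains only on the previous model's own output:
    synthetic = [random.gauss(mean, stdev) for _ in range(N)]
    mean = statistics.fmean(synthetic)
    stdev = statistics.stdev(synthetic)
    history.append(stdev)

print(f"stdev of generation 0:   {history[0]:.3f}")
print(f"stdev of generation 200: {history[-1]:.3f}")
```

With no fresh real data entering the loop, the fitted spread performs a random walk driven purely by sampling noise; once it drifts near zero it cannot recover, which is the "downward information quality spiral" the question describes.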

krakenmare,

@JamesGleick Wernher von Braun is credited with saying something like "I simply design the knife; someone else decides whether they will use it for surgery or murder." He led the early German military rocket programs that produced the V-2s, which were taken over by the Americans at the end of the war and subsequently developed into ballistic missiles. I don't know what the future of AI will hold, but its implementation will likely matter far more than its technical capabilities. Just look at today's various learning algorithms, used for everything from advanced scientific research at the better end to Facebook at the worse end.
