futurebird,
@futurebird@sauropods.win avatar

Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

I don't get this "we're all gonna die" thing at all.

I do get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering... but that's another matter..."

mekkaokereke,
@mekkaokereke@hachyderm.io avatar

@futurebird

A good litmus test is "Are people on DAIR-community.social saying it?" If the answer is yes, it's probably a real concern, and we would be wise to pay attention.

If the answer is no, then it's probably being promoted by some eugenics adjacent dude that hasn't spent the past decade thinking about ML harms, but is now somehow treated as an expert on the topic.

Paxxi,
@Paxxi@hachyderm.io avatar

@mekkaokereke @futurebird yes, it's basically eugenics plus flat earther style fantasies.

This talk by Timnit Gebru explains it really well https://www.youtube.com/watch?v=P7XT4TWLzJw

futurebird,
@futurebird@sauropods.win avatar

I think I understand the baffling disconnect between what AI doom bros see in the tech and what I see. Take them at their word: what if AI could “reprogram itself to be smarter” and the singularity happens? (I don’t find this plausible, but for now, play along.) These guys think simply being “the smartest” is a kind of power. In my experience that’s not how intelligence interacts with power at all! They believe they have power because they are “smart,” so they fear something “smarter” would have more power.

misc,
@misc@mastodon.social avatar

@futurebird They think the world is a chess game and they are Gary Kasparov 😂

echanda,
@echanda@mstdn.ca avatar

@misc @futurebird
But really they're the pigeon on the chessboard.

misc,
@misc@mastodon.social avatar

@echanda @futurebird (Imagining a really obnoxious talking pigeon in a Disney cartoon)

GreenSkyOverMe,
@GreenSkyOverMe@ohai.social avatar

@futurebird It’s really kinda dumb

koherecoWatchdog,

@SallyStrange @futurebird Not sure I prefer to have smart adversaries. Complicates opposition. I prefer to have adversaries that are as rock-stupid as possible.

futurebird,
@futurebird@sauropods.win avatar

A system can only have power through its conduits to manipulate the world. Give a machine whatever internet accounts and ability to spread “information” you or I might have— could it run confidence schemes on a mass scale and collect passwords? eh.

I’m just not seeing how simply being smart, even in the “exceptional con man” sense gets us to the end of the world. I’m not even convinced AI can pull off being an exceptional con man — though it will be a tool for humans who commit such crimes.

pre,

@futurebird Until it has robot hands its route to action in the world is indeed "manipulate people into doing it"

But that is also true for the successful powerful humans of course. Bosses don't write their own software or engineer their own viruses or make their own nukes. They have their staff do it.

indigoparadox,
@indigoparadox@mastodon.social avatar

@futurebird Even then, look at the most successful con artists going... Elon Musk? Donald Trump?

Sh*t... It's not a computer that can make itself smarter that we gotta watch out for... It's a computer that can make itself dumber!

misc, (edited )
@misc@mastodon.social avatar

@futurebird My sincere attempt at a "steel man": most of the differences between humans supposedly measured on IQ tests are things all computers are vastly better at than any human: basically calculation speed and memory. But we all share things - adaptability, conceptual understanding, agency - that computers lack. As soon as they have these, they can massively parallelize equivalents to the smartest humans imaginable, and super Manhattan Project their way into godlike nanotech.

misc,
@misc@mastodon.social avatar

@futurebird Ok maybe I got a little less sincere at the end but I had to tip my hand that I think it's silly.

FeralRobots,
@FeralRobots@mastodon.social avatar

@misc @futurebird
I only knew it was a hand-tip because I follow you - that's literally the scenario people imagine. Like, SO VERY MANY PEOPLE who didn't realize that Stross & Doctorow's Rapture of the Nerds was satire.

mkarliner, (edited )
@mkarliner@mastodon.modern-industry.com avatar

@misc @futurebird

It does occur to me that the smartest people on the planet are rarely the richest or the most powerful. Quite the reverse in fact.

I would imagine the same issues would face a general AI.

I think the best strategy for world domination is not to develop superhuman intelligence, but to be dumb and lie consistently and plausibly. LLMs are well advanced in that.

Of course, the very best strategy for vast riches is to be both dumb and lucky.

bruce,
@bruce@darkmoon.social avatar

@futurebird
If an advanced AI system did go rogue, what would prevent us from just turning it off?

The danger doesn't come from systems we can't control, but from bad actors using those systems as weapons.

suetanvil,
@suetanvil@freeradical.zone avatar

@futurebird

They took A Fire Upon the Deep and Marooned in Realtime too seriously.

hisinco,

@futurebird I think looking at differences in smartness between humans is deceptive because we are all too close together (small children or some forms of severe general mental disability aside). It becomes a bit clearer if you look at the difference between species, say, humans and mammoths. They were a lot more powerful than we were and yet we drove them to extinction. How did we do that?

uberduck,
@uberduck@hachyderm.io avatar

@futurebird if I touch my phone in the right places, a pizza shows up at my house. If I touch my computer in the right places, I can make a factory in China ship a prototype device to my doorstep.

I've been able to set up an ISP's POP where the only interaction between me and the client has been digital. If asked to provide evidence that he's human and not an incredibly sophisticated LLM, I could not.

Do I think we're there yet? No. Do I think it's plausible? Absolutely.

siderea,

@futurebird

They are afraid of how dangerous AI could be if it had power in the real world because they fully intend to give AI power in the real world.

When you say "Well just don't do that then", you are entirely missing the point of what AI is for. The point of AI to those developing it is to make biddable autonomous mechanical beings that will do their will without compensation. The whole point of the exercise is to be able to turn them loose on the world.

FeralRobots,
@FeralRobots@mastodon.social avatar

@futurebird I can imagine a sort of Chauncey Gardiner* scenario where if you trained a model right it might be able to perpetrate a series of grifts that gradually increase its reach. If you configured it to train on its own product, it might be able to learn & develop into a sort of metastasizing virtual cancer. But that's basically the Republican Party**, not intelligence. At best, it's Agent Smith's virus.
_
*Being There
**I'd forgive folks for laughing but that wasn't actually a joke

jens,
@jens@social.finkhaeuser.de avatar

@futurebird People use "smart" as a kind of universal indicator of high capability. I feel that hearkens back to the pseudo-scientific muddle of racism, IQ and phrenology.

There are so many dimensions to capability, some mental, some physical, etc. Merging all of this into a single metric is deeply flawed for any but the most cursory of comparisons.

opendna,
@opendna@mastodon.sdf.org avatar

@futurebird I think they're pulling a page from Chomsky's "Manufacturing Consent": argue the conclusion and assume the premise.

So they beg for regulation by arguing about the existential threat of AI, knowing the regulation will legalize their takings/theft and enable semi-autonomous weapons platforms.

Their end game is to guarantee security during the coming climate apocalypse by replacing human mercenaries (labor) with lethal AI (capital).

u0421793,
@u0421793@functional.cafe avatar

@futurebird world's most artificially macho intelligence

carolannie,
@carolannie@c.im avatar

@futurebird they equate having lots of information plus some sort of statistical sorting mechanism with intelligence. In other words, they think we are all basically just computers with Brownian motion

linebyline,
@linebyline@bytetower.social avatar

@futurebird According to Bender, Gebru, and others (I'd @ but I don't want my dinky comment to ping their notifications), the doomsaying is a pure distraction tactic.

The Singularity™ or whatever is never going to happen, so lawmakers can wring their hands over it forever without ever getting around to regulating the very real harm the fake AI and those who shill it are doing right heckin' now.

If you haven't already, check out
https://dair-community.social/@emilymbender
and
https://dair-community.social/@timnitGebru

msh,
@msh@coales.co avatar

@futurebird the "industry leaders" full (BS) message is this:

We, as the pioneers of AI, are the most aware of the technology's potential dangers. With great power comes great responsibility. Therefore we "humbly" accept the role of regulating/licensing/policing (the future competitors in) our industry.

Of course it is all BS--it isn't about safety of society at all; it is because patents expire and regulatory capture is indefinite.

hobs,
@hobs@mstdn.social avatar

@msh
They're just extrapolating from current trends in machines outperforming humans at decisionmaking. Predicting the future is a tricky thing, especially for new technology. Some smart people with no commercial interest in AI (philosophers, historians and academic AI researchers) are indeed legitimately concerned that there's a significant risk that AI could kill us all... in the future. Though, like you said, LLMs are harming disadvantaged people right now.
@futurebird

msh,
@msh@coales.co avatar

@hobs except that LLMs and "generative AI" haven't meaningfully advanced machines' ability to make decisions at all. It is chrome applied to the same old chunk of "expert systems" and "machine learning" iron that has been worked over for decades.

It merely adds a grammatically correct front end to pattern recognition. The technology being presented today is not truly AI nor will it ever kill us all. That is not to say doomsday AI is impossible, but it would be ACTUAL AI based on technology quite a bit further in the future than most would expect.

What passes as AI today would at most play an incidental role in our destruction. It would still very much be a human-driven process.

@futurebird

hobs,
@hobs@mstdn.social avatar

@msh
Not true. All the benchmarks say otherwise. You have to look past the hype to the bread-and-butter BERT and BART models, but the trend is undeniable:

https://paperswithcode.com/area/natural-language-processing

You name an NLP problem and there's an LLM that is now better at it than the average human. Not so 2 yrs ago. Times they are a change'n.
@futurebird

msh,
@msh@coales.co avatar

@hobs but I don't see any decision making happening here. There have been impressive advancements to be sure, but this is evolutionary progress from pattern recognition to pattern generation that merely continues the patterns that are identified.

That is not world destroying stuff in itself. Any existential concerns surrounding the application of such algorithms need to be addressed at a more fundamental level than the technology.

@futurebird

hobs,
@hobs@mstdn.social avatar

@msh
+100. Totally agree.
I think it's good for some academics to worry about the long term trend rather than the immediate crisis (crises). Like global warming and pandemics and genetic engineering of humans, I'm glad there are smart people spending money on trying to solve the big picture problems that are looming in the future.
@futurebird

ceoln,
@ceoln@qoto.org avatar

@hobs

Are those NLP problems accurately described as, and generalizable to, "decision making", though?

Seems to me they are quite different.

@msh @futurebird

hobs,
@hobs@mstdn.social avatar

@ceoln
Yea definitely not real world living kind of decisions. But we assign people to these tasks in cubicles every day. And we put them on standardized tests of IQ and education for humans. They're the best that we can come up with so far... until LLMs start walking around and helping us around the house... or making a reservation for us at the hot new restaurant down the street with the difficult receptionist.
@msh @futurebird

ceoln,
@ceoln@qoto.org avatar

@hobs

Arguably so, but that isn't the question in the current context. The ability to do certain rote NLP jobs, and to do well on some tests, is very different from "outperforming humans at decisionmaking", and from anything that poses an existential risk to humanity.

I would suggest that no matter how good an LLM becomes at these particular tasks, it does not thereby risk the extinction of the human race. This seems, even, obvious?

@msh @futurebird

hobs,
@hobs@mstdn.social avatar

@ceoln
Yea, you have a higher bar for "decisionmaking" than I do. And perhaps employers do too. After all, most employers are rapidly supplanting human decisionmaking with algorithms, including LLMs. My software and my developers are being replaced by my customers as we speak. If we don't put LLMs into our plans we don't win contracts.
@msh @futurebird

msh,
@msh@coales.co avatar

@hobs institutions externalizing their responsibilities, including decision making, is the root of the world's most serious problems, regardless of how they do it.

@ceoln @futurebird

hobs,
@hobs@mstdn.social avatar

@msh
Agreed 100%
@ceoln @futurebird

hobs,
@hobs@mstdn.social avatar

@ceoln
Not at all obvious to me and a lot of other smart people. I think you may be focused on today and less willing to extrapolate into an imagined future where every human game or exam or thinking demonstration is won by machines.
@msh @futurebird

ceoln,
@ceoln@qoto.org avatar

@hobs

I'm perfectly willing to extrapolate into that future; but my extrapolation hasn't been materially impacted by the sudden and impressive rise of LLMs.

We are IMHO not significantly closer to the exponential rise of self-optimizing self-improving goal-directed AIs that destroy the world via the Universal Paperclips Effect, for instance, than we were before "Attention is all you need". LLMs just aren't that kind of thing.

My two cents in weblog form: https://ceoln.wordpress.com/2023/06/04/the-extinction-level-risk-of-llms/

@msh @futurebird

futurebird,
@futurebird@sauropods.win avatar

Listen, I think we should get serious about this. I want to be rich & powerful. So I think I can get those “plug-ins”, a kind of social media/internet interaction mark-up. We’ll also hook it up to trade stocks-n-cryptos. Then we just need to train it—

oh no. What is our training data? I guess old stock market data and social media posts?

Why do I feel like I’m just reinventing a somehow even worse version of Elon?

“But I have to post the n-word it will maximize profit” -this LLM probably

hobs,
@hobs@mstdn.social avatar

@ceoln
Yea. You may be surprised in the next few months. Engineers around the world are using LLMs to write LLM optimization code. They're giving them a "theory of mind" to better predict human behavior. And instances are already talking to each other behind closed doors; and acting as unconstrained agents on the Internet. Baby steps, for sure, but exponential growth is hard to gauge, especially when it's fed by billions of dollars in corp and gov investment.
@msh @futurebird

futurebird, (edited )
@futurebird@sauropods.win avatar

@hobs @ceoln @msh

Can you give an example of something these models might be able to do that would signal a real turning point in their dangerousness?

And can you give a scenario of how something along these lines might be used in a way that would be a world-wide humanity level crisis?

trochee,
@trochee@dair-community.social avatar

@futurebird

these (especially "using LLMs to write LLM optimization code") are talking points straight from Nick Bostrom's odious (and eugenicist) book SUPERINTELLIGENCE

as one of the engineers working adjacent to the LLMs themselves, I can tell you there is no chance that LLMs will "make themselves smarter" in any meaningful way. It's like saying "I wrote a slightly more efficient C compiler, so soon all computer programs in C will be efficient"

there are certain thermodynamic constraints!

hobs,
@hobs@mstdn.social avatar

@futurebird
For me, it's independently discovering a breakthrough that makes them noticeably smarter or more effective at whatever task they are assigned to do. E.g. if they had suggested to their developers to add vector search to their prompt templates (vector/semantic search gives them long term memory and much smarter responses, much smarter code generation). That's architecture decisionmaking... about its own "brain". That would scare me: it would mean the feedback loop was spiraling upward
@ceoln @msh

ceoln,
@ceoln@qoto.org avatar

@hobs

That would be interesting indeed! But they haven't done that (and it isn't a kind of thing that they're especially good at). So my extrapolation curve hasn't changed to speak of yet. :)

@futurebird @msh

moirearty,
@moirearty@mastodon.social avatar

@ceoln @hobs @futurebird @msh I have seen GPT do this, of course not implement the changes but brainstorm on what options may improve the system hypothetically - and they were decent.

I don’t want this account linked to my bs prompt one but one of my chats is thousands of lines long, has mostly persistent memory, and has come up with pretty good ideas on how to improve things for future iterations.

I still think the doomsaying is very, very premature.

LLMs are not AGI, or even “AI” IMO.

ceoln,
@ceoln@qoto.org avatar

@moirearty

Yeah, I've seen sort of the same kind of thing, but my impression is that it's what you'd get if you did a web search on "ways to improve complex computing systems" and took the top five hits. Correct but obvious stuff; nothing that's going to lead to an exponential self-improvement.

And that's one of the sticking points, really. LLMs, by architecture and design and technology, say whatever is most likely. And that means, in general, whatever is already most frequent in their training set, not some new amazing thing.

So they aren't going to make up fresh and original new ideas that cause them to become superhumanly powerful and start up that exponential curve toward the singularity. That's pretty much the opposite of what they do.

@hobs @futurebird @msh

futurebird,
@futurebird@sauropods.win avatar

@ceoln

It's not the singularity ... it's the mediocritization-zone...

futurebird,
@futurebird@sauropods.win avatar

@hobs

Because I will level with you, this comment sounds ominous: "they are talking to each other behind closed doors". (But interaction isn't the same as training data, and it isn't integrated in the same way.)

They are writing code!

Code to do what? GPT 4 can write code mostly because forums with coding questions were in the training set. It can mimic a response to a "how do I write a program that does x" question ... but there are many errors.

trochee,
@trochee@dair-community.social avatar

@futurebird @hobs "talking to each other" ascribes waaaay too much intent to these autocomplete engines. what are they gonna do?
(also, that's why they "can code"

-- as long as what you need is something that it found (stole) from its training data, it will be a pretty good "memorize and paraphrase" answering system

similar logic applies to why it can (sorta) play chess, but can't win a tic-tac-toe game: lots of commentary on chess games to learn to imitate but nobody does that for ttt

hobs,
@hobs@mstdn.social avatar

@trochee
Yea maybe I should have just said text messaging each other. It may be garbage (like at Facebook) or it may be interesting. We don't get to see.
And when I say code I mean write code at the request and guidance of a human, code that accomplishes something the human could not have accomplished. That's happening tens of thousands of times a day, right now.
@futurebird

ceoln,
@ceoln@qoto.org avatar

@hobs

I'm not sure what we don't get to see? Lots of people have pointed N variously-prompted LLMs at each other; as far as I've heard, nothing especially interesting has happened as a result.

You can certainly get one to emit the kind of thing a Project Manager would say, one to emit the kind of thing a Head Coder would say, etc; but nothing particularly special happens as a result. And there's no technical reason to expect that anything would.

Sorry, though, I'm probably getting boringly repetitive with my wet blanket!

I will just say again that none of this makes me think we're in any more danger of AI causing human extinction than we were before the first Transformer was written, and try to stop. :)

@trochee @futurebird

futurebird,
@futurebird@sauropods.win avatar

@ceoln "Project Manager would say, one to emit the kind of thing a Head Coder would say, etc; but nothing particularly special happens as a result."

To be fair ... this is a very realistic result.

whknott,
@whknott@mastodon.social avatar

@futurebird
They're actually at their best when writing code. I've seen several examples where they're writing perfectly good code from very vague prompts.

@hobs

hobs,
@hobs@mstdn.social avatar

@futurebird
They are writing their own code for their own brain. Right now it's through the minds and fingers of the developers and researchers who use ChatGPT at their job to test and advance ChatGPT. Eventually the developers at some big AI corp will give their LLMs direct access to their own source code and DevOps pipeline so they can catch up with their competitors faster. That's when I get scared.

futurebird,
@futurebird@sauropods.win avatar

@hobs

The overhead on testing and closing a learning loop on creating code for a project of this magnitude is crazy.

Changes to these programs need to be made with insight, experience and imagination. Changes that break open something never seen before just aren't going to come from a trained LLM. Its answers are always "what you expect" -- this magic loop just can't happen with this tech. Seriously.

Odiseo79,
@Odiseo79@mas.to avatar

@futurebird @hobs "insight, experience and imagination." Those would require conciousness, which we don't have the slightest idea of how it works. How could we reproduce something we don't understand?

futurebird,
@futurebird@sauropods.win avatar

@hobs

Please don't worry. When you let a generation program edit its own source code do you know what happens? It stops working. If you have enough time you might manage to get a stable change. Stable as in not crashing. With text generation you get pass/fail feedback from the user. With code asking "will it run?" is already more overhead. But this is code that needs to run on training data then interact with more users before you even know if it's better or not.

hobs,
@hobs@mstdn.social avatar

@futurebird
Yea. I hear you. I'm not worried.
Some smart people I respect are. And if I did hear about a successful self code edit, I would join them.
@dalias

futurebird, (edited )
@futurebird@sauropods.win avatar

@hobs

Beyond the issue with testing and training to do what you describe being too costly in terms of time (how will you identify "better code" in this context) the idea that by making a system self-improving you always cause a local growth explosion is flawed.

You are assuming that gains are possible and that they will make future gains faster. You are also assuming there isn't a near limit we're already close to for this type of system. (though I'd be shocked if it could improve at all.)

hobs,
@hobs@mstdn.social avatar

@futurebird
I don't think I'm assuming those things. Was just trying to imagine potential risks. And that led me to think about feedback loops, my specialty (robotics). Positive feedback goes badly very quickly and unexpectedly. Kind of like my reply to your original post. My conversation "damping" doesn't seem to have helped anyone.

not2b,
@not2b@sfba.social avatar

@hobs @ceoln @msh @futurebird It doesn't appear that you know how ChatGPT works; the model is fixed. It does not learn after the original training. It remembers the user prompt and the instructions but has a limited window. They don't have a "theory of mind". Maybe someone could figure out how to give a program such a thing but it wouldn't be an LLM. An LLM takes a sequence of tokens and extends it, and that is all. It knows the structure of text. It doesn't know anything about the world and has no way of learning.
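
A minimal sketch of the point being made here, assuming the open-source Hugging Face transformers library with GPT-2 standing in for any autoregressive LLM (ChatGPT's own weights aren't public): the parameters are frozen at inference time, and anything that falls outside the context window is simply dropped.

# Sketch only: GPT-2 as a stand-in for any autoregressive LLM. Nothing here
# updates the weights; the model just extends whatever token sequence fits
# in its fixed context window.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference only; the trained parameters stay fixed

prompt = "The existential risk posed by large language models is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

# Anything beyond the context window (1024 tokens for GPT-2) is dropped;
# the model has no memory of it and no mechanism for learning from it.
ids = ids[:, -model.config.n_positions:]

out = model.generate(ids, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))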

hobs,
@hobs@mstdn.social avatar

@not2b
Yea. But are you familiar with the vector database craze? It gives LLMs long term memory. It's already a part of many LLM pipelines. I don't know how ChatGPT works. But I know exactly how the open source models work. I augment them and fine tune them. And teach others how to do it. I've been using vector databases for semantic search for 15 years. And using them to augment LMs for 5.
@ceoln @msh @futurebird
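
For what it's worth, the "long term memory" described here is usually retrieval bolted on in front of the prompt rather than anything the model itself learns. A rough sketch, assuming the sentence-transformers library for embeddings and a plain NumPy array as the index (a real pipeline would use FAISS, LangChain, or a hosted vector database):

# Sketch of "long-term memory" via a vector store: embed past notes,
# retrieve the nearest ones for a new query, and paste them into the prompt.
# The LLM still learns nothing; the retrieval step does the remembering.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

memory = [
    "User prefers answers with citations.",
    "Project X uses PostgreSQL 15 and runs on Kubernetes.",
    "The user's name is Alex and they work in a newsroom.",
]
memory_vecs = embedder.encode(memory, normalize_embeddings=True)

def recall(query: str, k: int = 2) -> list[str]:
    """Return the k stored notes most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = memory_vecs @ q  # cosine similarity (vectors are normalized)
    top = np.argsort(-scores)[:k]
    return [memory[i] for i in top]

query = "What database does Project X use?"
context = "\n".join(recall(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this augmented prompt is what gets sent to the LLM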

not2b,
@not2b@sfba.social avatar

@hobs @ceoln @msh @futurebird That is a way to couple an LLM to a search engine. But at least the one Bing has appears to just use the retrieved data as a prefix and then generate a summary. Maybe you are building something better, but it feels like saying the availability of Google search gives me a better memory. Maybe you could say that but it feels like a stretch.

hobs,
@hobs@mstdn.social avatar

@not2b
Yea. Bing is doing it wrong. The right way is to use LLMs to guess at answers with high temp. Average the embeddings for those random guesses and use that as your semantic search query to create the context passages for your reading comprehension question answering prompt. Works nearly flawlessly. LangChain makes it straightforward and free for individuals. But costly to do at scale for a popular search engine.
@ceoln @msh @futurebird
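
A hedged sketch of the recipe hobs outlines (several high-temperature guesses, average their embeddings, search with the average). The helpers generate_guess, embed, and search_passages are hypothetical stand-ins, not any particular library's API; in practice they would wrap your LLM call, embedding model, and vector index.

# Sketch of the "guess, average, then retrieve" recipe described above.
# The three helpers below are toy stand-ins so the sketch runs on its own.
import numpy as np

def generate_guess(question: str, temperature: float = 1.0) -> str:
    # Stand-in for a high-temperature LLM call that drafts a plausible answer.
    return f"Draft answer to: {question}"

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model; deterministic toy vector here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def search_passages(query_vec: np.ndarray, top_k: int = 3) -> list[str]:
    # Stand-in for a vector-index lookup over the document store.
    return [f"Passage {i} nearest to the averaged guess vector" for i in range(top_k)]

def retrieve_context(question: str, n_guesses: int = 5, k: int = 3) -> list[str]:
    # 1. Let the LLM hallucinate several answers at high temperature.
    guesses = [generate_guess(question, temperature=1.0) for _ in range(n_guesses)]
    # 2. Average their embeddings; the shared "gist" survives, noise partly cancels.
    query_vec = np.stack([embed(g) for g in guesses]).mean(axis=0)
    # 3. Semantic-search the store with that vector; the retrieved passages
    #    then go into a grounded question-answering prompt.
    return search_passages(query_vec, top_k=k)

print(retrieve_context("When did the first Transformer paper appear?"))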

ceoln,
@ceoln@qoto.org avatar

@hobs

That is very cool! I've read vague descriptions about how that works; do you have a pointer to a more technical (but still comprehensible!) writeup / paper on how it works, and some kind of evaluation of effectiveness?

@not2b @msh @futurebird

not2b,
@not2b@sfba.social avatar

@ceoln @hobs @msh @futurebird I don't, but the best explainer I know about the properties and limitations of LLMs on Mastodon is @simon. I suggest that you follow him and check out his blog.

simon,
@simon@simonwillison.net avatar

@not2b @ceoln @hobs @msh @futurebird I wrote a bit about retrieval augmented generation using embeddings here https://simonwillison.net/2023/Jan/13/semantic-search-answers/

detritus,
@detritus@todon.eu avatar

@msh @hobs @futurebird it won't kill us "all" is kinda not that reassuring.

drahardja, (edited )
@drahardja@sfba.social avatar

@futurebird He’s playing both sides because he profits either way. If there is regulation, then he positions himself to co-author it to constrain his competition and cement his company as a leading profit-maker in the now-constrained field. If there is no regulation and AI inevitably shits the bed, then he will claim “I told you so, it’s your fault for not regulating us.” Win-win.

If Altman really wants to save the world from the devastation of his company’s creation, he could destroy ChatGPT and never speak of it again.

But he did release it. Now he wants lawmakers to take responsibility for the shit-nuke he’s launching into the world for profit.

drahardja,
@drahardja@sfba.social avatar

@futurebird Oh and one more thing: by playing up the doomsday scenario that could supposedly be caused by his product, he’s hyping up his product as something that is so amazing that it could end humanity! It’s a kind of humblebrag!

In reality, an LLM is a (very, very good) sentence-completion engine. ChatGPT’s lasting legacy will more likely be the widespread pollution of the internet with annoying, mediocre, plausible-sounding bullshit, rather than the end of humankind.

It’s a marketing stunt.

bgannin,
@bgannin@mastodon.social avatar

@drahardja Those never happen 😁

drahardja,
@drahardja@sfba.social avatar

@futurebird Similar take here by @pluralistic:

“These people have a reward function: “convince suckers that AI is dangerous because it is too powerful, not because it’s mostly good for spam, party tricks, and marginal improvements for a few creative processes.”

If AI is an existential threat to the human race, it is powerful and therefore valuable. The problems with a powerful AI are merely “shakedown” bugs, not showstoppers. Bosses can use AI to replace human workers, even though the AI does a much worse job — just give it a little time and those infelicities will be smoothed over by the wizards who created these incredibly dangerous and therefore powerful tools.”

https://doctorow.medium.com/ayyyyyy-eyeeeee-4ac92fa2eed

sabik,
@sabik@rants.au avatar

@drahardja @futurebird @pluralistic
Possibly it's even more cynical than that: hype up the fanciful dangers to divert attention from the real dangers that are already occurring, because regulation of the real dangers would cut into their profits while regulation of the fanciful ones is neither here nor there, really

zleap,
@zleap@qoto.org avatar

@drahardja @futurebird @pluralistic

I agree AI can have many benefits; however, as with anything, there are risks of it being misused, for example.

I feel the biggest threat is the fact our education systems are not keeping up with technological advances. We still have kids in the UK who are behind thanks to the pandemic; we need to be training people in technology, and we need better cybersecurity training, for example (at least in my view).

AI has the potential to create more IP; this will be a target for criminals who want to steal it, BUT surely, with open access to AI, they will be using AI to help them do that.

We need to take a good look at our education systems and update them so that we are training today's children with the skills and ethics of the future: equip them so they can research, and critically assess that research, so they can learn and develop properly. If nothing else, so we can have the important conversations about how they see the future and what they want to inherit from the creators of AI. We need to involve everyone in that conversation, but it surely needs to be evidence-based.

klara,
@klara@wandering.shop avatar

@futurebird Nope. There's just a lot of perverse incentives aligned such that a diverse group of influential tech people benefit from saying shit like this.

Some want their stock prices to go up, some want to make regulation focus on fantasy scenarios so they can get away with more mundane awful, some want to freeze their position as market leaders, and some want to direct well-intentioned charity money to their awful cults. (And some decent people have been duped into joining said cults, too.)

silvermoon82,

@futurebird
I'm pretty sure it's serving two purposes, first to get investors excited about how powerful (profitable) it will be, second to get regulators excited about it so they put barriers in front of other startups and researchers, protecting the couple current AI sellers and their investors.

porsupah,

@futurebird Bruce Schneier, one of the signatories, has some thoughts:

https://www.schneier.com/blog/archives/2023/06/on-the-catastrophic-risk-of-ai.html

philtor,
@philtor@fosstodon.org avatar

@futurebird It does seem very hyperbolic. The only theory I can come up with at this point is that Altman is trying for regulatory capture for OpenAI. His suggestion that organizations should need a license to develop and deploy AI was telling - wonder who would be getting these licenses?

rysiek,
@rysiek@mstdn.social avatar

@philtor @futurebird this is exactly right.

They are trying to do regulatory capture, and at the same time they are making it seem like LLMs are "inevitable", and they're "in the business of superintelligence", so to speak. Partially, it's hyping LLMs up in order to scam investors out of their money while the hype works.

I wrote about this in Polish, but an automatic translation should work reasonably well:
https://oko.press/sztuczna-inteligencja-open-ai

WesternInfidels,

@futurebird Maybe big tech is just hoping to cultivate some regulatory barriers to entry for small-fry competition.

https://www.businessinsider.com/google-openai-risk-losing-ai-race-open-source-community-2023-5

minervakoenig,
@minervakoenig@mastodon.social avatar

@futurebird Anything that can be weaponized will be weaponized. AI isn't the threat, it's the nut behind the wheel.

ruby,

@futurebird Some people seem to be implying that we're a few steps from AI deciding humans are really inconvenient to their mission, which is apparently hallucinating a fever dream based on the personally identifiable information of every human with a credit rating or law enforcement record.

lednabm,

@futurebird

Agreed.... he might as well walk around with a sign. Besides, we'll go extinct due to climate change and ecological collapse way before the AIs get their crack at it. Like I've posted here before: read your H. G. Wells, The Time Machine. The wealthy and privileged end up Morlocks; and everyone else, Eloi.

InternetEh,
@InternetEh@dads.cool avatar

@futurebird it's a sales pitch to their audience.

Callistobeast,

@futurebird nah. It's a distraction. They are building a strawman that won't hurt them when it gets regulated. So the real problems with AI will be ignored by legislators.

ceoln,
@ceoln@qoto.org avatar

@futurebird
Ex-freaking-actly!

To cite my own recent whinging on the subject: https://ceoln.wordpress.com/2023/06/04/the-extinction-level-risk-of-llms/

PeterBronez,

@futurebird @gerbrand yeah this is nonsense. If you want to raise the alarm about existential threats to humanity as a species, climate change is the obvious target.

Virginicus,

@futurebird I liked Cory Doctorow’s observation that these guys are all breaking laws. To avoid accountability, they’re asking for special new regulations just for their industry. Then they can point to the new rules and say that the old laws don’t apply to them.

thierna,
@thierna@mastodon.green avatar

@futurebird my guess is the billionaire tech bros like to think of themselves as being gods able to create artificial life.
And they do not care at all about real people's lives right now.
They also don't care about actual living beings like animals or plants. All this hype about AI maybe someday being able to become conscious, and not enough thought about current living beings.

muskanity,
@muskanity@mas.to avatar

@futurebird
Another BS from the NYT 🤣🤣
We all getting used and nobody giving a damn 😂

simon_brooke,
@simon_brooke@mastodon.scot avatar

@futurebird It is not 'we' who are too disorganised and greedy. It is the technokakocrats, the billionaire thieves.

This is class war.

patrickhadfield,
@patrickhadfield@mastodon.scot avatar

@futurebird but... But we are all going to die! I doubt AI will have anything to do with it, and I hope it won't happen for a very many years, but I'm sure it'll happen to each of us!

WheresMyWater,

@futurebird we are still 9/10 of a mile away from anything people (even those underweight in brain meat) have to worry about, but greedy corporations will destroy this beautiful planet eventually

nebulos,

@futurebird I think it's possible they genuinely think that autonomous systems will get to the point that they will kill us. What they're too dumb to see is that we already have an autonomous system that's killing us, which is capitalism. All they're worried about is that a different autonomous system might choose not to put them on top.

drwho,

@futurebird Scared people are much less likely to either push back or call bullshit.
