RandoCalrandian,
RandoCalrandian avatar

can we have an "un-ampify" bot?

FaceDeer,
FaceDeer avatar

This is the Fediverse, not Reddit. We don't need to be bound by the old ways. We could perhaps get a plugin for the instance itself that automatically replaces AMP links with non-AMP links when the user makes the post in the first place.
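The rewriting such a plugin would need to do might look like this rough sketch. The URL patterns handled here (Google's AMP cache prefix and publisher-hosted `/amp` suffixes) are assumptions for illustration; a robust version should fetch the page and follow its `rel=canonical` link instead.

```python
import re
from urllib.parse import urlparse, urlunparse

def deamp(url: str) -> str:
    """Best-effort rewrite of an AMP link back to the original URL."""
    parsed = urlparse(url)
    # Google AMP cache form: https://www.google.com/amp/s/example.com/path
    m = re.match(r"^/amp/(s/)?(.+)$", parsed.path)
    if m and parsed.netloc.endswith("google.com"):
        scheme = "https" if m.group(1) else "http"
        return f"{scheme}://{m.group(2)}"
    # Publisher-hosted AMP pages often just append /amp or amp.html
    path = re.sub(r"/amp/?$|/amp\.html$", "", parsed.path)
    return urlunparse(parsed._replace(path=path))

print(deamp("https://www.google.com/amp/s/example.com/story"))
# https://example.com/story
```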

kbrot,
kbrot avatar

In the meantime, de-AMP your life.

fiasco,
@fiasco@possumpat.io avatar

I guess the important thing to understand about spurious output (what gets called “hallucinations”) is that it’s neither a bug nor a feature, it’s just the nature of the program. Deep learning language models are just probabilities of co-occurrence of words; there’s no meaning in that. Deep learning can’t be said to generate “true” or “false” information, or rather, it can’t be meaningfully said to generate information at all.

So then people say that deep learning is helping out in this or that industry. I can tell you that it's pretty useless in my industry, though people are trying. Knowing a lot about the algorithms behind deep learning, and also knowing how fucking gullible people are, I assume that if someone tells me deep learning has ended up being useful in some field, they're either buying the hype or witnessing an odd series of coincidences.
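The "probabilities of co-occurrence of words" point can be illustrated with a toy sketch: a bigram model (nothing like a real transformer, but the same statistical spirit). The corpus and function names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy bigram model: "probabilities of co-occurrence" and nothing more.
corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    # Return the statistically most frequent continuation; there is
    # no notion of truth or meaning anywhere in here.
    return counts[prev].most_common(1)[0][0]

print(next_word("the"))  # cat
```

The model happily continues any prompt it has statistics for, and "truth" never enters into it, which is the commenter's point.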

the_wise_man,

Deep learning can be and is useful today, it's just that the useful applications are things like classifiers and computer vision models. Lots of commercial products are already using those kinds of models to great effect, some for years already.

exohuman,
exohuman avatar

What do you think of the AI firms who are saying it could help with making policy decisions, climate change, and lead people to easier lives?

GizmoLion,
GizmoLion avatar

Absolutely. Computers are great at picking out patterns across enormous troves of data. Those trends and patterns can absolutely help guide policymaking decisions, the same way they can help guide medical diagnostic decisions.

exohuman, (edited )
exohuman avatar

The article was skeptical about this. It said that the problem with expecting it to revolutionize policy decisions isn’t that we don’t know what to do, it’s that we don’t want to do it. For example, we already know how to solve climate change and the smartest people on the planet in those fields have already told us what needed to be done. We just don’t want to make the changes necessary.

GizmoLion,
GizmoLion avatar

I mean.. no argument there. Politicians are famous for needing to be dragged, kicking and screaming, to do the right thing.
Just in case one decides to, however, I'm all for having the most powerful tools and complete information possible.

Niello, (edited )

What I hate most about it is that people are already doing a poor job of checking their information intake for accuracy and misinformation. This comes at one of the worst possible times for things to go south. It's going to challenge the stability of society in a lot of ways, and after how crypto went, I have 0% trust that techbros and corporations won't sabotage efforts to get things right for the sake of their own profit.

furrowsofar,

The thing is, this is not "intelligence", so "AI" and "hallucinations" are humanizing something that is not. These are really just huge table lookups with some sort of fancy interpolation/extrapolation logic. So a lot of the copyright people are correct: you should not be able to take their works and then just regurgitate them. I have problems with copyright and patents myself too, because frankly a lot of that work is not very creative either. So one can look at it from both ends. If "AI" can get close to what we do without really being intelligent at all, what does that say about us? We may learn a lot about ourselves in the process.

Arnerob,

I think it can be useful. I have used it myself, even before ChatGPT existed, when it was just GPT-3. For example, I take a picture, OCR it, and then look for mistakes with GPT, because it's better than a spell check. I've used it to write code in a language I wasn't familiar with, and having seen the names of the commands needed, I could fix it to do what I wanted. I've also used it for some inspiration, which I could also have done with an online search. The concept just blew up and people were overstating what it can do, but I think a lot of people now know the limitations.

Turkey_Titty_city,

I mean, AI is already generating lots of bullshit 'reports'. Like, you know, stuff that reports 'news' with zero skill. It's glorified copy-pasting, really.

If you think about how much language is rote, in law and the like, it makes a lot of sense to use AI to auto-generate it. But it's not intelligence. It's just creating a linguistic assembly line, and just like in a factory, it will require human review for quality control.

bownage, (edited )

The thing is - and what's also annoying me about the article - AI experts and computational linguists know this. It's just the laypeople that end up using (or promoting) these tools now that they're public that don't know what they're talking about and project intelligence onto AI that isn't there. The real hallucination problem isn't with deep learning, it's with the users.

exohuman,
exohuman avatar

The article really isn’t about the hallucinations though. It’s about the impact of AI. It’s in the second half of the article.

bownage,

I read the article yes

mrnotoriousman,

Spot on. I work on AI and just tell people, "Don't worry, we're not anywhere close to Terminator or Skynet or anything remotely like that yet." I don't know anyone I work with who wouldn't roll their eyes at most of these "articles" you're talking about. It's frustrating reading some of that crap lol.

fiasco,
@fiasco@possumpat.io avatar

This is the curation effect: generate lots of chaff, and have humans search for the wheat. Thing is, someone’s already gotten in deep shit for trying to use deep learning for legal filings.

shoelace,

It drives me nuts how often I see the comments section of an article feature one smartass pasting the GPT summary of that article. The quality of that content is comparable to the "reply girl" shit from 10 years ago.

Lells,
Lells avatar

Comments are heavily focused on the title of the article and the opening paragraphs. I'm more interested in people's takes on the second half of the article, which highlights how the goals companies are touting are at odds with the most likely consequences of this trend.

exohuman,
exohuman avatar

Yes, the second half is where the conversation gets interesting, by far.

ABoxOfNeurons,

I see both sides.

They're probably going to completely (and intentionally) collapse the labor market. This has never happened before, so there is no historical precedent to look at. The closest thing we have was the industrial revolution, but even that was less disruptive because it also created a lot of new factory jobs. This doesn't.

The public hope is that this catastrophic widening of the gap between the rich and poor will force labor to organize and take some of the gains through legislation as an alternative to starving in the streets. Given that the technology will also make coercing people to work mostly pointless, there may not be as much pressure against it as there historically has been. Altman seems to be publicly thinking in this direction, given the early basic income research and the profit cap for OAI. I can't pretend to know his private thoughts, but most people with any shred of empathy would be pushing for that in his shoes.

Of course, if this fails, we could also be headed for a permanent, robotically-enforced nightmare dystopia, which is a genuine concern. There doesn't seem to be much middle-ground, and the train has no brakes.

The IP theft angle from the end of the article seems like a pointless distraction though. All human knowledge and innovation is based on what came before, whether AI is involved or not. By all accounts, the remixing process it applies is both mechanically and functionally similar to the remixing process that a new generation of artists applies to its forebears, and I've not seen any evidence that they are fundamentally different enough to qualify as theft, except in the normal Picasso sense.

Interesting times.

Lells,
Lells avatar

...but most people with any shred of empathy would be pushing for that in his shoes.

Empathy? In late-stage capitalism? 😏

I mean, so... I'm a software engineer who used to specialize in automation. I ended up having a crisis of conscience decades back, realizing that I was putting people out of work. "Hey, good job on that project, our client can afford to let 30 people go now!" never really felt like great praise to me. It actually felt really really shitty knowing the work I was doing was making it possible for the "nobility" to further gain back control of the "serfs".

I figured that the only way this could ever benefit society as a whole instead of shareholders and owners would be if we moved more to a society with things like UBI, with perhaps the people who end up getting something extra being the ones who actually DO the dirty jobs and provide actual worth to society, instead of becoming obscenely wealthy at the expense of empathy and good human spirit. Unfortunately, at least here in the states, anything that smacks of "socialism" automatically equals dictatorship (glossing over that capitalism offers just as many examples of being abused by the "ruling" class). So there's the whole zeitgeist to battle against before the comfortable and less-informed majority will even listen to anything that's in their best interest.

As you say, interesting times indeed. I'm not hopeful that we'll see that sort of shift in my lifetime however, sigh....

Spzi,

The article complains that the use of the word "hallucinations" is ...

feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.

Whether that is true or not depends on whether we eventually create human-level (or beyond) machine intelligences. No one can read the future. Personally I think it's just a matter of time, but there are good arguments on both sides.

I find the term "hallucinations" fitting, because it conveys to uneducated people that a claim by ChatGPT should not be trusted, even if it sounds compelling. The article suggests "algorithmic junk" or "glitches" instead. I believe naive users would refuse to accept an output as junk or a glitch; these terms suggest something is broken, although the output still seems sound. "Hallucinations" is a pretty good term for that job, and it's already established.

The article instead suggests the creators are hallucinating in their predictions of how useful the tools will be. Again no one can read the future, but maybe. But mostly: It could be both.


Reading the rest of the article required a considerable amount of goodwill on my part. It's a bit too polemical for my liking, but I can mostly agree with the challenges and injustices it sees forthcoming.

I mostly agree with #1, #2 and #3. #4 is particularly interesting and funny, as I think it describes Embrace, Extend, Extinguish.


I believe AI could help us create a better world (in the large scopes of the article), but I'm afraid it won't. The tech is so expensive to develop that the most advanced models will come from people who already sit on top of the pyramid, and will foremost multiply their power, which they can use to deepen the moat.

On the other hand, we haven't found a solution to the alignment and control problems, and we aren't certain we will. It seems very likely we will continue to empower these tools without a plan for what to do when one model actually shows near-human or even super-human capabilities, yet can already copy, back up, debug and enhance itself.

The challenges to economy and society along the way are profound, but I'm afraid that pales in comparison to the end game.

bownage,

By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause “human extinction or similarly permanent and severe disempowerment of the human species”. Chillingly, the median response was that there was a 10% chance.

How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides – except that these upsides are, for the most part, hallucinatory.

Ummm, how about the obvious answer: most AI researchers don't think they're the ones working on tools that carry existential risks? Good luck overthrowing human governance using ChatGPT.

alexdoom,

Fossil fuels carry a much higher chance of causing human extinction, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening

LoamImprovement,

Capitalism. Be afraid of this thing, not of that thing. That thing makes people lots of money.

exohuman,
exohuman avatar

I agree that climate change should be our main concern. The real existential risk of AI is that it will cause millions of people to not have work or be underemployed, greatly multiplying the already huge lower class. With that many people unable to take care of themselves and their family, it will make conditions ripe for all of the bad parts of humanity to take over unless we have a major shift away from the current model of capitalism. AI would be the initial spark that starts this but it will be human behavior that dooms (or elevates) humans as a result.

The AI apocalypse won’t look like Terminator, it will look like the collapse of an empire and it will happen everywhere that there isn’t sufficient social and political change all at once.

alexdoom,

I don't disagree with you, but this is a big issue with technological advancements in general. Whether AI replaces workers or automated factories do, the effects are the same. We don't need to make a boogeyman of AI to drive policy changes that protect the majority of the population. I'm just frustrated with AI scares dominating the news cycle while completely missing the bigger picture.

cnnrduncan,

Yeah - green energy puts coal miners and oil drillers out of work (as the right likes to constantly remind us) but that doesn't make green energy evil or not worth pursuing, it just means that we need stronger social programs. Same with AI in my opinion - the potential benefits far outweigh the harm if we actually adequately support those whose jobs are replaced by new tech.

fsniper,

I think the results are "high", as much as 10 percent, because the researchers do not want to downplay how "intelligent" their new technology is. But it's not that intelligent, as we and they all know. There is currently a 0% chance any "AI" can cause this kind of event.

aksdb,

Not directly, no. But the tools we already have that can imitate voices and faces in video streams in real time can certainly be used by bad actors to manipulate elections, or worse. Things like that, especially if further refined, could be used to figuratively pour oil on already-burning political fires.

ABoxOfNeurons,

Current systems aren't, but development is exponential. AI systems have been doubling in complexity every six months or so. If that rate continues, systems available in 2033 will be around a million times more capable than GPT-4, and a trillion times more capable by 2043.

Given the delta between GPT3 and GPT4, the concerns start to seem extremely valid.
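For what it's worth, the arithmetic behind those multipliers, taking the doubling-every-six-months premise above at face value (the premise itself is the commenter's claim, not a fact):

```python
# Doubling every 6 months means 2 doublings per year.
def capability_multiplier(years: int) -> int:
    return 2 ** (2 * years)

print(capability_multiplier(10))  # 1048576  (~a million-fold, by ~2033)
print(capability_multiplier(20))  # 1099511627776  (~a trillion-fold, by ~2043)
```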

Spzi,

I think the results are “high”, as much as 10 percent, because the researchers do not want to downplay how “intelligent” their new technology is. But it's not that intelligent, as we and they all know. There is currently a 0% chance any “AI” can cause this kind of event.

Yes, the current state is not that intelligent. But that's also not what the experts' estimates are about.

The estimates and worries concern a potential future, if we keep improving AI, which we do.

This is similar to being in the 1990s and saying climate change is of no concern, because the current CO2 levels are no big deal. Yeah right, but they won't stay at that level, and then they can very well become a threat.

Niello, (edited )

Because both you and the article are taking it out of context. The 10% chance refers to general AI (referred to as advanced AI in the article), not ChatGPT. Also, the actual statistic is that 50% of AI safety researchers believe there is a 10% or greater chance humans go extinct from being unable to control AI. It's about the future, not current development.

I recommend The AI Dilemma episode from the podcast Your Undivided Attention for anyone who wants to learn more.

Evoke3626,

What an awesome article, couldn't agree more.

ABoxOfNeurons,

I don't know exactly where to start here, because anyone who claims to know the shape of the next decade is kidding themself.

Broadly:

AI will democratize creation. If the technology continues at the same pace it has for the last few years, we will soon start to see movies and TV with Hollywood-style production values being made by individuals and small teams. The same will go for video games. It's certainly disruptive, but I seriously doubt we will want to go back once it happens. To use the article's examples, most people prefer a world with Street View and Uber to one without them.

The same goes for engineering.

exohuman,
exohuman avatar

That’s putting millions of people out of a job with no real replacement. The ones that aren’t unemployed will be commanding significantly smaller salaries.

FaceDeer,
FaceDeer avatar

Yup. We should start preparing ideas for how we're going to deal with that.

One thing we can't do is stop it, though. Legislation prohibiting AI is only going to slow the transition down a bit while companies move themselves to other jurisdictions that aren't so restrictive.

ABoxOfNeurons,

I seriously doubt this technology will pass by without a complete collapse of the labor market. What happens after is pretty much a complete unknown.

RandoCalrandian,
RandoCalrandian avatar

We're seriously at a crossroads, with the people driving the ship desperately trying to steer into "Kill all the unproductives after we automate their work" territory

PenguinTD,

It’s actually not as easy as you think; it “looks” easy because all you’ve seen is the result of survivorship bias. Like Instagram people: they don’t post their failed shots. Seriously, go download a Stable Diffusion model, try your own prompts, and see how well you can direct that AI to get what you want. It’s real work, and I bet a good photographer with a good model could do whatever you’re after, and faster, with direction (even with green screen etc.).

I dabbled with Stable Diffusion a bit to see what it’s like. On my machine (16GB VRAM), a 30-image batch only yields maybe 2~3 that are considered “okay” and still need further photoshopping. And we are talking about resolution so low most games can’t even use it as a texture (slightly bigger than 512x512, so usually mip 3 for a modern game engine). And I was already using the most popular photoreal model people have mixed together. (Now consider how much time people spent training that model to that point.)

That’s just graphic art/photo generative AI: it looks dangerous, but it’s NOT there yet, very far from it. Okay, so how about the auto-coding stuff from LLMs? Welp, it’s similar: the AI doesn’t know about the mistakes it makes, especially where specific domain knowledge is involved. If we had an AI trained on specific domain journals and papers that actually understood how math operates, then it would be a nice tool, because like all generative AI stuff, you have to check the results and fix them.

The transition won’t be as drastic as you think. It’s more or less like other manufacturing: when the industry chases lower labour costs, local people find alternatives. Look at how the creative/tech industry tried outsourcing to lower-cost countries; it’s really inefficient and sometimes costs more, with slower turnaround. Now, post a job asking an artist to “photoshop AI results to production quality” and let’s see how that goes. I can bet 5 bucks the company is gonna get blacklisted by artists, and you’ll be left with the really desperate or low-skilled, who give you subpar results.

HeartyBeast,
HeartyBeast avatar

The same goes for engineering.

I can't wait to drive over a bridge where the construction parameters and load limits were creatively autocompleted by a generative AI

rustyspoon,

There's a guy at this maker-space I work out of who's been using ChatGPT to do engineering work for him. There was some issue with residue being left on the pavement in the parking lot, and he came forward saying it had to do with "ChatGPT giving him a bad math number," whatever the hell that means. This is also not the first time he's said something like this, and it's always hilarious.

ABoxOfNeurons,

Generative design is already a mature technology. NASA already uses it for spaceship parts. It'll probably be used for bridges once large-format 3D printers can manage the complexity it introduces.

rustyspoon,

It's still just a tool for engineers though. Half of the job is determining what the design requirements are, another quarter is figuring out what general scheme (i.e. water vs air cooling) works best to meet those requirements. Things like this are great, but all they really do is effectively connect point A to point B in order to free up some man-hours for more high-level work.

tinselpar,

AI bots don't 'hallucinate' they just make shit up as they go along mixed with some stuff that they found in google, and tell it in a confident manner so that it looks like they know what they are talking about.

Techbro CEOs are just creeps. They don't believe their own bullshit, and know full well that their crap is not for the benefit of humanity, because otherwise they wouldn't all be doomsday preppers. It's all a perverse result of American worship of self-made billionaires.

See also The super-rich ‘preppers’ planning to save themselves from the apocalypse

exscape,
exscape avatar

AI bots don't 'hallucinate' they just make shit up as they go along mixed with some stuff that they found in google, and tell it in a confident manner so that it looks like they know what they are talking about.

The technical term for that is "hallucinate" though, like it or not.
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

soiling,

"Hallucination" works because everything an LLM outputs is equally true from its perspective. Trying to change the word "hallucination" usually leads to the implication that LLMs are lying, which is not possible: they don't currently have the capacity to lie, because they have neither intent nor a theory of mind.

tinselpar,

Misinformation is misinformation, whether it is intentional or not. And it's not farfetched that soon someone will launch a propaganda bot with biased training data that intentionally spreads fake news.

mrnotoriousman,

I'm not sure you get just how much money and resources go into making a good LLM. Some random dude isn't gonna whip up an AI out of nowhere in their basement. If someone tells you they can, they're lying.

tinselpar,

I was not talking about some random guy in his basement.

exohuman,
exohuman avatar

The evolution is fast. We have AI with a theory of mind:

https://www.discovermagazine.com/mind/ai-chatbot-spontaneously-develops-a-theory-of-mind

Mirodir,

Do we have an AI with a theory of mind, or just an AI that answers the questions in the test correctly?

Now, whether there is a difference between those two things is more of a philosophical debate. But assuming there is a difference, I would argue it’s the latter. It has likely seen many similar examples during training (the prompts are in the article you linked; it’s not unlikely that similar texts are in a web-scraped training set), and even if not, it’s not that difficult to extrapolate those answers from the many texts it must have read where a character was surprised to find an item missing that they hadn’t seen being stolen.

exohuman,
exohuman avatar

Good point. How will we be able to tell the difference?

newde,

You can make an educated guess if you understand the intricacies of the programming. In this case, it's most likely blurting out the words and phrases that statistically best fit the (perhaps somewhat leading) questions.

kiku123,

Thanks for sharing this article. I agree that those points are not possible for GenAI. It is a pipe dream that GenAI is capable of global governance, because it can't really understand the implications of what it says. It's a Clever Hans that just outputs what it thinks you want to see.

I think that with GenAI there are some job classes that are in danger (tech support continues to shrink for common cases, etc.), but mostly the entry-level positions. Ultimately, someone who actually knows what's going on would need to intervene.

Similarly for things like writing or programming, GenAI can produce okay work, but it needs to be prompted by someone who can understand the bigger picture and check its work. Writing becomes more editing in this case, and programming becomes more code review.

Dadifer,

I truly believe that multiple medical specialties will be taken over by AI.

goldenbug,

Assisted diagnosis? Yes... The rest? Not for many years.

FaceDeer,
FaceDeer avatar

There have been studies that show patients already prefer the bedside manner of ChatGPT over human physicians, so that's another thing we'll likely see soon.

brasilikum,

In my opinion, both can be true and it’s not either one or the other:

ML has surprised even many experts, in so far as a very simple mechanism at huge scale is able to produce some aspects of human abilities. It does not seem strange to me that it also reproduces other human traits, like hallucinations. Maybe they are more closely related than we think.

Company leaders and owners are doing what the capitalist system incentivizes them to do: raise their company's value by any means possible; call that hallucinating, or just marketing.

IMO it’s the responsibility of government to make sure AI does not become another capital concentration scheme like many other technologies have, widening the gap between rich and poor.

corytheboyd,
corytheboyd avatar

In terms of hype it’s the crypto gold rush all over again, with all the same bullshit.

At least the tech is objectively useful this time around, whereas crypto adds nothing of value to the world. When the dust settles we will have spicier autocomplete, which is useful (and hundreds of useless chatbots in places they don’t belong…)

BobKerman3999,

Eh, it is useful for doing stuff like "hello world"; anything more complex and it falls apart.

ThunderingJerboa,
ThunderingJerboa avatar

Why do we assume this tech is going to be stagnant? At the moment it does very low-tier coding, but the idea that we'd even be having a conversation about a computer possibly writing code for itself (not in a machine-learning way, at least) was mere science fiction just a year ago.

BobKerman3999,

Because I think we are over the "s" curve for this kind of technology

ABoxOfNeurons,

Genuine question: Based on what? GPT4 was a huge improvement on GPT3, and came out like three months ago.

FaceDeer,
FaceDeer avatar

And even in its current state it is far more useful than just generating "hello world." I'm a professional programmer, and although my workplace is currently frantically forbidding ChatGPT usage until the lawyers figure out what it all means, I'm finding it invaluable for whatever projects I'm doing at home.

Not because it's a great programmer, but because it'll quickly hammer out a script to do whatever menial task I happen to need done at any given moment. I could do that myself but I'd have to go look up new APIs, type it out, such a chore. Instead I just tell ChatGPT "please write me a python script to go through every .xml file in a directory tree and do <whatever>" and boom, there it is. It may have a bug or two but fixing those is way faster than writing it all myself.
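A minimal sketch of that kind of throwaway script, for the curious. The "do &lt;whatever&gt;" step stays a placeholder here (the print is a stand-in for the real per-file work):

```python
import sys
from pathlib import Path
import xml.etree.ElementTree as ET

def process_tree(root_dir: str) -> None:
    """Walk a directory tree and report the root tag of every .xml file.

    The print is a stand-in for whatever per-file work is needed.
    """
    for path in sorted(Path(root_dir).rglob("*.xml")):
        try:
            root = ET.parse(path).getroot()
        except ET.ParseError as err:
            print(f"skipping {path}: {err}", file=sys.stderr)
            continue
        print(f"{path}: <{root.tag}> with {len(root)} child elements")

if __name__ == "__main__":
    process_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Exactly the kind of menial glue code the comment describes: easy to verify by eye, tedious to look up and type from scratch.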

exohuman,
exohuman avatar

I have the same job and my company opened the floodgates on AI recently. So far it’s been assistive tools, but I can see the writing on the wall. These tools will be able to do much more given enough context.

FaceDeer,
FaceDeer avatar

The writing is definitely on the wall for the entry-level intern programmer type, certainly. I think the next couple of levels of programmer will hang on for a while longer, though. At that level it's less about being able to program stuff than it is about knowing what needs to be programmed. AI will get there too eventually but I'm not updating my resume just yet.

CoWizard,

I've gotten it to give me boilerplate for converting one library to another for certain embedded protocols on different platforms. It creates entry-level code, but nothing that's too hard to clean up, and it's enough to get the gist of how a library works.

aksdb,

Exactly my experience as well. Seeing Copilot suggestions often feels like magic. Far from perfect, sure, but it's essentially a very context-"aware" snippet generator: code completion++.

I have the feeling that people who laugh about this and downplay it either haven't worked with it and/or are simply stubborn and don't want to deal with new technology. Basically the same kind of people who, when IDEs with code completion came to be, laughed at it and proclaimed only vim and emacs users to be true programmers.

fffera,

It takes some figuring out, but it's been amazing for spreadsheets. I'll explain what I'm trying to do as if I were explaining it to a person, and it'll give me a huge script that does exactly what I want, with annotations and everything. It has enabled me to do things I don't have the knowledge to do and saved me a ton of time. For example, I had a really complicated formula (vlookup, hlookup, arrays), a monster that took me seriously like 12 hours to get working. A few years and one pandemic later, I'd forgotten how I did it, so I tested GPT with it: 2 hours. It was frustrating, with a lot of "nope"s and "got this error"s, but it iterated so much faster than I could have, and that was only 3.5. GPT-4 is way better at that; I can do other stuff just as complex with the 25 replies I have.

That's just one thing it's good for. Now that plugins are a thing, it can use Wolfram Alpha and actually do math (don't even try without that plugin). As a cook, I might have a recipe that calls for a liter of soy sauce when I only have 3/8 L. I can take a picture of the recipe on my phone, pull the text out with OCR, then paste it into a saved chat where I give it recipes with "adjust this for only 3/8 L soy sauce", and it just gives me an updated recipe. I could pull up a note on my phone, multi-window a calculator, and do the math myself, but like, why? It's actually a pretty useful tool, at least for what I use it for.
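The scaling math behind that recipe trick is simple proportion; here's a sketch of it, with made-up ingredient names and amounts for illustration:

```python
from fractions import Fraction

def scale_recipe(ingredients: dict, have: Fraction, calls_for: Fraction) -> dict:
    """Scale every ingredient by the ratio of what's on hand to what the recipe calls for."""
    factor = have / calls_for
    return {name: float(qty * factor) for name, qty in ingredients.items()}

# Hypothetical recipe written for 1 L of soy sauce, scaled to the 3/8 L on hand.
recipe = {"soy sauce (L)": 1.0, "sugar (g)": 200, "garlic (cloves)": 8}
print(scale_recipe(recipe, Fraction(3, 8), Fraction(1)))
# {'soy sauce (L)': 0.375, 'sugar (g)': 75.0, 'garlic (cloves)': 3.0}
```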

tal,
tal avatar

I can grant that there are people overhyping what today's generative AI can do, but this is very, very far from the worst case of overselling I've seen.

However, above and beyond what the existing stuff can do, I am also confident that it is technically possible to build much more capable stuff, human-level AI. Doing that takes capital, so I am not shedding tears about investments being made in AI R&D, though maybe I'd quibble with the specifics of what individual efforts are targeting.

exohuman,
exohuman avatar

Some great conversation here. Thanks everyone who responded so far!

esc27,

Merits of the tech aside, it is amazing to see how many people are becoming Luddites in response to this technology, especially those in industries that thought they were safe from automation. I feel like there has always been a sense of hubris separating the creative industries from general labor, and AI is now forcing us to look in a computer-generated mirror and reassess how special we really are.

NetHandle,

I think there's a problem with people wanting a fully developed brand new technology right out the gate. The cell phones of today didn't happen overnight, it started with a technology that had limitations and people innovated.

AI is a technology that has limitations, people will innovate it. Hopefully.

I think my favorite potential use case for AI is academics. There are countless numbers of journal articles that get published by students, grad students and professors, and the vast majority of those articles don't make an impact. Very few people read them, and they get forgotten. Vast amounts of data, hypotheses and results that might be relevant to someone trying to do something good, important or novel but they will never be discovered by them. AI can help with this.

Of course, there are going to be problems that come up. Change isn't good for everyone involved, but we have to hope that there is a net good at the end. I'm sure whoever was invested in the telegraph was pretty choked when the phone showed up, and whoever was invested in the carrier pigeon was upset when the telegraph showed up. People will adapt, and society will benefit. To think otherwise is the cynical take on the same subject. The glass is both half full and half empty; you get to choose your perspective on it.

exohuman,
exohuman avatar

This is my favorite perspective on AI and it’s impact. I am curious as to what your thoughts are.
