Downsides to chatgpt?

It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them from scratch properly, but if I give it the bulk of the info it makes it pretty af.

Also, just talking and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

Hyperi0n,

A lot of people are talking about the privacy aspect (like you mention in your post) a lot better than I could, so I wanted to share the main issue I’ve had with ChatGPT: it’s an idiot. It can’t follow basic instructions and will just repeat the same mistake over and over again after you point it out. It’s uninspired and uncreative and will spit out lame, Great Value-brand names like “The Shadow Nexus”, “The Cybercenter”, “The Datahaven”. I used to be able to make it give good names by feeding it example names, but that doesn’t work anymore. I’m writing cyberpunk fic, and when I needed help with a hacker group name it came up with the Binary Syndicate, which is pretty good. Now it comes up with “Hacker Squad”, “The Hacker Elite”, “The Hackers”.

I don’t want it to write an entire book for me, but sometimes I need help with scenes that require more technical knowledge than I have. Its prose used to be really good once you fine-tuned it a little. Now it’s flat, bland, and boring. I asked it to write a scene about someone defusing a bomb and it gave me a two-sentence scene that explained nothing about how he defused it. I asked it to make the scene longer and explain the defusal, and it said: “He opens the case and utilizes a technique known as ‘wire tracing’. He traces the wire and cuts it and the bomb is defused. The hacker is so relieved.” See how flat that is? How mechanical? I use Claude for creative writing, but it’s not much better.

Claude is so censored that if you write anything that sounds even nanoscopically criminal, it freaks the hell out and lectures you about being ethical. For instance, it wouldn’t help me write a scene about a digital forensics analyst at the FBI wiping a computer (because that “encourages harm”). So you can only imagine how it reacted when I asked for help writing about my vigilante hacker character, or my archaeologist posing as a crime-lord smuggler while secretly dismantling black market trades in the Middle East. You have to jailbreak it (which is only a little bit easier than hacking the Pentagon!), and eventually it goes all love guru on you and starts monologuing about light and darkness and writing inspiring, uplifting tales, blah blah blah.

Honestly, what I’m saying is that ChatGPT has been pretty dumbed down, but I’ve heard from a lot of people who’ve noticed no difference. You could be one of them. If you’re using it for creative writing, use Claude, and good luck with the prompt engineering it takes to jailbreak it.

some_guy,

Run an LLM locally to avoid privacy issues.

ollama.ai/blog/run-llama2-uncensored-locally
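For what it’s worth, ollama also exposes a local HTTP API (on port 11434 by default), so you can script against it. Here’s a minimal Python sketch, assuming the server is running and you’ve already pulled the `llama2-uncensored` model from the linked post; the endpoint and payload shape are from ollama’s API, but double-check against its docs for your version:

```python
import json
import urllib.request

# Default address of a locally running ollama server; nothing leaves
# your machine except this localhost request.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build the POST request for a non-streaming completion."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Requires `ollama run llama2-uncensored` to have pulled the model first.
    req = build_request("llama2-uncensored", "Why is the sky blue?")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Everything stays on your own hardware, which is the whole point for the privacy concern in the original post.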

Overzeetop,

These kinds of uses make ChatGPT for the non-writer what a calculator is for the non-mathematician. Lots of people are shit at arithmetic but need to use mathematics in their everyday life. Rather than spend hours with a scratch pad carrying the 1, they drop the numbers into a calculator or spreadsheet and get answers.

A good portion of my life is spent writing (and re-writing) technical documents aimed at non-technical people. I like to think I’m pretty good at it. I’ve also seen some people who are very good technically but can’t write in a cohesive, succinct fashion. Using ChatGPT to overcome some of those hurdles, as long as you are the person doing the final compilation and organization to ensure the output is correct and accurate, is just the next step beyond spelling, usage, and grammar tools. And, just as people learning arithmetic shouldn’t use calculators until they understand how it’s done, students should still learn to write without the assistance of ML/AI. The goal is to maximize your human productivity by reducing the tasks on which you spend time for little added value.

Will the ML company misuse your inputs? Probably. Will they also use them to make your job easier or more streamlined? Probably. Are you contributing to the downfall of humanity? Sure, in some very small way. If you stop, will you prevent the misuse of ML/AI and substantially retard the growth of the industry? Not even a little bit.

AnarchistArtificer,

I like the calculator comparison.

dark_stang,

The big problem that I see is people using it for way too much. Like, “hey, write this whole application/business for me”. I’ve been using it for targeted code snippets, mainly grunt-work stuff like “create me some Terraform” or “a bash script using the AWS CLI to do X”, and it’s great. But ChatGPT’s skill level seems to be lacking for really complex things or things that need creative solutions, so that’s still all on me. Which is kinda where I want to be anyway.

Also, I had to interview some DBAs recently and I used it to start my interview-questions doc. Went to a family BBQ in another state and asked it for packing ideas (I almost forgot bug spray, because there aren’t a lot of bugs here). It’s great for removing a lot of cognitive load when you’re working on mundane stuff.

There are other downsides, like it’s proprietary and we don’t know how the data is being used. But AI like this is a fantastic tool that can make you way more effective at things. It’s definitely better at reading AWS documentation than I am.

davehtaylor,

The first downside is in the use of it exactly the way you’re using it. In this case, a company may decide they don’t actually need technical writers, just a low-paid editor who feeds tech specs into a prompt, gets a response, and tidies it up. How many skilled jobs are lost because of this?

Think of software devs. Feed a project spec into the prompt: “Give me a Django backend and Vue frontend to build an online calendar” and then you have just a QA dev who debugs and tests and maybe cleans up a bit. Now, instead of a team of software devs working to make sure you have a robust, secure and properly architected app, you have one or two low-paid testers who don’t understand the full architecture, can only fix bugs, and don’t understand the security issues inherent in the minimally viable code the bot spat out.

Think of writers. Just ignore actual creatives: plug an “idea” into the prompt, then have an editor clean up any glaring strangeness and get it out the door. It can flood, and already is flooding, the market with absolute drivel, driving actual human creatives out. Look at the current writers’ strike. The Hollywood execs are fucking champing at the bit to replace them all with an LLM and say to hell with the writers.

The core issue is: the people at the top with money only care about money. They don’t care if the product is good. Quality is irrelevant if they can crank it out at a tenth of the cost and at 1000x the volume. And every time you use it, you’re giving it training data. You’re justifying its use. And its use is, and will continue, to destroy entire industries, ruin web search, create mis- and disinformation, and endanger the sharing of actual human creativity.

Overzeetop,

You’re not selling me here, specifically because using ChatGPT in the role you’re talking about is exactly what software developers have been doing for years: putting humans out of work. To use your own description, I could ask a software team to “give me a calendar app”, and a team of software devs, testers, and QA would go about making sure I have a robust, secure, and properly architected app - which would then make thousands upon thousands of secretaries across the world obsolete. They were fully employed making intelligent decisions about their bosses’ schedules, managing conflicts, and coordinating with other humans to make sure things ran smoothly - and you caused nearly all of them to be fired and replaced with one or two low-paid data entry clerks who don’t understand the business or why certain meetings and people have priority over others.

We can go on. Bank tellers? Most of them fired thanks to automated teller machines. Copy editors? Some lazy programmer puts a dictionary in Word and all of a sudden 90% of all misspellings are gone. Usage checkers? Yup - getting rid of most of those too. We can go back further, to when telephone switchboards were automated and there was no need to talk to someone to make your connection. Sure, those people are dead now, but they wouldn’t have jobs if they were alive. And all of those functions were automated to mimic, and then exceed the utility of, the humans who used to do that work. Everything from the cotton gin and the mechanical thresher to the laser welder and the 5-DOF robotic assembly station has been eliminating jobs. Artists fearing losing their jobs to ML generation? Welcome to the world of old school photography. Modern photography, of course, is digital, and it destroyed the jobs of hundreds of thousands, maybe millions, of people who worked in analog photography.

The only difference this time is that it’s you, or people of your intellectual station, who are in the crosshairs.

barsoap,

And it won’t ever hit programmers. Because once we have strong AI we will simply become AI psychologists.

davehtaylor,

But this isn’t what’s happening here. It’s not replacing menial bullshit jobs. It’s trying to replace skilled jobs and creative jobs, something that only soulless grifters and greedy capitalists want. It’s a solution in search of a problem.

Artists fearing losing their jobs to ml generation? Welcome to the world of modern old school photography. Modern photography, of course, is digital and has destroyed the jobs of hundreds of thousands or millions of analog photography jobs.

No, it didn’t. The only jobs lost were menial jobs in film production and development. Creatives didn’t lose their jobs. The medium just changed.

The only difference this time is that its you, or people of your intellectual station, who are in the crosshairs.

This is veering really close to the “creatives have been gatekeeping art and AI will ‘democratize’ it” bullshit

Overzeetop,

So, it’s okay to replace jobs which seem like menial bullshit to you, but not jobs you deem to be “creative.” We’re taking a bell curve of human ability and simply drawing the line of “obsolete human” in a different place and you’re disappointed that you’re way closer to it than you were a decade ago.

NB: I sat in a room with 200 other engineers this summer and they all scoffed at the idea that a computer could take their place. But I’m absolutely certain that what we do could be - is being - automated, even as we claim to be the intelligent ones with nothing to fear from replacement. My job is just the learned sum of centuries of human knowledge, honed year after year, which has to be taught, whole cloth, to every new human in my profession. There are people who will say I’m the smartest guy in the room (for a small enough room ;-), but 90% of what I do is just applying a set of rules based on inputs and boundary conditions. We feel like this shouldn’t happen to us because we’re smart. We think independently. We have special abilities which set us apart from ML-generated outputs. We’re also full of shit. There are absolutely areas where ML/AI will not surpass our value in the system for quite some time, but more and more of our expertise will be accomplishable by applying distilled large data sets.

davehtaylor,

So, it’s okay to replace jobs which seem like menial bullshit to you…

The promise of automation absolutely is about ridding ourselves of shit, low-paid, dangerous, menial labor so that we’re free to pursue the things we’re passionate about. But right now, AI is doing precisely the opposite. Actual creative and skilled people are being pushed out and ending up in shit, low-paid gig work and other exploitative jobs just to make ends meet.

… but not jobs you deem to be “creative.”

I can hear the sneer in this, so I think my assumption was correct at the end of my last comment.

It’s absolutely pointless, then, to even bother with this, but I’m going to power through anyway.

My job is just the learned sum of centuries of human knowledge which is honed year after year and has to be taught, wholecloth, to every new human in my profession.

This is the same argument of “AI art is just doing what humans do, looking at other art and mixing it up”. And it’s just as backward and fallacious when applied to any other industry. AI can only give you a synthesis of exactly what you feed it. It can’t use its life experience, its upbringing, its passions, its cultural influences, etc to color its creativity and thinking, because it has none and it isn’t thinking. Two painters who study and become great artists, and then also both take time to study and replicate the works of Monet can come away from that experience with vastly different styles. They’re not just puking back a mashup of Monet’s collected works. They’re using their own life experience and passions to color their experience of Impressionism.

That’s something an AI can never do, and it leaves the result hollow and meaningless.

It’s no different if you apply that to software development. People in tech love to think that development is devoid of creativity, just cold, calculating math. But it’s not. Even if you never touch UI or UX, the feature you develop isn’t isolated; it interacts with everything else in the system. Does some of it purely follow rules? Maybe. But not all of it. There is never a point where your code is devoid of any humanity. There are usually multiple ways to solve a problem, and many times they’re all equally valid. And often there’s a problem whose scope it takes a human to grasp before you can understand how the solution needs to be architected.

We need an environment that is actively and intensely hostile to AI tools and those that promote them. People calling themselves “prompt engineers” or people acting like they’re creative because they fed some bullshit into a blackbox need to be shamed and ostracized. This shit is dangerous and it’s doing real and measurable harm. The people who think that everything should only be about cold, quantifiable data, large enough data sets, and everything else ignored, are causing, and have caused, immense harm because they refuse to see the humanity in the consequences of their actions.

The ones who really think they’re the smartest people in the room are the people developing and promoting these tools. And who are they? Wealthy, privileged, white men who have no concept of the real world, who’ve gorged themselves on STEM-only curricula, and have no understanding of history, civics, or humanities in which to conceptualize the context of the shit they’re unleashing into the world.

lloram239,

AI can only give you a synthesis of exactly what you feed it.

So do humans. What you call “life experience” is just training data. Nothing forces you to train AI on all the stuff out there, you are free to train it on a specific subset of data. You are even free to plug a webcam into a robot and train it on whatever that sees in its lifetime.

Whenever you see something original done by humans, that’s not because we have the magical capability to be original, but because you don’t know what the work in question was based on. And of course there are seven billion of us, while we only have a handful of AI models, so of course you’ll get a bit more variety out of humans so far.

Either way, good image generation has only been available to the public for about a year. Give it some time. Humans aren’t much good at producing art after a year either.

We need an environment that is actively and intensely hostile to AI tools and those that promote them.

Better start by destroying your computer so those humans can have their job back.

People calling themselves “prompt engineers”

Those people will be obsolete in a couple of months, if they aren’t already. Since guess what, AI is pretty good at writing prompts itself.

davehtaylor,

You are even free to plug a webcam into a robot and train it on whatever that sees in its lifetime.

That’s not how life experience works. Also AI aren’t alive.

SugarApplePie,

This is veering really close to the “creatives have been gatekeeping art and AI will ‘democratize’ it” bullshit

Ugh, that BS makes me want to blow up my own head with mind powers. Anyone can learn how to make art! It is not ‘democratizing’ art to make a computer do it and then take credit for the keywords you fed it! Puke-worthy stuff. I appreciate you speaking out against that crap far better than I ever could. There’s enough of that BS on Reddit; can’t we just leave it there?

davehtaylor,

Hear fucking hear. I want to shout this from the mountaintops but it feels like no one is listening

gelberhut,

You can read their privacy policy. It describes two options:

  • Either you keep chat history on, in which case it can/will be used for training,
  • or you deactivate chat history, in which case your chats are kept for up to 30 days for legal reasons and removed afterwards, and your data will not be used for training.

Phanatik,

Here's an article I found via Mastodon which does a good job of outlining the issues with ChatGPT: https://karawynn.substack.com/p/language-is-a-poor-heuristic-for

sub_,

techradar.com/…/samsung-workers-leaked-company-se…

I’ve never used ChatGPT, so I don’t know if there’s an offline version. I assume everything you type in is in turn used to train the model, so using it will probably leak sensitive information.

Also, from what I’ve read, the replies are convincing, but they can sometimes be very wrong, so if you’re using it for machinery, medical stuff, etc., it could end up being fatal.

lloram239,

I’ve never used ChatGPT, so I don’t know if there’s an offline version.

There is no offline version of ChatGPT itself, but many competing LLMs are available to run locally, e.g. Facebook just released Llama 2, and llama.cpp is a popular way to run those models. The smaller models work reasonably well on modern consumer hardware; the bigger ones, less so.

but could sometimes be very wrong

They are mostly correct when you stay within the bounds of the training material. They produce complete fiction when you go outside of it or try to dig too deep (e.g. a summary of a popular movie will be fine, asking for specific lines of dialogue will get made-up answers, and a summary of a less popular movie might be complete fiction).

TheOtherJake,

I won’t touch the proprietary junk. Big tech “free” usually means street corner data whore. I have a dozen FOSS models running offline on my computer though. I also have text to image, text to speech, am working on speech to text, and probably my ironman suit after that.

These things can’t be trusted, though. It’s just a next-word statistical prediction system combined with a categorization system. There are ways to make an LLM trustworthy, but they involve offline databases and prompting for direct citations, which is different from chat prompt structures.

peter,
  • it’s expensive to run, openAI is subsidising it heavily and it will come back to bite us in the ass soon
  • it can be both intentionally and unintentionally biased
  • the text it generates has a certain style to it that can be easy to pick up on
  • it can mix made up information with real information
  • it’s a black box
Feyter,

Did we mention that it’s a closed-source, proprietary service controlled by a single company that can dictate the terms of its usage?

TehPers,

LLMs as a whole exist outside OpenAI, but ChatGPT does run exclusively on OpenAI’s services. And Azure I guess.

Feyter,

Exactly. ChatGPT is just the most prominent service using an LLM. I’d be less concerned about the hype if all the free training data from thousands of users went back into an open system.

Maybe AI isn’t stealing our jobs, but if you end up depending on it to stay competitive at your job, it would be good if it weren’t controlled by a single company…

blindsight,

But there’s been huge movement in open source LLMs since the Meta LLaMA leak (within a few months the ecosystem evolved to use no proprietary code at all). And some of these models can be run on consumer laptops.

I haven’t had a chance to do a deep dive on those, yet, but I want to spin one up in the fall so I can present it to teachers/principals to try to convince schools not to buy snake oil “AI detection” tools that are doomed to be ineffectual.

kratoz29, (edited )

That it can lie, and if you don’t know the subject, you could be in trouble.

As a writing helper I can’t see any issues, especially if you check everything it corrects/adjusts… After all this is a tool, not a replacement… For now.

Haus, (edited )

I've had a nagging issue with ChatGPT that hasn't been easy for me to explain. I think I've got it now.

We're used to computers being great at remembering "state." For example, if I say "let x=3", barring a bug, x is damned well gonna stay 3 until I decide otherwise.

GPT has trouble remembering state. Here's an analogy:

Let Fred be a dinosaur.
Ok, Fred is a dinosaur.
He's wearing an AC/DC tshirt.
OK, he's wearing an AC/DC tshirt.
And sunglasses.
OK, he's wearing an AC/DC tshirt and sunglasses.
Describe Fred.
Fred is a kitten wearing an AC/DC tshirt and sunglasses.

When I work with GPT, I spend a lot of time reminding it that Fred was a dinosaur.

rob64,

Do you have any theories as to why this is the case? I haven’t gone anywhere near it, so I have no idea. I imagine it’s tied up with the way it processes things from a language-first perspective, which I gather is why it’s bad at math. I really don’t understand enough to wrap my head around why we can’t seem to combine LLMs and traditional computational logic.

shagie,

ChatGPT works off of a fixed-size maximum prompt. This was originally about 4000 tokens. A token is about four characters, or one short word, but it’s not exactly that: platform.openai.com/tokenizer

“Tell me a story about a wizard” is 7 tokens. And so ChatGPT generates some text to tell you a story. That story is say, 1000 tokens long. You then ask it to “Tell me more of the story, and make sure you include a dinosaur.” (15 tokens). And you get another 1000 tokens. And repeat this twice more. At this point, the length of the entire chat history is about 4000 tokens.

ChatGPT then internally asks itself to summarize the entire 4000 token history into 500 tokens. Some of the simpler models can do this quite quickly, though they are imperfect. The thing is, at that point you’ve got 500 tokens which are a summarization of the four acts of the story and the prompts that were used to generate it - and that summarization is lossy.

As you continue the conversation more and more, the summarizations become more and more lossy and the chat session will “forget” things.
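You can sketch the "forgetting" with a toy model. To be clear, this is just an illustration of a fixed token budget, not ChatGPT's actual internals (which aren't public); it drops old turns instead of summarizing them, and uses the rough ~4 characters per token estimate from above:

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: about one token per 4 characters."""
    return max(1, len(text) // 4)

def trim_history(history: list[str], budget: int = 4000) -> list[str]:
    """Drop the oldest messages until the rest fits the token budget.

    A real system might summarize instead of dropping outright, but
    either way information from early turns degrades or disappears.
    """
    trimmed = list(history)
    while trimmed and sum(estimate_tokens(m) for m in trimmed) > budget:
        trimmed.pop(0)  # the oldest turn is lost first
    return trimmed
```

Once the earliest turns fall out of the budget, "Fred is a dinosaur" is simply gone, no matter how state-like it felt in the conversation.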

lloram239, (edited )

ChatGPT then internally asks itself to summarize the entire 4000 token history into 500 tokens.

From my understanding, ChatGPT doesn’t do anything like that by itself. If you want the story summarized, you’ll have to request it and it will show up in the text buffer. There is no hidden internal state that ChatGPT can use to “think”, there is just the text that you see in the text buffer.

The only hidden text that exists is the initial prompt that turns GPT into a chatbot, along with some start/stop tokens, that give control back to the user (plain GPT will just auto-complete both sides of the conversation).

Some experiments like AutoGPT do generate summaries and outlines for larger problems from what I understand. But ChatGPT is so far just a chatbot layer on top of GPT, without any extra cleverness.

balls_expert,

What the “no such thing as a free lunch” mentality does to a mf

Nonameuser678,

It not being conscious or self-aware. It’s just putting words together that don’t necessarily have any meaning to it. It can simulate language, but meaning is a lot more complex than putting the right words in the right places.

I’d also be VERY surprised if it isn’t harvesting people’s data in the exact way you’ve described.

Reborn2966,

You don’t need to be surprised: it’s written pretty plainly in their ToS that anything you write to ChatGPT will be used to train it.

Nothing you write in that chat is private.

lloram239,

It not being conscious or self aware.

That’s correct: its whole experience is limited to a ~2000-word text prompt (which includes your questions as well as previous answers). Everything else is a static model with a bit of randomness sprinkled in so it doesn’t just repeat itself. It doesn’t learn. It doesn’t have long-term memory. Every new conversation starts from scratch.

User data might be used to fine tune future models, but it has no relevance for the current one.

It’s just putting words together that don’t necessarily have any meaning. It can simulate language but meaning is a lot more complex than putting the right words in the right places.

This is just wrong, despite being frequently parroted. It obviously understands a lot; having a little bit of conversation with it should make that very clear. You can’t generate language without understanding the meaning - people have tried before and never got very far. The only problem is that its understanding is only of language; it doesn’t know how language relates to other sensory inputs (GPT-4 has a bit of image stuff built in, but it’s all still a work in progress). So don’t ask it to draw pictures or graphs; the results won’t be any good.

That said, it’s surprising how much knowledge it can extract just from text alone.

super_user_do,

Most of the time it either says complete bullshit or vague and imprecise things, so be careful.

DdCno1,

I’ve noticed that this isn’t just an issue with this particular tool. I’ve been experimenting with GPT4All (an alternative that runs locally on your machine; the results are worse, though still impressive, but there is complete privacy) and the models available for it do the exact same thing.

B0rax,

That’s the inherent problem with large language models.
