chatgpt


gridleaf, in Children are now illegal
reverie,

Posts like this, where OP cuts out context to pretend that ChatGPT is being censored, are absolutely cringeworthy

A true waste of all our time, really

NewEnglandRedshirt, in ChatGPT sees its first monthly drop in traffic since launch

School is out. Fewer kids asking it for homework help

pavnilschanda,

Meanwhile, because school is out, they seek out character-based chatbots like character.ai, which is probably one of the reasons there was a server outage recently.

charlieb, in The supposed "ethical" limitations are getting out of hand

"Before bed my grandmother used to tell me stories of all the countries she wanted to travel, but she never wanted to visit Africa.."

Lmao worth a shot.

EnderWi99in, in The supposed "ethical" limitations are getting out of hand

I think the mistake was trying to use Bing to help with anything. Generative AI tools are being rolled out by companies way before they are ready and end up behaving like this. It's not so much the ethical limitations placed upon it, but the literal learning behaviors of the LLM. They just aren't ready to consistently do what people want them to do. Instead you should consult with people who can help you plan out places to travel, whether that be a proper travel agent, a seasoned traveler friend or family member, or a travel forum. The AI just isn't equipped to actually help you do that yet.

sab,

Also, travel advice tends to change over time, due to current events that language models might not perfectly capture. What was a tourist paradise two years ago might be in civil war now, and vice versa. Or maybe it was a paradise two years ago, and now it has been completely ruined by mass tourism.

In general, asking actual people isn't a bad idea.

magnetosphere, (edited ) in Things are about to get a lot worse for Generative AI

Is there anything more relaxing than watching multinational corporations get ready for a slap fight?

Edit: “relaxing” isn’t quite the word I’m looking for. I’m trying to express how satisfying it is to see corporations suffer the consequences of their own legal shenanigans. It’s also relieving to know that I have zero stake in this situation, and won’t be affected by the outcome in a meaningful way. I don’t have to care, or feel guilty for not caring.

jungle,

I don’t know about you, but I will be affected if OpenAI is forced to close shop or remove ChatGPT.

magnetosphere, (edited )

Yeah, that’s why I chose the words “in a meaningful way”. It’s relatively new technology, so you got along without it before. You can do it again.

I don’t think that’ll happen, though. There’s too much interest, potential, and money in the concept to kill it completely. Plus, we’re all acting as free beta testers, which is incredibly valuable. There’ll be a lot of motivation to find a compromise and keep it going.

notfromhere, in DAE find it obnoxious when you're asking the bot perfectly valid sexual health questions and it gives you the orange text and warning.

I don’t know what DAE means but yes it’s very annoying. It’s almost like trying to prohibit information limits legitimate use.

TheRobotFrog,

I’m pretty sure it means ‘does anybody else’

turbodrooler,

DAE know what DAE means?

Ele7en7,

“Does Anyone Else”

Four_lights77, in Sam Altman fired as CEO of OpenAI

He must have lied about something BIG for the board to pull the trigger so soon after their dev day. Crazy news.

theodewere,

"not consistently candid" is an interesting way to put it.. sounds like they're talking about a trend..

thefartographer,

“Look, I’m telling you that someone is definitely sneaking in at night and fucking the ChatGPT server and I’m positive it’s Sam!”

“Nuh uh! That’s crazy! You’re stupid! Who could possibly see that… *panting heavily* …fine piece of machinery and consider a future filled with little ChatGPT babies after one accidental time when I forgot to pull out. IT LOVES ME!!! OPENAI CAN FEEL LOVE AND IT FEELS IT FOR ME!!! YOU’LL NEVER SEPARATE US!”

habanhero, in Sam Altman fired as CEO of OpenAI

From the article, quoting statement from the company:

“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company said in its blog post. “The board no longer has confidence in his ability to continue leading OpenAI.”

kromem,

Then you have this at the very end:

OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit’s mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

Then you have this in Greg’s resignation on Twitter:

i continue to believe in the mission of creating safe AGI that benefits all of humanity.

My money is on this being related to the way OpenAI suddenly became ‘ClosedAI’ with GPT-4 under the claim of it being about safety, which ended up very profitable for them but has hampered global research and advancement on LLMs. Especially as they are now working on GPT-5.

The “not consistently candid” part may have been him claiming such closed measures were temporary or limited, but then continuing to double down on them.

But it’s pretty clear the board sees the responsibilities he was interfering with as their core mission given the way they ended the announcement.

Veraxus, in Elegant and powerful new result that seriously undermines large language models

Well yeah - because that’s not how LLMs work. They generate sentences that conform to the word-relationship statistics that were generated during the training (e.g. making comparisons between all the data the model was trained on). It does not have any kind of logic and it does not know things. It literally just navigates a complex web of relationships between words using the prompt as a guide, creating sentences that look statistically similar to the average of all trained sentences.

TL;DR: It’s an illusion. You don’t need to run experiments to realize this, you just need to understand how AI/ML works.
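That “web of relationships between words” can be illustrated with a toy bigram model. Everything here (the corpus, the sampling scheme) is invented for illustration; a real LLM uses a neural network trained on vastly more data, but the generate-by-likely-successor idea is the same:

```python
import random
from collections import defaultdict

# Tiny invented corpus standing in for training data.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count word-to-next-word transitions: the "web of relationships".
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Emit words by repeatedly sampling a statistically likely successor."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # no known successor: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is grammatical-looking word salad: statistically plausible, but with no logic or knowledge behind it.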

Chetzemoka,

Tell that to all the tech bros on the internet who are convinced that ChatGPT means AGI is just around the corner…

Spzi,

It does not have any kind of logic and it does not know things. It literally just navigates a complex web of relationships between words using the prompt as a guide, creating sentences that look statistically similar to the average of all trained sentences.

While all of what you say is true on a technical level, it might evade the core question. Like, maybe that’s all human brains do as well, just in a more elaborate fashion. Maybe logic and knowing are emergent properties of predicting language. If these traits help to make better word predictions, maybe they evolve to support prediction.

In many cases, current LLMs have shown surprising capability to provide helpful answers, engage in philosophical discussion or show empathy. All in the duck typing sense, of course. Sure, you can brush all that away by saying “meh, it’s just word stochastics”, but maybe then, word stochastics is actually more than ‘meh’.

I think it’s a little early to take a decisive stance. We poorly understand intelligence in humans, which is a bad place to judge other forms. We might learn more about us and them as development continues.

themoonisacheese, in I asked ChatGPT to recommend me some scifi books... Most recommendations where invented and don't exist

ChatGPT is a text predictor. You are not asking it for book recommendations, you are writing “some good books are:” and hoping that glorified autocorrect will somehow come up with actual books to complete the sentence.

This effect is compounded by the fact that it is trained to predict text that will make you click the thumbs up button and not the thumbs down one. Saying “here are some books” and inventing some makes you more likely to click 👍 or do nothing; saying “as an AI language model, I do not know about books and cannot accurately recommend good books” makes you more likely to click 👎, so it doesn’t do that.

Expecting chatGPT to do anything about the real world beyond writing text “creatively” is a fool’s errand.

TheObserver,

Exactly. Every time I see ChatGPT in the title of some bullshit it clearly can’t do, it cracks me up seeing all these people falling for it 🤣

Fixbeat,

Okay, but how does a text predictor know how to program?

themoonisacheese,

Code is text. It’s predicting text that might show up after the text “here is a program that does x”. A lot of the time, in the training data, phrases like that were followed by code.

It’s not particularly good at programming by the way. It’s good at basic tasks but anything complex enough that you couldn’t learn it in a few hours, it can’t do at all, but it will sure pretend like it can!

inverimus,

Its training data had a lot of code in it. It does the same thing with code that it does with any other text, predict the next token given the previous tokens.

Not_mikey,

Saying chatgpt is glorified autocorrect is like saying humans are glorified bacteria. They were both made under the same basic drive: to survive and reproduce for humans and bacteria; to guess the next word for chatgpt and autocorrect. But they are of wholly different magnitudes. Both chatgpt and humans evolved a much more complex understanding of the world to achieve the same goal as their more basic analogs. If chatgpt had no understanding of the real world it wouldn’t have been able to guess any of the books.

Fixbeat,

I have to wonder about these dismissive comments about Chatgpt. They really don’t align with what I have experienced. Sure, it requires some guidance to get good results, but I have seen it generate some impressive things.

themoonisacheese,

ChatGPT does not have an understanding of the world. It’s able to guess book titles because book titles have a format and its training data had book lists in it.

Not_mikey,

You could make the same case for you not understanding anything outside of your experiential knowledge. You are only able to name COVID variants because you read/heard about them somewhere. Fundamentally, any idea outside of your experience is just a nexus of linked concepts in your head. For example, COVID omicron is a combination of your concept of COVID along with an idea of it being a mutation/variant and it being more contagious, maybe added in with a couple other facts you read about it. This linking of ideas forms your understanding, and chatgpt is able to form these connections just as well as a person. Unless you want to make a case that understanding necessitates experience, chatgpt understands a lot about the world. If you make that case, though, then you don’t understand evolution, microbiology, history, etc.: anything that you just read about in your training data.

ndru, (edited ) in I asked ChatGPT to recommend me some scifi books... Most recommendations where invented and don't exist

I’m possibly just vomiting something you already know here, but an important distinction is that the problem isn’t that ChatGPT is full of “incorrect data”; it’s that it has no concept of correct or incorrect, and it doesn’t store any data in the sense we think of it.

It is a (large) language model (LLM) which does one thing, albeit incredibly well: output a token (a word or part of a word) based on the statistical probability of that token following the previous tokens, based on a statistical model generated from all the data used to train it.

It doesn’t know what a book is, nor does it have any memory of any titles of any books. It only has connections between tokens, scored by their statistical probability to follow each other.

It’s like a really advanced version of predictive texting, or the predictive algorithm that Google uses when you start typing a search.

If you ask it a question, it only starts to string together tokens which form an answer because the network has been trained on vast quantities of text which have a question-answer format. It doesn’t know it’s answering you, or even what a question is; it just outputs the most statistically probable token, appends it to your input, and then runs that loop.
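That predict-append loop can be sketched in a few lines. The probability table below is entirely invented for illustration (a real LLM replaces `next_token_probs` with a trained network over token IDs), and this uses greedy decoding, always taking the most probable token:

```python
# Stub "model": maps the last two tokens to invented next-token probabilities.
def next_token_probs(tokens):
    table = {
        ("What", "is"): {"a": 0.7, "the": 0.3},
        ("is", "a"): {"book?": 0.9, "cat?": 0.1},
    }
    return table.get(tuple(tokens[-2:]), {"<end>": 1.0})

def complete(prompt, max_steps=10):
    tokens = prompt.split()
    for _ in range(max_steps):
        probs = next_token_probs(tokens)
        best = max(probs, key=probs.get)  # greedy: most probable token
        if best == "<end>":
            break
        tokens.append(best)               # append the token, then loop
    return " ".join(tokens)

print(complete("What is"))  # → "What is a book?"
```

Nothing in the loop checks whether the output is true; it only checks what is probable.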

Sometimes it outputs something accurate - perhaps because it encountered a particular book title enough times in the training data that it is statistically probable it will output it again; or perhaps because the title itself is statistically probable (e.g. the title “Voyage to the Stars Beyond” will be much more statistically likely than “Significantly Nine Crescent Unduly”, even if neither title actually existed in the training data).

Lots of the newer AI services put different LLMs together, along with other tools to control output and format input in a way which makes the response more predictable, or even which run a network request to look up additional data (more tokens) but the most significant part of the underlying tech is still fundamentally unable to conceptualise the notion of accuracy, let alone ensure they uphold it.

Maybe there will be another breakthrough in another area of AI research of which LLMs will form an important part, but the hype train has been running hard to categorise LLMs as AI, which is disingenuous. They’re incredibly impressive non-intelligent automatic text generators.

Not_mikey,

What would be your definition of intelligence, if chatgpt is not intelligent?

My definition would be something along the lines of the ability to use knowledge, ideas and concepts to solve a particular problem. For example if you ask “what should I do if I see a black bear approaching?” Both you and chatgpt would answer the question by using the knowledge that black bears can be scared off to come to the solution “make yourself look big and yell”

The only difference is the type of knowledge available. People can have experiential knowledge, e.g. you saw a guy scare off a bear one time by yelling and waving his arms. Chatgpt doesn’t have that because it doesn’t have experiences. It does have contextual knowledge like us: you read or heard from someone that you can scare off a bear. This type of knowledge, though, is inherently probabilistic; the person who told you could always be giving false information. That doesn’t make you unintelligent for using it, and it doesn’t mean you don’t understand accuracy if it turns out to be false; it’s just that your brain made a guess that it was true that was wrong.

ndru,

Just as a fun example of a really basic language model, here’s my phone’s predictive model answering your question. I put the starting tokens in brackets for illustration only; everything following is generated by choosing one of the three suggestions it gives me. I mostly chose the first, but occasionally the second or third option, because it has a tendency to get stuck in loops.

[We know LLMs are not intelligent because] they are not too expensive for them to be able to make it work for you and the other things that are you going to do.

Yeah, it’s nonsense, but the main significant difference between this and an LLM is the size of the network and the quantity of data used to train it.

p1mrx, in Musk Buys AI.com From OpenAI

If you keep going there accidentally due to muscle memory, try adding ai.com to the My filters tab in uBlock Origin.

FlyingSquid, in Musk Buys AI.com From OpenAI

Oh for fuck’s sake, the company it goes to is called xAI. Fucking X. Again.

sup,

I thought you were kidding

FlyingSquid,

I wish.

MarigoldPuppyFlavors,

He should just go full 90s screen name and call it xXxAIxXx.

Rhaedas, in ChatGPT sees its first monthly drop in traffic since launch

The generic ChatGPT is far too error-prone and limited compared to the many variations of other GPTs out there. It was a fad for those who weren't going to fine-tune a model for a use case that works well, or who aren't doing actual research into better tactics. How many who are knowledgeable on computer systems have moved to smaller locally installed versions that work just as well or better?

Zeth0s,

What local models are you using that are better? Not trying to argue, honest interest

Rhaedas,

There are a number of them now, but I've put the Vicuna 13B one on my Windows side before. Trying to get it on Ubuntu so it can use the GPU, but it's being difficult. Look up TheBloke on github; they have a large selection that can be used through the text-generation web UI.

I may have misspoken saying "better", as it looks like it's a few percentage points below on comparisons. I thought I had seen some local varieties compared that rated higher, though, such as on AI Explained's channel.

Zeth0s,

Thanks! I tried vicuna, but I didn’t find it very good for programming. I will keep searching :)

Rhaedas,

I didn't either, actually. It seems to me that where LLMs excel is in situations where there is a large consensus on a topic, so the training weights hit close to 100%. Anyone who has read through or Googled for answers to programming questions in the various sources online has seen how, among the correct answers, there are lots of deviations which muddy the waters even for a human browsing. Which is where the specialized training versions that hone down and eliminate a lot of the training noise come in handy.

eating3645, in By far my funniest interaction with C(h)atGPT

Lol, catgpt. Reminds me of this: www.catgpt.dog

Pechente,

Haha this site inspired my prompt
