SGE, ChatGPT, and the like are the stupidest things to come from AI

This may be an unpopular opinion… Let me get this straight: we get big tech corporations to read articles across the web and then summarize, for me, the user, the info I'm looking for. Sounds cool, right? Yeah, except why in the everloving duck would I trust Google, Microsoft, Apple, or Meta to give me info that's correct, unbiased, and not curated? Past experience consistently shows that they will not do the right thing. So why is everyone so OK with what's going on? I just heard that Google may intend to remove sources. Great, so it's basically "trust me bro".

ThrowawayPermanente,

Agreed. Show me your sources, I don’t trust your executive summary.

Kolanaki, (edited)

The only thing I have found actually useful about them is that I can play tabletop RPGs by myself, and it's functionally the same as playing with real people. Right down to arguing over the interpretation of the rules.

xenspidey,

I use LLMs for two things, mainly. First, to help with small coding tasks that are tedious, or when I just need something to bounce ideas off of (hobbyist coder). Second, for asking questions that Google and the like can't answer, like "if the unit of measure is toothpicks, how far is it from the Earth to the Moon?" That kind of thing, or ballpark approximations.

SamB,

I don't deny that AI has its uses. I used it recently to increase the resolution of a video. It's awesome. But when it's used to replace info search, art, music… just why?

xenspidey,

I do like it for art; I enjoy making wallpapers for my phone, or logos. I have a side business that I'll want a logo for at some point. It makes way more sense to get it close with AI and then hand it to an artist to tweak and add the final touches than to go through all the back and forth and expense of a logo company.

eldesgraciado,

How are you sure the model's answers are correct? If I tell you the Moon is 69.420 toothpicks away from Earth, are you going to believe me?

xenspidey,

Sure, maybe it's wrong, but it seems close enough to me.

The distance from Earth to the Moon is approximately 384,400 kilometers, which is about 9,760,000,000 toothpicks laid end to end.
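
For what it's worth, this is easy to sanity-check by hand. A minimal sketch, assuming a standard flat toothpick is roughly 6.5 cm long (the toothpick length is my assumption, not something from the model's answer):

```python
# Rough sanity check of the quoted toothpick figure.
# Assumption: a standard toothpick is about 6.5 cm long (not taken from the model).

EARTH_MOON_KM = 384_400                   # average Earth-Moon distance quoted above
TOOTHPICK_CM = 6.5                        # assumed toothpick length

distance_cm = EARTH_MOON_KM * 100_000     # 1 km = 100,000 cm
toothpicks = distance_cm / TOOTHPICK_CM

print(f"{toothpicks:,.0f} toothpicks")    # roughly 5,900,000,000 with these numbers
```

With that assumed length the count comes out closer to 5.9 billion than 9.76 billion, so the model is within a factor of two; fine for a ballpark, but worth checking before quoting.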

RGB3x3,

You’re right about Google being trash at answering that.

https://lemmy.world/pictrs/image/eab41c6d-1a11-4332-b1e3-95c7a90258c5.png

It just completely ignores the question.

Piatro,

I’ve had this argument with friends a lot recently.

Them: It's so cool that I can just ask ChatGPT to summarise something and get a concise answer rather than googling a lot for the same thing.

Me: But it gets things wrong all the time.

Them: Oh I know so I Google it anyway.

Doesn’t make sense to me.

RunningInRVA,

We also get things wrong all the time. Would you double-check info you got from a friend or coworker? Perhaps you should.

Feathercrown,

I know how my friends and coworkers are likely to think. An LLM is far less predictable.

barsquid,

People like AI because searches are full of SEO spam listicles. Eventually they will make LLMs as ad-riddled as everything else.

SamB,

Then why not use an ad-blocker? It’s not wise to think you’re getting the right information when you can’t verify the sources. Like I said, at least for me, the trust me bro aspect doesn’t cut it.

Cupcake1972,

Ad blockers won’t cut out SEO garbage.

SamB,

And the AI will? It will use all websites to give you the info. It doesn’t think, it spins.

Cupcake1972,

I didn’t say that it will, just saying that ad blockers won’t block it out.

Piatro,

My specific point here was about how this friend doesn’t trust the results AND still goes to Google/others to verify, so he’s effectively doubled his workload for every search.

MostlyGibberish,

This is why I do a lot of my Internet searches with perplexity.ai now. It tells me exactly what it searched to get the answer, and provides inline citations as well as a list of its sources at the end. I’ve never used it for anything in depth, but in my experience, the answer it gives me is typically consistent with the sources it cites.

far_university1990,

LLMs are just autocomplete on steroids. If anyone claims they are more than that, they are lying.

If you want uncensored info, run a local model. But most people do not care, or even know that's an option. That's just how most people are with tech.
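
For anyone wondering what "run a local model" actually involves, here's a minimal sketch. It assumes you have an Ollama server running on its default local port with a model already pulled; the model name below is just a placeholder, and everything stays on your own machine:

```python
# Minimal sketch: query a locally hosted model through Ollama's HTTP API.
# Assumes an Ollama server on localhost:11434 and a model already pulled;
# "llama3" is a placeholder name, swap in whatever model you actually run.
import json
import urllib.request

payload = {
    "model": "llama3",
    "prompt": "How far is it from the Earth to the Moon, measured in toothpicks?",
    "stream": False,          # ask for one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["response"])      # the model's answer; nothing left your machine
```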

PenisWenisGenius,

Uncensored AIs are the best. I can ask them all the immature sex questions I want and never get banned.

SamB,

I mean, sure, as long as you’re keeping the data locally. Otherwise, yikes.

BaroqueInMind,

Which model are you using?

ArtVandelay,

If anyone wants a great source on exactly how ChatGPT is essentially autocomplete on steroids, Stephen Wolfram did a great write-up. It's pretty technical. …stephenwolfram.com/…/what-is-chatgpt-doing-and-w…
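
You can also see the "autocomplete on steroids" idea for yourself in a few lines of code. A minimal sketch, assuming the Hugging Face transformers library and the small, openly downloadable GPT-2 model as a stand-in (ChatGPT itself isn't something you can run locally): it just prints the model's top guesses for the next token of a prompt.

```python
# Minimal next-token demo with GPT-2 (a small, open stand-in for ChatGPT-style models).
# At each step, all the model does is assign probabilities to possible next tokens.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The distance from the Earth to the Moon is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, vocab_size)

# Probability distribution over the very next token, and its top five candidates.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p={float(prob):.3f}")
```

Chat models layer a lot of extra training on top, but the core loop is still "pick a likely next token, append it, repeat", which is why they can sound confident while being wrong.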

lando55,

LLMs are just autocomplete on steroids.

Funny you should say this. I only have anecdotal evidence from me and a few friends, but the general consensus is that autocomplete and predictive text are much worse now than they used to be.

vrighter,

Because of AI stuff. For these kinds of things, they are perfectly happy to advertise unprecedented 99% accuracy rates, when in reality non-AI tools are held to a much higher standard (mainly that they are expected to work). If the code I wrote had a consistent, perpetual 1% failure rate (even after fixing it, multiple times), I'd have been fired long ago.
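
A rough back-of-the-envelope number on why a "99% accurate" tool feels so much worse than it sounds, assuming purely for illustration that each suggestion fails independently:

```python
# Illustration only: with 99% per-suggestion accuracy and independent errors,
# the chance that a whole run of suggestions contains zero mistakes drops fast.
accuracy = 0.99

for n in (10, 100, 1000):
    all_correct = accuracy ** n
    print(f"{n:>5} suggestions: {all_correct:6.1%} chance of zero mistakes")
```

By a hundred suggestions you're already down to roughly a one-in-three chance of a mistake-free run.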
