catreadingabook,

This has happened before, like 30 years ago. In Winter v. G.P. Putnam's Sons, 938 F.2d 1033 (9th Cir. 1991), the court ruled that the publisher couldn't be sued for selling a guidebook that misled a reader into eating an extremely poisonous mushroom.

I can't find anything about the authors in that case (I think Colin Dickinson and John Lucas?) ever getting sued, probably because they were in Britain, so US courts couldn't get jurisdiction over them the way they could over the publisher, which did business in the US.

Might be time for a change in the law.

Chromebby,

This is actually really scary and problematic

CADmonkey,

“Chat-gpt, how can I avoid psychedelic mushrooms?”

reev,

At least those won’t kill you

great_site_not,

Sure they will, if they’re actually something else.

Swedneck,

You shouldn’t be trusting random online books anyway; you shouldn’t even be trusting random physical books.

Only trust books with a reputation, or that you can verify are authored by someone trustworthy.

Aganim,

And even with a good guide: always be very careful. In my country we are seeing a rise in cases where migrants end up dead or with severe organ damage because they confuse our indigenous deadly mushrooms with edible mushrooms from their home country. And it’s hard to blame them; it turns out that some species look so much alike that even experts have trouble telling them apart.

So be careful out there and make sure your guide is suited for the area you’re in.

Nougat,

FFS it's not AI, it's large language modelling. There is nothing "intelligent" about that.

Mr_Blott,

Found the Linux user

SoNick,

I think you mean GNU/Linux

Nougat,

On what basis?

icesentry,

That ship has sailed unfortunately

snooggums,

And hover boards don't even hover!

CrayonRosary,

I’ve seen this same comment 100 times, and it’s less intelligent than an LLM.

If you talk to actual computer scientists studying these things, they call it intelligence. It doesn’t matter how stupid it is sometimes. All intelligences make mistakes, even confident mistakes. It doesn’t matter that it’s “just a language model”. Intelligence, when reasonably defined, includes algorithms that produce intelligent output. LLMs absolutely produce intelligent output. What LLMs are not is general artificial intelligence, and they’re not sapient, but they are still intelligent.

LoafyLemon,

You are the second person I've met that understands the difference between AI and AGI. It might not mean much to you, but it means a lot to me. I found my people!

Dashmaybe,

The problem lies in how laymen interpret the phrase “intelligence”. I fully understand and agree with your argument in respect to the technical definition and usage, but laymen will not care enough to understand that, and they’re creating dangerous situations because of it.

Immersive_Matthew,

This and many more issues heading our way are why we will need our own locally run AI agent to vet and warn us.

platysalty,

Off topic, but wow, a community for foraging

Stinkywinks,

Be cool if it came with a few examples

average650,

I think this is simply a crime.

If not, then I’m certain they would lose a civil case if someone got hurt because of this.
