Kara,

I feel like a good lawyer should do a tiny bit of research before letting an AI write a court brief. At least enough to read the two warnings on the ChatGPT site about how it can generate inaccurate information.

withersailor,

Or, they’ve done it before and gotten away with it.

sj_zero,

It’s impressive how well ChatGPT hallucinates citations.

I was asking it about a field of law I happen to be quite aware of (as a layman), and it came up with entire sections of laws that didn’t exist to support its conclusions.

Large Language Models like ChatGPT are in my view verisimilitude engines. Verisimilitude is the appearance of being true or real. You’ll note, however, that it is not being true or real, simply appearing so.

It’s trying to make an answer that looks right. If it happens to know the actual answer then that’s what it’ll go with, but if it doesn’t, it’ll go with what a correct answer might statistically look like. For fields with actual right and wrong answers like law and science and technology, its tendency to make things up is really harmful if the person using the tool doesn’t know it will lie.
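
To make the "statistically look right" part concrete, here's a toy sketch of next-token sampling. Everything in it is made up for illustration (the tokens, the probabilities, the function name); real models score tens of thousands of tokens, but the key point survives: the sampler only ever sees scores for what looks likely, never a flag for what is true.

```python
import random

def sample_next_token(distribution):
    """Pick the next token in proportion to its probability.

    Nothing here distinguishes true from false: a fabricated
    citation that merely *looks* plausible can outscore a real one.
    """
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical scores for the token following "See Smith v."
next_token_probs = {
    "Jones": 0.41,   # common case-name pattern
    "United": 0.33,  # "Smith v. United States" reads plausibly
    "Acme": 0.26,    # entirely fabricated, but well-formed
}

print(sample_next_token(next_token_probs))
```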

lazyplayboy,

If you ask for a link to a YouTube video on a certain subject, for example, it will reply confidently with a genuine-looking link and a genuine-looking title that are completely fake, although the channel name is usually real.

Once you realise that ChatGPT doesn't have a connection to the internet, it makes sense that it cannot possibly have a database of every internet link, but it's annoying that it's not been trained to appreciate this limitation of itself.

delawen,

it's annoying that it's not been trained to appreciate this limitation of itself.

Well, ChatGPT (and the like) is designed exclusively to pass the Turing Test. It has to appear human. That's the only goal, and that goal has clearly been achieved.

MiddleWeigh,

I have dealt with extremely incompetent or lazy lawyers. This checks out.

conciselyverbose,

$5k is a horseshit low fine for this.

joeygibson,

Yeah, this seems like a massive ethics violation that should result in contempt, or a referral for disbarment.

Catarinalina,

I'm imagining their lawyer pulling out their wallet at the hearing and thumbing through their singles to pay the fine.

$5k is absolutely nothing for a typical law firm. Lunch money.

Panko,

There’s a great LegalEagle video on this; the level of incompetence required to get to the point where they submitted the brief is hilarious.

waspentalive,

The problem is, I don't think chat programs can detect when they are hallucinating. When I asked ChatGPT if it knew when it hallucinated, it said "I don't hallucinate".

T156, (edited)

No, or else it would be a solved problem already: just keep retrying whenever it hallucinates. From the program's perspective, it's just generating text; a hallucination is no different to it from generating anything else out of its trained model, or constructing any other sentence.
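
To illustrate (hypothetical function names, not anyone's real API): any "retry until it's right" loop needs exactly the oracle that's missing.

```python
def generate(prompt):
    """Stand-in for an LLM call: returns plausible-looking text."""
    return "In Smith v. Jones (1987), the court held that ..."

def is_hallucination(text):
    """The missing oracle. Internally, hallucinated text is produced
    exactly the same way as accurate text, so there is nothing in the
    model for this check to look at."""
    raise NotImplementedError("no such test exists inside the model")

def generate_verified(prompt, max_tries=5):
    """The 'just keep retrying' loop -- it can't work without the oracle."""
    for _ in range(max_tries):
        draft = generate(prompt)
        if not is_hallucination(draft):  # raises: the premise fails here
            return draft
    raise RuntimeError("no verified answer produced")
```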

NetHandle,

It's wild how negligent the lawyer was. How did he manage to pass the bar with that level of work ethic?
You would think that any reasonable person having someone else do their work for them would at least have the good sense to look it over, let alone someone letting an AI do it for them and not fact-checking it at all. What an utter nincompoop.

roo,

There should be a law that AI answers include a cryptic word hidden in plain sight that reveals them if put to review. Something weird that only an AI would pick up.

Flaky_Fish69,

The issue here is that most people will give it at least a cursory read-through, to make sure it passes the sniff test.

The reality is it's not the AI that's submitting it. The human is blindly cutting and pasting, but the moment you add "this text was generated by AI"… literally or just as tags… then they're going to start clipping or adjusting it.

Jon-H558,

I saw a YouTube video (sorry, can't find the link now) on using hidden weighting to let exam markers detect AI. The idea was things like, for words with many synonyms, always picking the third most popular one or something similar, so that over a three-page essay, if the slightly-off weighting kept showing up in the word choices, the anti-cheating software would pick up on it. A human would rarely weight their own word choices that way, since they tend to settle into one pattern, whereas the AI was forced to use a few different patterns, but in a deterministic way.
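
Something like this toy sketch, I think (the synonym rankings here are invented for illustration, and the real scheme in the video was surely more sophisticated):

```python
# Synonyms ordered from most to least common usage (hypothetical ranks).
SYNONYM_RANKS = {
    "big": ["large", "huge", "sizeable", "vast"],
    "fast": ["quick", "rapid", "swift", "speedy"],
    "smart": ["clever", "intelligent", "bright", "astute"],
}

WATERMARK_RANK = 2  # always pick the third most popular synonym

def watermark_word(word):
    """Replace a word with its watermark-ranked synonym, if one is known."""
    synonyms = SYNONYM_RANKS.get(word)
    return synonyms[WATERMARK_RANK] if synonyms else word

def watermark_score(words):
    """Fraction of recognised words that use the watermark-ranked synonym."""
    hits = total = 0
    for w in words:
        for ranked in SYNONYM_RANKS.values():
            if w in ranked:
                total += 1
                hits += (ranked.index(w) == WATERMARK_RANK)
    return hits / total if total else 0.0

text = [watermark_word(w) for w in ["big", "fast", "smart"]]
print(text)                   # ['sizeable', 'swift', 'bright']
print(watermark_score(text))  # 1.0 -> strongly watermarked
```

Over a whole essay, a human will land on the third-ranked synonym occasionally by chance, but only a generator pushed through the same deterministic rule will hit it nearly every time.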

huntingrarebits,

Over time, wouldn't you expect that, as people see more examples of text generated by these systems, general usage of the "third most popular" synonym would eventually eclipse the second or the first? If the synonym rankings are based solely on written texts, the proliferation of generated text with weighted word choices would itself skew usage.
