shoq,
@shoq@mastodon.social avatar

I am quite confident that someone is designing an LLM right now that will be able to help patients boil down all the web advice into usable chunks. Right now it’s just maddening trying to do that manually. Opinions vary so widely, often because there’s just a lot of obsolete data out there.

CurtAdams,
@CurtAdams@urbanists.social avatar

@shoq Oh you sweet summer child. They're actually designing LLMs that will chop suey all those existing contradictory recommendations into thousands of NEW recommendations mixing fact, fiction, and errors with minimal to no source traceability, boosted by SEO to push any genuine recommendations, accurate or otherwise, to page 15 or below of search results.

We will soon be forced to go to primary sources to get anything meaningful.

shoq,
@shoq@mastodon.social avatar

@CurtAdams that’s certainly the popular take, and it’s certainly going to be true for a lot of the thrashing about that’s about to happen, but I don’t think it’s a fair description of the potential of AI to sort out a lot of information that doesn’t need a high degree of accuracy to be useful.

CurtAdams,
@CurtAdams@urbanists.social avatar

@shoq That's exactly the problem. Machine learning works by training the machine on the contrast between low-quality and high-quality data. The problem with anything under study or under controversy is that there is no high-quality data. Sometimes there's no consensus, and even consensus is often wrong.

And insofar as you have accurate info to train the machine with, you'd be better off just releasing that rather than a complex AI that will produce somewhat less accurate paraphrases.

shoq,
@shoq@mastodon.social avatar

@CurtAdams yes, but I use those paraphrases every day and find them extremely useful. I’m not going to bet the farm on them or trust them to fly my space shuttle, but they sure are useful for outlining ideas and strategies and summarizing existing approaches, no?

michaelgemar,
@michaelgemar@mstdn.ca avatar

@shoq I’m sure AI can help, but I’m dubious that large language models on their own are the appropriate approach.

shoq,
@shoq@mastodon.social avatar

@michaelgemar LLMs, ML, whatever acronym you choose, AI is now attracting investment dollars like nothing since the PC revolution itself. That kind of money advances the state of the art pretty fast. There are already products out there, like poe.com, that configure ChatGPT to do all kinds of useful things.
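
For instance, the "configuration" a service like poe.com applies is largely a pinned-down system prompt. Here is a minimal sketch using the OpenAI Python SDK, where the model name and prompt wording are assumptions for illustration, not anything Poe actually ships:

```python
# Sketch of prompt-level "configuration" in the style of a Poe bot: a fixed
# system prompt turns a general chat model into a narrow assistant.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment;
# the model name below is a placeholder and may need updating.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You summarize post-surgery recovery advice. Always report the full "
    "range of opinions found and flag where sources disagree."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "When can I bear weight after toe surgery?"},
    ],
)
print(response.choices[0].message.content)
```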

shoq,
@shoq@mastodon.social avatar

For example, my toe surgery is extremely common, but even among some of the best medical organizations in the world, the recommended time before you can bear weight on your toe ranges from immediately, to 4 to 6 weeks, and occasionally 8 to 12. LLMs should be able to synthesize these pretty effectively.
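
A minimal sketch of the synthesis step this imagines, assuming the conflicting recommendations have already been extracted from the sources; the clinic names are hypothetical placeholders and the ranges simply mirror the spread quoted above:

```python
# Toy synthesis of conflicting weight-bearing advice. Source names are
# hypothetical placeholders; the ranges mirror the spread described above.
from statistics import median

# (min_weeks, max_weeks) before weight-bearing, per hypothetical source
recommendations = {
    "clinic_a": (0, 0),   # "immediately"
    "clinic_b": (4, 6),
    "clinic_c": (8, 12),
}

lows = [lo for lo, _ in recommendations.values()]
highs = [hi for _, hi in recommendations.values()]
midpoint = median((lo + hi) / 2 for lo, hi in recommendations.values())

print(f"{len(recommendations)} sources disagree:")
print(f"  earliest advice: {min(lows)} weeks; most cautious: {max(highs)} weeks")
print(f"  median of range midpoints: {midpoint} weeks")
```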

grumpasaurus,
@grumpasaurus@fosstodon.org avatar

@shoq you may find yourself confronted with "5 in 8 dentists approve" type advertisements

shoq,
@shoq@mastodon.social avatar

@grumpasaurus no, because that’s precisely what it would be designed to weed out.

argv_minus_one,
@argv_minus_one@mstdn.party avatar

@shoq

LLMs do not weed out misinformation; they amplify it. They will not only not do what you want, but do the exact opposite of what you want.

@grumpasaurus

shoq,
@shoq@mastodon.social avatar

@argv_minus_one @grumpasaurus you could be right; this is certainly not my field. But what I’ve witnessed and found useful tells me that at least some degree of pre-filtering would be possible. Probably with a lot of hype at first, but over time they’ll get better and better.

argv_minus_one,
@argv_minus_one@mstdn.party avatar

@shoq

Not without a dramatic paradigm shift, it won't.

Current “AI” only says what sounds like a plausible answer to your question. It's a bulls--t generator. It's shockingly good at that, but that's all it does. It will never be a useful tool for distinguishing truth from falsehood.

You know how pathological liars say whatever they think you want to hear? That's “AI”. You wouldn't rely on a pathological liar for medical advice, would you?

@grumpasaurus

argv_minus_one,
@argv_minus_one@mstdn.party avatar

@shoq

Note that expert systems are a thing https://en.wikipedia.org/wiki/Expert_system and they can be useful for solving problems like yours. They are not LLMs, though; they model knowledge (e.g. a big list of known health conditions, and a list of each one's symptoms), not language.

And even they are only as good as their input data, so scouring the web for data from which to build an expert system still won't work. You need high-quality data from actual doctors, and that's not cheap.

@grumpasaurus
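
A toy illustration of that knowledge-modeling idea, with a made-up rule base; the conditions and symptoms below are placeholders, not medical data:

```python
# Tiny rule-based "expert system" sketch: the knowledge is an explicit table
# of conditions and their symptoms, not learned weights. Entries are made up.
KNOWLEDGE_BASE = {
    "condition_a": {"fatigue", "headache", "cough"},
    "condition_b": {"fatigue", "joint pain"},
    "condition_c": {"cough", "fever", "headache"},
}

def rank_conditions(observed):
    """Score each condition by the fraction of its known symptoms observed."""
    scores = [
        (name, len(observed & symptoms) / len(symptoms))
        for name, symptoms in KNOWLEDGE_BASE.items()
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

for name, score in rank_conditions({"fatigue", "headache"}):
    print(f"{name}: {score:.0%} of known symptoms matched")
```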

shoq,
@shoq@mastodon.social avatar

@argv_minus_one @grumpasaurus that’s certainly the argument I hear a lot, and I really can’t argue against it, but my hunch is that it doesn’t recognize the value of LLMs in making good use of such expert systems. I’m in no way suggesting that current LLMs would do this.

shoq,
@shoq@mastodon.social avatar

@argv_minus_one @grumpasaurus, but that doesn’t mean responses like this are not as useful as any document you could find on the web after about 6 hours of trawling for one. It’s an instant outline of all the things to discuss with my doctor. Even if it’s a little wrong in places? So what?

https://chat.openai.com/share/d90a4b39-1064-4430-abf2-969dff4dd963

argv_minus_one,
@argv_minus_one@mstdn.party avatar

@shoq

“A little wrong in places” is one hell of an understatement for what amounts to random noise. Unless you fact-check everything on your outline before you present it to your doctor, all you're doing is wasting your doctor's time and the money you paid for your appointment.

I can't see whatever your link points to, sorry. With JavaScript disabled, the page is blank.

@grumpasaurus

shoq,
@shoq@mastodon.social avatar

@argv_minus_one @grumpasaurus people still disable JavaScript?

shoq,
@shoq@mastodon.social avatar

@argv_minus_one @grumpasaurus I’ve spent two full days researching this very information, and there isn’t a single thing that did not line up with everything else I’ve read. So from my perspective, it’s a perfectly fine outline to discuss with my doctor. He doesn’t expect me to be an expert or have expert sources. I don’t see anything wrong with this so long as the user is fully aware that flaws can range from trivial to catastrophic.

grumpasaurus,
@grumpasaurus@fosstodon.org avatar

@argv_minus_one @shoq yeah, for what it's worth, I think LLMs work best when trained on and aimed at very targeted sets of data, like enterprise knowledge bases. Some interfaces now also explicitly cite the source data most relevant to a response (Notion does this).

Like a specific subreddit, for example, not ALL of Reddit; and, knowing that, you know for certain that you're in the context of that subreddit. I think LLMs right now can do wonders with Stack Exchange, uh, "stacks"?
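
A minimal sketch of that cite-your-sources pattern, with naive keyword overlap standing in for a real embedding search and a placeholder knowledge base; nothing here reflects Notion's actual implementation:

```python
# Retrieval-with-citation sketch: rank documents from a narrow, known corpus
# and attach their sources, so answers stay traceable. All data is placeholder.
KNOWLEDGE_BASE = [
    {"id": "kb-1", "source": "wiki/deploys", "text": "Deploys run weekdays at 09:00 UTC via the release pipeline."},
    {"id": "kb-2", "source": "wiki/oncall", "text": "The on-call rotation swaps every Monday morning."},
    {"id": "kb-3", "source": "wiki/deploys", "text": "Hotfix deploys need two approvals and skip the canary stage."},
]

def retrieve(query, k=2):
    """Rank documents by shared word count with the query.
    Toy scoring; a real system would use embeddings."""
    q_words = set(query.lower().split())
    return sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )[:k]

def build_cited_context(query):
    """Assemble the context an LLM would answer from, sources attached."""
    hits = retrieve(query)
    # A real system would send this context plus the query to the model and
    # require it to cite the bracketed ids in its answer.
    return "\n".join(f"[{d['id']}, {d['source']}] {d['text']}" for d in hits)

print(build_cited_context("when do hotfix deploys run"))
```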

shoq,
@shoq@mastodon.social avatar

@grumpasaurus @argv_minus_one that’s exactly my point. There is a huge market need for things that effectively filter very narrow contexts with a reasonable degree of accuracy.

grumpasaurus,
@grumpasaurus@fosstodon.org avatar

@shoq @argv_minus_one the problem is, for many use cases in which data is grabbed out in the wild, what are the criteria to determine what is true? As it stands, it's effectively "5 dentists out of 8 say...". In the end, what is the governance of truth on the Internet?

shoq,
@shoq@mastodon.social avatar

@grumpasaurus @argv_minus_one but people face that same conundrum with everything they have to ask about today? You won’t convince me that a chat AI summarizing all the options of what might be right or wrong is not valuable and worthwhile, because it just is.

grumpasaurus,
@grumpasaurus@fosstodon.org avatar

@shoq @argv_minus_one I mean. That's cool. Yeah people do treat things they see on the Internet as truth as much as people are treating LLMs as truth. It sucks.

meltedcheese,
@meltedcheese@c.im avatar

@grumpasaurus @shoq @argv_minus_one yes, and they cannot conceive why this is wrong. Their trust is misplaced. Making good decisions requires knowledge and the ability to reason about how to use that knowledge to solve real-world problems (borne of experience). These attributes are fundamental to expertise. All the LLMs, search engines, aggregators and data mining will not get there without the conceptual skills possessed by true experts.

shoq,
@shoq@mastodon.social avatar

@meltedcheese @grumpasaurus @argv_minus_one I guess all I’m arguing is not whether LLMs are anywhere close to perfect tools, or able to replace human intelligence, but whether someone needs a perfect tool if the aim is simply to parse reasonably good data from piles of utterly meaningless data. Does one need a perfect AI tool to give someone the 10 most common symptoms of toxic mold and a rundown of the most common remedies for it? In under 1 second? Because that alone is a win, imo.

grumpasaurus,
@grumpasaurus@fosstodon.org avatar

@argv_minus_one @shoq but when it's like "ask me anything" and it's "I'm trained on everything never you mind" that's when LLMs fall apart

shoq,
@shoq@mastodon.social avatar

@grumpasaurus @argv_minus_one as my brother, the Clio-winning copywriter and creative director, put it: “I’m not saying chat AI writes better copy than I do. I’m saying it writes better than 90% of the copywriters I’ve ever worked with.”
