ColesStreetPothole,
@ColesStreetPothole@weatherishappening.network
wordshaper,
@wordshaper@weatherishappening.network

@ColesStreetPothole this is actually not bad? It seems bad because of all the bullshit ChatGPT nonsense, and the fact that for “marketing reasons” anything that vaguely involves neural networks gets labeled “AI”, but using trained neural nets to identify diseases (both from symptoms and from radiology reports) is actually a thing that works pretty well. In some cases it works better than people, and in this specific set of circumstances it’s only going to get better.

ColesStreetPothole,
@ColesStreetPothole@weatherishappening.network

@wordshaper I thought Figure 1, risk vs. function, was interesting.

As for AI-generated diagnoses, if people are 90% accurate in making them, and the people who created the AI are only 90% accurate, then that works out to 81% accuracy. 😆
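(A minimal sketch of the arithmetic behind that quip, assuming the two 90% figures simply compound independently; the replies below explain why real labeling and evaluation workflows don’t actually behave this way.)

```python
# Back-of-the-envelope version of the joke above: if the model were only as
# accurate as the humans who built/labeled it, and the errors compounded
# independently (an assumption for illustration only), you'd get:
human_label_accuracy = 0.90
model_accuracy_given_labels = 0.90

combined = human_label_accuracy * model_accuracy_given_labels
print(f"compounded accuracy: {combined:.0%}")  # -> 81%
```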

wordshaper,
@wordshaper@weatherishappening.network

@ColesStreetPothole you would think that! But it turns out not to be the case — when people evaluate the results from the trained models they tend to be a lot more careful and a lot pickier, and usually there are two separate people doing the evaluation. The end result is that the models are usually better, and absolutely much faster.

The trade-off is they don’t catch what they aren’t trained for. A radiologist might look at a chest X-ray and go “no cancer, but TB” while the model just goes “no cancer.”

ColesStreetPothole,
@ColesStreetPothole@weatherishappening.network

@wordshaper Well, that latter thing is kind of a big deal, isn't it? 🤔

In the end, I hope we never get to a point where AI results—including the raw images and test results—aren't also reviewed by a human.

wordshaper,
@wordshaper@weatherishappening.network

@ColesStreetPothole oh, yeah, test results always need to be something an actual human doctor uses, not a replacement for a human doctor. (Radiologist? Maybe)

But honestly this is one of those cases where the tech can actually be a net positive—if we could integrate reliable image evaluation into the hardware, can you imagine what that would do for small hospitals? Or for a lot of the world, where maybe they could get an X-ray machine or other scanner, but not people to read the images.

kat,
@kat@weatherishappening.network

@ColesStreetPothole @wordshaper yeah, machine learning for expert systems is good actually. the relevant concept here is the “human in the loop”. aka automation is good when we use it to augment human expertise rather than replace it

https://pluralistic.net/2024/04/23/maximal-plausibility/

wordshaper,
@wordshaper@weatherishappening.network

@kat @ColesStreetPothole yeah, we want people in the loop somewhere for quite a while. And when trained neural nets fail they tend to fail differently than people do, so people+image rec does better than either alone.

It’s just… people actually kinda suck at repetitive, complex image recognition tasks. Last I checked, radiologists average about 89% accurate and residents about 76%. Plus people get tired, or distracted, so accuracy decreases as the day goes on.
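(A rough illustration of the “fail differently” point, using the ~89% radiologist figure quoted above and a hypothetical 90%-accurate model, under the simplifying assumption that their errors are independent — an illustration, not a clinical claim.)

```python
# If human and model errors are largely uncorrelated (assumed here for
# illustration), the chance a finding slips past *both* readers is much
# smaller than either one's individual miss rate.
radiologist_accuracy = 0.89   # figure quoted in the thread
model_accuracy = 0.90         # hypothetical model, assumed for illustration

p_both_miss = (1 - radiologist_accuracy) * (1 - model_accuracy)
print(f"radiologist alone misses:       {1 - radiologist_accuracy:.1%}")
print(f"model alone misses:             {1 - model_accuracy:.1%}")
print(f"both miss (independent errors): {p_both_miss:.1%}")  # ~1.1%
```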

wordshaper,
@wordshaper@weatherishappening.network

@kat @ColesStreetPothole we absolutely should be requiring serious, in-depth evaluation of any diagnostic model. It’s just that in most cases doing better than people is doable. And honestly, over time this is a spot where automation can generally beat people — while an image rec system may not recognize TB right now, neither will quite a few doctors, and it’s a lot easier to push out a TB model (or Ebola model, or mesothelioma model, or…) to all your machines than it is to push that training out to all the doctors.

wordshaper,
@wordshaper@weatherishappening.network

@ColesStreetPothole we aren’t too far off from X-ray/CT/MRI/ultrasound machines that spit out a diagnostic report along with the images, and that report will be more accurate than people’s. (Not perfectly accurate, but people are only in the low 90% range for accuracy, so at least better than that.)
