Inside the messy ethics of making war with machines

AI is making its way into decision-making in battle. Who’s to blame when something goes wrong?

With machine-learning-based decision tools, “you have more apparent competency, more breadth” than earlier tools afforded, says Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency. “And perhaps a tendency, as a result, to turn over more decision-making to them.”

silvercove,

War without AI was okay?

lasagna,
@lasagna@programming.dev avatar

Since when do we blame the responsible people when things go wrong?

Haui,
@Haui@discuss.tchncs.de avatar

Pretty easy: first the organization that deploys such technology, then the person(s) who decided to use it, and lastly its manufacturer.

Prison, death, or financial sentences should be ascribed in that 3/2/1 order.

livus,
livus avatar

@stopthatgirl7 I stumbled on this because of your post earlier today about AI. You got me wondering how Palantir's war-waging AI is getting on.
