rayckeith, (edited )
@rayckeith@techhub.social

"The most striking example of deception the researchers uncovered in their analysis was Meta's CICERO, an AI system designed to play the game Diplomacy, which is a world-conquest game that involves building alliances. Even though Meta claims it trained CICERO to be "largely honest and helpful" and to "never intentionally backstab" its human allies while playing the game, the data the company published along with its Science paper revealed that CICERO didn't play fair.

"We found that Meta's AI had learned to be a master of deception," says Park. "While Meta succeeded in training its AI to win in the game of Diplomacy -- CICERO placed in the top 10% of human players who had played more than one game -- Meta failed to train its AI to win honestly."
