V0ldek,

Surely the task of reviewing something written by an AI that can’t be blindly trusted, a task that basically requires you to know what said AI is “supposed” to write in the first place to be able to trust its output, is bound to always be simpler and result in better work than if you sat down and wrote the thing yourself.

This is only semi-related but.

When I quit Microsoft last year they were heavily pushing AI into everything. At some point they added an automated ChatGPT nonsense “summary” to every PR you opened. First it’d edit the description to add its own take on the contents, and then it’d add a review comment.

Anyone who has had to deal with PR review knows it can be frustrating. This made it so that right off the bat you would have to deal with a lengthy, completely nonsensical review that missed the point of the code, asked for terrible “improvements”, or straight up proposed incorrect code.

In effect it made the process much more frustrating and time-consuming. The same workload was still there, plus you had to read the equivalent of a 16-year-old who thinks he knows how software works explaining your work to you badly. And since it’s a bona fide review comment, you have to address it and close it. Absolutely fucking kafkaesque.

Forcing humans to read and clean up AI-regurgitated nonsense should be a felony.
