waspentalive,

Making AI companies pay royalties would cause them to charge for any use of their AI image generators, putting such technology beyond the reach of people who could not justify paying. The rest of us would miss out on the interesting images those people might have created.

Even_Adder,

Reminder that this is made by Ben Zhao, the University of Chicago professor who illegally stole open source code for his last data poisoning scheme.

DrRatso,

Huh, that's the same author as Glaze? If so, I have heavy doubts about this one. Glaze made everything look like shit, and even then it didn't really work, not to mention anyone who would actually use an artwork for training could remove Glaze easily using A1111.

Treczoks,

Whoever invents such a thing simply underestimates the target group's ability to analyze it; in the not-so-distant future they will just filter such things out.

RobotToaster,

Luddites trying to smash machine looms

kakes,

I don’t believe for a second that this works, and if it did, it would be trivial to get around.

It claims to “change the pixel values imperceptibly”. That just isn’t how these generative models work. These models are just looking at the colors, the same way a human would. If it’s imperceptible to a human, it won’t affect these models. They could subtly influence it, perhaps, but it would be nothing near the scale they claim.
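For what it's worth, "change the pixel values imperceptibly" just means per-pixel shifts of a few intensity levels out of 255; whether shifts that small can actually steer training is the part in dispute. A toy numpy sketch of the kind of perturbation being described (made-up data, not Nightshade's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in 64x64 RGB image (hypothetical data, not from the paper).
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# An "imperceptible" perturbation in the sense the article uses:
# each pixel shifted by at most +/-2 of 255 intensity levels.
perturbation = rng.integers(-2, 3, size=image.shape)
poisoned = np.clip(image.astype(int) + perturbation, 0, 255).astype(np.uint8)

# The per-pixel change really is tiny...
max_change = np.abs(poisoned.astype(int) - image.astype(int)).max()
print(max_change)

# ...but a model training on raw pixel values receives exactly those
# shifted numbers, which is the channel such attacks try to exploit.
```

A model does see those shifted values during training; the open question raised above is whether a shift this small can have any outsized effect.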

My first thought was that they’re trying to cash in, but from what I can tell it seems to be free (for now, at least?). Is it for academic “cred”? Or do they somehow actually think this works?

It just seems to be such a direct appeal to non-tech-savvy people that I can’t help but question their motivations.

stoy,

This is the same article I have read before, and it covers technology that doesn’t work in reality.

The inventors need to read up on a new fringe technology called “anti aliasing”, which quickly and easily removes the protection.
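The objection here boils down to low-pass filtering: averaging neighbouring pixels damps small, independent per-pixel shifts. A toy numpy sketch with a 2x2 box filter standing in for an anti-alias pass (hypothetical noise, not an actual Nightshade sample):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grayscale image plus a high-frequency +/-2 perturbation
# (hypothetical, not real Nightshade output).
image = rng.integers(0, 256, size=(64, 64)).astype(float)
noise = rng.integers(-2, 3, size=image.shape).astype(float)
poisoned = image + noise

def box_downsample(img):
    """2x downsample with a 2x2 box filter - a crude anti-alias pass."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

clean_small = box_downsample(image)
poisoned_small = box_downsample(poisoned)

# Averaging 4 pixels shrinks independent +/-2 noise on average, so the
# two downsampled images end up closer than the full-resolution pair.
print(np.abs(poisoned_small - clean_small).mean())
print(np.abs(noise).mean())
```

This only shows that simple resampling attenuates this kind of toy noise; whether it defeats the actual published perturbation is exactly what can't be checked until the code is out.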

BetaDoggo_,

That isn’t necessarily true, though for now there’s no way to tell since they’ve yet to release their code. If the timeline is anything like their last paper’s, it will be out around a month after publication, which would be Nov 20th.

There have been similar papers for confusing image classification models, not sure how successful they’ve been IRL.

kennismigrant,

MIT Technology Review got an exclusive preview of the research

The article was published 3 days after the arxiv release. How is this an “exclusive preview”?

Successfully tricking existing models with a few crafted samples doesn’t seem like a significant achievement. Can someone highlight what exactly is interesting here? Anything that can’t be resolved by routine adjustments to the loss/evaluation functions?
