Renegade,

Pure speculation, but I wonder if this is a case of having some old, very low-quality photos and trying to enhance and upscale them for the show.

Jimmycrackcrack,

I’ve done that for broadcast before. Sadly it barely made any difference, but I felt it was at least a little better than nothing. It made it possible to sort of see what was supposed to be going on in the low-quality source images, and those images were the only ones that seemed to exist of the thing we were showing.

stevedidwhat_infosec,

I’ve said this since day one - we need a reliable way to identify AI-generated content.

If we fail to separate the two, or to create safeguards like this, we’re in a lot more trouble than the destruction of the job market would cause. And that’s saying something.

“Put it back in the box” isn’t a solution.

Banning the technology isn’t a solution.

We must face it for what it is, put our heads together, and create the solution.

Like we always have.

CommanderCloon,

You don’t understand that tech. When making an AI model, you do code both a generator of whatever it is you want to make and a “detector” which tells you whether or not the result is convincing.

Then you change the generator slightly based on the results of the “detector”.

You do that a few million times and then you have a trained AI model, the quality of which depends on both the quantity of training and the “detector”.

If someone comes up with a really strong “detector”, it will work as intended for a few days or weeks, and then AIs will come on the market that are able to fool the detector.

stevedidwhat_infosec,

I’ve trained and written several different kinds of AI, including neural nets and LLMs.

This isn’t even close to how LLMs work, let alone how AI works.

You’re literally describing how to overfit model data, which is the exact opposite of what you want to do.

Do everyone else a favor next time and don’t try to armchair.

CommanderCloon,

I don’t know which kinds of AI you’ve worked on, but my description (although it uses the incorrect terms) is certainly valid. I’ve described how GANs work; I’m not pulling this out of thin air 🤷‍♂️

The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network’s training objective is to increase the error rate of the discriminative network (i.e., “fool” the discriminator network by producing novel candidates that the discriminator thinks are not synthesized (are part of the true data distribution)).

Wikipedia
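
For reference, that “contest” is usually written as the minimax objective from the original GAN paper, with generator G, discriminator D, real-data distribution p_data, and noise prior p_z; D tries to maximise it, while G tries to minimise it by fooling D:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```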

So yes, whatever method you design that allows the product of an AI to be detected can be used as the discriminative network of a GAN, which defeats the purpose of designing the method to begin with.
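
To make that concrete, here is a minimal sketch of the loop (assuming PyTorch; the toy 1-D “real” distribution and layer sizes are purely illustrative, not anyone’s actual detector or model):

```python
# Minimal GAN sketch (assumes PyTorch; the toy 1-D "real" data and layer sizes
# are purely illustrative). The discriminator plays the role of the "detector":
# its verdicts are the only training signal the generator needs.
import torch
import torch.nn as nn

LATENT_DIM = 8

generator = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for "real photographs": samples from a normal distribution centred on 4.
    return torch.randn(n, 1) + 4.0

for step in range(5000):
    # 1) Train the discriminator ("detector") to separate real from generated samples.
    real = real_batch()
    fake = generator(torch.randn(64, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool that detector: it is rewarded when the
    #    detector labels its output as real.
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the "real" mean of 4.
print(generator(torch.randn(1000, LATENT_DIM)).mean().item())
```

Swap the toy discriminator for any published AI-image detector and the generator’s training signal comes straight from that detector, which is exactly the arms-race problem being described.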

stevedidwhat_infosec,

Apologies for the ignorant comment. While GANs have lost popularity in favor of diffusion models, they’re still used more or less.

Been having a really shit day and I took it out on you - that wasn’t fair

CommanderCloon,

No worries, I appreciate your apology

GBU_28,

Unfortunately, an arms race has begun.

Said tool could be used to train new AI to avoid it.

SomeGuy69,

If you ever create a reliable tool to identify AI images, you automatically provide training data for AI to generate images that get past the detection.

halcyoncmdr,

There’s no reason to even hide this. If there’s no photo to use, in the past they would have had an artist’s rendition; this is no different.

A disclaimer that it is AI-generated, like an artist’s rendition, would not detract from the impact at all.

adam_y, (edited)

I think it is a question of representation.

If they say what this is, then fine; if they don’t, then it’s a problem.

The reason being that an artist’s rendition is almost always clearly an artist’s rendition, whereas AI imagery can look uncannily like an actual photograph, and therefore present itself as a primary document.

The problem with misrepresenting primary documentation, whether deliberately or accidentally, is that this is supposed to be a documentary, one of the few show types where fact and accuracy (should) matter.

antidote101,

To think this is what companies are trying to get away with whilst the technology is still flawed enough to be caught. As it gets more accurate in what it can create, we’re going to have less of a realistic understanding of reality.

cloud_herder,

But what about the possibility that we are only noticing the bad ones and not the convincing ones?
