FaceDeer

@FaceDeer@fedia.io

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and then some time on kbin.social.


FaceDeer,

I'm not overly concerned because I know how to use these things. I know what they do, so when one of them is doing something concerning, I turn it off.

People are frightened of things they don't understand, and it's apparent that lots of people don't understand AI.

FaceDeer,

Yeah, it's not stopping me from commenting. I'm only noting the downvotes because I was making a point elsewhere in the thread about the extremely anti-AI sentiment around here. In this case I'm not even saying anything positive about AI, merely speculating about why Microsoft is doing this, and apparently even that is being interpreted as "justifying" AI and therefore worthy of attack.

FaceDeer,

I was asked what the reason for this function was, so I speculated on that reason in an attempt to answer the question, and I got downvoted for it.

I wasn't addressing the privacy concerns at all. That wasn't part of the question.

FaceDeer,

That just so happens to describe me to a T. I'm a privacy-minded programmer who came here as part of the Reddit exodus. Because I'm a programmer and am aware of how these AIs function, I am not overly concerned about them and appreciate the capabilities they provide to me. I'm aware of the risks and how to manage them.

The comment I was responding to brought up "Linux is better" unprompted. But that's in line with the echo chamber, so I guess that's fine.

FaceDeer,

I don't know what specifically Microsoft is planning here, but in the past I've taken screenshots of my settings window and uploaded them to Copilot to ask for help sorting out a problem. It was very useful for Copilot to be able to "see" what my settings were. Since the article describes a series of screenshots being taken over time, it could perhaps be meant to provide context to an AI so that it knows what's been going on.

FaceDeer,

Copilot has boosted my programming productivity significantly. Bing Chat has replaced Google for conceptual searches (i.e., when I want to learn something, not when I want to find a specific website). I've been using Bing Image Creator extensively for illustrations for a tabletop roleplaying campaign I'm running. I still mostly use Gimp and Stable Diffusion locally for editing those images, but I checked out Paint because of the AI integration and was seriously considering using it. Paint, of all things, a program that's long been considered something of a joke.

FaceDeer,

This thread isn't about websites, it's about functions built into operating systems. Those are generally much more configurable. Microsoft wants corporations to run Windows, after all, and corporations tend to be very touchy about this sort of thing.

FaceDeer,

Well, good news then: that's not what Microsoft is using AI for in this case.

What is the legal copyright status of a Lemmy post?

Most instances don't have a specific copyright clause in their ToS, which is basically how copyright is handled on corporate social media (Meta/X/Reddit takes a license to whatever you post on their platform when you click "Agree"). I've noticed some people including copyright notices in posts (mostly to try to prevent AI use). Is...

FaceDeer,

Yeah, it's unclear whether copyright is even relevant when it comes to training AI. It reminds me a lot of people who feel very strongly about intellectual property but have clearly confused trademarks, patents, copyright, and maybe even regular old property law - they've got an idea of what they think is "right" and "wrong", but it's not closely attached to any actual legal theory.

FaceDeer,

We aren't disagreeing, because that's not what I was addressing in the first place. The comment I'm responding to, from Dave, reads:

In that case probably the strongest argument is that if it were legal, many people would get off charges of real CSAM because the prosecutor can't prove that it wasn't AI generated.

Emphasis added. The premise of the scenario is that possession of such images (i.e., AI-generated CSAM) is not illegal. Given that, for purposes of argument, it follows that this would indeed be a valid defense: in that scenario you'd need to prove in court that the images you're charging someone with possessing are not AI-generated.

If you want to have a wider discussion of whether AI-generated CSAM images should be illegal, that's a separate matter.

FaceDeer,

But it doesn't fully understand "young", and "naked young person" isn't just a scaled-down "naked adult".

Do you actually know that, or are you just assuming it?

Personally, I'm basing my assertions on experience with related situations, where I've asked image AIs to generate images of things that I'm quite sure weren't in their training sets and that require conceptual understanding to create "hybrids." They've done a decent job of those, so I'm assuming they can figure out this specific situation as well, since most of these models have plenty of examples of naked people and of young people in their training sets. But I haven't actually asked any AIs to generate images of naked young people to test this one specific case.

FaceDeer,

3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or they would have been caught earlier. I doubt they had any significant impact on the model's capabilities.
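
For anyone who wants to check that arithmetic, a quick sketch in Python using the figures quoted above:

```python
# Sanity-check the percentage quoted above.
suspected = 3_226
total = 5_800_000_000  # ~5.8 billion images in the training set

fraction = suspected / total
print(f"{fraction:.2e}")         # ~5.56e-07
print(f"{fraction * 100:.5f}%")  # prints 0.00006%
```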

FaceDeer,

Do a Google Image search for "child" or "teenager" or other such innocent terms and you'll find plenty of such images.

I think you're underestimating just how well AI is able to learn basic concepts from images. A lot of people imagine these AIs as some sort of collage machine that pastes together little chunks of existing images, but that's not what's going on under the hood of modern generative art AIs. They learn the underlying concepts and characteristics of what things are, and are able to remix them conceptually.

FaceDeer,

You obviously don't understand squat about AI.

Ha.

AI only knows what has gone through its training data, both from the developers and the end users.

Yes, and as I've said repeatedly, it's able to synthesize novel images from the things it has learned.

If you train an AI with pictures of green cars and pictures of red apples, it'll be able to figure out how to generate images of red cars and green apples for you.
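
If you want to see what I mean in practice, here's a minimal sketch using the open-source diffusers library; the checkpoint and prompts are illustrative choices of mine, not anything specific from this thread:

```python
# Minimal sketch: asking a Stable Diffusion model for concept
# combinations that need not appear verbatim in its training data.
# Requires: pip install torch diffusers transformers accelerate
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The model has seen plenty of cars and plenty of apples; the specific
# color/object pairings below don't have to exist in its training set.
for prompt in ["a red car", "a green apple", "a green car", "a red apple"]:
    image = pipe(prompt).images[0]
    image.save(prompt.replace(" ", "_") + ".png")
```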

FaceDeer,

Yes. You're saying that the AI trainers must have had CSAM in their training data in order to produce an AI that is able to generate CSAM. That's simply not the case.

You also implied earlier on that these AIs "act or respond on their own", which is also not true. They only generate images when prompted to by a user.

The fact that an AI is able to generate inappropriate material just means it's a versatile tool.

FaceDeer,

This comment thread started with you implying that the AI was trained on illegal material; I'm really not sure how it got from there to this point.

FaceDeer,

It's possible to legally photograph young people. Completely ordinary legal photographs of young people exist, from which an AI can learn the concept of what a young person looks like.

FaceDeer,

Well, your philosophy runs counter to the fundamentals of Western justice systems, then.

FaceDeer, (edited)

It's not the specific thing being made illegal that I'm arguing against here, it's the underlying philosophy of "better a dozen innocent men go to prison than one guilty man go free." Most Western justice systems operate under the principle that guilt must be proven beyond a reasonable doubt; if there is doubt, then guilt cannot be considered proven and the person is not convicted.

The comment I'm responding to proposes a situation where non-AI-generated images are illegal but AI-generated ones aren't, and where there's no way to tell the difference just by looking at the image itself. In that situation you couldn't convict someone merely based on the existence of the image, because it could have been AI-generated. That's fundamental to the "innocent until proven guilty beyond a reasonable doubt" philosophy I'm talking about; to do otherwise would mean that innocent people could very easily be convicted of crimes they didn't commit.

FaceDeer,

Well, I haven't gone to any of my image AIs and actually asked them to generate naked pictures of young people, so unless you want to go there this discussion will necessarily involve some degree of theorizing.

However, according to the article it's possible to generate this stuff with Stable Diffusion models, and Stable Diffusion's training set contained only a negligible amount of suspected CSAM. So short of actually running the experiment, that would seem to settle it.

I think a lot of people don't appreciate just how surprisingly sophisticated the "world model" these image AIs have learned is. There was a paper a while back in which researchers analyzing how image generators work internally discovered that if you ask one to make a picture of, say, a bicycle, it first comes up with a depth map of the scene before it starts producing any visual output. That shows the AI has figured out the three-dimensional form of a bicycle based entirely on a pile of two-dimensional training images, with no other clues telling it that a third dimension even exists.
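
I can't reproduce that paper here, but the general technique it used (fitting a linear probe to the model's internal activations) looks roughly like the sketch below; the layer choice and the placeholder depth labels are my own assumptions, not the paper's actual setup:

```python
# Rough sketch of linear probing for depth inside a diffusion model
# (assumptions mine, not the paper's code): capture an internal U-Net
# activation with a forward hook, then fit a linear map from per-pixel
# features to depth labels.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

captured = {}
def hook(module, inputs, output):
    captured["act"] = output.detach().float().cpu()  # keeps the last denoising step

handle = pipe.unet.mid_block.register_forward_hook(hook)  # arbitrary layer choice
image = pipe("a bicycle leaning against a wall").images[0]
handle.remove()

act = captured["act"][0]  # (channels, h, w)
X = act.permute(1, 2, 0).reshape(-1, act.shape[0]).numpy()

# Placeholder depth labels: a real probe would use a monocular depth
# estimator (e.g. MiDaS) run on `image`, resized down to (h, w).
depth = np.random.rand(X.shape[0])
w_probe, *_ = np.linalg.lstsq(X, depth, rcond=None)
predicted_depth = X @ w_probe  # the linear probe's per-pixel depth estimate
```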

FaceDeer,

It understands young and old. That means it knows a kid is not just a 60% reduction by volume of an adult.

We know it understands these sorts of things because of the very things this whole kerfuffle is about - it's able to generate images of things that weren't explicitly in its training set.

FaceDeer,

Image-generating AI is capable of generating images that are not like anything that was in its training set.
