FaceDeer

@FaceDeer@fedia.io

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and then some time on kbin.social.


FaceDeer,

> If AI has the means to generate inappropriate material, then that means the developers have allowed it to train from inappropriate material.

That's not how generative AI works. It's capable of creating images that include novel elements that weren't in the training set.

Go ahead and give one a bonkers image description that doesn't exist in its training data, and there's a good chance it'll be able to make the image for you. The classic example is the "avocado chair": an early image generator was able to produce many plausible images of one despite only having been trained on images of avocados and chairs. It understood the two general concepts and figured out how to meld them into a single coherent depiction.
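
For a concrete sense of what that looks like in practice, here's a minimal sketch using the Hugging Face diffusers library. The checkpoint ID and prompt are just illustrative assumptions on my part; it presumes a CUDA GPU and any Stable Diffusion checkpoint would do:

```python
# A minimal sketch, assuming the diffusers library, a CUDA GPU, and an
# illustrative Stable Diffusion checkpoint ID.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# No photo of an "avocado chair" needs to exist anywhere in the training
# set; the model knows both concepts and blends them at generation time.
image = pipe("an armchair in the shape of an avocado").images[0]
image.save("avocado_chair.png")
```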

FaceDeer,

Image AIs also don't act or respond on their own. You have to prompt them.

FaceDeer,

The trainers didn't train the image generator on images of Mr. Bean hugging Pennywise, and yet it's able to generate images of Mr. Bean hugging Pennywise. Yet you insist that it can't generate inappropriate images without having been specifically trained on inappropriate images? Why is that suddenly different?

FaceDeer,

No, you keep repeating this but it remains untrue no matter how many times you say it. An image generator is able to create novel images that are not directly taken from its training data. That's the whole point of image AIs.

FaceDeer,

The person who was charged was using Stable Diffusion to generate the images on their own computer, entirely with their own resources. So it's akin to a 3D printer manufacturer selling a printer to someone who then uses it to build a gun.

FaceDeer,

Better a dozen innocent men go to prison than one guilty man go free?

FaceDeer,

First, you need to figure out exactly what it is that the "blame" is for.

If the problem is the abuse of children, well, none of that actually happened in this case so there's no blame to begin with.

If the problem is possession of CSAM, then that's on the guy who generated the images, since they didn't exist at any point before then. The trainers wouldn't have needed any such material in the training set, so if you want to blame them you're going to need a completely separate investigation into that; the AI's ability to generate images like that doesn't prove anything on its own.

If the problem is the creation of CSAM, then again, it's the guy who generated them.

If it's the provision of general-purpose art tools that were later used to create CSAM, then sure, the AI trainers are in trouble. As are the camera makers and the pencil makers, as I mentioned sarcastically in my first comment.

FaceDeer,

You suggested a situation where "many people would get off charges of real CSAM because the prosecutor can't prove that it wasn't AI generated." That implies that in that situation AI-generated CSAM is legal. If it's not legal, then what does it matter whether it's AI-generated or not?

FaceDeer,

You realize that there are perfectly legal photographs of female genitals out there? I've heard it's actually a rather popular photography subject on the Internet.

> Do you see where I'm going with this? AI only knows what people allow it to learn...

Yes, but the point here is that the AI doesn't need to learn from any actually illegal images. You can train it on perfectly legal images of adults in pornographic situations, and also perfectly legal images of children in non-pornographic situations, and then when you ask it to generate child porn it has all the concepts it needs to produce novel images of it for you. The fact that it's capable of that does not in any way imply that the trainers fed it child porn in the training set, or had any intention of it being used in that specific way.

As others have analogized in this thread, if you murder someone with a hammer, that doesn't make the people who manufactured the hammer guilty of anything. Hammers are perfectly legal. It's how you used it that's illegal.

FaceDeer,

> You obviously don't understand squat about AI.

Ha.

> AI only knows what has gone through its training data, both from the developers and the end users.

Yes, and as I've said repeatedly, it's able to synthesize novel images from the things it has learned.

If you train an AI with pictures of green cars and pictures of red apples, it'll be able to figure out how to generate images of red cars and green apples for you.

FaceDeer,

This comment thread started with you implying that the AI was trained on illegal material; I'm really not sure how it got from that to this point.

FaceDeer,

Yes. You're saying that the AI trainers must have had CSAM in their training data in order to produce an AI that is able to generate CSAM. That's simply not the case.

You also implied earlier on that these AIs "act or respond on their own", which is also not true. They only generate images when prompted to by a user.

The fact that an AI is able to generate inappropriate material just means it's a versatile tool.

FaceDeer,

Well, your philosophy runs counter to the fundamentals of Western justice systems, then.

FaceDeer,

It's possible to legally photograph young people. Completely ordinary legal photographs of young people exist, from which an AI can learn the concept of what a young person looks like.

FaceDeer,

Do a Google Image search for "child" or "teenager" or other such innocent terms and you'll find plenty of them.

I think you're underestimating just how well AI is able to learn basic concepts from images. A lot of people imagine these AIs as being some sort of collage machine that pastes together little chunks of existing images, but that's not what's going on under the hood of modern generative art AIs. They learn the underlying concepts and characteristics of what things are, and are able to remix them conceptually.
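
A quick back-of-envelope check shows why the collage mental model can't be right: the weights are far too small to be storing the training images. Assuming roughly 2 GB of fp16 weights (about right for a Stable Diffusion 1.x class model; this figure is my assumption) against a LAION-scale training set of ~5.8 billion images:

```python
# Rough numbers: ~2 GB of model weights (assumed) vs. ~5.8 billion
# training images (LAION-scale).
model_bytes = 2e9
training_images = 5.8e9

# Less than half a byte of weight per training image; nowhere near enough
# to store the images themselves, so the model must be learning concepts.
print(model_bytes / training_images)  # ~0.34
```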

FaceDeer,

3,226 suspected images out of 5.8 billion. About 0.00006%. And those were probably mislabeled to boot, or they would have been caught earlier. I doubt they had any significant impact on the model's capabilities.
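
For anyone who wants to check that arithmetic:

```python
# Verifying the proportion quoted above.
suspected = 3226
total = 5.8e9

fraction = suspected / total
print(f"{fraction:.10f}")        # 0.0000005562
print(f"{fraction * 100:.7f}%")  # 0.0000556%, i.e. about 0.00006%
```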

FaceDeer,

Whereas I'm enjoying many of the new AI-powered features that Microsoft has been coming up with lately.

But echo chambers gonna echo, I guess.

FaceDeer,

Check the upvote/downvote counts on my comment vs. macattack's. It's nigh impossible to say anything positive about AI around here.

FaceDeer,

No, I sound like someone who likes many of the new AI-powered features that Microsoft has been coming up with lately.

I don't use Linux. I don't think about it at all; it doesn't affect me.

FaceDeer,

Well, good news, then: that's not what Microsoft is using AI for in this case.

FaceDeer,

I don't know what specifically Microsoft is planning here, but in the past I've taken a screenshot of my settings window and uploaded it to Copilot to ask for help sorting out a problem. It was very useful for Copilot to be able to "see" what my settings were. Since the article describes a series of screenshots being taken over time, it could perhaps be meant to provide context to an AI so that it knows what's been going on.
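
Purely as speculation about how such a feature might work, a loop like this would capture a rolling series of screenshots an assistant could later use as context. This is my guess, not Microsoft's implementation; it assumes Pillow, whose ImageGrab module works on Windows and macOS:

```python
# Speculative sketch, not Microsoft's implementation. Assumes Pillow;
# PIL.ImageGrab works on Windows and macOS.
import time
from datetime import datetime
from PIL import ImageGrab

def capture_context(interval_seconds: int = 60, count: int = 5) -> None:
    """Save a timestamped screenshot every interval_seconds, count times."""
    for _ in range(count):
        ImageGrab.grab().save(f"screen_{datetime.now():%Y%m%d_%H%M%S}.png")
        time.sleep(interval_seconds)

capture_context()
```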

FaceDeer,

That just so happens to describe me to a T. I'm a privacy-minded programmer who came here as part of the Reddit exodus. Because I'm a programmer and am aware of how these AIs function, I am not overly concerned about them and appreciate the capabilities they provide to me. I'm aware of the risks and how to manage them.

The comment I was responding to brought up "Linux is better" unprompted. But that's in line with the echo, so I guess that's fine.

FaceDeer,

I was asked what the reason for this function was, so I speculated on that reason in an attempt to answer the question, and I got downvoted for it.

I wasn't addressing the privacy concerns at all. That wasn't part of the question.

FaceDeer,

I'm not overly concerned because I know how to use these things. I know what they do, and so when one of them is doing something concerning I turn it off.

People are frightened of things they don't understand, and it's apparent that lots of people don't understand AI.

FaceDeer,

The state of the art for small models is improving quite dramatically, quite quickly. Microsoft just released the Phi-3 model family under the MIT license; I haven't played with them myself yet, but the comments are very positive.
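
If you want to try one yourself, a minimal sketch with the transformers library might look like this. The checkpoint ID is the Phi-3 mini instruct model as published on Hugging Face; older transformers versions may additionally need trust_remote_code=True, and device_map="auto" assumes accelerate is installed:

```python
# Minimal sketch: running a small local model with transformers.
# Assumes a recent transformers release with native Phi-3 support
# (older versions need trust_remote_code=True) and accelerate installed
# for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one paragraph, why are small local language models useful?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```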

Alternately, just turn that feature off.
