
FaceDeer

@FaceDeer@fedia.io

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and then some time on kbin.social.


FaceDeer,

The problem appears to be that the magazine deceptively portrayed it as an interview with the actual Michael Schumacher, rather than explaining that it was fictional. The lawsuit would probably be the same if the magazine had had a human writer come up with it all instead.

FaceDeer,

Do a Google Image search for "child" or "teenager" or other such innocent terms and you'll find plenty of such images.

I think you're underestimating just how well AI is able to learn basic concepts from images. A lot of people imagine these AIs as being some sort of collage machine that pastes together little chunks of existing images, but that's not what's going on under the hood of modern generative art AIs. They learn the underlying concepts and characteristics of what things are, and are able to remix them conceptually.

FaceDeer,

Image AIs also don't act or respond on their own. You have to prompt them.

FaceDeer,

Well, your philosophy runs counter to the fundamentals of Western justice systems, then.

FaceDeer,

If AI has the means to generate inappropriate material, then that means the developers have allowed it to train from inappropriate material.

That's not how generative AI works. It's capable of creating images that include novel elements that weren't in the training set.

Go ahead and ask one for a bonkers image that doesn't exist in its training data and there's a good chance it'll be able to make one for you. The classic example is the "avocado chair": an early image generator was able to produce many plausible images of one despite only having been trained on images of avocados and chairs. It understood the two general concepts and was able to figure out how to meld them into a single depiction.

FaceDeer,

You suggested a situation where "many people would get off charges of real CSAM because the prosecutor can't prove that it wasn't AI generated." That implies that in that situation AI-generated CSAM is legal. If it's not legal then what does it matter if it's AI-generated or not?

FaceDeer,

Camera-makers, too. And people who make pencils. Lock the whole lot up, the sickos.

FaceDeer,

Better a dozen innocent men go to prison than one guilty man go free?

FaceDeer,

Image-generating AI is capable of generating images that are not like anything that was in its training set.

FaceDeer,

First, you need to figure out exactly what it is that the "blame" is for.

If the problem is the abuse of children, well, none of that actually happened in this case so there's no blame to begin with.

If the problem is possession of CSAM, then that's on the guy who generated them, since they didn't exist at any point before then. The trainers wouldn't have needed any of that in the training set, so if you want to blame them you're going to need a completely separate investigation into that; the ability of the AI to generate images like that doesn't prove anything.

If the problem is the creation of CSAM, then again, it's the guy who generated them.

If it's the provision of general-purpose art tools that were later used to create CSAM, then sure, the AI trainers are in trouble. As are the camera makers and the pencil makers, as I mentioned sarcastically in my first comment.

FaceDeer,

The person who was charged was using Stable Diffusion to generate the images on their own computer, entirely with their own resources. So it's akin to a company that sells 3D printers selling a printer to someone, who then uses it to build a gun.

FaceDeer,

It's possible to legally photograph young people. Completely ordinary legal photographs of young people exist, from which an AI can learn the concept of what a young person looks like.

FaceDeer,

You obviously don't understand squat about AI.

Ha.

AI only knows what has gone through its training data, both from the developers and the end users.

Yes, and as I've said repeatedly, it's able to synthesize novel images from the things it has learned.

If you train an AI with pictures of green cars and pictures of red apples, it'll be able to figure out how to generate images of red cars and green apples for you.
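That compositional idea can be caricatured in a few lines of Python. This is a deliberately simplified sketch, not how a real diffusion model works (real models learn continuous embeddings from data); the concept vectors below are made up purely for illustration:

```python
# Toy illustration: treat each visual concept as a vector, and a scene as
# the element-wise sum of its concept vectors. Compositionality then gives
# well-defined representations for combinations never seen in training.
concepts = {
    "red":   [1, 0, 0, 0],
    "green": [0, 1, 0, 0],
    "car":   [0, 0, 1, 0],
    "apple": [0, 0, 0, 1],
}

def embed(color, obj):
    # Combine two learned concepts into one scene representation.
    return [c + o for c, o in zip(concepts[color], concepts[obj])]

# "Training set": only green cars and red apples were ever observed.
seen = {("green", "car"), ("red", "apple")}

# Yet "red car" has a perfectly well-defined representation anyway.
red_car = embed("red", "car")
assert ("red", "car") not in seen
assert red_car == [1, 0, 1, 0]
```

Because the model represents "red" and "car" separately rather than memorizing whole images, the unseen combination falls out of the representation for free; that disentanglement is the toy analogue of what lets a generator produce novel combinations.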

FaceDeer,

It understands young and old. That means it knows a kid is not just a 60% reduction by volume of an adult.

We know it understands these sorts of things because of the very things this whole kerfuffle is about - it's able to generate images of things that weren't explicitly in its training set.

FaceDeer,

No, you keep repeating this but it remains untrue no matter how many times you say it. An image generator is able to create novel images that are not directly taken from its training data. That's the whole point of image AIs.

FaceDeer,

The trainers didn't train the image generator on images of Mr. Bean hugging Pennywise, and yet it's able to generate images of Mr. Bean hugging Pennywise. Yet you insist that it can't generate inappropriate images without having been specifically trained on inappropriate images? Why is that suddenly different?

FaceDeer,

I'll never understand why all the TV producers think they can get away with cutting all the corners on writing.

My previous comment shows one possible reason - I said I didn't like one of her seasons and now I'm getting downvoted. There's a ready-made excuse to sling at anyone who criticizes the show.

I was actually excited to see a female doctor when it was announced, but I'm not going to like a show simply because of the gender or race or whatever of the actors. I saw that the quality of the writing was bad and so I left. The new season hasn't exactly enticed me back.

FaceDeer,

Yeah, sadly. I've been reluctant to mention I'm not liking the current season because when I didn't like the previous season that made me a sexist. Now if I don't like the new season I'll be a racist too.

FaceDeer,

I once had someone respond with astonishment when they found out: "But you're such a good person!"

Thanks, I guess?

FaceDeer,

But then you get that awkward situation where you go on vacation, open your luggage to get a fresh pair of socks or whatever, and find that you brought nothing but guns and ammo along with you on your trip.

FaceDeer,

With comments like this, he likely goes through new accounts at a very rapid pace.

FaceDeer,

Anyone can download a torrent containing historical Reddit comments; Reddit surely has at least that, if not a full edit/delete history of every comment. The only people you are thwarting by deleting your comments are other humans who may stumble across your old threads in Google.

FaceDeer,

The only way I can imagine this working is by twisting the definition of "search engine" enough that you can claim there aren't any search engines, when really they still exist, just under a different name.

Search engines aren't actually the "problem" that OP is wanting to address, here, though. He just doesn't like the specific search engines that actually exist right now. What he should really be asking is how a search engine could be implemented that doesn't have the particular flaws that he's bothered by.

FaceDeer,

Also pretty sure training LLMs after someone opts out is illegal?

Why? There have been a couple of lawsuits launched in various jurisdictions claiming LLM training is copyright violation but IMO they're pretty weak and none of them have reached a conclusion. The "opting" status of the writer doesn't seem relevant if copyright doesn't apply in the first place.
