The context you're missing is that these interactions aren't limited to strangers or the internet, and they typically form a pattern of regular behaviour rather than just a one-off comment.
A person is a victim of, and suffers the effects of, their own traumatic experiences. Instead of learning to deal with them and heal, they induce others to suffer some of those effects as well, turning others into victims of that same trauma.
It’s not as big and dramatic as a murder, but it’s still victimization.
No. It’s not appropriate to take someone’s joyful conversation about their experiences and shift the focus to you and your past trauma. It’s an incredibly shitty thing to do.
To be clear: the previous comment was not a response to OP. It's a response to people who overshadow or intentionally bring down other people's happiness with their past traumas, like the humanoid character in the image did.
An alternative to having lasers shoot from your eyes.
If you're staying within city limits, the only speed signs you'd see much of the time are in parking lots or on private property, which are explicitly slower than the public roadway speeds.
Not sure what you mean by demand charges. An additional cost for peak hours, perhaps? Not really a thing where I live.
Energy is billed at the lower of the two rates I gave for the first ~1.4 MWh, then the rest is billed at the higher rate (the meter is read every two months). It doesn't matter when you use the energy.
Aside from the energy costs, there's a ~$0.22/day base charge and 5% GST. That's it.
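If it helps, the whole calculation fits in a few lines. Quick sketch in Python, where the two per-kWh rates are placeholders (the real numbers were in my earlier comment); the structure of the bill (tier threshold, daily base charge, GST) is exactly as described above:

```python
# Tiered billing sketch. LOW_RATE and HIGH_RATE are made-up placeholders --
# the real rates were in an earlier comment -- but the structure matches:
# first ~1.4 MWh at the lower rate, the rest at the higher rate,
# plus a daily base charge, plus 5% GST on top.

TIER_THRESHOLD_KWH = 1400      # ~1.4 MWh
LOW_RATE = 0.10                # $/kWh, placeholder
HIGH_RATE = 0.15               # $/kWh, placeholder
BASE_CHARGE_PER_DAY = 0.22     # $/day
GST = 0.05                     # 5%

def bill(kwh_used: float, days: int) -> float:
    low_tier = min(kwh_used, TIER_THRESHOLD_KWH)
    high_tier = max(kwh_used - TIER_THRESHOLD_KWH, 0)
    energy = low_tier * LOW_RATE + high_tier * HIGH_RATE
    subtotal = energy + days * BASE_CHARGE_PER_DAY
    return subtotal * (1 + GST)

# e.g. 1,800 kWh over a 61-day billing period:
print(f"${bill(1800, 61):.2f}")  # -> $224.09 with these placeholder rates
```

Note there's no time-of-use term anywhere in there, which is why peak hours don't matter.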
Apple’s got one, so does Google, and Microsoft. They’re common tools for scam baiters tracking down call centres and individual scammers. Pretty effective actually.
Given it's a public dataset not owned or maintained by the developers of Stable Diffusion, I wouldn't consider that their fault either.
I think it's reasonable to expect a dataset like that to have had screening measures to prevent that kind of data from being imported in the first place. It shouldn't be on the users of that data (here meaning the devs of Stable Diffusion) to ensure there's no illegal content within the billions of images in a public dataset.
That's a different story now that users have been informed of the content within this particular dataset, but I don't think it should have been assumed to be their responsibility from the beginning.
A person (the arrested software engineer from the article) acquired a tool (a copy of Stable Diffusion, available on GitHub) and used it to commit a crime (trained it to generate CSAM and used it to generate CSAM).
That has nothing to do with the developer of the AI, and everything to do with the person using it. (hence the arrest…)
Do… Do you really think the creators/developers of Stable Diffusion (the AI art tool in question here) trained it on CSAM before distributing it to the public?
Or are you arguing that we should be allowed to do what’s been done in the article? (arrest and charge the individual responsible for training their copy of an AI model to generate CSAM)
One, AI image generators can and will spit out content vastly different from anything in the training dataset (this can of course be influenced greatly by user input). That output can be fed back into the training data to push the model towards a desired outcome; examples of the desired outcome are not required at all. (i.e. you don't have to feed it CSAM to get CSAM, you just have to consistently push it more and more towards that goal)
Two, anyone can host an AI model; it's not reserved for big corporations and their server farms. You can host your own copy and train it however you'd like on whatever material you've got (that's literally how Stable Diffusion is used; see the sketch below). This kind of explicit material is being created by individuals using AI software they've downloaded/purchased/stolen and then trained themselves. They aren't buying a ready-to-use CSAM generator off the open market… (nor are they getting this material from publicly operating AI models)
They are acquiring a tool and moulding it into a weapon of their own volition.
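For anyone who hasn't tried it, self-hosting really is that accessible. A minimal sketch using the Hugging Face `diffusers` library; the checkpoint name here is just one publicly hosted example, and any local copy of the weights works the same way:

```python
# Minimal sketch of running Stable Diffusion locally -- no corporate
# server farm required, just a consumer GPU and a downloaded checkpoint.
import torch
from diffusers import StableDiffusionPipeline

# One publicly hosted checkpoint, used here purely as an example;
# a local folder of weights can be passed in exactly the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("output.png")
```

The point being: the software is neutral right up until someone decides what to run it on and what to feed it.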
Some tools you can use immediately; others have a setup process first. AI is just a tool, like a hammer: it can be used appropriately, or not. The developer isn't responsible for how you decide to use it.