These cases are interesting tests of our First Amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.
Currently, we outlaw neither written depictions nor drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But I also think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. But in the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.
So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?
We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.
A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.
Even though the law can be circumvented, it nonetheless provides resistance. Traveling to another state, filling out paperwork, paying extra money, etc. all provide additional obstacles to overcome. If someone were having an acute mental health crisis and felt compelled to eat a barrel, a delay of just a few hours in acquiring a gun can make all the difference. And for someone planning to use a gun for criminal activity, at some point they might just consider employment an easier alternative if acquiring a gun is too much of a pain.
We have already seen this effect in reverse with regard to immigration. Legal immigration is such a painful crapshoot that people are willing to surrender their fate to cartels as an alternative.
I think where you are going wrong here is assuming that our internal perception is not also a hallucination by your definition. It absolutely is. But our minds are embodied, thus we are able to check these hallucinations against some outside stimulus. Your gripe that current LLMs are unable to do that is really a criticism of the current implementations of AI, which are trained on some data, frozen, then restricted from further learning by design. Imagine if your mind was removed from all stimulus and then tested. That is what current LLMs are, and I doubt we could expect a human mind to behave much better in such a scenario. Just look at what happens to people cut off from social stimulus: their mental capacities degrade rapidly, and that is just one type of stimulus.
Another problem with your analysis is that you expect the AI to do something that humans cannot do: cite sources without an external reference. Go ahead right now and, from memory, cite some source for something you know. Do not Google search; just remember where you got that knowledge. Now who is the one that cannot cite sources? The way we cite sources generally requires access to the source at that moment. Current LLMs do not have that by design. Once again, this is a gripe with the implementation of a very new technology.
The main problem I have with so many of these “AI isn’t really able to…” arguments is that no one is offering a rigorous definition of knowledge, understanding, introspection, etc in a way that can be measured and tested. Further, we just assume that humans are able to do all these things without any tests to see if we can. Don’t even get me started on the free will vs illusory free will debate that remains unsettled after centuries. But the crux of many of these arguments is the assumption that humans can do it and are somehow uniquely able to do it. We had these same debates about levels of intelligence in animals long ago, and we found that there really isn’t any intelligent capability that is uniquely human.
My thesis is that we are asserting the lack of human-like qualities in AIs that we cannot define or measure. Assertions should be made on data, not uneasy feelings arising when an LLM falls into the uncanny valley.
How do hallucinations preclude an internal representation? Couldn’t hallucinations arise from a consistent internal representation that is not fully aligned with reality?
I think you are misunderstanding the role of tokens in LLMs and conflating them with internal representation. Tokens are used to generate a state, similar to external stimuli. The internal representation, assuming there is one, is the manner in which the tokens are processed. You could say the same thing about human minds, that the representation is not located anywhere like a piece of data; it is the manner in which we process stimuli.
You seem pretty confident that LLMs cannot have an internal representation simply because you cannot imagine how that capability could emerge from their architecture. Yet we have the same fundamental problem with the human brain and have no problem asserting that humans are capable of internal representation. LLMs adhere to grammar rules, present information with a logical flow, express relationships between different concepts. Is this not evidence of, at the very least, an internal representation of grammar?
We take in external stimuli and perform billions of operations on them. This is internal representation. An LLM takes in external stimuli and performs billions of operations on them. But the latter is incapable of internal representation?
And I don’t buy the idea that hallucinations are evidence that there is no internal representation. We hallucinate. An internal representation does not need to be “correct” to exist.
We do not know how LLMs operate. Similar to our own minds, we understand some primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion would be like saying we understand consciousness because we know the structure of a neuron.
Read again. I have made no such claim; I simply scrutinized your assertion that LLMs lack any internal representation, and challenged that assertion with alternative hypotheses. You are the one who made the claim. I am perfectly comfortable with the conclusion that we simply do not know what is going on in LLMs with respect to human-like capabilities of the mind.
Sorry this is kinda political. Is there an asklemmypolitics group this would be better for? I’m hoping not to get into the libs vs progressives political debate we see everywhere on here… Just want to know what people are actually looking for.
For real, we’ve got the first openly pro-union president, we expanded NATO, student loan forgiveness, actual infrastructure funding, the first administration to openly push back against Israel during war time, all of that in only 4 years. He is the most effective president of my lifetime and I am happy to vote for him again.
It is so strange to say that identity should take a back seat to humanism when every historical example of discrimination and dehumanization is based on identity. Identity in those instances is not imposed on oneself, but is used to define the outgroup that is being dehumanized. Identity politics is simply an honest accounting of groups that are being discriminated against. When the discrimination ends, we see the group identity evaporate. We need only look at the early 20th-century definitions of Caucasian, and the identity politics of Irish and Italian Americans subsequently evaporating when that definition evolved to include all Americans of European descent, to see that identity politics is a reaction to injustice and not the other way around.
We have a number of subsidies for domestic EV production. That will all be a waste if China’s subsidized EVs undercut the domestic market. This is consistent with a broader effort to boost domestic manufacturing. While at odds with efforts to promote the adoption of green technologies, the administration is trying to strike a balance between competing interests, in this instance balancing consumer access to green tech with job growth, domestic manufacturing, and less reliance on China for critical technologies.
A recent study published in the Proceedings of the National Academy of Sciences reveals that across all political and social groups in the United States, there is a strong preference against living near AR-15 rifle owners and neighbors who store guns outside of locked safes. This surprising consensus suggests that when it comes...
Details like this are really just a distraction. Do you really think the average respondent understands these technical details, or has any good reason to memorize the specs of all rifles? The focus on the AR-15 is not because of any risk associated with that particular gun, but because most people understand that this is a semi-auto rifle. There is no other model of gun that will have that kind of widespread recognition.
Drawing up these very silly technical arguments reflects willful ignorance of the underlying issue: What is the limit of deadly force we should allow one person to lawfully own? We don’t let people own tactical nukes. We don’t need to argue over thermonuclear versus hydrogen nukes. We don’t need to understand quantum mechanics to regulate these devices. The technical details do not matter. The potential body count is what matters. And so it is with guns, which happen to occupy that grey area where reasonable people disagree on an acceptable level of lethality. You do not need to know all the different models of gun to be killed by one, so we should not require such technical knowledge when engaging in discourse around their regulation.
The US government can already access the data with a warrant. The ownership of TikTok has literally zero effect on the US government’s ability to access user data. Not being owned by the Chinese government has a huge impact on China’s ability to access that data.
This is the real question. Is there a loophole that allows foreign governments to freely exercise mass surveillance and psyops if they allow US citizens to post on a blackboard outside their offices?
TikTok pushed a notification to all US users with the phone numbers of their local congressmen to oppose the bill. So many calls came in that the phone lines were jammed.
Let me distill that for you: China attempted to directly influence legislation with a mass propaganda campaign targeted at its US user base.
Please explain to me why that isn’t a threat and why the US should allow hostile foreign powers to directly influence internal politics?
The real question you are asking is whether inaction is worse than inconsistency. Should we not put out a fire unless we can put out all fires? What you are suggesting is to let something burn for the sake of consistency.
I’m still gaming on my 1080ti. One of the fans has a mind of its own and accelerates/decelerates randomly, but I remain impressed with the settings it can run on modern games.
You hate the “man or bear” conversation. Imagine how much women must hate it, knowing that you and other “good men” will bemoan their feelings as soon as they express them. Think about how chilling that is to their concerns; how they have to walk on eggshells even around “good men” when they want to voice legitimate concerns.
It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way...
The Chinese government froze meaningful efforts to trace the origins of the coronavirus pandemic, despite publicly declaring it supported an open scientific inquiry, an Associated Press investigation has found....
Where is the safety report for the Wuhan wet market? You know, the one that unequivocally started one viral pandemic? Then, while it was closed down, we enjoyed a period with no new coronavirus pandemics? And then, shortly after it reopened, there was another coronavirus pandemic originating in Wuhan? That wet market, you have a report on that one?
FBI Arrests Man For Generating AI Child Sexual Abuse Imagery (www.404media.co)
California is about to tax guns more like alcohol and tobacco − and that could put a dent in gun violence (theconversation.com)
A bit of a weird question: Can modern medicine be a threat to humanity long-term by greatly reducing effects of natural selection?
OK, I hope my question doesn’t get misunderstood, I can see how that could happen....
We have to stop ignoring AI’s hallucination problem (www.theverge.com)
US Lemmys what could Biden do in the next 6 months to EARN your vote? (other than just not being Trump)
X now treats the term cisgender as a slur (www.engadget.com)
President Biden announces a series of tariffs on green energy products from China. (lemmy.ml)
Study reveals "widespread, bipartisan aversion" to neighbors owning AR-15 rifles (www.psypost.org)
TikTok sues the US government over ban (www.theverge.com)
TikTok is taking the US government to court.
The Greatest GPU of All Time: NVIDIA GTX 1080 Ti & GTX 1080 2024 Revisit & History (gamersnexus.net)
About the bear...
So, I’m just assuming we’ve all seen the discussions about the bear....
ChatGPT provides false information about people, and OpenAI can’t correct it (noyb.eu)
Takeaways from AP report on how the search for the coronavirus origins turned toxic (apnews.com)