RickRussell_CA
@RickRussell_CA@kbin.social

There is a so-called "hard problem of consciousness", although I take exception to calling it a problem.

The general problem is that you can't really prove that you have subjective experience to others, and neither can you determine if others have it, or whether they merely act like they have it.

But, a somewhat obvious difference between AIs and humans is that AIs will never give you an answer that is not statistically derivable from their training dataset. You can give a human a book on a topic, and ask them about the topic, and they can give you answers that seem to be "their own conclusions" that are not explicitly from the book. Whether this is because humans have randomness injected into their reasoning, or they have imperfect reasoning, or some genuine animus of "free will" and consciousness, we cannot rightly say. But it is a consistent difference between humans and AIs.

The Monty Hall problem discussed in the article -- in which AIs are asked to answer the Monty Hall problem, but they are given explicit information that violates the assumptions of the Monty Hall problem -- is a good example of something where a human will tend to get it right, through creativity, while an AI will tend to get it wrong, due to statistical regression to the mean.
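As an aside, the classic result, and how sensitive it is to its assumptions, is easy to check by simulation. Here is a minimal sketch (function names and trial count are my own, purely illustrative): under the classic rules the host knowingly reveals a goat and switching wins about 2/3 of the time, but if the host opens an unpicked door at random, the advantage of switching disappears.

```python
import random

def trial(switch: bool, host_knows: bool):
    """One round of Monty Hall; returns True/False for a win, or None if a
    random host accidentally reveals the car (round doesn't count)."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    unpicked = [d for d in doors if d != pick]
    if host_knows:
        # Classic assumption: host deliberately opens a goat door.
        opened = random.choice([d for d in unpicked if d != car])
    else:
        # Violated assumption: host opens an unpicked door at random.
        opened = random.choice(unpicked)
        if opened == car:
            return None
    if switch:
        # Move to the one remaining closed door.
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

def win_rate(switch: bool, host_knows: bool, n: int = 100_000) -> float:
    results = [trial(switch, host_knows) for _ in range(n)]
    valid = [r for r in results if r is not None]
    return sum(valid) / len(valid)

print(win_rate(switch=True, host_knows=True))   # about 0.67: switching helps
print(win_rate(switch=True, host_knows=False))  # about 0.50: assumption broken
```

A model that has memorized "always switch, it's 2/3" will confidently repeat that answer even when the stated rules no longer support it.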

RickRussell_CA,

I hesitate to call it a problem because, by the way it's defined, subjective experience is innately personal.

I've gotten into this question with others, and when I began to propose thought experiments (like: what if we could replicate sensory inputs? If you saw, heard, and felt everything the same as someone else, would you have the same subjective conscious experience?), I'd get pushback: "that's not subjective experience, subjective experience is part of the MIND, you can't create it or observe it or measure it...".

When push comes to shove, people define consciousness or subjective experience as that aspect of experience that CANNOT be shown or demonstrated to others. It's baked into the definition. As soon as you venture into what can be shown or demonstrated, you're out of bounds.

So it's not a "problem", as such. It's a limitation of our ability to self-observe the operating state of our own minds. An interesting question, perhaps, but not a problem. Just a feature of the system.

RickRussell_CA,

To be clear, I don't think the fundamental issue is whether humans have a training dataset. We do. And it includes copyrighted work. It also includes our unique sensory perceptions and lots of stuff that is definitely NOT the result of someone else's work. I don't think anyone would dispute that copyrighted text, pictures, sounds are integrated into human consciousness.

The question is whether it is ethical, and should it be legal, to feed copyrighted works into an AI training dataset and use that AI to produce material that replaces, displaces, or competes with the copyrighted work used to train it. Should it be legal to distribute or publish that AI-produced material at all if the copyright holder objects to the use of their work in an AI training dataset? (I concede that these may be two separate, but closely related, questions.)

RickRussell_CA,

Well, that's the question at hand. Who? Definitely not: people have an innate right to think about what they observe, whether or not that thing was made by someone else.

What? I'd argue that's a much different question.

Let's take an extreme case. Entertainment industry producers tried to write language into the SAG-AFTRA contract that said that, if an extra is hired for a production, they can use that extra's image -- including 3D spatial body scans -- in perpetuity, for any purpose, and that privilege of eternal image storage and re-use was included in the price of hiring an extra for 1 day of work.

The producers would make precisely the same argument you are -- how dare you tell them how they can use the images that they captured, even if it's to use and re-use a person's image and shape in visual media, forever. The actors argue that their physiognomy is part of their brand and copyright, and using their image without their express permission (and, should they require it, compensation) is a violation of their rights.

Or, I could just take pictures of somebody in public places without their consent and feed them into an AI to create pictures of the subject flashing children. They were my pictures, taken by me, and how dare anybody get to make rules about who or what experiences them, right?

The fact is, we have rules about the capture and re-use of created works that have applied to society for a very long time. I don't think we should give copyright holders eternal locks on their work, but neither is it clear that a 100% free use policy on created work is the right answer. It is reasonable to propose something in between.

RickRussell_CA,

But we make the laws, and have the privilege of making them pro-human. It may be important in the larger philosophical sense to meditate on the difference between AIs and human intelligence, but in the immediate term we have the problem that some people want AIs to be able to freely ingest and repeat what humans spent a lot of time collecting and authoring in copyrighted books. Often, without even paying for a copy of the book that was used to train the AI.

As humans, we can write the law to be pro-human and facilitate human creativity.

RickRussell_CA,

And yeah, all the extra data that we humans fundamentally acquire in life does change everything we make.

I'd argue that it's the crucial difference. People on this thread are arguing like humans never make original observations, or observe anything new, or draw new conclusions or interpretations of new phenomena, so everything humans make must be derived from past creations.

Not only is that clearly wrong, but it also fails the test of infinite regress. If humans can only create from the work of other humans, how was anything ever created? It's a risible suggestion.

RickRussell_CA,

Right now our understanding of derivative works is mostly subjective. We look at the famous Obama "HOPE" image, and the connection to the original news photograph from which it was derived seems quite clear. We know it's derivative because it looks derivative. And we know it's a violation because the person who took the news photograph says that they never cleared the photo for re-use by the artist (and indeed, demanded and won compensation for that reason).

Should AI training be required to work from legally acquired data, and what level of abstraction from the source data constitutes freedom from derivative work? Is it purely a matter of the output being "different enough" from the input, or do we need to draw a line in the training data, or...?

All good questions.

RickRussell_CA,

Copyright and fair use are laws written for humans, to protect human creators and ensure them the ability to profit from their creativity for a limited time, and to grant immunity to other humans for generally accepted uses of that work without compensation.

I agree that sentience is irrelevant, but whether the actors involved are human or not is absolutely relevant.

RickRussell_CA,

Well, it's a "problem" for philosophers. I don't think it's a "problem" for neurology or hard science, that's the only point I was trying to make.

RickRussell_CA,

But there are absolutely rules on whether Google -- or anything else -- can use that search index to create a product that competes with the original content creators.

For example, https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.

Google's indexing of copyrighted works was deemed fair use largely because Google offered only a few preview pages from each work. Google's web-page excerpts and image thumbnails are widely believed to qualify as fair use on the same theory.

Now, let's say Google wants to integrate the content of multiple copyrighted works into an AI, and then give away or sell access to that AI which can spit out the content (paraphrased, in some capacity) of any copyrighted work it's ever seen. You'll even be able to ask it questions, like "What did Jeff Guinn say about David Koresh's religious beliefs in his 2023 book, Waco?" and in all likelihood it will cough up a summary of Mr. Guinn's uniquely discovered research and journalism.

I don't think the legal questions there are settled at all.

RickRussell_CA,

Caffeinated!

RickRussell_CA,

For desktop browsers, I like it better than regular lemmy. Admittedly, I'd like to use a client with it so I look forward to an API for mobile clients, but I'm pretty happy as a desktop user.

RickRussell_CA,

Everybody says that, but that's not really practical. It would be much better to merge those features into the main project, than to fork it and get stuck maintaining a separate codebase in perpetuity.

Now I will say that if someone thinks they can do a better job, they should join the project and submit their changes upstream, so all ernest has to do is approve them rather than write them himself.

RickRussell_CA,

You've got Pneumail(tm)!

RickRussell_CA,

With respect, the question is not "what should happen?"

The question is, "what will the law allow?" The judge is not there to cater to preferences, but to use the force of state power to compel parties to comply with the law. For that, the judge absolutely needs a grasp of the subject matter.

RickRussell_CA, (edited)

I think the big challenge right now is sustaining growth. I don't think many reddit refugees are paying for their fediverse services.

I support dessalines on Patreon, but I don't really know what else I should be doing. I think that folks who want to run these services need to figure out how to charge money for it, or they won't be able to buy infrastructure or network bandwidth.

EDIT: OK I just bought 5 coffees for ernest: https://www.buymeacoffee.com/kbin

RickRussell_CA,

beehaw.org -- a lemmy instance specifically geared toward quality discussion and keeping everybody nice to each other -- has basically been told by Lemmy devs that the moderation tools they want and need just aren't in the roadmap, and they'll need to fork and develop their own version.

That's an incredibly disheartening attitude.

[Opinion] Trump's plans to become a dictator: It's time to get real about Project 2025 - Chauncey DeVega - Salon (www.salon.com)

Donald Trump is a dictator in waiting. Like other dictators, he is threatening to put his "enemies" in prison – and to do even worse things to them. These are not idle threats or empty acts of ideation: Donald Trump is a violent man who is a proven enemy of democracy and freedom....

RickRussell_CA,

On the other hand, "idle threats or empty acts of ideation" is pretty much Trump's brand. Everything he does is half-assed and never comes to fruition.

But I concede that he might hire underlings capable of sustaining a dictatorship.

RickRussell_CA,

The fact that it became an issue "on social media" only after a white journalist documented that they were refused admission sort of tells you the whole story here.

Nobody cared, until angry racists made a big deal about it. It's likely that, on balance, the vast majority of people don't care and aren't paying any attention to the racists. But if it involves angry racists, it leads, because that shit generates clicks and controversy. JOURNALISM.

RickRussell_CA,

This is the kind of balanced, nuanced take that will get you absolutely murderlated with downvotes.

RickRussell_CA,

My special needs kid is terrified of pets. And they react in kind -- animals that are sweet toward a smiling, calm person will lose it in his presence.

Is anybody "bad" here? This seems like the worst kind of judgment based on initial appearances, rather than understanding.

RickRussell_CA,

"Mitigating circumstances like a person's childcare situation are only mitigating circumstances because there was irresponsibility in the first place to mitigate. It's still irresponsibility."

I took the cart into the store to shop with my cognitively disabled child. This was a responsible decision.

Due to my child's medical disability and changing circumstances resulting in a behavior meltdown, I had to take him back to the car and stay with him, to prevent elopement that could put him and others at risk. This was a responsible decision. Due to the changing circumstances, I can't return the shopping cart to a particular location.

At no point do I abdicate responsibility. My first responsibility is to the safety of my child, and others who might suffer if he elopes. If you think I'm a bad person who "gives zero shits" because I put that first, then I call that error.

If you want to live in your self-righteous bubble and judge people from afar without knowing jack squat about their circumstances, I call that error. I'm sure my situation is not unique; issues must come up all the time with children, pets, and the elderly that necessitate putting a shopping cart aside and attending to the needs of others, and it's not always possible to return the shopping cart.

I can't stop you from making an error, of course, but I'd hope that when the error is explained to you, you'd commit to avoiding it.

RickRussell_CA,

Then we should probably call it a "red flag" instead of a "dead giveaway" (per post title) :-)
