And it won’t need to exist locally on the phone anyway. Higher bandwidth cell and wifi signals mean more and more exotic AI processing can be offloaded onto cloud resources.
It’s great when you have an app that works well when not connected to a network, of course. But most phone buyers don’t really care.
But NYC is surrounded by places you can drive to. Singapore is not. The mainland city of Johor Bahru is a relatively poor city of only 500K people, and beyond that it’s farmland until you get to the Malaysian capital, more than 4 hours away. So I wouldn’t expect the two cities to have the same preferences for car ownership in any case.
LEESBURG, Va. — After two days of testimony, the man who shot a 21-year-old YouTuber inside Dulles Town Center on video in April has been found not guilty on two charges of malicious wounding....
If someone is making you feel like you might be in danger, that’s a threat. Their intent doesn’t matter.
That’s a risible argument. The standard is what a “reasonable person” considers dangerous.
Whether an action is criminal can’t be based on each individual’s personal opinion of their own behavior. The perpetrator believing that they are right does not make it legal.
With the simultaneous rollout of restrictions on account sharing and price increases/addition of advertising, I’m cutting back severely on streaming services....
The Indiana Attorney General sued Indiana University Health, the state’s largest hospital system, claiming it violated patient privacy laws in the case of a 10-year-old Ohio girl who received an abortion in Indiana. Attorney General Todd Rokita has taken repeated legal actions targeting the doctor who discussed the girl’s...
There’s also a strong argument that the members of the Indiana licensing board who censured Dr. Bernard were activists bowing to public pressure. Many of the external authorities who reviewed Dr. Bernard’s case do not believe that she committed any unethical disclosure.
Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.
No, I get it. I’m not really arguing that what separates humans from machines is “libertarian free will” or some such.
But we can properly argue that LLM output is derivative because we know it’s derivative, because we designed it. As humans, we have the privilege of recognizing transformative human creativity in our laws as a separate entity from derivative algorithmic output.
a derivative work is an expressive creation that includes major copyrightable elements of a first, previously created original work
What was fed into the algorithm? A human decided which major copyrighted elements of previously created original work would seed the algorithm. That’s how we know it’s derivative.
If I take somebody’s copyrighted artwork and apply Photoshop filters that change the color of every single pixel, have I made an expressive creation that does not include copyrightable elements of a previously created original work? The courts have said “no”, and I think the burden is on AI proponents to show how they fed copyrighted work into a mechanical algorithm and produced a new expressive creation free of copyrightable elements.
This issue is easily resolved. Create the AI that produces useful output without using copyrighted works, and we don’t have a problem.
If you take the copyrighted work out of the input training set, and the algorithm can no longer produce the output, then I’m confident saying that the output was derived from the inputs.
There is literally not one single piece of art that is not derived from prior art in the past thousand years.
This is false. Somebody who looks at a landscape, for example, and renders that scene in visual media is not deriving anything important from prior art. Taking a video of a cat is an original creation. This kind of creation happens every day.
Their output may seem similar to prior art, and perhaps their methods were developed previously. But the inputs are original and clean; they’re not using some existing art as the sole inputs.
AI uses only existing art as its inputs. This is a crucial distinction. I would have no problem at all with AI that worked exclusively from original inputs and verified public-domain (or otherwise copyright-unenforced) works, although I don’t know if I’d consider the outputs themselves to be copyrightable (as that is a right attached to a human author).
Straight up copying someone else’s work directly
And that’s what the training set is. Verbatim copies, often including copyrighted works.
That’s ultimately the question that we’re faced with. If there is no useful output without the copyrighted inputs, how can the output be non-infringing? Copyright defines transformative work as the product of human creativity, so we have to make some decisions about AI.
It’s literally not possible to be exposed to the history of art and not have everything you output be derivative in some manner.
I respectfully disagree. You may learn methods from prior art, but there are plenty of ways to ensure that content is generated only from new information. If you mean to argue that a rendering of a landscape that a human is actually looking at is meaningfully derivative of someone else’s art, then I think you need to make a more compelling argument than “it just is”.
it’s basically impossible to tell where parts of the model came from
AIs are deterministic.
1. Train the AI on data without the copyrighted work.
2. Train the same AI on data with the copyrighted work.
3. Ask the two instances the same question.
4. The difference is the contribution of the copyrighted work.
There may be larger questions of precisely how an AI produces one answer when trained with a copyrighted work, and another answer when not trained with the copyrighted work. But we know why the answers are different, and we can show precisely what contribution the copyrighted work makes to the response to any prompt, just by running the AI twice.
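The ablation procedure described above can be sketched in miniature. The bigram “model” and the corpora below are invented stand-ins for illustration, not anything from a real LLM; the point is only that with a deterministic trainer and generator, the two runs differ exactly where the protected work contributed.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count word bigrams across a corpus of documents."""
    counts = defaultdict(Counter)
    for doc in corpus:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, seed, length=5):
    """Deterministic generation: always pick the most frequent
    continuation, breaking ties alphabetically."""
    out = [seed]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        nxt = min(options.items(), key=lambda kv: (-kv[1], kv[0]))[0]
        out.append(nxt)
    return " ".join(out)

# Invented corpora: a "clean" training set, plus one protected work
# (repeated so its bigrams outnumber the clean ones).
public_corpus = ["the cat sat on the mat", "the dog sat on the log"]
protected_work = ["the cat ate the fish"] * 2

model_without = train(public_corpus)
model_with = train(public_corpus + protected_work)

# Same prompt, two answers; the difference is exactly the
# contribution of the protected work.
print(generate(model_without, "the"))  # the cat sat on the cat
print(generate(model_with, "the"))     # the cat ate the cat ate
```

Running both instances against the same prompt makes the ablation visible: remove the protected work from training and the “ate the fish” continuations disappear entirely.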
girl in red - we fell in love in october (www.youtube.com)
Thousands of remote IT workers sent wages to North Korea to help fund weapons program, FBI says (abcnews.go.com)
Donald Trump's Israel intel leak under scrutiny after Hamas attack (www.newsweek.com)
Why am I not surprised?
UK could rent space in foreign jails to ease shortage of cells (www.theguardian.com)
7 years of software updates for the Pixel 8 series (blog.google)
In Singapore, a certificate to own a car now costs $106,000 (www.reuters.com)
Is there an indie games bubble? (roadmapmag.com)
Man who shot YouTuber on video at Dulles Town Center found not guilty by jury (www.wusa9.com)
Capcom President Says ‘Game Prices Are Too Low’ (kotaku.com)
lol. lmao.
Are you cancelling streaming services?
FOSS Image editor
Functions I look for:...
Indiana attorney general sues hospital system over privacy of Ohio girl who traveled for abortion (www.nbcnews.com)
'The Game Just Fundamentally Undermines Itself': Game Designer Breaks Down 'Baldur's Gate 3's Most Fatal Flaws (www.themarysue.com)
‘Baldur’s Gate 3’ can be a fantastic experience and a bad game at the same time.
AI Lie: Machines Don’t Learn Like Humans (And Don’t Have the Right To) (www.tomshardware.com)