The tech industry can’t agree on what open-source AI means. That’s a problem.
Open-sourcing an AI model could require access to the trained model, its training data, the code used to preprocess that data, the code governing the training process, the underlying architecture of the model, or a host of other, more subtle details.
The risks around artificial intelligence might be better understood by following the money, not the technology.
This suggests we need to regulate the economic & market uses of AI better, rather than focusing on its technical capabilities.
Indeed, states have a long history of regulating markets, and whatever some economists seem to think, regulation is what has most often saved capitalism from eating itself!
“At first the plan was for the DAO to vote on whether or not to hire a writer & how much to pay them. But although the DAO was fine for making so-called smart contracts, it didn’t have a mechanism for signing regular old-fashioned dumb contracts, or for paying anybody in what crypto people derisively call ‘fiat currency,’ but which you and I call ‘money.’ […] In the end I was offered a contract from Mysterious Entity. I submitted my invoices to and was paid, in dollars, by Mysterious Entity, Inc. The Piper DAO was not a party to the contract.”
“bro they stole your entire game” — the latest roundup from Jauwn, the YouTuber on a never-ending quest to find and review an NFT game that’s actually good.
(*To be clear, crypto scams are still chugging right along. Web 3 Is Going Just Great has a steady influx of new posts! They’re just all variations on the same 3 or 4 themes. Even Amy and David’s blogging has occasionally thrown in an AI-scams roundup to fill space.)
Photo library Getty Images has entered into a deal with Nvidia to create AI tools trained on its copyright-protected stock images. Getty's CEO, Craig Peters, talked to the Hollywood Reporter about why he thinks this could be beneficial to creators, how the material created by this system will be labeled, copyright systems, and who gets paid.
Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.
Facing a lack of training data, OpenAI reportedly developed its Whisper audio transcription model to transcribe over a million hours of YouTube videos to train GPT-4. Unsurprisingly, @theverge writes, the training involved tactics that fall into the hazy gray area of AI copyright law. https://flip.it/sqabAF #Tech #AI #ArtificialIntelligence #ChatGPT
Does the world need another photo-sharing platform? Former Yahoo CEO Marissa Mayer thinks so.
Last week, her company launched the Shine app, which "automatically creates albums around gatherings of people you know and uses AI to remove duplicates, picking out the best ones."
Becky Chambers’s A Closed and Common Orbit (2016) expands a plot element about AI from her first book into the central focus of this sequel. Picking up (implicitly?) a premise from McCaffrey’s classic The Ship Who Sang, it is a compelling story of how an AI achieves her independence, and of the friendships that this entails. Once again Chambers has written an emotionally rich, unusual, but highly timely piece of sci-fi.
The #Oversight Board, an outside group funded by #Meta, called on the company to extend the policy to cover #altered audio as well as video when it falsely depicts people doing things they did not do.
“We agree with the Oversight Board’s argument that our existing approach is too narrow” because it only applies to #fake speech & not altered actions, Meta VP of #Content Policy Monika Bickert said in a statement.
Following the #Oversight Board’s recommendation, #Meta also agreed to no longer remove digitally created #media that doesn’t violate any other rules; instead, the company will attach a label saying that the #content has been #altered. Starting next month, the company will apply “#MadeWithAI” labels to content it determines is #AI or that people disclose as #AIgenerated when uploading.
Meta’s AI image generator really struggles with the concept of interracial couples.
CNN reports: "Despite the many promises of generative AI’s future potential emanating from the tech industry, the gaffes from Meta’s AI image generator are the latest in a spate of incidents that show how generative AI tools still struggle immensely with the concept of race."