Amazon is filled with garbage ebooks, often a result of keyword scrapers finding trending topics, and then so-called publishers using AI and cheap ghostwriters to generate books. "If, as they used to say, everyone has a book in them, AI has created a world where tech utopianists dream openly about excising the human part of writing a book — any amount of artistry or craft or even just sheer effort — and replacing it with machine-generated streams of text," writes Vox's Constance Grady. Here's her story about the underbelly of online self-publishing.
The phrase “AI” has been poisoned so hard that nobody building serious products should use it. I would love to see more free software projects leverage computer vision or natural language processing, for example, but I would never describe those as “AI” because it’s a brand-destroying phrase at this point.
@danirabbit No joke. At this point, most #marketing copy using the terms "AI" or "artificial intelligence" really just means "uses computer."
For real, my security camera is not using #AI to find humans in the frame. That's basic #ImageProcessing that's been around for years. My #DSLR could accurately find and focus on faces a decade ago. That certainly was not #ArtificialIntelligence!
Most workers would likely agree — it’d be nice to have someone else fill in for them once in a while. And, better yet, get paid for it. London-based model Alexsandrah has experienced a real-life version of this daydream with help from her AI-generated virtual twin that has appeared as a stand-in on a photo shoot. Is this the future of modeling? Here’s what proponents and critics are saying in this report from the Associated Press. https://flip.it/DYjnSC #Tech #Technology #AI #ArtificialIntelligence
The tech industry can’t agree on what open-source AI means. That’s a problem.
Truly opening up an AI model could require access to the trained model, its training data, the code used to preprocess this data, the code governing the training process, the underlying architecture of the model, or a host of other, more subtle details.
The risks around artificial intelligence might be better understood by following the money, not the technology.
This suggests we need to be regulating (better) the economic & market uses of AI, rather than focussing on its technical capabilities.
Indeed, states have a long history of regulating markets, and whatever some economists seem to think, regulation is what has most often saved capitalism from eating itself!
“At first the plan was for the DAO to vote on whether or not to hire a writer & how much to pay them. But although the DAO was fine for making so-called smart contracts, it didn’t have a mechanism for signing regular old-fashioned dumb contracts, or for paying anybody in what crypto people derisively call ‘fiat currency,’ but which you and I call ‘money.’ […] In the end I was offered a contract from Mysterious Entity. I submitted my invoices to and was paid, in dollars, by Mysterious Entity, Inc. The Piper DAO was not a party to the contract.”
“bro they stole your entire game” — latest roundup from Jauwn, the YouTuber on a never-ending quest to find and review an NFT game that’s actually good.
(*To be clear, crypto scams are still chugging right along. Web 3 Is Going Just Great has a steady influx of new posts! They’re just all variations on the same 3 or 4 themes. Even Amy and David’s blogging has occasionally thrown in an AI-scams roundup to fill space.)
Photo library Getty Images has entered into a deal with Nvidia to create AI tools trained on its copyright-protected stock images. Getty's CEO, Craig Peters, talked to the Hollywood Reporter about why he thinks this could be beneficial to creators, how the material created by this system will be labeled, copyright systems, and who gets paid.
Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.
Facing a lack of training data, OpenAI reportedly developed its Whisper audio transcription model to transcribe over a million hours of YouTube videos to train GPT-4. Unsurprisingly, @theverge writes, the training involved tactics that fall into the hazy gray area of AI copyright law. https://flip.it/sqabAF #Tech #AI #ArtificialIntelligence #ChatGPT