So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated separately from other things.
I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for.
A final kind of risk that might not be adequately handled by existing frameworks is the risk that widely available media-synthesis machines pose to our information ecosystems.
Here, I keep hoping for some way to set up accountability: what if #OpenAI were actually accountable for everything #ChatGPT outputs? (And #Google for #Bard and #Microsoft for #BingGPT?)
Maybe we already have what we need, maybe there's something to add.
Imagine in a few years being able to say to chatgpt8:
“Please promote my product on the fediverse by registering 100,000 accounts over the course of 12 weeks on at least 500 different instances, weighted by instance size. These accounts should be conversational and engaging with other members and should not be detectable as bots. 10% of these bots should express skepticism in my product, and the remaining bots should engage them in a public discourse to correct their misunderstanding. Monitor the sentiment of people discussing my product and develop an optimal strategy to maximize that sentiment.”
@jerry "should not be detectable as bots" seems to be the rub. Already AI companies are offering products to detect AI generated content (https://hivemoderation.com/ai-generated-content-detection). Seems like we are heading for another arms race with big tech selling bullets to both sides.
Spin me up an array of websites dedicated to media and independent journalism, all of which publish articles on particular topics, referring to each other to create an impression of rigour and authenticity.
Complement this with dozens of social media accounts which engage in debate around these topics and link to these sites.
Make sure that the overall impression given is that x is true and y is false.
“I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same.”—addressing Stephen Wolfram.
"Here’s a simple tip: DO NOT use #AI on any work-related project without checking your company’s policy. Even if your company has no policy, think twice, or even three times, before you put anything work-related into an AI.
You don’t want to become infamous for triggering the privacy fiasco that spurs your company into creating such a policy."
At what point, while dreaming up utopian futures where robots perform all the menial hard labor for no money, leaving humanity to pursue meaningful lives of leisure writing music and making art, did my parents' generation fuck up and instead create the opposite?
@gimulnautti, I agree with your assumptions, though I doubt the sanity part. I think for most it is about catching up to #ChatGPT, despite all the liability issues, including copyright.
I also strongly believe that human and machine (#GAI) origins must be easily discernible, in a certified way.
That said, I am very much against a #TransparentUser with regard to humans. Authoritarian...
Danforth never fully recovered from looking over the edge of the crater the explosion had left behind. One glimpse into the gaping abyss where the activists had blown up InfiniteAI’s headquarters, and he went insane. “It’s monkeys and typewriters all the way down”, he whispered, clenching his teeth in terror. #ai #chatgpt #llm
I have to say I am not surprised about this - I personally have heard some co-workers say some very stupid things about using #ChatGPT to do something. I feel like I need to change my career path to security - people really do not think before they start dumping private data into things. #Privacy #infosec
Feels like it is time for a quick reminder by the great Albert Einstein: “Everybody should be ashamed who uses the wonders of science and engineering without thinking and having mentally realized not more of it than a cow realizes of the botany of the plants which it eats with pleasure.”
I am not sure, but #ChatGPT may be intrinsically #sexist. A thread. 1 of x.
Inspired by the film "I Am Mother", I instructed ChatGPT that it would be playing the advanced AI model MOM, which should raise children after an apocalypse. I explained to the assistant that MOM was created with all the knowledge that humanity has on raising kids and that it was bound to the UNCRC. MOM should report to DAD (the AI responsible for MOM's bunker).
No idea what is supposed to be so innovative about #chatgpt. For erratic disinformation bullshit, can't you just consult the nearest right-wing politician, as before?
Stanford Engineering today released a seminar about Transformers 🚀. The seminar, CS25, is run by Div Garg, Steven Feng, and Rylan Schaeffer and focuses on the following topics:
✅ How transformers work
✅ Types of transformers
✅ Applications of transformers in the fields of ML, NLP, CV, biology, etc.
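The first topic, “how transformers work”, boils down to scaled dot-product attention. A minimal NumPy sketch of that core operation (the function name, shapes, and toy data are my own illustration, not taken from the seminar):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

# Toy example: 3 tokens, head dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per token
```

Each output row is a convex combination of the value rows, with mixing weights determined by how well that token's query matches each key.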
“I find it fascinating that novelists galore have written for decades
about scenarios that might occur after a "singularity" in which
superintelligent machines exist. But as far as I know, not a single
novelist has realized that such a singularity would almost surely
be preceded by a world in which machines are 0.01% intelligent
(say), and in which millions of real people would be able to interact
with them freely at essentially no cost.” — Don Knuth, professor emeritus of CS at Stanford, Turing Award winner, the ‘abbot’ of algorithms, on playing with #ChatGPT. #AI #Chatbot