#Noyb files a complaint against #OpenAI over serious violations by its product #ChatGPT of the European General Data Protection Regulation (#GDPR / #DSGVO).
"It seems that with every 'innovation', another group of companies decides that its products don't have to comply with the law," says Maartje de Graaf, data protection lawyer at Noyb.
#OpenAI is facing another privacy complaint in the #European Union. This one, which has been filed by privacy rights nonprofit noyb on behalf of an individual complainant, targets the inability of its #AI chatbot #ChatGPT to correct misinformation it generates about individuals.
There is a whole lot of #business, #AI, #IT, and #chatgpt idiocy to unpack in this article, not least the self-satirizing tone of the clueless author, despite the hopeful clickbait headline. If this is what companies are really thinking, the #cominglaborapocalypse is around the corner. Either that, or there will soon be no one left who knows how the computers or the software actually work.
"This is another variation of the High-Tech Illusion: the belief that software developers do easily automated work. Their principal work is human communication to organize the user's expressions of needs into formal procedure. That work will be necessary no matter how we change the life cycle."
Reid Hoffman, of PayPal and LinkedIn fame, made this self-promotional video: https://www.youtube.com/watch?v=rgD2gmwCS10 — ostensibly to flog some dystopian business horror he has coming out.
'Reid AI' is a GPT from his writings. He claims it as an '#AI clone' of himself.
It performs some trained-seal tricks first before going on to give him generic advice for job seekers who use LinkedIn.
It's pretty alarming that key policy influencers seemingly lack even a basic understanding of the subject matter.
"There has been a shift in the #AI space: some models, like #ChatGPT & #Gemini, have evolved into entire web platforms spanning multiple use cases & access points. Other large language models like #LLaMa or #OLMo, though technically speaking they share a basic architecture, don’t actually fill the same role. They are intended to live in the background as a service or component, not in the foreground as a name brand." https://techcrunch.com/2024/04/19/too-many-models/
I've had occasion to ask an AI about a thing twice lately (a recent online phenomenon, and a book recommendation). Both times I asked both Gemini and ChatGPT, and both times one gave a reasonable if bland answer, and the other (a different one each time) gave a plausible but completely fictional ("hallucinated") answer.
When do we acknowledge that LLMs, and "AI" in general, aren't quite ready to revolutionize the world?
With all the valid concern around #llm and #genai power and water usage, I thought I'd start a blog series on tiny LLMs. Let's see what they can do on real tasks on very power efficient hardware.
When prompted to provide the current top news headlines from specific outlets, ChatGPT returned non-news output 52–54% of the time (almost always in the form of an ‘I’m unable to’-style message). Bard did this 95% of the time.
For ChatGPT, just 8–10% of requests returned headlines that referred to top stories on the outlet’s homepage at the time. This means that when ChatGPT did return news-like output, the headlines provided did not refer to current top news stories most of the time.
Of the remaining requests, around one-third (30%) returned headlines that referred to real, existing stories from the news outlet in question, but they were not among the latest top stories, either because they were old or because they were not at the top of the homepage.
Around 3% of outputs from ChatGPT contained headlines that referred to real stories that could only be found on the website of a different outlet. The misattribution (but not the story itself) could be considered a form of hallucination. A further 3% were so vague and ambiguous that they could not be matched to existing stories. These outputs could also be considered a form of hallucination.
The outputs from ChatGPT are heavily influenced by whether news websites have chosen to block it, and outputs from identical prompts can change over time for reasons that are not clear to users.
The "voice chat" feature in #ChatGPT's iOS app has a lot of potential, but its implementation is uninspired.
There are five voices to choose from, but the choice is global: whatever you pick becomes the voice of GPT-3.5 and GPT-4 across all chats. You can't assign different voices to different chats.
This feature would be even more interesting for custom GPTs, which in theory have distinct purposes and personalities, but all custom GPTs share a sixth voice that can't be changed at all.
When we teach WW1 history and show photographs of men in the trenches, and no one knows whether they are real or Midjourney fakes, we are in deep trouble. When we teach scientific principles through research papers and can't tell whether the text is authentic human writing or response-engine output, we are in deep trouble. Imagine a hundred other contexts and you understand why generative AI is such a massive problem.
I had a corridor chat with two designer friends yesterday about the enshittification of the internet, with me, 20+ years their senior, waxing lyrical about the good old days of the World Wide Web. One of them forwarded me the long-form piece 'Are We Watching The Internet Die?' by Ed Zitron. It confirms everything that worries me about generative AI. Here's hoping there's a way to step back from the edge.
Explore the Advantages of Free ChatGPT to Optimize Your Daily Life
Boost Your Productivity with Free ChatGPT...