Watching all the content producers losing their jobs to ChatGPT is sad, but the thing they keep bringing up is “my clients will take the quality hit if it means cutting writing costs by 99%.” I can’t help but think that quality hit will largely get sorted out in a few months, as people learn better prompting techniques for these LLMs and their wacky, amazing emergent behaviors, and that should really worry all writers. It’s time to reinvent ourselves. Also, there’s going to be a glut of AI-text-to-AI-deepfake-voice podcasts covering every topic you can possibly imagine.
@codefolio One problem is that the latency loop for #chatgpt vs. people is seconds vs. hours/days/weeks. You can iterate with ChatGPT in a way you can’t easily with most people.
@pallenberg The state administration of #SchleswigHolstein will soon be allowed to use #ChatGPT, albeit under strict restrictions. The ITEG-SH makes this use legally possible.
“OpenAI’s new privacy and data protection extends only to customers who use the company’s API services. ‘We may use Content from Services other than our API,’ the company’s updated Terms of Use note. That could include, for example, text that employees enter into the wildly popular chatbot ChatGPT. Amazon reportedly recently warned employees not to share confidential information with ChatGPT for fear that it might show up in answers.”
I’m not against #AI tech in principle. But I do think we need to be quite concerned about who currently leads its development, and we should be actively seeking out and supporting ethical alternatives, just like we’re doing here on the Fediverse.
“The challenge for the open movement is already clear: build an alternative to corporate, closed machine learning systems. And to protect the commons from exploitation by these systems.”
Solving causal #reasoning tasks is a hallmark of intelligence. One recent study [1] categorizes these tasks into covariance-based and logic-based reasoning (screenshot) and examines how #GPT models perform on causal discovery, actual causality, and causal judgments.
Coincidentally, another recent study systematically tests how many books GPT models have memorized [3]. The researchers used a similar approach in the form of a name cloze test (screenshot 1) and found that the models do retain substantial information from these books (GPT4 remembers more than #ChatGPT; screenshot 2).
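The name cloze idea above can be sketched in a few lines: take a passage, mask a character’s name, and ask the model to restore it; accuracy over many such probes estimates how much of a book was memorized. The passage, name, and prompt wording below are illustrative stand-ins, not taken from the study.

```python
def make_name_cloze(passage: str, name: str) -> tuple[str, str]:
    """Replace the first occurrence of `name` with [MASK] and return
    (cloze_prompt, expected_answer). Hypothetical helper for illustration."""
    if name not in passage:
        raise ValueError("name not found in passage")
    cloze = passage.replace(name, "[MASK]", 1)
    prompt = (
        "Fill in the [MASK] with the proper name from the original text. "
        "Answer with the name only.\n\n" + cloze
    )
    return prompt, name

# Example probe (invented passage, not from the paper's benchmark):
prompt, answer = make_name_cloze(
    "It was the best of times for Sydney Carton, and the worst of times.",
    "Sydney Carton",
)
```

A model that answers correctly without being shown the name must be drawing on memorized training text, which is what the study measures.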
ChatGPT is optimized for giving the most predictable, obvious response.
Usually that means it's dull and boring, but I sometimes find it useful: when I end up thinking too divergently about something, I need help finding the obvious solution that's right in front of my face.
✅ 2 PM EDT: The Digital Domain with @nick and Bob returns to its silly normal. The narrator of an HKC favorite book series goes Eleven Labs, and what does Derek's girlfriend have to do with it?
✅ 4 PM: That Party Show! @hannah_renee_11 plays your up-tempo, weekend party favorites and gives you a drink mix recipe, maybe?
✅ 7 PM: The Country Bar and Grill reopens, and @chris is making the drinks and running the jukebox. Country Music!
Still ahead. #ElevenLabs R.C. Bray reads a #ChatGPT story composed by Derek...but what does Derek's girlfriend have to do with it? More silliness with AI and your calls and messages. It's hour 2 of the Digital Domain with @nick and Bob!
At a time when artificial intelligence technology is learning to mimic human behavior, there is growing concern around the world that this rapid development could lead to new challenges, including job losses and increased misinformation.
For more Info---> https://mughalnewsnetwork.blogspot.com/2023/05/fear-of-artificial-intelligence.html
Worrying:
"#ChatGPT Chief Executive Sam #Altman later told reporters that "all the heads are surprisingly on the same page about what needs to be done."
This probably means that oversight and legislation will be less than #BigTech had feared.
VP #Harris said that the private sector has a moral and legal responsibility to ensure the safety and security of...
@santiago@MonaApp @MaJ1@tootapp Btw., this is what happens when I do as #ChatGPT suggests. It fails to mention that this is a #Windows-only method of entering codes. I’m not sure whether #Linux can do that, but an #Apple #Mac can’t, because #macOS has other means of accessing special characters.
Laying off reporters, but starting a 24/7 TV channel, is not a good look for WaPo. A lot of resource suck for a trickle of revenue. Hurts the core product.
「 we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today 」
— semianalysis.com
FWIW if you're using the free "research" version of ChatGPT you're probably still getting conversations mixed up with other users. At 6am this morning I started a new thread and gave it a list of data points. Then asked it to start summarizing data as topics. The result was based on a completely different list of data points and didn't match any previous threads in my account. #ChatGPT https://apnews.com/article/chatgpt-openai-data-privacy-italy-b9ab3d12f2b2cfe493237fd2b9675e21
What are you staring at?
Move on, nothing to see here!
While humans are very gullible when it comes to language, it's easier to see how little "understanding" these LLMs have of any concept at all through vision: we tolerate far fewer visual errors than language errors, and the AI scammers know it.
The best use for #ChatGPT I've found is to have it write these German notes for me based on an English prompt. I hate typing out all of these set German phrases (Floskeln). By the looks of this it can even generate those DABs.
One question I have though is: Can Germans differentiate between ChatGPT generated German and bureaucratic German? It feels like there is a very thin line between the two.
We are just at the beginning of the AI era, but we can already see that there are different ways to work with it. The tool determines the result, here just as with other services.
Chatbots are trained on astronomical amounts of data taken from the internet. Operating in a way akin to predictive text, they build a model to predict the likeliest word or sentence to come after the user’s prompt. This can result in factual errors, but the plausible nature of the responses can trick users into thinking a response is 100% correct.
Teaching critical thinking and healthy skepticism in schools, colleges, and universities is more critical now than ever before. The onslaughts are coming, and discerning fact from fiction (or worse, “directional nonsense”) will become an essential skill for an informed human being.
Don't think I'm putting our UX team out of a job yet, but I found a good way to communicate about a design concept via #ChatGPT using ASCII https://shareg.pt/HaR1Pn9