It is absolutely astounding to me that we are still earnestly entertaining the possibility that #ChatGPT and #LLMS more broadly have a role in scientific writing, manuscript review, experimental design, etc.
There is a massive amount of training data relevant to the question below. It's a very easy question if you've been trained on the entire internet.
Question: What teams have never made it to the World Series?
I've been thinking about the new Associated Press guidelines to avoid referring to #AI in ways that could imply humanness, sentience, or intent:
Don't say, "It WANTS you to enter more information," for example.
I've often used that kind of wording for computers in the past.
But more precise wording matters now because it's the first time we've widely had systems that could be mistaken for being human or having sentience, and it's important not to reinforce that idea.
Not sure what those who advocate for the use of ChatGPT in scientific writing have in mind. It is the very act of writing that helps us think about the connections and implications of our results, identify gaps, and devise further experiments and controls.
Any science project that can be written up by a bot from tables of results and associated literature isn’t the kind of science that I’d want to do to begin with.
Can’t imagine completing a manuscript not knowing what comes next, because the writing was done automatically instead of me putting extensive thought into it.
And why would anyone bother to read it if the authors couldn’t be bothered to write it. Might as well put up the tables and figures into an archive online, stamp a DOI on it, and move on.
EDIT: the word "hallucination" is probably out of place and misleading here, as @apodoxus argues in this thread. The anthropomorphic connotations of the word may actually create more misconceptions than they eliminate.
Maybe this is an unpopular take, but why are people getting so hung up on #chatGPT if it's still getting so many things wrong, in ways that can be hard to verify?
Maybe my queries have just been too specific, but I have never actually seen the thing say something halfway useful yet. It all sounds authoritative enough, but it doesn't hold up to the smallest amount of scrutiny. Am I just asking it the wrong things? Does it get better with GPT4? I just don't see it yet. So far, what I see is another Facebook that tricks people into thinking they're smart by parroting something the internet said without fact-checking, just on demand now rather than on the schedule of others. Is it actually useful to folks yet as a way to learn new facts/information, or does it just summarize and generate clickbaity blog posts so far?
Also, #ChatGPT just hallucinates wildly when there are no solutions (or only wrong ones) to the problem on the web, or when the context of the solutions only appears to fit.
However, in some cases (in my tests, and this can differ for everyone) it is definitely faster than dozens of search-engine queries.
Maybe it's just particularly easy to be satisfied with it in software development, though. You typically have good knowledge of a few programming languages.
Hi there, if you don’t want me to hit you, please carry this sign that says “please don’t hit me” with you always. Otherwise, I can’t possibly be held responsible if I hit you. Because it’s in my nature to hit you. I can’t live without hitting people. It’s just who I am and what I do. Thank you for your understanding in this delicate matter.
Let's explore why a system capable of partial driving automation (like FSD Beta), and automated driving systems more broadly, are decidedly not like ChatGPT.
People keep telling me that #ChatGPT is amazing for proofreading text and improving scientific writing.
I just gave #GPT4 a section of a grant proposal and it made 11 suggestions, none of which were worth keeping (often adding or removing a comma, or repeating a preposition in a list).
More interestingly, a number of its suggestions were identical to my originals.
Just realised chatGPT is violating the gov.uk content licence:
You must (where you do any of the above):
acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence;
So I've landed in Nextdoor's "Assistant" experiment, which will offer to completely rewrite your post. It is just amazingly creepy having this thing come up without warning.
OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk. If Sam truly believes in the risks posed by AI, he should consider shutting down OpenAI: power down the data centers, delete the software & models associated with it. I don't get his motivation here. What do you want, man? First you say OpenAI is the best; the next day you say it will pose a significant risk to the human race. So shut down OpenAI & move on with your life. Stop making up bullshit. #ChatGPT
This is the funniest thing I've seen in a while. Someone asks #ChatGPT to rewrite a simple statement (sitting outdoors enjoying a bagel) in a way that cannot possibly cause any offense to anyone.
"GPTs werden zu den No. 1 Google Content-Spammern"
Keine Angst, ich spamme niemanden zu, produziere aber ein tgl. Format, dessen Inhalte komplett durch ChatGPT erstellt werden & mit nem Bulk-Prozess umgesetzt werden.
Zeitlicher Aufwand? Ca. 1h um Content fuer 20 Tage zu produzieren.
Am 17. Mai 2024 sehen wir, ob das klappt oder nicht!
That this question is even asked shows the vast dangers of the premature deployment of these generative AI technologies, and the utter lack of meaningful education of the public about their risks. Disclaimers on the user interfaces are routinely ignored. This is all a disaster waiting to happen.
Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data (www.404media.co)
ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more....
DuckDuckAssistant is back!
AI and Coding.
How reliable is AI like ChatGPT in giving you code that you request?
Sarah Silverman is suing OpenAI and Meta for copyright infringement (www.theverge.com)
Comedian and author Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, is suing OpenAI and Meta, each in a US District Court, over dual claims of copyright infringement.
ChatGPT Can Be Broken by Entering These Strange Words, And Nobody Is Sure Why (www.vice.com)
Reddit usernames like ‘SolidGoldMagikarp’ are somehow causing the chatbot to give bizarre responses.