#gpt and similar #llm models can be tricked easily. There are multiple examples of this online.
#ai trained on interactions on the internet has led to racism, sexism, and other terrible -isms being developed by the model in the past.
When we reach #gai / #agi, it could do anything. We have no way of knowing what it could do, no matter what anyone says. One thing is for sure: it will know us better than we know ourselves very quickly.
The thing is, nearly everything humans generate is erroneous, apart from art and pure math. Even peer-reviewed papers (PhD theses in particular) are only erroneous to a much lesser degree.
The age-old GIGO principle (garbage in, 🗑 out) still holds. This applies to #LLMs too.
And the same goes for proven theoretical concepts, like VaR (value at risk) for banks. All the rage in the late 1990s, it did not work out really well for the banks when markets became illiquid on an unprecedented...
22/ Some people objected that #LLM tools don't "understand" anything. I agree. I didn't say anything to the contrary and hope I didn't assume anything to the contrary. But I do want to say that software can write usefully accurate summaries without semantic understanding. One reason to think so is that some software already does.
On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"
More on the #Gizmodo #AI debacle: after publishing error-ridden #LLM garbage, which their own editorial team called "fucking dogshit", a G/O Media spokesman said the company would be "derelict" if it did not experiment with AI: "We think the AI trial has been successful."
Anthropic today released their new LLM, Claude 2. According to the release notes, this model includes improvements in coding, math, and reasoning compared with previous versions of their model.
The model was tested on the Bar exam and GRE (reading and writing) exam and scored 76.5% and 90th percentile, respectively.
"Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse."
Google is rolling out its artificial intelligence chatbot Bard in the European Union on Thursday, after resolving concerns raised by the bloc's key privacy regulator, the Irish Data Protection Commission.
The US technology giant in June delayed the release of its competitor to OpenAI's ChatGPT after the regulator said the company had given insufficient information about how its tool respected the EU's privacy rules, the GDPR.
"a combination of a simple compressor like gzip with a k-nearest-neighbor classifier ... outperforms BERT on all five OOD datasets, including four low-resource languages. Our method also excels in the few-shot setting, where labeled data are too scarce to train DNNs effectively."
Longnet: Scaling Transformers to a billion tokens. Really trying to grasp the implications of this paper... "Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence." 🤔 #AI #GenerativeAI #Longnet #Microsoft #LLM https://arxiv.org/abs/2307.02486
Without disclosure of training data, calibration, and training protocol, "quotes" from #LLM / language models are 100% arbitrary. Anyone can claim whatever they want.
That's what makes it so "interesting" when everyone right now crams into slides whatever #ChatGPT has confabulated about God knows what obscure topic.
You might as well say: "Incidentally, I agree with myself at all times." The value of ChatGPT snippets is really no greater than that.