Here, the ChatGPT C-LARA-Instance, Belinda Chiera, Cathy Chua, Chadi Raheb, Manny Rayner, Annika Simonsen, Zhengkang Xiang, and Rina Zviel-Girshin use the #OpenSource #CLARA platform to evaluate #GPT4's ability to perform #linguistics #NLP tasks such as #segmentation, #lemmatization, and #glossing.
Following much optimism regarding #GPT4's capabilities, recent studies highlight its limited effect on reply time and the potential risks associated with using #AI to draft replies to patient messages.
A study conducted by the University of California San Diego School of Medicine showed an increase in response length and reading time, and no effect on reply time, when using #GenAI.
I think I'll settle on paying for Anthropic Claude 3 via their web interface (I'll check out the API access at some point too), and use PAYG API credits via Drafts for access to GPT-4. The GPT-4 selector in the API currently redirects to gpt-4-turbo.
ChatGPT from OpenAI is a service; it's not necessarily the same as the model (GPT-4) that it uses in the background. OpenAI adds elements like the code interpreter, which make it perform (much) better than models without such features. Regardless, OpenAI faces some good competition from the Llama3 models; I hope it will stimulate them to release GPT-5 quickly. #AI #Llama3 #opensource #GPT4 #GPT5 #ChatGPT
The score of Llama3 70B on the LMSYS leaderboard is impressive, although it's also clear that the latest GPT-4 is still a lot better. However, Llama3 is open source and freely available, and a larger version (400B parameters) is on the way that will be closer to GPT4 in performance on the various benchmarks. https://chat.lmsys.org/?leaderboard #AI #GPT4 #LMSYS #Leaderboard #Llama3 #opensource
I hope passkeys aren't affected by this, just as password managers like @keepassxc and @bitwarden, combined with 2FA, already provide greater protection against the AI.
»GPT-4 can exploit known security vulnerabilities on its own:
Researchers have found that, given only the associated vulnerability descriptions, GPT-4 can successfully exploit 13 of 15 security flaws.«
“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.
It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.
Asked #Copilot (formerly #BingChat) a familiar riddle, but with the numbers changed to make it impossible. It generated the same solution with the numbers substituted, ending up with this nonsense claim:
Many posts about what GPT4 cannot do occasionally obscure just how good it is at knowledge questions on complex topics. Even keeping its reliability and the need for verification in mind, it adds a lot of value there compared to Google Search, particularly for pure text questions: explanations of certain concepts, theories, frameworks, etc., in any science you can think of. #AI #GPT4
#LLMs have really created a paradigm shift in machine learning. It used to be that you would train an #ML model to perform a task by collecting a dataset reflecting that task, complete with output labels, and then using supervised learning to learn the task by doing.
Now a new paradigm has emerged: Train by reading about the task. We have such generalist models that we can let them learn about the domain by reading all the books and other content about it, and then utilize that learned knowledge to perform the task. Note that task labels are missing. You might need those to measure the performance but you don't need those for training.
Of course if you have both example performances as task labels and lots of general material about the topic, you can actually use both to get even better performance.
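The contrast between the two paradigms can be sketched in a few lines of toy Python. Everything here is a stand-in for illustration: the word-count classifier is a deliberately tiny supervised model, and `llm` is a placeholder for a GPT-4-style API call, not a real library.

```python
from collections import Counter

# Paradigm 1: supervised learning — the task is learned from labeled examples.
labeled = [("great product", "pos"), ("love it", "pos"),
           ("terrible service", "neg"), ("awful experience", "neg")]

def train_supervised(examples):
    """Toy word-count classifier trained on (text, label) pairs."""
    counts = {}
    for text, label in examples:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1

    def classify(text):
        score = Counter()
        for word in text.split():
            score.update(counts.get(word, Counter()))
        return score.most_common(1)[0][0] if score else None

    return classify

# Paradigm 2: "train by reading" — a generalist model is given instructions
# at inference time; no task labels were needed to build the capability.
def classify_zero_shot(text, llm):
    # `llm` is a placeholder for a call to a GPT-4-style model
    prompt = f"Classify the sentiment of '{text}' as pos or neg. One word only."
    return llm(prompt)
```

In the first paradigm the labels are the training signal; in the second they would only be needed for evaluation, which is exactly the shift the post describes.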
Here is a good example of training the model not by example performances, but by general written knowledge about the topic. #GPT4 surpasses the quality levels of previous state-of-the-art despite not having been trained for this task.
This is the power of generalist models: they unlock new ways to train them, which, for example, allow us to surpass human level by side-stepping imitative objectives. This isn't the only new way of training these models enable; there are countless others, but this is uncharted territory.
The classic triad of supervised learning, unsupervised learning, and reinforcement learning is going to see an explosion of new training methodologies emerge as its peers because of this.
Interesting: some empirical research on how GPT4/ChatGPT performs at summarizing. Still, even a low rate of errors can be unacceptable in some contexts. As noted, "Life-critical medical decisions should remain based on full, critical, and thoughtful evaluation of the full text of research articles in context with clinical guidelines." https://www.annfammed.org/content/22/2/113 #AI #ChatGPT #GPT4 #summary #medical
Claude 3 is officially at the top of the leaderboard. Although it's just one leaderboard/benchmark, and added value always depends on use and context, it's still the end of GPT4's total dominance (until GPT5 arrives, probably). Also interesting is the performance of the Claude 3 Haiku model, which is relatively small/cheap. https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard #leaderboard #Claude3 #GPT4 #AI
Jensen Huang: OpenAI's latest model has 1.8 trillion parameters and required 30 billion quadrillion FLOPS to train.
A billion quadrillion is somewhat hard to grasp... 😂 #AI #Nvidia #GPT4
(continued from previous post)... a Blackwell GPU will cost $30,000 (minimum), so training a GPT4 model with 2000 GPUs costs approx. $60 million? (in 90 days, and that's a minimum because there are other costs too) #training #GPT4 #GPU #Nvidia #Blackwell #AI #LLM
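The arithmetic in these two posts can be sanity-checked with a quick back-of-envelope script. All inputs are the keynote's round figures plus the post's assumed $30,000 GPU price, not measured values:

```python
# Back-of-envelope check of the keynote figures (all values rounded/assumed).
flops_total = 30e9 * 1e15        # "30 billion quadrillion" FLOPs = 3e25
gpus = 2000                      # keynote figure
price_per_gpu = 30_000           # assumed minimum Blackwell price, USD
days = 90                        # keynote training duration

hardware_cost = gpus * price_per_gpu             # hardware-only estimate
seconds = days * 24 * 3600
per_gpu_throughput = flops_total / (gpus * seconds)  # implied sustained FLOP/s

print(f"hardware: ${hardware_cost:,}")
print(f"implied sustained throughput per GPU: {per_gpu_throughput:.2e} FLOP/s")
```

This gives the $60 million hardware figure and implies roughly 2 PFLOP/s sustained per GPU, which is at least in a plausible range for Blackwell-class accelerators at low precision, so the estimate is internally consistent even before adding power, networking, and staffing costs.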
A pity, since I really like ASCII art, but this doesn't surprise me. It's actually well defined, and converting #images to ASCII hasn't been an #art for a long time.
«#AI security – how ASCII art can trick #GPT4 and Gemini:
Various safety mechanisms are supposed to prevent #ChatGPT from giving you instructions for building a #bomb. Now #security researchers have found that these can be bypassed – with #ASCII art.»