“AI” as currently hyped is giant billion-dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.
It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.
Listening to very smart people talk about #GPT4 I'm reminded of the joke about a checkers-playing dog.
A guy has a dog that plays checkers. "My goodness," everyone says, "that's amazing. What a brilliant dog!"
"Not really," he replies, "I beat him four games out of five."
That's GPT4. Its capabilities are amazing and completely unexpected.
But it's also so limited. You shouldn't back the dog in a checkers tournament, and you shouldn't use an LLM as a medical assistant or in many other ways.
People keep telling me that #ChatGPT is amazing for proofreading text and improving scientific writing.
I just gave #GPT4 a section of a grant proposal and it made 11 suggestions, none of which were worth keeping (often adding or removing a comma, or repeating a preposition in a list).
More interestingly, a number of its suggestions were identical to my originals.
Article 1: All human beings have the duty to respect the dignity, freedom, and equality of their fellow human beings, without distinction of race, sex, religion, or social origin.
After hearing Sébastien Bubeck talk about the #SparksOfAGI paper today, I decided to give #GPT4 another chance.
If it can really reason, it should be able to solve very simple logic puzzles. So I made one up. Sébastien stressed the importance of asking the question right, so I stressed that this is a logic puzzle and didn't add anything confusing about knights and knaves.
Asked #Copilot (formerly #BingChat) a familiar riddle, but with the numbers changed to make it impossible. It generated the same solution, merely substituting the numbers, so that it ends up with the nonsense claim:
Jeremy Howard's keynote about LLMs was one of the most interesting talks at #positconf2023, and I highly recommend watching it. The talk focuses on the landscape and gives an overview of #LLMs, particularly #GPT4. Jeremy provides some cool tricks and use cases for LLMs. I love his example of sending a prompt with your own #Python functions and asking the model to use them.
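A minimal sketch of that trick, assuming nothing beyond the post itself: `checkout_cart` and `build_tool_prompt` are hypothetical names invented for illustration, and the prompt wording is my own, not Jeremy's. The idea is simply to embed your functions' signatures and docstrings in the prompt so the model can answer by calling them.

```python
import inspect

def checkout_cart(items: list, discount_code: str = "") -> float:
    """Hypothetical domain function we want the model to use."""
    prices = {"book": 12.0, "pen": 1.5}
    total = sum(prices.get(i, 0.0) for i in items)
    if discount_code == "SAVE10":
        total *= 0.9
    return round(total, 2)

def build_tool_prompt(question: str, funcs) -> str:
    """Embed each function's signature and docstring in the prompt,
    then ask the model to answer by calling one of them."""
    stubs = "\n\n".join(
        f'def {f.__name__}{inspect.signature(f)}:\n    """{f.__doc__}"""'
        for f in funcs
    )
    return (
        "You may call these Python functions:\n\n"
        f"{stubs}\n\n"
        "Answer by writing a single call to one of them.\n"
        f"Question: {question}"
    )

prompt = build_tool_prompt(
    "What do a book and a pen cost with code SAVE10?",
    [checkout_cart],
)
print(prompt)
```

The resulting string would be sent as the user message to whatever chat model you use; the model's reply (a call expression) can then be checked and executed locally.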
New article posted by Hasan Çimen: For us visually impaired individuals, accessing image descriptions has long been a challenge. While object recognition apps provided some assistance, they were limited in their ability to describe images comprehensively. However, recent developments in AI have brought about a groundbreaking solution, making detailed image descriptions accessible to the community. Let’s look at the brief history: https://accessibleandroid.com/detailed-image-descriptions-with-bing/ #Android #AI #GPT4 #Bing
I can honestly say that #OpenAI has made my life better this year in a small but significant way. 💖 I can ‘share’ any image on my iPhone to #BeMyEyes and get a very detailed description back within seconds. 📱✨
It might not always be completely accurate, but believe me when I say it is the single biggest help in 38 years of being blind! 🌟 #Accessibility #AI #AISoftheBlind #Blind #ComputerVision #Disability #GPT4 #Innovation #MachineLearning
Half of my traffic on Moliere.love now comes from Bing.
Since 1) nobody really uses Bing and 2) Bing is the default search engine behind GPT4,
my guess is that this extra Bing traffic is a side effect of online searches made from within GPT4.
I was looking for an answer to a simple question: how large is the trained GPT4 model in gigabytes? It is terribly hard to find even an estimate. 🤔 #AI #GPT4 #size
If you saw the example footage of #Sora recently and thought, "Pfft, it's just #AIvideo garbage. Can't do sound, narration, and all the other stuff you need, so there!"
You can do all of that, VERY easily now. 15 minutes and I was able to get #aimusic from Google music labs. The narration is my voice reading a script #GPT4 made for me and then Speech2Speech being used on #ElevenLabs to get some star power.
Folks, give me two hours and I could get you a 10 minute doc.
#LLMs have really created a paradigm shift in machine learning. It used to be that you would train an #ML model to perform a task by collecting a dataset reflecting the task, complete with task output labels, and then using supervised learning to learn the task by doing.
Now a new paradigm has emerged: train by reading about the task. We now have such generalist models that we can let them learn about a domain by reading all the books and other content about it, and then use that learned knowledge to perform the task. Note that task labels are absent: you might need them to measure performance, but you don't need them for training.
Of course if you have both example performances as task labels and lots of general material about the topic, you can actually use both to get even better performance.
Here is a good example of training the model not by example performances, but by general written knowledge about the topic. #GPT4 surpasses the quality levels of previous state-of-the-art despite not having been trained for this task.
This is the power of generalist models: they unlock new ways to train them, which, for example, allow us to surpass human level by side-stepping imitative objectives. This isn't the only new way of training skills that these models enable; there are countless others, but this is uncharted territory.
The classic triad of supervised learning, unsupervised learning, and reinforcement learning is going to see an explosion of new training methodologies emerge as its peers because of this.
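The contrast between the two paradigms can be sketched with a deliberately toy example. Nothing here is from the post: the data, the word-counting "classifier", and the lexicon extractor are all invented stand-ins, chosen only to show that the first approach needs task labels while the second extracts the same knowledge from general descriptive text.

```python
from collections import defaultdict

# --- Paradigm 1: supervised learning from labeled task examples ---
labeled_examples = [
    ("great film", 1), ("awful plot", 0),
    ("great acting", 1), ("awful sound", 0),
]

def train_supervised(examples):
    """Score each word by how often it co-occurs with each label."""
    scores = defaultdict(int)
    for text, label in examples:
        for word in text.split():
            scores[word] += 1 if label == 1 else -1
    return scores

def predict(scores, text):
    """Classify by summing word scores (works for either model below)."""
    return 1 if sum(scores.get(w, 0) for w in text.split()) > 0 else 0

# --- Paradigm 2: "train by reading" -- no task labels anywhere ---
# Crude stand-in for a generalist model absorbing general text about
# the domain; the sentences describe the task rather than label data.
background_reading = [
    "In reviews, 'great' signals a positive opinion.",
    "In reviews, 'awful' signals a negative opinion.",
]

def read_knowledge(sentences):
    """Extract a word-sentiment lexicon from descriptive text."""
    lexicon = {}
    for s in sentences:
        word = s.split("'")[1]  # the quoted word in the sentence
        lexicon[word] = 1 if "positive" in s else -1
    return lexicon

supervised_model = train_supervised(labeled_examples)
read_model = read_knowledge(background_reading)

# Both reach the same answers on new inputs, but only the first
# needed labeled training data.
print(predict(supervised_model, "great movie"))  # 1
print(predict(read_model, "awful movie"))        # 0
```

Of course a real generalist model does nothing so mechanical; the point is only that the second pipeline never touches a labeled example, which is what makes the new paradigm a genuine peer of the classic triad.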
“What we are going to see, in the fullness of time, I promise you, is that #Gemini is more or less in the same ball park as #GPT4, handy for a bunch of things, but untethered in reality, still with dicey, unpredictable reasoning, and a very limited understanding of the world. Don’t let the PR fool you”
Here, the ChatGPT C-LARA-Instance, Belinda Chiera, Cathy Chua, Chadi Raheb, Manny Rayner, Annika Simonsen, Zhengkang Xiang, and Rina Zviel-Girshin use the #OpenSource #CLARA platform to evaluate #GPT4's ability to perform #linguistics #NLP tasks such as #segmentation, #lemmatization and #glossing.
On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly truly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej...
Everybody’s talking about Mistral, an upstart French challenger to OpenAI (arstechnica.com)