I am not a native English speaker. I make grammar mistakes, which can make my writing difficult to understand. Instead of nagging colleagues to proofread my writing, I decided to find a tool to correct my mistakes and help me learn along the way.
I evaluated four popular tools to make an informed decision. Here is a tale of Privacy Policies, slapping OpenAI onto products to catch up with the competition, and grammar checkers.
TL;DR: Curious about which language models perform best on legal reasoning tasks? The latest evaluation reveals that OpenAI's GPT-4 takes the lead, followed closely by Google's Gemini Pro.
I've been thinking about program understanding, and about how to encourage #languagemodels to do compositional/verifiable #reasoning on program text ("statically").
If, as some recent literature suggests, transformer-based LMs are no more expressive than regexps, this line of thinking is doomed, but at least it could be a valuable heuristic and complementary to rigorous #formalverification. #machinelearning #nlproc
On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly truly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej...
Very excited to share a substantially updated version of our preprint “Language models show human-like content effects on reasoning tasks!” TL;DR: LMs and humans show strikingly similar patterns in how the content of a logic problem affects their answers. Thread: 1/10 #LanguageModels #lms #AI #cogsci #machinelearning #nlp #nlproc #cognitivescience
I'm taking some time today to test a few new libraries/tools.
These CLI tools for working with LLMs by @simon work like a charm! And they support unix pipes. <3
There's this Google internal document, for example, that points out that the FLOSS community is close to eating Google's and OpenAI's lunch:
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
So here is my question to you:
What are the best examples of useful, small, on-device models already out there?
Everybody’s talking about Mistral, an upstart French challenger to OpenAI (arstechnica.com)
OC NanoLLM - A Python Streamlit app that implements the smallest usable LLM
This post is meant as a small demonstration of both the streamlit and languagemodels packages....