I've noticed that there's a lot of fruitful #ai development around treating #LLMs purely as black boxes and focusing on prompt engineering + the ReAct pattern. Simply forcing the LLM to draw out its thoughts over more text increases its accuracy, and if you also interleave that with input from the user or calls to external services, e.g. Google, you can achieve very interesting results
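A minimal sketch of the ReAct loop described above, assuming a hypothetical `llm` callable and a toy `search` tool; the model is stubbed with canned responses so the control flow runs end to end:

```python
def search(query):
    # Hypothetical external tool (e.g. a Google search wrapper).
    return {"population of france": "about 68 million"}.get(query.lower(), "no result")

def react_loop(question, llm, max_steps=5):
    """Interleave model output and tool Observations until the model answers."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)            # model emits an Action or an Answer
        transcript += step + "\n"
        if step.startswith("Answer:"):
            return step.removeprefix("Answer:").strip()
        if step.startswith("Action: search["):
            query = step[len("Action: search["):-1]
            transcript += f"Observation: {search(query)}\n"
    return None

# Canned stand-in "model": first requests a search, then answers from the observation.
def fake_llm(transcript):
    if "Observation:" in transcript:
        return "Answer: about 68 million"
    return "Action: search[population of France]"

print(react_loop("What is the population of France?", fake_llm))
```

With a real model, `fake_llm` would be replaced by an actual completion call and the transcript would carry the accumulated Thought/Action/Observation text between turns.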
A clear article on how #generativeAI works, and why #chatBots lie: "What should we do if #LLMs aren’t compatible with #privacy legislation?"
"Information is lost as we move from training datasets to models. We cannot look at a [computed statistical relationship between a token and a given context] in a model and understand why it has the value it does because the informing data is not present."
"Humans are biological entities that evolved with bodies that need to operate in physical/social worlds to get things done. Language is a tool that helps people do that. GPT-3 is an artificial software system that predicts the next word."
One question to ask product teams or execs who are awed by LLM integrations:
"What are you trying to achieve that you couldn't solve 6 months ago?"
It's a great starting point to refocus the conversation on the outcomes and not the solution. LLMs might be part of that outcome but they shouldn't be the starting point.
#AI #GenerativeAI #LLMs #ChatGPT #PromptEngineering #Programming #SoftwareDevelopment: "So: ChatGPT proves to be a useful tool, and no doubt a tool that will get better over time. It will make developers who learn how to use it well more effective; 25 to 50% is nothing to sneeze at. But using ChatGPT effectively is definitely a learned skill. It isn’t going to take away anyone’s job. It may be a threat to people whose jobs are about performing a single task repetitively, but that isn’t (and has never been) the way programming works. Programming is about applying skills to solve problems. If a job needs to be done repetitively, you use your skills to write a script and automate the solution. ChatGPT is just another step in this direction: it automates looking up documentation and asking questions on StackOverflow. It will quickly become another essential tool that junior programmers will need to learn and understand. (I wouldn’t be surprised if it’s already being taught in “boot camps.”)"
I think one reason why I like LLMs so much while programming is that most of my career has been spent dealing with people's (including my own) gargantuan broken code that glues A to B.
I'm so used to dealing with code I haven't written that's broken in subtle and not subtle ways that I really don't care if the model gives me something wrong.
Nor do I care about the code itself, it's fucking boring and all it does is move data from a storage device to a network card.
Amidst all the criticism of LLMs and their seemingly-confident falsehoods—which is a serious issue—I don't want us to make the mistake of acting like Quora, Reddit, and the SEO'd morass of search engine results are or were a source of truth.
What were these models trained on? Thinking the internet at large is less false than the output of ChatGPT is a delusion on the same scale as the ones the AI hype machine is putting forth right now.
The #AI Act is a flagship legislative proposal to regulate #ArtificialIntelligence based on its potential to cause harm. The #European Parliament is now inching toward formalising its position on the file, after #EU lawmakers reached a political agreement on Thursday (27 April). #machinelearning #llm #LLMs
Is anyone working on incorporating #FuzzyLogic into #LLMs? There was a brief period in the early 90s where “#Neurofuzzy” aimed for relevance, but nets were still shallow and fuzzy logic fizzled (AI Winter? Used but no longer called AI? Not competitive with existing control system formalities?). My #ShowerThought is that FL feels like it’s differentiable (continuous truth/belief values). If true, might be suited for integration with statistical machine learning. #ML
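To make the #ShowerThought concrete: the product t-norm family gives smooth AND/OR/NOT over continuous truth values in [0, 1], so gradients are well defined and fuzzy rules could in principle plug into gradient-based training. A small illustrative sketch (names are my own, not from any library):

```python
def f_and(a, b):
    # Product t-norm: smooth fuzzy AND
    return a * b

def f_or(a, b):
    # Probabilistic sum (the dual t-conorm): smooth fuzzy OR
    return a + b - a * b

def f_not(a):
    return 1.0 - a

def rule(a, b):
    # "a AND (NOT b)" as a smooth function of its inputs
    return f_and(a, f_not(b))

def grad_a(f, a, b, eps=1e-6):
    # Central finite difference w.r.t. a; smooth everywhere in (0, 1),
    # which is the property that would let fuzzy rules train under SGD.
    return (f(a + eps, b) - f(a - eps, b)) / (2 * eps)

print(rule(0.8, 0.3))          # 0.8 * (1 - 0.3) = 0.56 (up to float error)
print(grad_a(rule, 0.8, 0.3))  # d/da [a * (1 - b)] = 0.7 (up to float error)
```

Note that the min/max (Gödel) t-norm is only piecewise differentiable, so the product family is the more natural fit for statistical ML integration.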
#AI #GenerativeAI #LLMs #Science: "Rapid advances in the capabilities of large language models and the broad accessibility of tools powered by this technology have led to both excitement and concern regarding their use in science. Four experts in artificial intelligence ethics and policy discuss potential risks and call for careful consideration and responsible usage to ensure that good scientific practices and trust in science are not compromised."
Does anyone have experience + feedback with using LLMs on your technical docs as a customer-facing help chatbot? How effective is it? What are the caveats? #ai #llm #llms
"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/