Modern #AI text generators create randomized output with no prior planning, and they resist being quality-checked by the tools and processes established in the software industry.
Given this, the results are amazing. However, companies are selling the idea that these assistants will do quality checking themselves soon™.
This is mass delusion. But hey, the perks for managers/investors are worthwhile 🤷.
I think one of the biggest fears people have about AI is that it isn't as perfect as assumed, but that, like us humans, it takes the given information, assumes the most likely outcome, and presents it plausibly.
Hello Mastodon, I know that a lot of you discuss the high environmental cost (such as energy use and water use) of AI, and I hope that some of you could reply with authoritative publications/links regarding this problem! I want to try to convince an environmental science colleague. #climatechange #AI #chatgpt #energy #technology #machinelearning #llm
I’d like to trust this story, but it fails to link to its supposed source or provide enough info to find it elsewhere. A few clicks around the site make me think that it may well be nothing but a #LLM-composed content farm. https://cosocial.ca/@kgw/112498693958537559
An #LLM with all the statistical and analytic smoke and mirrors of the best of them, but trained solely on the complete corpus of Shakespeare’s writings.
Ask any question and its reply will be in his words.
There's an economic curse on Large Language Models — the crappiest ones will be the most widely used ones.
The highest-quality models are exponentially more expensive to run, and are currently too slow for instant answers or for processing large amounts of data.
Only the older/smaller/cut-down models are cheap enough to run at scale, so the biggest deployments are also the sloppiest ones.
I have an #AI article writing tool that makes about 20 different API calls. Most of them are for generation, but several of them use the #LLM for reasoning tasks. For example, matching keywords to the article headings they would be most appropriate to write about under, then returning the result as JSON.
I'm only a hobbyist but I'd say a couple of the prompts are pretty complex.
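A minimal sketch of what one of those reasoning calls might look like. The `call_llm` function, the prompt wording, and the sample data are all hypothetical stand-ins (the real tool's API calls aren't shown in the post); the point is the pattern of asking for JSON and parsing it defensively:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; stubbed here
    # so the sketch runs without network access.
    return '{"Benefits": ["seo", "traffic"], "Pitfalls": ["spam"]}'

def match_keywords_to_headings(keywords, headings):
    prompt = (
        "Assign each keyword to the article heading it fits best.\n"
        f"Keywords: {', '.join(keywords)}\n"
        f"Headings: {', '.join(headings)}\n"
        "Reply with a JSON object mapping each heading to a list of keywords."
    )
    raw = call_llm(prompt)
    mapping = json.loads(raw)  # LLM output is untrusted: parse, don't eval
    # Keep only the headings we actually asked about, dropping any
    # hallucinated keys the model might have added.
    return {h: mapping.get(h, []) for h in headings}

result = match_keywords_to_headings(
    ["seo", "traffic", "spam"], ["Benefits", "Pitfalls"]
)
```

In practice you'd also want retry logic for malformed JSON, which is a common failure mode for this kind of prompt.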
I'd been writing a post for #weblogpomo2024 talking about some of the more comical fuck-ups all of these #ai and #llm tools have been spewing. And now I'm fucking furious.
Note: content warning for depression, self-harm, and suicide
Given how many #writing contests, anthologies and magazines are currently struggling with a flood of #AI-/#LLM-generated spam, have you heard about anyone trying to fight the problem by asking specifically for stories which AI cannot easily write?
Even the best models I've tried cannot easily use #solarpunk themes, symbols and structures - they always come out unnatural.
Are there any specific limitations, formats that can work similarly?
i’m very excited about the interpretability work that #anthropic has been doing with #LLMs.
in this paper, they used classical machine learning algorithms to discover concepts. if a concept like “golden gate bridge” is present in the text, then they discover the associated pattern of neuron activations.
this means that you can monitor LLM responses for concepts and behaviors, like “illicit behavior” or “fart jokes”
so now we have a way to interpret and query #LLM responses in a structured format, as well as a control mechanism for driving LLM behavior
this is great news
Bruce Schneier wrote that prompt injection boils down to the fact that data and code pass through the same channel. with this interpretability work, we’re seeing the beginnings of a control channel separated from the data channel — you can control LLM behavior in a way that you can’t override via the data channel
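as a toy illustration of the idea (my own sketch, not Anthropic's actual method, which learns features with sparse autoencoders): once you have a direction in activation space associated with a concept, detecting it is just a dot product with the hidden state, and steering is adding a multiple of that direction back in — a control channel separate from the prompt. the vectors below are made-up 4-dimensional examples:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical learned "concept direction" in a 4-dim activation space.
concept_direction = [0.5, -0.5, 0.5, 0.5]

def concept_activation(hidden_state):
    # Monitoring: how strongly the concept fires on this hidden state.
    return dot(hidden_state, concept_direction)

def steer(hidden_state, strength):
    # Control channel: push the state along the concept direction,
    # independently of whatever text came in through the prompt.
    return [h + strength * c for h, c in zip(hidden_state, concept_direction)]

state = [1.0, 0.0, 1.0, 0.0]
score = concept_activation(state)
steered = steer(state, 2.0)
```

real models have tens of thousands of dimensions and many overlapping features, but the monitor/steer mechanics are this simple at the core.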
At PyCon Italia 2024 Ines Montani is presenting her talk "The AI Revolution Will Not Be Monopolized: How open-source beats economies of scale, even for LLMs" 🐍