I'd been writing a post for #weblogpomo2024 about some of the more comical fuck-ups all of these #ai and #llm tools have been spewing. And now I'm fucking furious.
Note: content warning for depression, self-harm, and suicide
I have an #AI article writing tool that makes about 20 different API calls. Most of them are for generation, but several use the #LLM for reasoning tasks. For example, matching keywords to the article headings under which they'd be most appropriate to write about, then returning the result as JSON.
I'm only a hobbyist but I'd say a couple of the prompts are pretty complex.
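To give a sense of what one of those "reasoning" calls looks like, here's a minimal sketch in Python. The prompt wording, function names, and validation rules are my own illustrative assumptions, not the actual tool's code; the point is that you have to build the prompt and then defensively parse whatever JSON-ish reply the model sends back, because cheaper models love to wrap JSON in prose or invent headings that were never in the list.

```python
import json

def build_matching_prompt(keywords, headings):
    """Build a prompt asking the LLM to map each keyword to a heading, as JSON."""
    return (
        "Assign each keyword to the single article heading it best fits.\n"
        f"Headings: {json.dumps(headings)}\n"
        f"Keywords: {json.dumps(keywords)}\n"
        'Reply with JSON only, e.g. {"keyword": "heading"}.'
    )

def parse_matching_reply(reply_text, keywords, headings):
    """Extract and validate the JSON mapping from the model's reply.

    Rejects replies with no JSON object, and mappings that mention
    keywords or headings not in the original lists (hallucinations).
    """
    start = reply_text.find("{")
    end = reply_text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    mapping = json.loads(reply_text[start : end + 1])
    for kw, heading in mapping.items():
        if kw not in keywords or heading not in headings:
            raise ValueError(f"hallucinated pair: {kw!r} -> {heading!r}")
    return mapping
```

For example, a reply like `Sure! {"cost per seat": "Pricing"}` would parse to `{"cost per seat": "Pricing"}` as long as both the keyword and the heading were actually in the lists you sent.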
There's an economic curse on Large Language Models — the crappiest ones will be the most widely used ones.
The highest-quality models are exponentially more expensive to run, and currently are too slow for instant answers or processing large amounts of data.
Only the older/smaller/cut-down models are cheap enough to run at scale, so the biggest deployments are also the sloppiest ones.
Imagine an #LLM with all the smoke and mirrors of statistical and analytic processing of the best of them, but trained solely on the complete corpus of Shakespeare's writings.
Ask any question and its reply will be in his words.
I’d like to trust this story, but it fails to link to its supposed source or provide enough info to find it otherwise. A few clicks around the site make me think that it may well be nothing but a #LLM-composed content farm. https://cosocial.ca/@kgw/112498693958537559