Dear #LazyWeb: What is needed to get Google to show me fun AI suggestions like adding glue to pizza sauce? How do I get the fake-#AI results?
I am not kidding. Most of my searches are on macOS (12 & 14) using Safari and occasionally other browsers (I've got 7 installed...) but I only log into my G accounts on an as-needed basis and because I use a real mail client for email, I almost never need to log in. I wipe cookies on every browser restart.
Modern #AI text generators produce randomized output with no prior planning. They resist being quality-checked by the tools and processes established in the software industry.
Given this, the results are amazing. However, companies are selling the idea that these assistants will do quality checking themselves soon™.
This is mass delusion. But hey, the perks for managers/investors are worthwhile 🤷.
I would not be surprised if LLMs could get us to 99% correctness. Which is still too low for automated processes but plenty good for manual work.
You can have one #LLM check another's work, and it works reasonably well, because LLMs are stronger evaluators and classifiers than truth generators: they are better at telling whether an answer is correct than at giving a correct answer.
LLMs aren't #AGI but they may end up a tool used by a theoretical AGI.
I think one of the biggest fears people have about AI isn't that it falls short of perfection, but that, like us humans, it takes the available information, assumes the most likely outcome, and presents it plausibly.
Given how many #writing contests, anthologies and magazines are currently struggling with a flood of #AI / #LLM -generated spam, have you heard about anyone trying to fight the problem by asking specifically for stories which AI cannot easily write?
Even the best models I've tried cannot easily use #solarpunk themes, symbols and structures - they always come out unnatural.
Are there other specific constraints or formats that could work similarly?
Hello Mastodon, I know that a lot of you discuss the high environmental cost (such as energy use and water use) of AI, and I hope that some of you could reply with authoritative publications/links regarding this problem! I want to try to convince an environmental science colleague. #climatechange #AI #chatgpt #energy #technology #machinelearning #llm
I’d like to trust this story, but it fails to link to its supposed source or provide enough info to find it elsewhere. A few clicks around the site make me think it may well be nothing but an #LLM-composed content farm. https://cosocial.ca/@kgw/112498693958537559
An #LLM with all the statistical and analytic smoke and mirrors of the best of them, but trained solely on the complete corpus of Shakespeare’s writings.
Ask any question and its reply will be in his words.
I'd been writing a post for #weblogpomo2024 about some of the more comical fuck-ups all of these #ai and #llm tools have been spewing. And now I'm fucking furious.
Note: content warning for depression, self-harm, and suicide
There's an economic curse on Large Language Models — the crappiest ones will be the most widely used ones.
The highest-quality models are exponentially more expensive to run, and currently are too slow for instant answers or processing large amounts of data.
Only the older/smaller/cut-down models are cheap enough to run at scale, so the biggest deployments are also the sloppiest ones.
I have an #AI article writing tool that makes about 20 different API calls. Most of them are for generation, but several of them use the #LLM for reasoning tasks. For example, matching each keyword to the article heading it would be most appropriate to write about under, then returning the result as JSON.
I'm only a hobbyist but I'd say a couple of the prompts are pretty complex.
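One of those keyword-to-heading reasoning calls might look roughly like the sketch below. The prompt wording, the function names, and the stubbed model call are all my own assumptions, not the actual tool's code; the point is the shape of the call: ask for JSON only, then parse and sanity-check what comes back.

```python
import json

# Hypothetical sketch of an LLM "reasoning" call that matches keywords
# to article headings and returns JSON. The model call is stubbed out.

HEADINGS = ["Getting Started", "Advanced Tips", "Troubleshooting"]
KEYWORDS = ["install guide", "error messages", "power user tricks"]

def build_prompt(headings, keywords):
    # Ask the model for machine-readable output and nothing else.
    return (
        "Match each keyword to the single most appropriate heading.\n"
        f"Headings: {json.dumps(headings)}\n"
        f"Keywords: {json.dumps(keywords)}\n"
        'Reply with JSON only, e.g. {"keyword": "heading", ...}'
    )

def call_llm(prompt: str) -> str:
    """Stub: a real tool would send the prompt to an LLM API here
    and return the model's raw text response."""
    return json.dumps({
        "install guide": "Getting Started",
        "error messages": "Troubleshooting",
        "power user tricks": "Advanced Tips",
    })

raw = call_llm(build_prompt(HEADINGS, KEYWORDS))
mapping = json.loads(raw)  # parse the model's JSON reply
# Validate before trusting it: every value must be a real heading.
assert set(mapping.values()) <= set(HEADINGS)
print(mapping)
```

Validating the parsed JSON matters in practice, since models occasionally return headings that weren't in the list or wrap the JSON in extra prose.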