The training of GPT-3 emitted an estimated 552 tons of carbon dioxide equivalent — that’s more than three round-trip flights between San Francisco and New York.
Recently, I was asked about the apparent ‘schism’ between those making a lot of noise about fears inspired by fantasies of all-powerful ‘AIs’ going rogue and destroying humanity, and those seeking to illuminate and address actual harms being done in the name of ‘AI’ now and the risks that we see following from increased use of this kind of automation.
I objected strongly to the framing and tried to explain how it was ahistorical. Here is my response in blog form:
@emilymbender Excellent read. I like your style: thorough, direct, and mixed with some humour as well (e.g. "clown car"). This false equivalence habit is a nasty one too. #AI
On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"
Anyway, there's a new bug, so if you have thoughts on #MDN adding #AI stochastic bullshit to what has, up to now, been the premier technical reference for web developers, you could make them heard there https://github.com/mdn/yari/issues/9230
The post also notes that many users were happy with the answers, ignoring that MDN's target audience, people who came looking for help with something they didn't already know, may not immediately recognize that an answer is subtly wrong, or just plausible-looking #AI gibberish.
#Facebook, which has signalled to the financial markets that VR is dead and they're all about #AI now[1], putting "AI in every product", is absolutely going to claim that synthetic text generation machines are "users" of their products, including #InstagramThreads.
You're not even the product anymore; you're not even a person. You're training data. You're the "shit they grow their money in"[3].
AI-Powered Tools for UX Research: https://www.nngroup.com/articles/ai-powered-tools-limitations/
Issues and Limitations: the technology is promising, but many of the "market claims" are just that, claims, and the tools don't actually live up to their marketing brochures. So, for now, play with them, but be cautious and double-check everything. #AI #UserResearch #AIAnalysis
#US #USArmy #AI #GenerativeAI #LLMs #Chatbots: "Large-language models, LLMs for short, are trained on huge swaths of internet data to help artificial intelligence predict and generate human-like responses to user prompts. They are what power generative AI tools such as OpenAI’s ChatGPT and Google’s Bard.
Five of these are being put through their paces as part of a broader series of Defense Department experiments focused on developing data integration and digital platforms across the military. The exercises are run by the Pentagon’s digital and AI office and military top brass, with participation from US allies. The Pentagon won’t say which LLMs are in testing, though Scale AI, a San Francisco-based startup, says its new Donovan product is among the LLM platforms being tested.
The use of LLMs would represent a major shift for the military, where so little is digitized or connected. Currently, making a request for information to a specific part of the military can take several staffers hours or even days to complete, as they jump on phones or rush to make slide decks, Strohmeyer says."
I asked #BingChat (creative) to explain the relevance of the ancient proverb "When elephants fight it is the grass that suffers" in relation to #Zuckerberg positioning his new social media platform, #Threads, in direct competition with #ElonMusk's #Twitter.
It may come as a shock to some, but Mastodon posts can, and probably will, be used to train AI models. That's entirely logical, because Mastodon posts are completely public. #Mastodon #AI
The Ultimate Guide to Embracing Randomness: Unleash the Joy of Serendipity!
Me:...