Yay, I too got my 7-day suspension badge from Stack Overflow for adding an #LLM #AI disclaimer back to my four (4) answers after it was first reverted!
"The decision came after the executives Craig Federighi and John Giannandrea spent weeks testing ChatGPT. The product’s use of generative artificial intelligence, which can write poetry, create computer code and answer complex questions, made Siri look antiquated."
Amazing to me that it took three weeks of ChatGPT to convince Apple that Siri was "antiquated".
Whole bunch of folks have been screaming this from the rooftops for years. 😐
#Commons #AI #GenerativeAI #Chatbots #Copyleft #Google #Search: "Ostrom described how commons can be wisely managed, over very long timescales, by communities that self-governed. Part of her work concerns how users of a commons must have the ability to exclude bad actors from their shared resources.
When that breaks down, commons can fail – because there's always someone who thinks it's fine to shit in the well rather than walk 100 yards to the outhouse.
Enshittification is the process by which control over the internet moved from self-governance by members of the commons to acts of wanton destruction committed by despicable, greedy assholes who shit in the well over and over again.
It's not just the spammers who take advantage of Google's lazy incompetence, either. Take "copyleft trolls," who post images using outdated Creative Commons licenses that allow them to terminate the CC license if a user makes minor errors in attributing the images they use:" https://pluralistic.net/2024/05/09/shitting-in-the-well/#advon
#Robots #Robotics #AI #AITraining: "Roboticists believe that by using new AI techniques, they will achieve something the field has pined after for decades: more capable robots that can move freely through unfamiliar environments and tackle challenges they’ve never seen before.
(...)
But something is slowing that rocket down: lack of access to the types of data used to train robots so they can interact more smoothly with the physical world. It’s far harder to come by than the data used to train the most advanced AI models like GPT—mostly text, images, and videos scraped off the internet. Simulation programs can help robots learn how to interact with places and objects, but the results still tend to fall prey to what’s known as the “sim-to-real gap,” or failures that arise when robots move from the simulation to the real world.
For now, we still need access to physical, real-world data to train robots. That data is relatively scarce and tends to require a lot more time, effort, and expensive equipment to collect. That scarcity is one of the main things currently holding progress in robotics back."
#AI #Robots #KillerRobots #Military: "There's also concern that the systems will become more autonomous over time. As The War Zone's Howard Altman and Oliver Parken describe in their article, "While further details on MARSOC's use of the gun-armed robot dogs remain limited, the fielding of this type of capability is likely inevitable at this point. As AI-enabled drone autonomy becomes increasingly weaponized, just how long a human will stay in the loop, even for kinetic acts, is increasingly debatable, regardless of assurances from some in the military and industry."
While the technology is still in the early stages of testing and evaluation, Q-UGVs do have the potential to provide reconnaissance and security capabilities that reduce risks to human personnel in hazardous environments. But as armed robotic systems continue to evolve, it will be crucial to address ethical concerns and ensure that their use aligns with established policies and international law."
A study that confirms what I’ve been suspecting for a while: fine-tuning a #LLM with new knowledge increases its tendency to hallucinate.
If the new knowledge wasn’t provided in the original training set, then the model has to shift its weights from their previous optimal state to a new state that accommodates both the old and the new knowledge - and that new state may not be optimal for either.
Without a fresh validation round against the full previous cross-validation and test sets, that’s just likely to increase the chances of the model going off on a tangent.
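The point about re-validating can be sketched in a toy example (all names and data here are hypothetical stand-ins, not any real model or benchmark): after fine-tuning, score the model on the *original* eval set as well as the new one, so a regression on previously-learned knowledge is actually caught.

```python
# Toy sketch: detect regressions on the original eval set after fine-tuning.
# "Models" are just dict lookups standing in for real inference calls.

def accuracy(model, dataset):
    """Fraction of (prompt, expected) pairs the model answers correctly."""
    return sum(model(prompt) == expected for prompt, expected in dataset) / len(dataset)

# Hypothetical base model vs. fine-tuned model.
base_model = {"capital of France?": "Paris", "2+2?": "4"}.get
tuned_model = {"capital of France?": "Lyon",      # old fact drifted during fine-tuning
               "2+2?": "4",
               "our new product?": "WidgetX"}.get  # newly injected knowledge

old_eval = [("capital of France?", "Paris"), ("2+2?", "4")]
new_eval = [("our new product?", "WidgetX")]

old_acc_before = accuracy(base_model, old_eval)   # 1.0
old_acc_after = accuracy(tuned_model, old_eval)   # 0.5 -- regression
new_acc_after = accuracy(tuned_model, new_eval)   # 1.0 -- new knowledge learned

if old_acc_after < old_acc_before:
    print(f"Regression on original eval set: {old_acc_before:.2f} -> {old_acc_after:.2f}")
```

Validating only on the new data would report a perfect score here while silently losing previously-correct answers.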
This is anecdotal, but I now personally know two employees of #AI-related companies whose valuations have imploded within the past month, and they are now looking to bail. One company lost 73% of their (publicly traded) value in a month. The other one is a startup whose founder (according to my friend) has “finally snapped”, and the company is now in freefall.
“Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because ‘AI is a surveillance technology.’”
You already know not to take an AI chatbot seriously. But there may be reason to be even more cautious. New research has found that many AI systems have already started to deliberately present human users with false information. Science Alert explains why "AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception.” https://flip.it/ZbnJtj #Science #AI #ArtificialIntelligence #Chatbot #Tech
"Consumer AI is just the new search" anecdote: [1/3]
Casual, non-techy coworkers were talking yesterday about using Excel reports to analyze data, and it turns out two of them use #ChatGPT to figure out how to do things in Excel.
So, before this #AI stuff, if you typed "how do I do X in Excel" into Google, you'd get a bunch of hits, have to wade through the results to find the link that actually matched what you were looking for, and then test whether the solution worked.