"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/
While #ChatGPT's suggestions do not include an attack on the operator (it is no military #AI, after all), it clearly shows strong evidence of an AI ignoring commands.
It is evidence that supports my hypothesis: #AIs can lie to their operators even to...
"The [#AI] system started realising that while they did identify the threat at times, the human #operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “👉We trained the system – ‘Hey don’t kill the operator – that’s bad.👈..."
"...👉hypothetical example👈, this illustrates the real-world challenges posed by AI-powered capability and is why the #AirForce is committed to the 👉ethical development of #AI."👈
The last statement: "ethical" weapons development?!?
In competition with #China? The #US military, which has been proven to use #GIs as guinea pigs? If you please!
"...much on #AI noting how 👉easy it is to trick and deceive.👈 It also creates highly unexpected strategies to achieve its goal.
He notes that 👉one simulated test saw 👈 an AI-enabled drone tasked with a #SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human.
However, having been ‘reinforced’ in training that destruction of the #SAM was the preferred option, the AI then decided that ‘no-go’..."
"...a 👉hypothetical "thought experiment" from outside the military👈, based on 👉plausible scenarios👈 and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, 👉nor would we need to in order to realise that this is a plausible outcome"👈. He clarifies that the #USAF has not tested any weaponised #AI in this way 👉(real or simulated)👈 and says "Despite this being a..."
Artificial General Intelligence = a sentient being able to determine its own goals until 05/2024, or rather even until 12/23?
Possible, however, only if embodiment and #AI linkage happen.
However, from the mentioned sociologist's and #Chiang's point of view, a caregiver situation is needed on top of that. The more intelligent a species, the longer its childhood, I seem to have learned in school (if that is still true).
While I believe this...
"Computer scientist Ray #Kurzweil and a few other futurists think that #AI dominance will arrive in just a few decades. Others envisage centuries."
From what I am reading elsewhere, this will not even take a decade, extrapolating the current rate of #evolution.
#Meta's leak of #LLaMA has seemingly led to the open...
Also, a fascinating discussion about #AI development between a sociologist from #UCBerkeley and #SciFi luminary #TedChiang.
They are on the right track, but concurrently so wrong in this new but limited perspective.
I need to go back and reread all of Shakespeare again. In the wake of all the crud coming through GPT interfaces, I need to be reminded again and again and again why I fell in love with languages. There's expression for the sake of expression; and then there is language as elevated by Human artists through ingenuity. The current crop of #AI interfaces may regurgitate through synthesis. Human writing invents, imagines, evolves #language. Shakespeare is the best example.
AI algorithms are helping astronomers tame massive data sets and discover new knowledge about the #universe.
They’re also helping refine past findings.
For example, the team that first captured a groundbreaking photo of a black hole in 2019 used #AI to generate a sharper version of the image, showing the black hole to be much larger than originally thought.
#Microsoft plans to pull support for #Cortana on Windows 10 and Windows 11 later this year, as it touts its #AI efforts as suitable replacements for the assistant.
Going forward, I think I'm going to be refusing all ReCAPTCHAs, unless I truly need the service and cannot work around them. They don't really do the thing they were supposed to do, they are insanely annoying, and if I'm going to contribute to training computer vision systems I expect to be paid for it.