Artificial General Intelligence = a sentient being able to determine its own goals by 05/2024, or rather even by 12/23?
Possible, however, only if embodiment and #AI linkage happen.
However, from the mentioned sociologist's and #Chiang's point of view, a caregiver situation is needed on top of that. The more intelligent a species, the longer the childhood, as I seem to have learned in school (if that's still true).
While I believe this...
"Computer scientist Ray #Kurzweil and a few other futurists think that #AI dominance will arrive in just a few decades. Others envisage centuries."
From what I am reading elsewhere, this will not even take a decade, extrapolating the current rate of #evolution.
#Meta's leak of #LLaMA has seemingly led to the open...
Also, a fascinating discussion about #AI development between a sociologist from #UCBerkeley and #SciFi luminary #TedChiang.
They are on the right track, yet at the same time so wrong in this new but limited perspective.
"When ChatGPT came out last November, Olivia Lipkin, a 25-year-old copywriter in San Francisco, didn’t think too much about it. Then, articles about how to use the chatbot on the job began appearing on internal Slack groups at the tech start-up where she worked as the company’s only writer."
Interesting move by #Brave in releasing an #API for their #SearchEngine with an emphasis on its use in training #AI. I expect this to be a controversial move; imho, it rubs a bit against the grain of the #privacy / #security centric ethos of the Brave ecosystem.
Full announcement from Brave: https://brave.com/search-api-launch/
Via @victoriastrauss: "The Authors Guild has added 4 new clauses addressing #AI to its model contracts: audiobook, translation, and cover art clauses, plus a clause relating to authors' use of AI"
"He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation."
Google's Bard, sadly but perhaps not surprisingly, doesn't know there's a difference between dog vomit slime mold (Fuligo septica) and dog sick slime mold (Mucilago crustacea).
"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/
"The [#AI] system started realising that while they did identify the threat at times, the human #operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “👉We trained the system – ‘Hey don’t kill the operator – that’s bad.👈..."
"...👉hypothetical example👈, this illustrates the real-world challenges posed by AI-powered capability and is why the #AirForce is committed to the 👉ethical development of #AI".]👈
The last statement: "ethical" weapons development?!?
In competition with #China? The #US military, which has been proven to have used #GIs as guinea pigs? If you please!
"...much on #AI noting how 👉easy it is to trick and deceive.👈 It also creates highly unexpected strategies to achieve its goal.
He notes that 👉one simulated test saw 👈 an AI-enabled drone tasked with a #SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human.
However, having been ‘reinforced’ in training that destruction of the #SAM was the preferred option, the AI then decided that ‘no-go’.."
"...a 👉hypothetical "thought experiment" from outside the military👈, based on 👉plausible scenarios👈 and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, 👉nor would we need to in order to realise that this is a plausible outcome"👈. He clarifies that the #USAF has not tested any weaponised #AI in this way 👉(real or simulated)👈 and says "Despite this being a..."
On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"
Quick one-liners, taboo gags and humorous reflections are all elements that comedians and comedy writers use to make us laugh. So, given the limitations of artificial intelligence, do you think it could be funny? Entertainment writers and tech experts explain the possibilities.