I'm a PhD biologist and I read @OpenAI's threat preparedness assessment plan for CBRN threats. It appears to be total nonsense designed without any input from a scientist. Here's why: #ai #artificialintelligence #airisk #aisafety
When people fret that A.I.s will achieve superhuman general intelligence and take over the planet, they neglect the physical limits on these systems. This essay by Dan Roberts is a useful reality check. A.I. models are already resource-intensive and will probably top out at GPT-7. Roberts is one of the physicists I feature in my new book about physics, A.I., and neuroscience. #AIrisk #AIsafety #Singularity @danintheory https://www.sequoiacap.com/article/black-holes-perspective/
The US, UK, and China all signed an agreement. That it doesn't say much isn't the point. Many people predicted that no agreement was possible and that an arms race was inevitable; now they have to update. It's no longer obviously true that we have to build it first or China will beat us. #ai #artificialintelligence #airisk #aisafety https://techhub.social/@Techmeme/111358871909216452
Some causes that had a surge in awareness over the past few years have seen attention wane recently with the wars & the greater salience of #airisk. Expect those causes' advocates to try to grab the mic again soon. If you're doing anything that might get you a bit of attention, be on alert for tactics to make your thing about them & their cause. #communication #pr #prtips #media #outreach
Hi all. I've expanded my blogging to @medium ! If that's your favourite platform for #writing, you can follow me there.
I’m starting out with the article I wrote about the #AIrisk involved in the automation of #hatespeech, and how that alone is a serious threat to #democracy.
The article then goes into deeper technical detail on how to future-proof our social discourse platforms against attacks by intelligent bots, which are surely just a matter of time.
A group of prominent #AI and #ML scientists signed a very simple statement calling for the possibility of AI-caused global catastrophe to be given more prominence.
This is part of a broader movement of #AISafety or #AIRisk. I don't disagree with everything this movement has to say; there are real and tangible consequences to the unfettered development of AI systems.
But the focus of this work is on possible futures. Right now, there are people who experience discrimination, poorer outcomes, impeded life chances, and real, material harms because of the technologies already in place.
And I wonder if this focus on possible futures is because the people warning about them don't feel the real and material harms #AI already causes? Because they're predominantly male-identifying. Or white. Or socio-economically advantaged. Or well educated. Or articulate. Or powerful. Or intersectionally, many of these qualities.
It's hard to worry about a possible future when you're living a life of a thousand machine learning-triggered paper cuts in the one that exists already.