Effective altruists and longtermists have infiltrated the UK’s AI policy discussions, leaving the real harms unaddressed while the focus is placed on “existential risks” fueled by the obsessions of rich tech funders.
It’s not really useful to talk about #skynet and robot overlords when discussing the dangers of #artificialintelligence because that’s not what we have to worry about, at least not for a while yet.
It is much more useful to talk about the concept of #policymurder: when systems codified by law and regulation squeeze people into destitution and death. See also "social murder."
AI has long played a critical role @Flipboard. Today we've updated our Community Guidelines to give publishers guidance on using artificial intelligence in content creation, and to protect the quality of our product.
When college administrator Lance Eaton created a working spreadsheet about the generative AI policies adopted by universities last spring, it was mostly filled with entries about how to ban tools like ChatGPT.
"[...] as companies like Coca-Cola start making huge investments to use generative AI to sell more products, it’s becoming all too clear that this new tech will be used in the same ways as the last generation of digital tools: that what begins with lofty promises about spreading freedom and democracy ends up micro targeting ads at us so that we buy more useless, carbon-spewing stuff."
Microsoft has removed an article that advised tourists to visit the "beautiful" Ottawa Food Bank on an empty stomach, after facing ridicule about the company's reliance on artificial intelligence for news.
The AI-generated code absolutely does not care about #unicode at all, so it panics when you give it a unicode character that doesn't have a char boundary at byte index 1.
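For readers unfamiliar with the failure mode: in Rust, indexing a string by byte offset panics at runtime if the offset falls in the middle of a multi-byte UTF-8 character. A minimal sketch (illustrative code, not the post's actual generated code):

```rust
fn main() {
    // 'é' is two bytes in UTF-8 (0xC3 0xA9), so byte index 1 falls
    // mid-character and is NOT a char boundary.
    let s = "éclair";

    // Naive byte slicing, the kind of code the post complains about:
    // let first = &s[..1]; // panics: "byte index 1 is not a char boundary"

    // Safe alternative: iterate over chars instead of bytes.
    let first = s.chars().next();
    println!("{:?}", first); // Some('é')

    // Or check the boundary explicitly before slicing.
    if s.is_char_boundary(1) {
        println!("{}", &s[..1]);
    } else {
        println!("byte index 1 is not a char boundary");
    }
}
```

The fix is to use `chars()`, `char_indices()`, or `is_char_boundary()` rather than raw byte offsets whenever the input may contain non-ASCII text.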
There 'will come a time where no job is needed'; #AI is 'the most disruptive force in history'. In the future, 'you can have a job if you want a job . . . but AI will be able to do everything'.
As with so many #techbros, Musk has a hard time imagining jobs that are not #digital or mechanical... here is a man who must be sociopathic in his utter lack of consideration for the value & necessity of human contact in so many socio-economic activities.
There are many things to worry about in the accelerating ascendance of #artificialintelligence, but its #energy (and cooling) requirements have so far been treated as a merely technical concern... now, as John Naughton points out, we need to be clear about quite how much energy & resources the new generations of #AI are likely to consume.
If you thought this was all 'virtual', think again, there is a very real material element to the AI economy... one we may not want to 'afford'!
Is there anyone reading this who could give a talk on "Math(s) and Artificial Intelligence"?
It would need to be aimed at a general audience, so while the material itself doesn't need to be deep, the person giving the talk would need to have some first-hand experience of the actual math(s) that's involved.
Anyone?
If you're comfortable doing so, please boost for reach ... Mastodon-the-platform relies on networking effects.
Reuven Lerner: “I teach courses in Python and Pandas. Never mind that the first is a programming language and the second is a library for data analysis in Python. Meta’s AI system […] assumed that I was talking about the animals (not the technology), and banned me. The appeal that I asked for wasn’t reviewed by a human, but was reviewed by another bot, which (not surprisingly) made a similar assessment.” https://lerner.co.il/2023/10/19/im-banned-for-life-from-advertising-on-meta-because-i-teach-python/
Why is it that natural forms of intelligence (advanced forms of problem-solving in metabolisms, organisms, ecosystems) developed over billions of years hardly gain any attention, while so many people are enthralled by digital faux intelligence pushed by shady characters and companies with control and money-making goals? It's probably one of the most narrow-minded fascinations in the world, especially as we destroy naturally intelligent systems on a daily basis. #artificialintelligence #nature
We need to focus on the AI harms that already exist.
Fears about potential future existential risk are blinding us to the fact AI systems are already hurting people here and now.
Source: MIT Technology Review
October 30, 2023
This is an excerpt from Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini, published on October 31 by Random House. It has been lightly edited.
The term “x-risk” is used as a shorthand for the hypothetical existential risk posed by AI. While my research supports the idea that AI systems should not be integrated into weapons systems because of the lethal dangers, this isn’t because I believe AI systems by themselves pose an existential risk as superintelligent agents.
AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking systems can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for them to have fatal outcomes for individual lives. Existing AI systems that cause demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real.
One problem with minimizing existing AI harms by saying hypothetical existential harms are more important is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity.
I am not opposed to preventing the creation of fatal AI systems. Governments concerned with lethal use of AI can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.
Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget pernicious ways our societies perpetuate structural violence. The Norwegian sociologist Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to health care, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.
Given what my “Gender Shades” research revealed about algorithmic bias from some of the leading tech companies in the world, my concern is about the immediate problems and emerging vulnerabilities with AI and whether we could address them in ways that would also help create a future where the burdens of AI did not fall disproportionately on the marginalized and vulnerable. AI systems with subpar intelligence that lead to false arrests or wrong diagnoses need to be addressed now.
When I think of x-risk, I think of the people being harmed now and those who are at risk of harm from AI systems. I think about the risk and reality of being “excoded.” You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant-screening algorithm denies you access to housing. All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.
This is why my research cannot be confined just to industry insiders, AI researchers, or even well-meaning influencers. Yes, academic conferences are important venues. For many academics, presenting published papers is the capstone of a specific research exploration. For me, presenting “Gender Shades” at New York University was a launching pad. I felt motivated to put my research into action—beyond talking shop with AI practitioners, beyond the academic presentations, beyond private dinners. Reaching academics and industry insiders is simply not enough. We need to make sure everyday people at risk of experiencing AI harms are part of the fight for algorithmic justice.
Be afraid, be very afraid. Palantir is the last company you want leading AI because they will be the company that neo-fascists hire to spy on your every message sent, every dollar spent, and every movement made.
We Need Smart Intellectual Property Laws for Artificial Intelligence (www.scientificamerican.com)
“One-size-fits-all” regulation will sideline medical and research benefits promised by the advent of artificial intelligence
Schools are teaching ChatGPT, so students aren't left behind | CNN Business (www.cnn.com)
YSK r/Futurology has an official Lemmy instance (futurology.today)
An announcement post was made a week ago, btw...