A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. A familiar declaration: "It's really no different from a hammer." I was compelled to make a poster to address these claims. Steal it, share it, print it and use it where you see fit.
Here's what happens when machine learning needs vast amounts of data to build statistical models for responses. Historical, debunked data makes it into the models and is favored in their output. There is much more outdated, harmful information published than there is updated, correct information, which makes the outdated version statistically more prominent.
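A minimal, purely illustrative sketch of this dynamic (this is not how any real LLM is trained; the corpus and counts are invented): a "model" that picks answers by frequency in its training data will favor whichever claim appears most often, correct or not.

```python
from collections import Counter

# Toy corpus: the debunked claim appears far more often in published
# text than the correction, mirroring the imbalance described above.
corpus = ["debunked claim"] * 80 + ["corrected claim"] * 20

counts = Counter(corpus)

def most_likely_answer(counts: Counter) -> str:
    # A purely frequency-driven "model" returns whatever is most
    # common in its training data, regardless of which claim is true.
    return counts.most_common(1)[0][0]

print(most_likely_answer(counts))  # the debunked claim wins on frequency alone
```

The point of the toy: nothing in the frequency signal distinguishes "common" from "correct", so without deliberate curation the majority (and often outdated) view dominates.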
"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."
In this regard the tools don't take us to the future, but to the past.
No, you should never use language models for health advice. But there are many people arguing for exactly this to happen. I also believe these types of harmful biases make it into more machine learning applications than language models specifically.
In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics were, throughout the 20th century, variously assigned to categories such as Abnormal Psychology, Perversion and Derangement, classed as a Social Problem, and even filed under Medical Disorders.
Of course many of these historical biases are part of the source material used to make today's "intelligent" machines - bringing with them the risk of eradicating decades of progress.
It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.
The "effective altruism" and "effective accelerationism" ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let's try something else.
😆 In general ‘AI’ is a very poor name for a bunch of technologies that enable computers to do better pattern matching.
Even worse, #OpenAI is now using ‘AGI’ to mean better-than-human level performance (at what task exactly is unclear), which isn’t what the phrase is generally understood to mean at all.
The more you look into this whole area, the more you realise there’s a lot of smoke and mirrors: marketing hype dressed up as technological revolution. #AI #AIEthics
This essay by Karawynn Long is the best introduction to #LLM-driven #AIHype I've read. Highly recommended!
In a nutshell, #BigTech is generating PR and racking up profits by exploiting the psychological heuristics that
language fluency means intelligence, and
machines produce factual output.
The inability to notice the failure of these heuristics is how #AIEthics problems – worker exploitation, misinformation, massive energy use – get swept under the rug.
Read this from @alex and @emilymbender.
“AI-related policy must be science-driven and built on relevant research, but too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity….” #AIEthics #AIHype https://dair-community.social/@emilymbender/110871604982224575
A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he publishes one of them himself; 15,000 code bases already depend on it, pre-empting a hacker from publishing a malware version.
Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬
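One defensive habit this incident suggests (a hypothetical sketch, not from the article; the allowlist and package names are invented): treat AI-suggested dependency names as untrusted until a human has vetted them.

```python
# Hypothetical mitigation sketch: never install a package just because
# a chatbot suggested it; require the name to be on a vetted allowlist.
VETTED_PACKAGES = {"requests", "numpy", "pandas"}  # illustrative only

def is_safe_to_install(name: str) -> bool:
    """Return True only for packages a human has already vetted."""
    return name.strip().lower() in VETTED_PACKAGES

suggested = ["requests", "hallucinated-helper-lib"]  # e.g. from a chatbot
approved = [p for p in suggested if is_safe_to_install(p)]
print(approved)  # → ['requests']
```

An allowlist obviously doesn't scale to every ecosystem, but the design choice matters: the default is "reject", so a hallucinated package name fails closed rather than open.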
Finally watched Adam Conover's interview with @timnitGebru and @emilymbender, and it was such a delight! Funny, and full of information.
Took so many notes for a presentation I'm giving in September, and I will be recommending this video to everyone, alongside the Stochastic Parrots Day recording!
Thank you for the work you're doing, for speaking out and being a voice of reason in the ambient madness.
What jobs are we preparing students for by boosting their writing productivity with AI? After shedding 40% of its workforce, the gaming site Gamurs posted an ad last June for an editor to write 250 articles per week. That’s a new article every 10 minutes, at $4.25 per article.
As @novomancy has noted, AI is only the accomplice here. This clickbait nightmare is the logical conclusion of the ad-supported web.
While a misleading narrative of doom is being widely pushed, I want to contribute to bringing attention to the real harms and risks of AI. I do this with the honest intent of showing that the harms are tangible and can be addressed.
This is a good thing.
If we can be open and honest about the actual risks then we stand a better chance of owning them and acting to avoid or mitigate them. If we don't talk about them, our chances of managing them responsibly are zero.
All the harms are human-made and hence are under human control. What we need to do is demand more transparency around each issue, and acknowledge that every team deploying or making use of AI needs a mitigation strategy for many different types of harm.
I give you The Elements of AI Ethics, which borrows from and builds on my chart from 2021, The Elements of Digital Ethics.
👀 "All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as #ChatGPT, Harvey.AI, or Google #Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being."
"The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value."
Interesting: The National Library of the Netherlands restricts access to its collections for commercial #AI companies crawling without permission. They remain committed to open access for research and will adapt the AI policy as needed.
In July I got a tip introducing me to the paper 'A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle' (Suresh & Guttag, 2021). It contains a very useful diagram outlining where and how bias will affect the outcome of the machine learning process. I could not help but apply my own visual interpretation to it for use in teaching. Feel free to use it to understand why and how so-called AI tools may contribute to harm.
I contacted the authors, Harini Suresh and John Guttag, and have their approval for my reimagination of their chart. For an in-depth understanding of the different biases I recommend diving into their paper (linked in the post).
In my post I provide a brief overview of the biases that the diagram addresses. You can also download the diagram in PDF format.
Generative AI cannot generate its way out of prejudice
The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. They used Midjourney, which even after hundreds of attempts would not produce an output matching the prompt. I tried their experiment with Stable Diffusion's free web version and found the results every bit as concerning as you might imagine.
「 If you’re asking your chatbot for political information, are the results skewed by the politics of the corporation that owns the chatbot? Or the candidate who paid it the most money? Or even the views of the demographic of the people whose data was used in training the model? 」
... including a framework to evaluate models from several sustainability-related angles: #EnergyEfficiency, #carbon intensity, #transparency, and #social implications
How do you approach questions of sustainability & #AIethics in your #GeoAI work?