A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. A familiar declaration: "It's really no different from a hammer". I was compelled to make a poster to address these claims. Steal it, share it, print it and use it wherever you see fit.
I read the executive order on AI from the White House, wrote a summary and used an AI-powered video generator to create the appearance of me presenting it. In accordance with suggestions in the order, the video is clearly labeled.
Remember that while audio and video are synthetic, the script has been written without any AI assistance. That's all me. 😊
This is part one of two; part two is out soon. You can already read the full script of my summary at https://axbom.com/aiorder.
As we are all learning together right now, I would love to hear your impressions and reflections after watching and listening to a summary where my likeness is synthetic.
🧵 1/
LexisNexis, one of the biggest data brokers on the planet, is incorporating ChatGPT-style Generative AI into their legal search engine. I read the report (linked in comments). There is a black-hole-sized void in it.
The entire report does not include a single mention of "hallucinations" or "confabulations". How can you introduce GenAI in the legal sector without ever addressing the Achilles' heel of LLMs?
The "effective altruism" and "effective accelerationism" ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let's try something else.
Here's what happens when machine learning needs vast amounts of data to build statistical models for responses: historical, debunked data makes it into the models and is favoured in the output. There is far more outdated, harmful information published than there is updated, correct information, which makes the outdated information statistically more prevalent.
"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."
In this regard the tools don't take us to the future, but to the past.
No, you should never use language models for health advice. But there are many people arguing for exactly this to happen. I also believe these types of harmful biases make it into more machine learning applications than language models specifically.
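The statistical argument above can be made concrete with a toy sketch. This is purely an illustration of frequency bias in a corpus, not how any real language model is trained; the corpus and all names here are invented:

```python
import random
from collections import Counter

# Toy corpus: the outdated claim appears far more often than the correction,
# simply because it has been in print for much longer.
corpus = ["outdated claim"] * 90 + ["updated correction"] * 10

# A purely statistical "model" that samples answers in proportion to
# how often they appear in its training data.
counts = Counter(corpus)

def sample_answer(rng: random.Random) -> str:
    return rng.choices(list(counts), weights=list(counts.values()))[0]

rng = random.Random(0)
answers = Counter(sample_answer(rng) for _ in range(1000))

# The outdated claim dominates the output roughly nine times out of ten,
# even though a correct answer exists in the data.
print(answers)
```

The point of the sketch: no amount of sampling fixes the imbalance, because the imbalance is in the source material itself.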
In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.
Of course many of these historical biases are part of the source material used to make today's "intelligent" machines - bringing with them the risk of eradicating decades of progress.
It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.
... including a framework to evaluate models from several sustainability-related angles: #EnergyEfficiency, #CarbonIntensity, #Transparency, and #SocialImplications
How do you approach questions of sustainability & #AIethics in your #GeoAI work?
Generative AI cannot generate its way out of prejudice
The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. They used Midjourney, a tool that after hundreds of attempts would not generate an output matching the prompt. I tried their experiment with Stable Diffusion's free web version and found it every bit as concerning as you might imagine.
Can anyone recommend a good, and preferably relatively recent, textbook or anthology on the #PhilosophyOfAI? Ideally this should include discussion of ethical issues and be suitable for undergraduate study. Thanks! 🙏
![Title: If a hammer was like AI. Illustration of a hammer (a line drawing) and the quote "It's just a tool". Various topics surround the hammer, with lines drawn to it as if explaining what it is made of. Obscured data theft: it copies the design of most constructions in the western, industrialised world without consent and strives to mimic the most average one of those. Bias & injustice: the hammer will most often just hit the thumb of Black, Brown and underserved people. Carbon cost: the energy use is about 100 times greater than achieving a similar result with other tools. Monoculture and power concentration: the hammer is made by a small, western and wealthy subset of humanity – the only ones with the resources to build it. Invisible decision-making: it will mostly "guess" your aim, tend to miss the nail and push for a different design. Often unnoticeably. Accountability projection: if the hammer breaks and hurts someone, it's only because the hammer has "a mind of its own". Jerry-building (misinformation): optimised for building elaborate structures that don't hold up to scrutiny. Data/privacy breaches: may reveal blueprints from other people using a hammer from the same manufacturer, or other personal data that happened to be part of its development. Moderator trauma: low-wage moderators work around the clock watching filth and violence to ensure the hammer can't be used to build brothels or torture chambers. Unless someone hacks it of course. The footer reads: "You can't own it, but you can subscribe. Perpetually." CC BY-SA Per Axbom. Version 1, June 2023. Download on axbom.com/hammer-ai](https://axbom.me/media/a577b3d7-11b9-42a7-83bb-71ebaee15f3a/axbom-hammer-ai-01.webp)
Poster: If a hammer was like AI.
🧵
1/ This is unreal. Am I the only one this stuff happens to?
A few days after I posted about Explainability Washing (https://hci.social/@upol/110397476968179709), I got a rather lengthy 900+ word message from a senior VP at a prominent tech company.
I take pictures of clouds from airplane windows. iPhone memories classified them as snow and used them to make me a “Snowy days” memory album. I suppose clouds sorta look like snow, out of context.
Giving me a perfect example of this adage from 1891, applied to current AI "maths math" technology: "There are three kinds of falsehood: the first is a 'fib,' the second is a downright lie, and the third and most aggravated is statistics."
I'm putting together a reading list for a course on (un)sustainable AI. The goal is to keep it varied and not too theoretical. What are your suggestions? #sustainability #ai #aiethics #academia #sts
In July I got a tip introducing me to the paper 'A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle' (Suresh & Guttag, 2021). It contains a very useful diagram outlining where and how bias will affect the outcome of the machine learning process. I could not help but apply my own visual interpretation to it for use in teaching. Feel free to use it to understand why and how so-called AI tools may contribute to harm.
I contacted the authors, Harini Suresh and John Guttag, and have their approval for my reimagination of their chart. For an in-depth understanding of the different biases I recommend diving into their paper (linked in the post).
In my post I provide a brief overview of the biases that the diagram addresses. You can also download the diagram in PDF format.
Remember when AI was used to generate a George Carlin comedy routine? That didn’t happen. Generative AI isn’t that good.
After George Carlin's estate sued, a representative of the show admitted that it was human-written. The claim that it was produced by an AI trained on Carlin's material appears to be far from the truth; rather, it was a way to garner attention.
Cory Doctorow frequently reminds us that these stories of magical AI are peddled by boosters and critics alike. Critics make the mistake of assuming that AI really is this capable, and frame it as a bad use of AI. This unnecessarily inflates the idea that AI can do things it really can't, adding fuel to magical thinking.
It's probably a good idea to question more often whether AI is even in the picture, given how effective it has become as a marketing vehicle. Was there AI involved? Perhaps, but not to the extent that salespeople would have you imagine.
And yes, there's a name for this kind of criticism: "criti-hype", a term coined by Lee Vinsel. Read more in Doctorow's blog post, as always littered with further reading:
Happy to share that my paper on why AI art is theft has been accepted to the 2024 ACM Conference on Fairness, Accountability, and Transparency! See you in Rio in June 😃
Preprint here (revisions soon):
philpapers.org/rec/GOEAAI-2
arxiv.org/abs/2401.06178