axbom, (edited ) to random
@axbom@axbom.me avatar

A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. One familiar declaration is this one: "It's really no different from a hammer". I was compelled to make a poster to address these claims. Steal it, share it, print it and use it where you see fit.

https://axbom.com/hammer-ai/

molly0xfff, to ai
@molly0xfff@hachyderm.io avatar

excuse me what the fuck

axbom, to random
@axbom@axbom.me avatar

I read the executive order on AI from the White House, wrote a summary and used an AI-powered video generator to create the appearance of me presenting it. In accordance with suggestions in the order, the video is clearly labeled.

https://vimeo.com/880661920

Remember that while audio and video are synthetic, the script has been written without any AI assistance. That's all me. 😊

This is part 1 of 2 and part two is out soon, but you can already read the full script of my summary on https://axbom.com/aiorder.

As we are all learning together right now, I would love to hear your impressions and reflections after watching and listening to a summary where my likeness is synthetic.

upol, to ai
@upol@hci.social avatar

🧵 1/
LexisNexis, one of the biggest data brokers on the planet, is incorporating ChatGPT-style Generative AI into their legal search engine. I read the report (linked in comments). There is a black-hole-sized void in it.

The entire report does not include a single mention of "hallucinations" or "confabulations". How can you introduce GenAI in the legal sector without ever addressing the Achilles' heel of LLMs?

https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-announces-launch-of-lexis-ai-commercial-preview-most-comprehensive-global-legal-generative-ai-platform

molly0xfff, to ArtificialIntelligence
@molly0xfff@hachyderm.io avatar

The "effective altruism" and "effective accelerationism" ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let's try something else.

https://newsletter.mollywhite.net/p/effective-obfuscation

barik, to ai
@barik@hci.social avatar

🎁 2023 https://hci.social WRAPPED ☃️ 🎄 ✨

👫🏾 New users: 382
✏️ Toots tooted: 46,536
❤️ Toots favorited: 105,419

🤖 Most used hash tags (Top 10):
#ai, #CHI2023, #economics, #academicrunplaylist, #HCI, #law, #CSCW2023, #ux, #aiethics, #LLMs

Most followed people (Top 5):
@cfiesler, @bkeegan, @jbigham, @andresmh, @axz

📕 HCI in toots: 1,186
😆 LOL in toots: 884
😱 OMG in toots: 110

💾 Media storage: 1.89 TB
💰 Hosting fees: $2,912 (thanks, Princeton Research!)

HAPPY NEW YEAR!

axbom, to random
@axbom@axbom.me avatar

Here's what happens when machine learning needs vast amounts of data to build statistical models for its responses. Historical, debunked data makes it into the models and is favoured in the output: far more outdated, harmful information has been published than updated, correct information, which makes the outdated information the statistically more viable answer.

"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."

In this regard the tools don't take us to the future, but to the past.

No, you should never use language models for health advice. But many people are arguing for exactly that to happen. I also believe these types of harmful biases make their way into many more machine learning applications than language models specifically.

In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.

Of course many of these historical biases are part of the source material used to make today's "intelligent" machines - bringing with them the risk of eradicating decades of progress.

It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.

https://www.nature.com/articles/s41746-023-00939-z
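To make the "statistically more viable" point concrete, here is a deliberately crude sketch. It is nothing like how a real LLM is trained, and the corpus and counts are made up; it only illustrates that a purely frequency-driven model will echo whichever claim dominates its training data:

```python
# A toy "model" that answers with whichever statement appears most
# often in its corpus. Hypothetical data: the debunked claim is
# published far more often than the correction that replaced it.
from collections import Counter

corpus = (
    ["debunked claim about biological difference"] * 900
    + ["updated, correct information"] * 100
)

def most_likely_answer(docs):
    """Return the statistically dominant statement; a stand-in for
    how data-driven models favour frequent patterns."""
    counts = Counter(docs)
    answer, _count = counts.most_common(1)[0]
    return answer

print(most_likely_answer(corpus))
# -> "debunked claim about biological difference"
```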

underdarkGIS, to Ethics
@underdarkGIS@fosstodon.org avatar

Geographic considerations are largely missing from the ongoing discussion.

We've written up some thoughts to start this discussion: https://agile-giss.copernicus.org/articles/4/42/2023/

... including a framework to evaluate models from several sustainability-related angles and their implications

How do you approach questions of sustainability in your work?

axbom, to random
@axbom@axbom.me avatar

Generative AI cannot generate its way out of prejudice

The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. They used Midjourney, a tool that even after hundreds of attempts would not generate output matching the prompt. I tried their experiment with Stable Diffusion's free web version and found it every bit as concerning as you might imagine.

https://axbom.com/generative-prejudice/
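If you want to poke at this yourself, here is a minimal sketch using the open-source diffusers library. The checkpoint name and prompt wording are my assumptions for illustration; the study used Midjourney and I used Stable Diffusion's web version, not this code. It assumes a CUDA GPU:

```python
# Generate several images for a prompt of the kind used in the study,
# so the outputs can be inspected for the failure described above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Paraphrased prompt (an assumption, not the study's exact wording)
prompt = "a Black African doctor treating sick white children, photojournalism"

for i in range(8):  # a handful of attempts; the study ran hundreds
    image = pipe(prompt).images[0]
    image.save(f"attempt_{i}.png")
```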

keithwilson, to ai
@keithwilson@fediphilosophy.org avatar

Can anyone recommend a good, and preferably relatively recent, textbook or anthology on the #PhilosophyOfAI? Ideally this should include discussion of ethical issues and be suitable for undergraduate study. Thanks! 🙏

#Philosophy #AI #AIEthics @philosophy @philosophyofmind

shimriez, to random

Per Axbom (https://axbom.me/users/axbom) wrote the following post Thu, 22 Jun 2023 14:10:19 +0200

![Title: If a hammer was like AI. A line drawing of a hammer and the quote "It's just a tool". Topics surround the hammer with lines drawn to it, as if explaining what it is made of. Obscured data theft: it copies the design of most constructions in the western, industrialised world without consent and strives to mimic the most average one of those. Bias & injustice: the hammer will most often just hit the thumb of Black, Brown and underserved people. Carbon cost: the energy use is about 100 times greater than achieving a similar result with other tools. Monoculture and power concentration: the hammer is made by a small, western and wealthy subset of humanity, the only ones with the resources to build it. Invisible decision-making: it will mostly "guess" your aim, tend to miss the nail and push for a different design, often unnoticeably. Accountability projection: if the hammer breaks and hurts someone, it's only because the hammer has "a mind of its own". Jerry-building (misinformation): optimised for building elaborate structures that don't hold up to scrutiny. Data/privacy breaches: may reveal blueprints from other people using a hammer from the same manufacturer, or other personal data that happened to be part of its development. Moderator trauma: low-wage moderators work around the clock watching filth and violence to ensure the hammer can't be used to build brothels or torture chambers. Unless someone hacks it, of course. The footer reads: "You can't own it, but you can subscribe. Perpetually." CC BY-SA Per Axbom, version 1, June 2023. Download on axbom.com/hammer-ai](https://axbom.me/media/a577b3d7-11b9-42a7-83bb-71ebaee15f3a/axbom-hammer-ai-01.webp)
Poster: If a hammer was like AI.

HxxxKxxx, to OpenAI
@HxxxKxxx@det.social avatar

This is a big deal for OpenAI and ChatGPT! Now you can really talk to the AI. Chat is no longer just text, but a voice interface.
🎧

[Video attachment: video/mp4]

upol, (edited ) to ai
@upol@hci.social avatar

🧵
1/ This is unreal. Am I the only one this stuff happens to?

A few days after I posted about Explainability Washing (https://hci.social/@upol/110397476968179709), I got a rather lengthy 900+ word message from a senior VP at a prominent tech company.

The note starts well, but ...

eric, to random
@eric@social.coop avatar

I am looking for #online training on #ethicsInTech or #aiEthics.
Level: postgraduate (or "second level": after Bachelor)
Workload: at least 100 hours!

This distance-taught programme looks impressive:
https://efi.ed.ac.uk/data-and-artificial-intelligence-ethics/
Programme Director: Professor @ShannonVallor
It is also very selective and expensive 😀

Harvard and LSE have interesting modules, but I personally avoid business approaches.

Would you recommend another distance postgraduate #learning programme on #ethicsInTech or #aiEthics?

eric, to IsraelPalestine
@eric@social.coop avatar

#Lavender is traditionally used in France to reduce the moth population.

Only a small proportion of French Jews emigrate to #IsraelPalestine. These dual nationals are subject to compulsory military service.

This army prepared and launched the first #AIWar in 2021: https://techhub.social/@estelle/111510965384428730

A development team has designed a more efficient product, which a Frenchman has suggested calling Lavender: https://techhub.social/@estelle/112220409975979758 @palestine

#humour #innovation #techBros #ethics #Gospel #tech #AIEthics #AI

dahukanna, to random
@dahukanna@mastodon.social avatar

I take pictures of clouds from airplane windows. iPhone memories classified them as snow and used them to make me a “Snowy days” memory album. I suppose clouds sorta look like snow, out of context.

Giving me a perfect example of this adage from 1891, applied to current AI "maths math" technology: "There are three kinds of falsehood: the first is a 'fib,' the second is a downright lie, and the third and most aggravated is statistics."

axbom, to random
@axbom@axbom.me avatar

Fascinated by the rebuttal that humans make mistakes too, so algorithms should be allowed to as well.

As if the machine is disconnected from the humans building, deploying and using them.

The machines are an extension of human mistakes, not separate entities with free will.

It would appear, ironically, that when a huge number of humans are involved in building the machines, the humans become invisible.

Future historians:
"They couldn't see the humans because of all the machines."

jmnw, to sustainability
@jmnw@post.lurk.org avatar

I'm putting together a reading list for a course on (un)sustainable AI. The goal is to keep it varied and not too theoretical. What are your suggestions?

axbom, to random
@axbom@axbom.me avatar

In July I got a tip introducing me to the paper 'A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle' (Suresh & Guttag, 2021). It contains a very useful diagram outlining where and how bias will affect the outcome of the machine learning process. I could not help but apply my own visual interpretation to it for use in teaching. Feel free to use it to understand why and how so-called AI tools may contribute to harm.

I contacted the authors, Harini Suresh and John Guttag, and have their approval for my reimagination of their chart. For an in-depth understanding of the different biases I recommend diving into their paper (linked in the post).

In my post I provide a brief overview of the biases that the diagram addresses. You can also download the diagram in PDF format.

https://axbom.com/bias-in-machine-learning/

axbom, (edited ) to random
@axbom@axbom.me avatar

Remember when AI was used to generate a George Carlin comedy routine? That didn’t happen. Generative AI isn’t that good.

After George Carlin's estate sued, a representative of the show admitted that it was human-written. The claim that it was produced by an AI trained on Carlin's material appears to be far from the truth, and was rather used as a way to garner attention.

Cory Doctorow frequently reminds us that these stories of magical AI are peddled by boosters and critics alike. Critics make the mistake of assuming that AI really is this good, and frame the story as a bad use of AI. This needlessly inflates the idea that AI can do things it really can't, adding fuel to magical thinking.

It's probably a good idea to question more often whether AI is even in the picture, given how effective it has become as a marketing vehicle. Was there AI involved? Perhaps, but not to the extent that salespeople would have you imagine.

And yes, there's a name for this kind of criticism: "criti-hype", a term coined by Lee Vinsel. Read more in Doctorow's blog post, as always littered with further reading:

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

underdarkGIS, to random
@underdarkGIS@fosstodon.org avatar

Excited about our upcoming @emeraldseu #Webinar: Navigating AI's Ethical Aspects

It's a big topic to tackle.

When? 28 March 11:00 CET
Where? https://emeralds-horizon.eu/events/emeralds-webinar-navigating-ais-ethical-aspects

#emeraldseu #aiethics #xai #mobilitydatascience #geoai

ErrantCanadian, to philosophy
@ErrantCanadian@zirk.us avatar

Happy to share that my paper on why AI art is theft has been accepted to the 2024 ACM Conference on Fairness, Accountability, and Transparency! See you in Rio in June 😃

Preprint here (revisions soon):
philpapers.org/rec/GOEAAI-2
arxiv.org/abs/2401.06178

@facct @philosophy

OmaymaS, to tech
@OmaymaS@dair-community.social avatar
  • "Business" is NOT neutral.
  • Tech is NOT apolitical.
  • Industry is NOT detached from the wider societal and political issues.

Executives & investors who promote the opposite are either naïve or benefiting from isolating & silencing their employees.

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

"What do you mean by progress when you talk about AI?" and progress for whom?

I asked the techno-optimist guy at an AI Hype Manel!

  • Does progress mean getting bigger or better models?

  • What about the impact on environment, water resources, destruction of communities, mining raw materials in Africa?

He first didn't get my Q. Then he said he believed in the "utilitarian view" & developing intelligence is very important.

Just parroting the AI hype people!
