axbom, to random
@axbom@axbom.me avatar

A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. One familiar declaration is this one: "It's really no different from a hammer". I was compelled to make a poster to address these claims. Steal it, share it, print it and use it where you see fit.

https://axbom.com/hammer-ai/

axbom, to random
@axbom@axbom.me avatar

Here's what happens when machine learning needs vast amounts of data to build statistical models for responses. Historical, debunked data makes it into the models and ends up preferred in the model output. There is much more outdated, harmful information published than there is updated, correct information, which makes the outdated version statistically more likely to dominate what the model says.
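A toy sketch of that dynamic (hypothetical counts, not any real model): a purely frequency-driven sampler will echo whichever version of a claim dominates its training data.

```python
import random
from collections import Counter

# Hypothetical corpus: the debunked claim outnumbers the correction 4:1.
corpus = ["debunked claim"] * 80 + ["corrected claim"] * 20
counts = Counter(corpus)

# A purely frequency-driven "model" samples answers in proportion
# to how often each one appears in its training data.
outputs = Counter(
    random.choices(list(counts), weights=list(counts.values()), k=1000)
)
print(outputs)  # roughly 800 "debunked claim" to 200 "corrected claim"
```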

"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."

In this regard the tools don't take us to the future, but to the past.

No, you should never use language models for health advice. But there are many people arguing for exactly that. I also believe these types of harmful biases make their way into many more machine learning applications than language models specifically.

In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.

Of course many of these historical biases are part of the source material used to make today's "intelligent" machines - bringing with them the risk of eradicating decades of progress.

It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.

https://www.nature.com/articles/s41746-023-00939-z

molly0xfff, to ArtificialIntelligence
@molly0xfff@hachyderm.io avatar

The "effective altruism" and "effective accelerationism" ideologies that have been cropping up in AI debates are just a thin veneer over the typical blend of Silicon Valley techno-utopianism, inflated egos, and greed. Let's try something else.

https://newsletter.mollywhite.net/p/effective-obfuscation

molly0xfff, to ai
@molly0xfff@hachyderm.io avatar

excuse me what the fuck

keithwilson, to OpenAI
@keithwilson@fediphilosophy.org avatar

😆 In general ‘AI’ is a very poor name for a bunch of technologies that enable computers to do better pattern matching.

Even worse is the now-common use of 'AGI' to mean better-than-human performance (at what task, exactly, is unclear), which isn't what the phrase is generally understood to mean at all.

The more you look into this whole area, the more you realise there’s a lot of smoke and mirrors: marketing hype dressed up as technological revolution.

https://mastodon.social/@jamesbritt/111466685493641006

jonippolito, to generativeAI
@jonippolito@digipres.club avatar

AI companies to universities: Personalized tutors will make you obsolete

Also AI companies: Thanks for recording your lectures so we can sell them on the open market to train personalized tutors

https://annettevee.substack.com/p/when-student-data-is-the-new-oil

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

Doodling while listening to Mystery AI Hype Theater.

"Just because you identified a problem, it doesn't mean a chatbot is the solution" -- @emilymbender

ttiurani, to llm
@ttiurani@fosstodon.org avatar

This essay by Karawynn Long is the best introduction to LLM-driven AI hype I've read. Highly recommended!

In a nutshell, the AI industry is generating PR and racking up profits by exploiting the psychological heuristics that

  1. language fluency means intelligence, and
  2. machines produce factual output.

The inability to notice the failure of these heuristics is how problems (worker exploitation, misinformation, massive energy use) get swept under the rug.

https://karawynn.substack.com/p/language-is-a-poor-heuristic-for

bibliotecaria, to random
@bibliotecaria@blacktwitter.io avatar

Read this from @alex and @emilymbender.
“AI-related policy must be science-driven and built on relevant research, but too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity….”
https://dair-community.social/@emilymbender/110871604982224575

jonippolito, to Cybersecurity
@jonippolito@digipres.club avatar

A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he builds one that 15,000 code bases already depend on, to prevent some hacker from writing a malware version.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware

#AIethics #Cybersecurity #GPT #OpenAI #LLM #GenAI #GenerativeAI #Python #NodeJS #Ruby #Golang
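One partial defence against this kind of registry squatting is to check whether an AI-suggested package name actually exists before installing it. A minimal sketch, assuming the public PyPI JSON API; `package_exists` is a hypothetical helper name:

```python
import sys
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if `name` is a registered package on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # no such package: possibly hallucinated
        raise

if __name__ == "__main__":
    for name in sys.argv[1:]:
        verdict = "exists on PyPI" if package_exists(name) \
            else "NOT on PyPI (possible hallucination)"
        print(f"{name}: {verdict}")
```

Mind that existence alone is no guarantee of safety: as the story above shows, anyone can register a hallucinated name, so maintainer reputation and provenance still need checking.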

jenny_ai_land, to llm
@jenny_ai_land@hci.social avatar

Finally watched Adam Conover's interview with @timnitGebru and @emilymbender, and it was such a delight to watch! Funny, and packed with information.

Took so many notes for a presentation I'm giving in September, and I will be recommending this video to everyone, alongside the Stochastic Parrots Day recording!

Thank you for the work you're doing, for speaking out and being a voice of reason in the ambient madness.

https://www.youtube.com/watch?v=jAHRbFetqII&t=266s

barik, to ai
@barik@hci.social avatar

🎁 2023 https://hci.social WRAPPED ☃️ 🎄 ✨

👫🏾 New users: 382
✏️ Toots tooted: 46,536
❤️ Toots favorited: 105,419

🤖 Most used hash tags (Top 10):
#ai, #CHI2023, #economics, #academicrunplaylist, #HCI, #law, #CSCW2023, #ux, #aiethics, #LLMs

Most followed people (Top 5):
@cfiesler, @bkeegan, @jbigham, @andresmh, @axz

📕 HCI in toots: 1,186
😆 LOL in toots: 884
😱 OMG in toots: 110

💾 Media storage: 1.89 TB
💰 Hosting fees: $2,912 (thanks, Princeton Research!)

HAPPY NEW YEAR!

upol, to ai
@upol@hci.social avatar

1/
OpenAI quietly shut down its "AI" detector.

Did shutting it down undo the harms?

No, its Algorithmic Imprint lives on.

Here's how ⤵️

https://arstechnica.com/information-technology/2023/07/openai-discontinues-its-ai-writing-detector-due-to-low-rate-of-accuracy/

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

GitHub Copilot suggests real user names!

It seems that not only do they include copyrighted data, but they also keep user names in the training data (which is irrelevant to the task!)

What about private repos?

jonippolito, to journalism
@jonippolito@digipres.club avatar

What jobs are we preparing students for by boosting their writing productivity with AI? After shedding 40% of its workforce, the gaming site Gamurs posted an ad last June for an editor to write 250 articles per week. That’s a new article every 10 minutes, at $4.25 per article.
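For scale, assuming a standard 40-hour week: 40 hours is 2,400 minutes, and 2,400 ÷ 250 ≈ 9.6 minutes per article.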

As @novomancy has noted, AI is only the accomplice here. This clickbait nightmare is the logical conclusion of the ad-supported web.

https://www.sciencetimes.com/articles/44308/20230614/gaming-media-company-looks-hire-ai-editor-write-250-articles.htm

#Journalism #Writing #AIethics #AIEdu #AIinEducation #Gaming

axbom, to random
@axbom@axbom.me avatar

While a misleading narrative of doom is being widely pushed, I want to contribute to bringing attention to the real harms and risks of AI. And I do this, honestly, with an intent to show how the harms are tangible and can be addressed.

This is a good thing.

If we can be open and honest about the actual risks then we stand a better chance of owning them and acting to evade or mitigate them. If we don't talk about them, our chances of managing them responsibly are zero.

All the harms are human-made and hence are under human control. What we need to do is demand more transparency around each issue, and acknowledge that all teams that deploy or make use of AI need a mitigation strategy for many different types of harms.

I give you The Elements of AI Ethics, which borrows from and builds on my chart from 2021, The Elements of Digital Ethics.

Read about it here and download the chart in PDF format for your own, free, use: https://axbom.com/aielements/

Let me know what you think.

OmaymaS, to ai
@OmaymaS@dair-community.social avatar

I feel dizzy, sick, and bored by the AI discourse.

We keep hearing the same bullshit.

We keep seeing new variations of the same flawed products.

We keep reading papers that state the obvious.

We keep pushing back the nonsense.

We keep seeing people cheering for the same nonsense.

We keep being pushed to embrace that nonsense.

🤕

MattHodges, to ChatGPT

👀 "All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as , Harvey.AI, or Google ) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being."

Original: https://www.txnd.uscourts.gov/judge/judge-brantley-starr
Archive: https://archive.is/JpSsf

mattjhodgkinson, to random

"The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value."

https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

HxxxKxxx, to ai
@HxxxKxxx@det.social avatar

Interesting: The National Library of the Netherlands restricts commercial companies from crawling its collections without permission. They remain committed to open access for research and will adapt the AI policy as needed.

https://www.kb.nl/en/ai-statement

axbom, to random
@axbom@axbom.me avatar

In July I got a tip introducing me to the paper 'A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle' (Suresh & Guttag, 2021). It contains a very useful diagram outlining where and how bias will affect the outcome of the machine learning process. I could not help but apply my own visual interpretation to it for use in teaching. Feel free to use it to understand why and how so-called AI tools may contribute to harm.

I contacted the authors, Harini Suresh and John Guttag, and have their approval for my reimagination of their chart. For an in-depth understanding of the different biases I recommend diving into their paper (linked in the post).

In my post I provide a brief overview of the biases that the diagram addresses. You can also download the diagram in PDF-format.

https://axbom.com/bias-in-machine-learning/

axbom, to random
@axbom@axbom.me avatar

Generative AI cannot generate its way out of prejudice

The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. They used Midjourney, a tool that, after hundreds of attempts, would not generate an output matching the prompt. I tried their experiment with Stable Diffusion's free web version and found it every bit as concerning as you might imagine.

https://axbom.com/generative-prejudice/

jbzfn, to ai
@jbzfn@mastodon.social avatar

From @TheConversationUS:

「 If you’re asking your chatbot for political information, are the results skewed by the politics of the corporation that owns the chatbot? Or the candidate who paid it the most money? Or even the views of the demographic of the people whose data was used in training the model? 」


https://theconversation.com/can-you-trust-ai-heres-why-you-shouldnt-209283

underdarkGIS, to Ethics
@underdarkGIS@fosstodon.org avatar

Geographic considerations are largely missing from the ongoing AI & ethics discussion.

We've written up some thoughts to start this discussion: https://agile-giss.copernicus.org/articles/4/42/2023/

... including a framework to evaluate models from several sustainability-related angles.

How do you approach questions of sustainability in your work?
