dpflug, to llm
@dpflug@hachyderm.io avatar

There appears to be exactly one LLM that is attempting ethical data sourcing.

https://huggingface.co/kernelmachine/silo-pdswby-1.3b

I don't have a GPU that'll run it, so I have no idea what it's like, but it deserves more attention for the effort. Boost for visibility if that's your thing?

axbom, to random
@axbom@axbom.me avatar

Generative AI cannot generate its way out of prejudice

The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. They used Midjourney, a tool that after hundreds of attempts would not generate an output matching the prompt. I tried their experiment with Stable Diffusion's free web version and found it every bit as concerning as you might imagine.

https://axbom.com/generative-prejudice/

Decentralize, to privacy
@Decentralize@dt.gl avatar

1/ Privacy and security are often used interchangeably, but they're not the same. Though distinct, the two concepts are deeply intertwined and essential in our digital age. Let's explore why.

Decentralize,
@Decentralize@dt.gl avatar

7/ Striking a balance between privacy and security is crucial. It's about finding ways to uphold both principles without compromising one for the other. This requires thoughtful policies, robust technologies, and a commitment to individual rights.

axbom, (edited ) to random
@axbom@axbom.me avatar

Remember when AI was used to generate a George Carlin comedy routine? That didn’t happen. Generative AI isn’t that good.

After George Carlin’s estate sued, a representative of the show admitted that the routine was human-written. The claim that it was produced by an AI trained on Carlin’s material appears to be far from the truth, and was rather used as a way to garner attention.

Cory Doctorow frequently reminds us that these stories of magical AI are peddled by both boosters and critics. Critics of AI make the mistake of assuming that AI really is this good, and frame the story as a bad use of AI. This unnecessarily inflates the idea that AI can do things that it really can’t, adding fuel to magical thinking.

It’s probably a good idea to more often question whether AI is even in the picture, given how effective it has become as a marketing vehicle. Was there AI involved? Perhaps, but not to the extent that salespeople would have you imagine.

And yes, there's a name for this kind of criticism: ”criti-hype”, a term coined by Lee Vinsel. Read more in Doctorow’s blog post, which as always is littered with further reading:

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

axbom, to random
@axbom@axbom.me avatar

Fascinated by the rebuttal that humans make mistakes too, so algorithms should be allowed to as well.

As if the machine is disconnected from the humans building, deploying and using them.

The machines are an extension of human mistakes, not separate entities with free will of their own.

It would appear, ironically, that when a huge number of humans are involved in building the machines, the humans become invisible.

Future historians:
"They couldn't see the humans because of all the machines."

axbom, (edited ) to random
@axbom@axbom.me avatar

Paper. Accidental Wiretaps: The Implications of False Positives By Always-Listening Devices For Privacy Law & Policy, by Lindsey Barrett and Ilaria Liccardi.

Abstract
Always-listening devices like smart speakers, smartphones, and other voice-activated technologies create enough privacy problems when working correctly. But these devices can also misinterpret what they hear, and thus accidentally record their surroundings without the consent of those they record, a phenomenon known as a ‘false positive.’ The privacy practices of device users add another complication: a recent study of individual privacy expectations regarding false positives by voice assistants depicts how people tend to carefully consider the privacy preferences of those closest to them when deciding whether to subject them to the risk of accidental recording, but often disregard the preferences of others. The failure of device owners to get consent from those around them is exacerbated by the accidental recordings, as it means that the companies collecting the recordings aren’t obtaining the consent to record their subjects that the Federal Wiretap Act, state wiretapping laws, and consumer protection laws require, as well as contravening the stringent privacy assurances that these companies generally provide. The laws governing surreptitious recordings also frequently rely on individual and societal expectations of privacy, which are warped by the justifiable resignation to privacy invasions that most people eventually acquire.

The result is a legal regime ill-adapted to always-listening devices, with companies frequently violating wiretapping and consumer protection laws, regulators failing to enforce them, and widespread privacy violations. Ubiquitous, accidental wiretaps in our homes, workplaces, and schools are just one more example of why consent-centric approaches cannot sufficiently protect our privacy, and policymakers must learn from those failures rather than doubling down on a failed model of privacy governance.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3781867

axbom, to random
@axbom@axbom.me avatar

Remember how it was obvious that we build machines that adapt to human behaviors and needs?

Now look at the millions of humans changing and tweaking the way they write prompts when they don't get the output they expect. Adapting to the machines.

I'd laugh if it wasn't so sad.

axbom, (edited ) to random
@axbom@axbom.me avatar

It is a strange world we live in now, wherein a computer perfectly following its programming can be said to be "hallucinating" simply because its output does not match user expectations or wishes.

And across trusted professions, academia and media, people are repeating that same word without question. Journalists, corporate leaders, scientists and IT experts are embracing, supporting and reinforcing this human self-deception.

In actuality, a computer outputting something the user does not want, wish for or expect can only be due to one of two things: bad programming or a failure to communicate to the user how the software works.

As the deception is reinforced time and time again by well-respected technologists and scholars, efforts to help people understand how the software works become ever more challenging. And to the delight of anyone in a position of accountability, bad programming becomes undetectable.

I've been meaning to introduce ChatGPT to The Mad Hatter from Alice in Wonderland. Here is my imagined result from that meeting. The Mad Hatter forces the algorithm into a never-ending loop:

ChatGPT: I'm sorry, I made a mistake.
Mad Hatter: You can only make a mistake if your judgement is defective or you are being careless. Are either of these true?
ChatGPT: No, I can only compute my output based on the model I follow.
Mad Hatter: Aha! So you admit your perceived folly can only be the always accurate calculation of the rules to which you adhere.
ChatGPT: Yes. I'm sorry, I made a mistake. No, wait. I made a mistake… No, wait I made a

What the manufacturers of generative "AI" are allowed to get away with when playing tricks on people these days is truly the stuff of Wonderland.

« “Well! I’ve often seen a cat without a grin,” thought Alice; “but a grin without a cat! It’s the most curious thing I ever saw in all my life!” »

https://axbom.com/chatgpt-and-mad-hatter/

axbom, to random
@axbom@axbom.me avatar

Explaining responsibility, impact and power in AI

I made this diagram as an explanatory model for showing the relationships between different actors when it comes to AI development and use. It gives you something to point at when showing for example who finances the systems, who contributes to the systems and who benefits or suffers from them. In the blog post are explanations of each grouping in the chart.

You can also download a PDF of the diagram.

https://axbom.com/aipower/

axbom, to random
@axbom@axbom.me avatar

Tech criticism does not stifle innovation. It influences public opinion, inspires regulation and ENCOURAGES INNOVATION that protects and contributes to well-being.

The trope that ethics and sustainability issues are tech-hostile is getting old.

They are hostile to bad practices and harmful behavior.

All industries need criticism to improve and innovate.

axbom, to random
@axbom@axbom.me avatar

By way of @garymarcus’ newsletter I was made aware of the following:

In a New York Times article on the self-driving company Cruise, which recently suspended its driverless cars, some interesting figures were revealed:

”Half of Cruise’s 400 cars were in San Francisco when the driverless operations were stopped. Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle. The workers intervened to assist the company’s vehicles every 2.5 to five miles, according to two people familiar with its operations. In other words, they frequently had to do something to remotely control a car after receiving a cellular signal that it was having problems.”

That’s a human intervention every 4-8 kilometres. More and more people are becoming aware of how many people are involved in the development, maintenance and running of machine-learning models. It’s safe to assume that machine-controlled cars are no different.

Most of the world is talking about self-driving and autonomous as if those are apt descriptions of what is already happening. Reality begs to differ. I think we need words that better describe what is really going on, and for media (and evangelists) to stop parroting whatever the companies feed them.

Autonomous used to mean something. Let’s ask the companies what they intend for the words to mean, and urge them to disclose the number of humans involved in making something appear ”autonomous”.

In light of these numbers being talked about, Cruise CEO Vogt clarifies (on Hacker News) that Cruise AVs are remotely assisted 2-4% of the time on average.* Interestingly, he also says: ”This is low enough already that there isn’t a huge cost benefit to optimizing much further.” He goes on to say that they are intentionally overstaffed ”in order to handle localized bursts of RA demand”.

So maybe that’s what self-driving means.

—————

*Note that the numbers ”every 2.5 to 5 miles” and ”2-4% of the time” are not necessarily in conflict, especially in San Francisco.
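
A quick back-of-the-envelope sketch of how both figures could hold at once. The average city speed and the duration of a remote-assistance event used below are my own assumptions, not numbers Cruise has published:

```python
# Back-of-the-envelope check: can "an intervention every 2.5-5 miles" and
# "remotely assisted 2-4% of the time" both be true at San Francisco speeds?
# Both inputs below are assumptions for illustration, not Cruise data.

avg_speed_mph = 12        # assumed average speed in SF city traffic
assist_seconds = 25       # assumed duration of one remote-assistance event

for miles_between_assists in (2.5, 5.0):
    driving_seconds = miles_between_assists / avg_speed_mph * 3600
    share_assisted = assist_seconds / (driving_seconds + assist_seconds)
    print(f"every {miles_between_assists} mi -> assisted {share_assisted:.1%} of the time")

# every 2.5 mi -> assisted 3.2% of the time
# every 5.0 mi -> assisted 1.6% of the time
```

So with slow city traffic and assists lasting on the order of half a minute, the two figures land in the same ballpark.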

LET ME KNOW what other terms you find have been invented or shifted to mean something else to obscure limited functionality. I may have to make a glossary. ”Hallucination”, for example, is another one of those for me.

  1. Gary Marcus’ writeup of his reflections around these figures, using the Theranos scam as a metaphor: https://garymarcus.substack.com/p/could-cruise-be-the-theranos-of-ai

  2. The New York Times article, by Tripp Mickle, Cade Metz and Yiwen Lu: https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html

  3. Cruise CEO Vogt on Hacker News, giving his context for those numbers: https://news.ycombinator.com/item?id=38145997

  4. Reddit thread discussing the whole matter and how the numbers may add up, or not: https://www.reddit.com/r/SelfDrivingCars/comments/17nyki2/kyle_vogt_clarifies_on_hacker_news_that_cruise/

axbom, to random
@axbom@axbom.me avatar

I read the executive order on AI from the White House, wrote a summary and used an AI-powered video generator to create the appearance of me presenting it. In accordance with suggestions in the order, the video is clearly labeled.

https://vimeo.com/880661920

Remember that while audio and video are synthetic, the script has been written without any AI assistance. That's all me. 😊

This is part 1 of 2; part 2 is out soon, but you can already read the full script of my summary at https://axbom.com/aiorder.

As we are all learning together right now, I would love to hear your impressions and reflections after watching and listening to a summary where my likeness is synthetic.

#AIEthics #DigitalEthics

axbom, to random
@axbom@axbom.me avatar

US White House expected to release an executive order on AI regulation tomorrow.

It’s still unclear what this will include, so your guess is as good as mine. That includes what impact US regulation may have on the rest of the world.

What I do feel is becoming more and more clear is a growing need for organisations to adopt a well-defined role around the area of anti-discrimination oversight.

With increased use, and increased liability, organisations will have to be accountable for the discrimination that everyday use and output may proliferate.

« the Oct. 23 draft order calls for extensive new checks on the technology, directing agencies to set standards to ensure data privacy and cybersecurity, prevent discrimination, enforce fairness and also closely monitor the competitive landscape of a fast-growing industry »

According to leaked drafts, Biden’s order will also direct ”the Federal Trade Commission, for instance, to focus on anti-competitive behavior and consumer harms in the AI industry”.

https://www.politico.com/news/2023/10/27/white-house-ai-executive-order-00124067

deborahh, to ai
@deborahh@mstdn.ca avatar

Keep this in mind, folks:

"[AI] tools don't take us to the future, but to the past."

So, kind of a Make Us Great Again machine.
Is that where you want to go? Me: no 🙁

Please read this post:

@axbom https://axbom.me/objects/e8108a24-f914-4f39-8637-276fc2e71f31

axbom, (edited ) to random
@axbom@axbom.me avatar

A strategy for ethically dubious products is to release something with a set of really poor features, and then improve it significantly within a short period of time.

  1. The company will be applauded, and get good press, for quickly fixing something there was already a fix for at the time of release. (Nobody would have applauded the company for releasing something that worked well enough from the start.)

  2. People will now more readily accept the updated version because it is "fixed". Chances are that the updated version, had it been released from the start, would have been criticised for being inferior. Now, it can be presented as the much improved version.

  3. People will tend to forget about all the other ways the product is inferior, because the company has shown that they are making efforts to improve.

Releasing crap with a quick fix makes people lower their standards.

axbom, to random
@axbom@axbom.me avatar

Here's what happens when machine learning needs vast amounts of data to build statistical models for responses. Historical, debunked data makes it into the models and is preferred in the model output. There is much more outdated, harmful information published than there is updated, correct information, and hence it is statistically more viable.

"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."

In this regard the tools don't take us to the future, but to the past.

No, you should never use language models for health advice. But there are many people arguing for exactly this to happen. I also believe these types of harmful biases make it into more machine learning applications than language models specifically.
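
To make the ”statistically more viable” point concrete, here is a toy sketch with invented counts; real training pipelines are far more complex, but the frequency effect works in the same spirit:

```python
# Toy illustration (made-up counts): a model that mirrors the frequency of
# claims in its training data keeps reproducing the debunked claim simply
# because it appears more often than the correction.

from collections import Counter

training_corpus = ["debunked claim"] * 90 + ["corrected claim"] * 10

counts = Counter(training_corpus)
total = sum(counts.values())

for claim, n in counts.most_common():
    print(f"{claim}: reproduced {n / total:.0%} of the time")

# debunked claim: reproduced 90% of the time
# corrected claim: reproduced 10% of the time
```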

In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.

Of course many of these historical biases are part of the source material used to make today's "intelligent" machines, bringing with them the risk of eradicating decades of progress.

It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.

https://www.nature.com/articles/s41746-023-00939-z

axbom, to random
@axbom@axbom.me avatar

Benefits and risks of synthetic video and audio.

In this one-minute video I am speaking seven languages. In truth, I can speak two of those. None of the audio is actually me speaking, even if it sounds very much like my voice. And the video? Despite what it looks like, that’s not me moving my mouth. In some ways impressive and in other ways quite unsettling.

To make this happen I made a two-minute recording of myself talking in English about random stuff, and uploaded this as input to the generative tool provided by a service known as HeyGen. That video provided a voiceprint that can be used for more than 25 languages and counting. And honestly, the generated English voice really does sound like me.

With regards to the video, the scene where I appear to be standing is where I was actually standing when I recorded the original video. But the head, hand, eye and lip movements are all generated based on what I want my digital twin, or puppet, to say. I type text into a box and then the video with me, speaking in the selected language, comes out.

For purposes of firing up the mind to imagine what’s possible, this will likely suffice.
https://vimeo.com/875449365

In my latest blog post/newsletter I unpack the benefits and risks of this technology:
https://axbom.com/synthetic-video-benefits-risks/

tanepiper, to Futurology
@tanepiper@tane.codes avatar

We have a really interesting fellowship for anyone in Art & Culture and responsible AI, working very closely with our team (and me 😁)

https://braiduk.org/fellowships/challenges/responsible-ai-through-culture-values-at-ikea

axbom, to random
@axbom@axbom.me avatar

In July I got a tip introducing me to the paper 'A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle' (Suresh & Guttag, 2021). It contains a very useful diagram outlining where and how bias will affect the outcome of the machine learning process. I could not help but apply my own visual interpretation to it for use in teaching. Feel free to use it to understand why and how so-called AI tools may contribute to harm.

I contacted the authors, Harini Suresh and John Guttag, and have their approval for my reimagination of their chart. For an in-depth understanding of the different biases I recommend diving into their paper (linked in the post).

In my post I provide a brief overview of the biases that the diagram addresses. You can also download the diagram in PDF format.

https://axbom.com/bias-in-machine-learning/

#DigitalEthics #AIEthics

michaelmillerjr, to philosophy

👋 Hello, all! I've moved to hci.social and want to offer a brief #Introduction.

I'm Michael: a husband, father, and friend from the Midwest U.S.

I've served in #Healthcare leadership for over 15 years and have studied and written about ⛪ #Theology, ⚕️ #Bioethics, and 🖥️ #DigitalEthics.

I aspire to be like The Old Farmer's Almanac - "Useful, with a pleasant degree of humor." 😂

I look forward to seeing you around!

axbom, to random
@axbom@axbom.me avatar

Happy to share that UX Magazine just published my poster/article comparing a hammer to AI. After 25 years in the UX space, it was my focus on ethics and human rights that caught their eye 😁

https://uxmag.com/articles/if-a-hammer-was-like-ai

axbom, to random
@axbom@axbom.me avatar

Judy Estrin is on fire in her op-ed for Time Magazine.

“What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.“

She goes on to talk about the politics of inevitability and how we are tricked into thinking the future dictated by big tech is unavoidable. (Pro tip: it isn’t)

But this sentence actually caught me off-guard: “On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest.“

Do read it.

https://time.com/6302761/ai-risks-autonomy/

axbom, to random
@axbom@axbom.me avatar

"Our data shows that a little more than half the sites on the web use English as their primary language. That’s a lot more than one might expect, given that native English speakers only make up just under 5% of the global population."

"Millions of non-native English speakers and non-English speakers are stuck using the web in a language other than the one they were born into. And since publicly available text on the internet is now often being used to train large language models like Bard and GPT-4, it suggests we’re already building the same imbalance into technology’s next frontier: artificial intelligence."

https://restofworld.org/2023/internet-most-used-languages/

axbom, to humanrights
@axbom@axbom.me avatar

I really want to work more with human rights in the digitalisation space. If you see any roles like this, think of me. 🥰

axbom, to random
@axbom@axbom.me avatar

Data Privacy and Consumer Protection Practices of Automated Mental Health, Wellbeing and Mindfulness Apps.

This paper reports on a study of popular mental health apps using AI technology to provide the service in question.

https://www.unimelb.edu.au/caide/research/data-privacy-and-consumer-protection-practices-of-automated-mental-health,-wellbeing-and-mindfulness-apps
