luthis,

There’s been huge discussion on this already: lemmy.nz/post/684888

Sorry, not sure how to link the post so it opens in your instance.

TL;DR

Any result is going to be biased. If it generated a crab wearing Lederhosen, that’s obviously a bias towards crabs. You can’t have an unbiased output, because the prompting controls the bias. There’s no cause for concern here. The model outputs, by default, the general trend of the data it was trained on. If it had been trained on crabs, it would generate crab-like images.

You can fix bias with LoRAs and good prompting.

cerevant,

The bias isn’t in the software, it is in the data. The stock photos of professional women that were fed in were white.

That doesn’t say anything about the AI, but rather the community that created those biases.

luthis,

Yes, but they trained on large amounts of easily accessible data, which actually says that the stock photo websites are the biased ones here.

No model can be trained on an equal amount of diverse data for everyone, and it’s not supposed to anyway. I bet it was hardly if at all trained on Mongolian goat herders, but you could hardly say it’s biased against them, just that there wasn’t an easily accessible large amount of pictures of them.

cerevant,

That’s my point. The AI isn’t an independent subject to be criticized, it is a cultural mirror.

FaceDeer,

I recall a somewhat similar incident when I was showing an in-law of mine how Stable Diffusion worked a while back. She's of Indian descent, and she asked Stable Diffusion to generate a picture of an Indian woman. All of the women it generated had Bindis and other "traditional" Indian cultural garb on, and she was initially kind of annoyed by that. But I explained that that's because most of the photos of women in the training set that were explicitly tagged as Indian were dressed that way, whereas the rest of the Indian women in the training set probably weren't explicitly tagged. They were just women.

It was kind of interesting trying to figure out which option was more biased. Realizing that there was an understandable reason behind that helped ease her annoyance.
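The tagging effect described above can be sketched in a few lines of Python (the numbers and tag names below are entirely made up for illustration): if the only photos explicitly tagged "indian" are the "traditional" ones, then conditioning on that tag can only ever draw from traditional imagery, no matter how many untagged Indian women are in the training set.

```python
# Toy dataset (hypothetical counts): 10 explicitly tagged photos, all in
# traditional dress, plus 90 photos of Indian women tagged only as "woman".
dataset = (
    [{"tags": {"indian", "traditional_dress"}}] * 10  # explicitly tagged
    + [{"tags": {"woman"}}] * 90                      # untagged, just "women"
)

# Conditioning on the "indian" tag can only see the tagged subset.
matches = [d for d in dataset if "indian" in d["tags"]]
print(len(matches))                                            # 10 of 100
print(all("traditional_dress" in d["tags"] for d in matches))  # True
```

So even though 90% of the (hypothetical) Indian women in the set wear ordinary clothes, every tag-conditioned sample comes out "traditional".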

clobubba,

deleted_by_author

fubbernuckin,

    Just, you know, as long as that profile isn’t for some site whose sole purpose is allowing others to identify you for your career.

    jeena,
@jeena@jemmy.jeena.net

    To be fair, I used a Chinese AI picture generator app with my face and it made it more Asian looking. It’s obvious that each software has biases towards the people who made and trained it. It’s not good, but it’s expected and happening everywhere.

    stopthatgirl7,

    Ok, but she asked it to make her look professional and the only thing it changed was her race. Not the background, not her clothes. Last I checked, a university sweatshirt wasn’t exactly professional wear.

    Vlyn,

It doesn’t really matter that it was her in this image. When you put “professional” into it, you can expect results along these lines:

    www.google.com/search?q=professional+woman

And overall, in I’d say 7 out of 10 images of that Google search, it’s a white woman. So the probability is high that the training data has a bias towards that too.

Someone in the original lemmy.nz post said they did the exact same thing, same image, same prompt, and it turned her Indian. So if you have very wide training data, the result will be rather “random”. Or you have very narrow training data, and the results will always look similar.

Take, for example, a beauty-filter app aimed at an Asian audience: it will turn a white person into an Asian one. But no one complains there that the app is racist.

    stopthatgirl7,

    Notice how not a single woman there is wearing a university sweatshirt.

    My point still stands. It didn’t touch her clothing to make it more “professional.” Just her race. It screwed up on multiple levels here.

    tenextrathrills,

    It’s ok to admit that you don’t understand how the models were trained and that it in no way “screwed up”.

    stopthatgirl7,

    Have a lovely day.

    dorkian_gray,

    Lol, as though that’s ever going to happen. It’s a nice thought though, I’ll join you in hoping that we’ll one day live to witness such a miracle.

    Hobovision,

    Sounds like they fucked up training the AI then. For a user it doesn't matter whether the AI is designed poorly or trained poorly, it's behaving poorly.

    luthis,

That’s because the denoising was set low. You can tell that it did actually modify her sweatshirt and the background. The model is just not able to turn her sweatshirt into a blazer while keeping her face relatively similar.

    To do this kind of editing, you add noise to the image and get the model to remove the noise, painting in new details. To fully change clothes, you would have to add so much noise that you would lose the original image entirely and end up getting a completely different person, background, pose, everything.

    We shouldn’t be surprised that race changed. The model didn’t know what race she was in the first place. It was just told to ‘change the image according to these prompts’ with about this |_| much wiggle room.
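To make that “wiggle room” concrete: in a typical img2img pipeline, the denoising strength just decides how many diffusion steps are skipped, and therefore how much of the original image survives. This is a minimal sketch of the step-skipping arithmetic (it follows the logic used by diffusers’ Stable Diffusion img2img pipeline, but treat the details as an assumption about that implementation):

```python
def img2img_skipped_steps(num_inference_steps: int, strength: float) -> int:
    # "strength" (denoising) is in [0, 1]: 0 keeps the input image
    # untouched, 1 discards it and generates from pure noise. The pipeline
    # adds noise corresponding to `strength` and then only runs the last
    # strength * num_inference_steps denoising steps; the steps it skips
    # are exactly where the original image's structure is preserved.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    return num_inference_steps - init_timestep

# With a low strength, most steps are skipped: pose, background, and the
# sweatshirt survive, and only fine detail (like a face) gets repainted.
print(img2img_skipped_steps(50, 0.3))  # → 35 of 50 steps skipped
print(img2img_skipped_steps(50, 0.9))  # → 5: almost everything repainted
```

At strength 1.0 no steps are skipped, which is why fully changing the clothes would also mean losing the person, background, and pose entirely.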

    Zarxrax,

    Playground AI founder Suhail Doshi said that “models aren’t instructable like that” and will pick “any generic thing based on the prompt.” However, he said in another tweet that Playground AI is “quite displeased with this and hope to solve it.”

    So the model wasn’t even designed to be used in the way she was trying to use it.

    Half of the outrage against ai models can be attributed to the users not even understanding what they are doing. Like when people complain about ChatGPT giving wrong information, when warnings about it are written right there on the page where users are typing in their prompts.

EncryptKeeper, (edited)

    I mean wouldn’t this just be due to like the sheer number of BS “female professional” stock photos used on the websites of call centers globally, that the AI ingested? Said “professional white person” photos being used especially in non-western websites in order to gain legitimacy in the west?

    Like given what little I know about how AI ingests and spits out data, it might be correlating the buzzword “professional”, and stock photos of white people that were ingested from Asian websites. It might be “wrong” but the AI doesn’t attempt to be “right” it’s just trying to give you what you expect based on the data it has.

    EatMyDick,

    “Asian MIT grad who knows exactly what she is doing, pretends to be shocked after intentionally triggering industry known bias that are already acknowledged and being worked on”

    This is just a student manufacturing controversy ensuring she has a great talking piece at her interviews.

    fubbernuckin,

While it’s definitely a predictable outcome, it’s really not fair to assume that’s what her motives were.

    EatMyDick,

    Yeah I’m sure this is the first time this MIT computer science grad had noooooo idea what she was doing.

    tictac2,

    Well we all know that Asians are far from professional

    /s

    monsterlynn,

    @stopthatgirl7 She also ended up with slightly frizzy hair compared to her relatively straight hair.

    All around messed up and creepy.

    WoahWoah,

    That’s what I noticed. It made her hair arguably LESS “professional.”

    Blamemeta,

You have to pick the model that fits you, and specify what you want. This is how AI works mathematically: it trends towards one image.

It’s like buying foundation at random and being upset it doesn’t fit your skin tone perfectly.
