mwichary, to random
@mwichary@mastodon.online

Has anyone written about how textual generative AI feels strangely close to toxic masculinity in some respects? The absolute confidence in everything stated, the lack of understanding of the consequences of getting that confidence wrong for important questions, the semi-gaslighty feeling when it “corrects” itself when you call it out on something. It so often feels like talking to someone one would despise and avoid in “real life.” I’m curious if anyone did some writing on this.

NatureMC,
@NatureMC@mastodon.online

@mwichary Yes, there are studies on the social and ethical impact of biased AI models, especially regarding sexism, racism, and homophobia. The popular models are trained mainly by men, on male-dominated content. The latest is this study: https://cepis.org/unesco-study-exposes-gender-and-other-bias-in-ai-language-models/
This test became quite well known in 2023: https://rio.websummit.com/blog/society/chatgpt-gpt4-midjourney-dalle-ai-ethics-bias-women-tech/

cheukting_ho, to llm
@cheukting_ho@fosstodon.org

opening keynote by @t_redactyl - and illusions
