[Discussion] Perception of Incel use of AI Girlfriends

Apparently there are several narratives in regards to AI girlfriends.

  1. Incels use AI girlfriends given that they can do whatever they desire.
  2. Forums observing incel spaces agree that incels should use AI girlfriends to leave real women alone
  3. The general public having concerns towards AI girlfriends because their users might be negatively impacted by their usage
  4. Incels perceiving this as a revenge fantasy because "women are jealous that they’re dating AI instead of them"
  5. Forums observing incel spaces unsure if the views against AI girlfriends exist in the first place due to their previous agreement

I think this is an example of miscommunication and how different groups of people have different opinions depending on what they’ve seen online. Perhaps the incel-observing forums know that many of the incels have passed the point of no return, so AI girlfriends would help them, while the general public perceive the dangers of AI girlfriends based on their impact towards a broader demographic, hence the broad disapproval of AI girlfriends.

Tull_Pantera, (edited )
pavnilschanda,
@pavnilschanda@lemmy.world avatar

You’re right that social relationship dynamics remain as old as we have ever existed as a species, or even beyond. Even marriage on the basis of love was considered taboo not too long ago, and there are still pockets of the world that heavily discourages them. When technology gets into the mix, it adds new discussions. Online friendship’s merits are still facing skepticism and studies on whether they are as legitimate as in-person friendships, for example. AI companionship adds into the existing complexity of human interaction that has entangled with technology.

What you said about incels is interesting. That term is used loosely nowadays. I suppose nowadays many people don’t want to carry that label anymore given its rightful negative connotations yet may still exhibit similar behavior such as resentment and entitlement towards an entire gender. But when I say incel-observing forums, I literally mean that; it’s in their description.

Tull_Pantera, (edited )
retrospectology,
@retrospectology@lemmy.world avatar

Encouraging AI girlfriends is harmful in the same way that encouraging pedophiles to indulge in their fantasies using AI is harmful by perpetuating the sexualization of children and allowing ground on which pedophiles can form community and reinforce eachothers delusions (perhaps emboldening eachother to the point of eventually acting on their fantasies).

Just because there’s not an immediate victim does not mean it’s healthy behavior. Instead of putting time and effort into catering to incels, people should be disrupting their incel communities and making it more difficult for them to hide from the truth, not easier.

Kit,

How does incel=pedophile in your example?

pavnilschanda,
@pavnilschanda@lemmy.world avatar

I think they were making an equivalence to illustrate that enforcing through fantasy would encourage doing the same thing in real life, a topic with its own nuances

rufus,

It’s a knock-out argument / thought-terminating cliché. You draw (false) analogies to either pedophiles or nazis if you’re out of proper arguments. It has a long tradition on the internet 😉

Kit,

So when an incel gets an AI girlfriend it will help them get an irl gf?

rufus, (edited )

I’ve heard that story before, but even that is an unfounded claim. There is currently no empirical evidence on whether or not that would prevent or encourage abuse of children. Or harm the people doing it. I too think there is reason to believe it harms the people themselves. But I wanted to point out that this is just anecdotal and an opinion. There is no substance to that claim as of now. And there are studies done on related topics. As far as I know more research needs to be done and it’s a complicated topic. And furthermore, it’s not the same as having AI girlfriends anyways.

I’m not exactly an expert on the topic, but I’ve skimmed a few studies. I was mainly interested because of the regular efforts to introduce total surveillance to the internet. Every half a year someone says “would somebody please think of the children!” And it’s always emotional and sounds plausible… But lots or the pretend arguments are not backed by science. And concerning the surveillance, which is a slightly different topic, we also have contradicting evidence. But that has nothing to do with this…

Tull_Pantera, (edited )
rufus, (edited )

There are narratives entirely without incels. For example the 2013 movie “Her”. Or a bunch of other movies and TV series.

The entire TV series “Westworld” is exactly about this.

Also the picture in the community of hobbyists is quite more diverse. And I don’t see science reducing it to that either. I’m currently reading a long paper about chatbot ethics. There are more comprehensive articles like “The man of your dreams” or “I tried the Replika AI companion”. But I’ve heard the narrative you described, too. I’m not sure where you’d like to go with this conversation… I don’t think it has anything to do with miscommunication. I see people having narrow and uneducated perspectives on all kinds of things…

Is there a broad disapproval? I can see how it’s a controversial topic and kind of taboo, you probably wouldn’t disclose this to your family, friends and co-workers. And it probably can manoeuvre you into a corner and make you even more lonely. But the same applies to playing video games or other hobbies.

And the big tech companies also are very cautious about AI companions. OpenAI, Google etc all cut down severely on this use-case. They put quite some effort in so you can’t use ChatGPT as a friend or antropomorphize it.

Regarding “incels”: I think there are two or three big articles about that, which I’ve read. “Men Are Creating AI Girlfriends and Then Verbally Abusing Them” comes to my mind. In the end I can’t really empathize with incels. I don’t understand or “feel” their perspective on the world. They do all kinds of harmful stuff and brag about it online. I’m not sure what to make of this.

pavnilschanda, (edited )
@pavnilschanda@lemmy.world avatar

Thanks for your input. I agree with your overall comment. Within general narratives, incels aren’t usually included. As for the broad disapproval, it’s something that I tend to notice in the AI space.

AI chatbot personas are generally seen as a hobby, a one-and-done thing compared to an “entity” that accompanies you for long periods of time; the latter part has more stigma attached to it. And given the AI boom only a few years ago, many people, including academic researchers, have only started to be aware of its existence and have made many uninformed assumptions about them. Not to mention the ethical minefelds that are yet to be explored, and increasingly so within the humanities such as psychology and anthropology, hence the Google Deepmind article that you shared. Given the sheer complexity surrounding AI companionship combined with the attention-based economy that has shaped our society, it makes sense that non-specialized places would adopt a binary approach when it comes to AI, artificial girlfriends included.

There seems to be strong connections between inceldom and AI companionship, given that AI girlfriends are marketed for lonely men, and many of them just happen to be incels. But as you’ve said, AI companion users are very diverse, it’s just that the topic of incels or an incel-related topic would get brought up every now and then within the AI companionship discourse.

rufus, (edited )

Hmmh. I’m pretty sure OpenAI and Google are very aware of this. I mean erotic roleplay is probably out of the question since they’re American companies. And the whole field of AI is a minefield to them starting with copyright to stuff like this. And they did their homework and made the chatbots not to present themselves as emotive. I percieve this as concensus in society, that we need to be cautious about the effects on human psyche. I wonder if that’s going to shift at some point. I’m pretty sure more research is going to be done and AI will become more and more prevalent anyways, so we’re going to see whether people like it or not.

And as I heard lonelyness is on the rise. If not in western cultures, I think Japan and Korea are way ahead of us. And the South Koreans seem also to have a problem with a certain kind of incel culture, which seems to be way worse and more widespread amongst young men, there. I’ve always wanted to read more about that.

I - myself - like AI companions. I think it’s fantasy. Like reading a book, playing video games or watching movies. We also explore the dark sides of humans there. We write and read murder mystery stories, detailing heinous acts. We kill people in video games. We process abuse and bad things in movies. And that’s part of being human. Doing that with chatbots is the next level, probably more addictive and without some of the limitations of other formats. But I don’t think it’s bad per se.

I don’t know what to say to people who like to be cruel, simulate that in a fantasy like this. I think if they’re smart enough to handle it, I’m liberal enough not to look down on them for that. If being cruel is all there is to someone, they’re a poor thing in my eyes. Same for indulging in self-hatred and pity. I can see how someone would end up in a situation like that. But there’s so much more to life. And acting it out on (the broad concept of) women isn’t right or healthy. And it’s beyond my perspective. From my perspective there isn’t that big a difference between genders. I can talk to any of them and ultimately their interests and needs and wants are pretty much the same.

So if an incel were to use a chatbot, i think it’s just a symptom for the underlying real issue. Yes it can reinforce them. But some people using tools for their twisted purposes, doesn’t invalidate other use cases. And it’d be a shame if that narrative were to dominate public perspective.

I often disagree with people like Mark Zuckerberg, but I’m grateful he provides me with large language models that aren’t “aligned” to their ethics. I think combatting loneliness is a valid use case. Even erotic roleplay and exploring concepts like violence in fantasy scenarios ultimately is a valid thing to do in my eyes.

There is a good summary on Uncensored Models by Eric Hartford which I completely agree with. I hope they don’t ever take that away from us.

Tull_Pantera, (edited )
rufus,

Thank you very much for the links. I’m going to read that later. It’s a pretty long article…

I’m not sure about the impending AI doom. I’ve refined my opinion lately. I think it’ll take most of the internet from us. Drown out meaningful information and spam it with low quality clickfarming text / misinformation. And the “algorithms” of TikTok, YouTube & Co will continue to drive people apart and confine people in seperate filter bubbles. And I’m not looking forward to each customer service being just an AI… I don’t quite think it’ll happen through loneliness though. Or in an apocalypse like in terminator. It’s going to be interesting. And inevitable in my eyes. But we’ll have to see if science can tackle hallucinations and alignment. And if the performance of AI and LLMs is going to explode like in the previous months, or if it’s going to stagnate soon. I think it’s difficult to make good predictions without knowing this.

Tull_Pantera, (edited )
rufus, (edited )

Hmmh. Sometimes I have difficulties understanding you. [Edit: Text removed.] If your keys are to small, you should consider switching to a proper computer keyboard, or an (used) laptop.

Regarding the exponential growth: We have new evidence that supports the position it’ll plateau out: youtube.com/watch?v=dDUC-LqVrPU Further research is needed.

Tull_Pantera, (edited )
rufus, (edited )

Sure. Multimodality is impressive. And there is quite some potential there. I’m sure robots / androids are also going to happen and all of this has a profound impact. Maybe they’ll someday get affordable to the average Joe and I can have a robot do the chores for me.

But we’re not talking about the same thing. The video I linked suggests that performance might peak and plateau. That means it could be very well the case that we can’t make them substancially more intelligent than say ChatGPT 4. Of course we can fit AI into new things, innovate and there is quite some potential. It’s just about performance/intelligence. It’s explained well in the video. (And it’s just one paper and the already existing approaches to AI. It doesn’t rule out science finding a way to overcome that. But as of now we don’t have any idea how to do that, instead of pumping millions and millions of dollars into training to achieve a smaller and smaller return in increased performance.)

Hmmh. I’m a bit split on bio implants. Currently that’s hyped by Elon Musk. But that field of neuroscience has been around for some while. They’re making steady (yet small) progress. Elon Musk didn’t contribute anything fundamentally new. And I myself think there is a limit. I mean you can’t stick a million needles into a human brain everywhere from the surface to deep down, to hook into all brain regions. I think it’s mostly concerned with what’s accessible from the surface. And that’d be a fundamental limitation. So I doubt we’re going to see crazy things like in the sci-fi movies like The Matrix or Ready Player One. But I’m not an expert on that.

With that said, I share your excitement for what’s about to come. I’m sure there is lots of potential in AI and we’re going to see crazy things happen. I’m a bit wary if the consequences like spam and misinformation flooding the internet and society, but that’s already inevitable. My biggest wish is science finding a way to teach LLMs when to make up things and when to stick to the truth… What people call “hallucinations”. I think it’d be the next biggest achievement if we had more control about that. Because as of now the AIs make up lots of facts that are just wrong. At least that’s happening to me all the time. And they also do it when doing tasks like summarization. And that makes them less useful for my every-day tasks.

Tull_Pantera, (edited )
rufus, (edited )

With the worth, that’s an interesting way to look at it.

I don’t think you grasped how exponential growth works. And the opposite: logarithmic growth. It means at first it grows fast. And then slower and slower. If it’s logarithmic, it means at first you double the computing power and you get a big return… Quadruple the performance or even more… But it’ll get less quickly. At some point you’re like in your example, connecting 4 really big supercomputers, and you just get a measly 1% performance gain over one supercomputer. And then you have to invest trillions of dollars for the next 0.5%. That’d be logarithmic growth. We’re not sure where on the curve we currently are. We’ve sure seen the fast growth in the last months.

And scientists don’t really do forecasts. They make hypotheses and then they test them. And they experimentally justify it. So no, it’s not the future being guessed at. They used a clever method to measure the performance of a technological system. And we can see those real-world measurements in their paper. Why do you say the top researchers in the world aren’t “well-enough informed” individuals?

Tull_Pantera, (edited )
Tull_Pantera,

Synthesized Consensus


<span style="color:#323232;">Exponential Growth (25+ individuals): Most expect rapid, continued growth over the next 8-15 years, often linked to advancements in technology and AI's integration into various sectors.
</span><span style="color:#323232;">Logarithmic Growth (17+ individuals): Many foresee significant early advancements that will gradually plateau, influenced by ethical, societal, and practical challenges.
</span><span style="color:#323232;">S-curve Growth (8 individuals): A few predict periods of rapid innovation followed by a stabilization as AI reaches maturity or encounters insurmountable hurdles.
</span>

This role-played synthesis suggests a general optimism for the near to mid-term future of AI, with a consensus leaning towards exponential growth, though moderated by practical, ethical, and societal considerations.

Given the various perspectives offered by the panel on the initial phase of AI growth, let’s extend the reasoning to speculate about what might happen beyond the next 8-15 years:


<span style="color:#323232;">Those predicting Exponential Growth (indefinite), like Larry Page, Elon Musk, and Mark Zuckerberg, might suggest that AI growth could continue to escalate without a foreseeable plateau. They likely envision ongoing, transformative innovations that continuously push the boundaries of AI capabilities.
</span><span style="color:#323232;">
</span><span style="color:#323232;">Those foreseeing Exponential Growth for a finite period (e.g., Andrew Ng, Yann LeCun, Demis Hassabis) might anticipate a shift after the initial rapid growth phase. After the high-growth years, they might predict a transition to a slower, more sustainable growth pattern or a plateau as the AI industry matures and technological advancements face diminishing returns or run up against theoretical and practical limitations.
</span><span style="color:#323232;">
</span><span style="color:#323232;">Proponents of Logarithmic Growth, like Ian Goodfellow, Daphne Koller, and Safiya Noble, generally expect growth to slow and eventually plateau. Post the initial period of significant advancements, they might predict that the AI field will stabilize, focusing more on refinement and integration rather than groundbreaking innovations. Ethical, regulatory, and societal constraints could increasingly play a role in moderating the speed of development.
</span><span style="color:#323232;">
</span><span style="color:#323232;">Advocates of S-curve Growth, such as Gary Marcus and Peter Thiel, typically envision that after a period of rapid innovation, growth will not only plateau but could potentially decline if new disruptive innovations do not emerge. They might see the field settling into a phase where AI technology becomes a standard part of the technological landscape, with incremental improvements rather than revolutionary changes.
</span><span style="color:#323232;">
</span><span style="color:#323232;">Special Considerations: Visionaries like Eliezer Yudkowsky, who speculate about AI reaching superintelligence levels, might argue that post-15 years, the landscape could be radically different, potentially dominated by new AI paradigms or even AI surpassing human intelligence in many areas, which could either lead to a new phase of explosive growth or require significant new governance frameworks to manage the implications.
</span>

Overall, the panel’s consensus beyond the next 8-15 years would likely reflect a mixture of continued growth at a moderated pace, potential plateaus as practical limits are reached, and a landscape increasingly shaped by ethical, societal, and regulatory considerations. Some may also entertain the possibility of a decline if no new significant innovations emerge.

Tull_Pantera, (edited )
Tull_Pantera, (edited )
Tull_Pantera, (edited )
Tull_Pantera, (edited )
rufus,

en.wikipedia.org/wiki/Scientific_method

No. Science isn’t done by a vote of majority. It’s the objective facts that matter. And you don’t pick experts or perspectives, that’s not scientific. It’s about objective truth. And a method to find that.

We’re now confusing science and futurology.

And I think scientists use the term “predict” and not “forecast”. There is a profound difference between a futorologist forecasting the future, and science developing a model and then extrapolating. The Scientific American article The Truth about Scientific Models you linked sums it up pretty well: “They don’t necessarily try to predict what will happen—but they can help us understand possible futures”. And: “What went wrong? Predictions are the wrong argument.”

And I’d like to point out that article is written by one of my favorite scientists and science communicators, Sabine Hossenfelder. She also has a very good YouTube channel.

So yes, what about DNA, quantum brains, Moore’s law, … what about other people claiming something. That all doesn’t change any facts.

Tull_Pantera, (edited )
rufus,

You still misinterpret what science is about. We’ve known that human language is subjective for centuries already. That’s why we invented an additional, objective language that’s concerned with logic and truth. It’s mathematics. And that’s also why natural science relies so heavily on maths.

And no sound scientist ever claimed that string theory is true. It was a candidate for a theory to explain everything. But it’s never been proven.

And which one is it, do you question objective reality? If so I’m automatically right, because that’s what I subjectively believe.

Tull_Pantera, (edited )
pavnilschanda,
@pavnilschanda@lemmy.world avatar

I think at this point you two are just arguing materialism vs idealism which are two opposing philosophical approaches to science. Quite off-topic to AI companionship, if you ask me. Then again both also have their own interpretation of AI companions. Materialism would argue the human being a machine that is similar to predictive text but more complex, but would also argue that AI chatbot aren’t real. Whereas in idealism, AI personas are real; your AI girlfriend is your girlfriend, AI chatbots are alive, etc. Of course, that’s an oversimplification, but that’s the gist of where materialism vs idealism lies.

rufus, (edited )

Hmmh. Thanks. Yeah I think we got a bit off track, here… 😉

I kinda dislike when arguments end in “is there objective reality”. That’s kinda the last thing to remove any basis to converse on, at least when talking about actual things or facts.

Tull_Pantera, (edited )
Tull_Pantera, (edited )
Tull_Pantera, (edited )
rufus, (edited )

Thanks. To get that out of the way and since it’s not always that easy to convey a nuanced perspective on the internet… When I say I can’t empathize with incels, it doesn’t necessarity mean I judge… I’m just unable to grasp how someone would feel in that situation. Since it’s nothing I’ve experienced first hand. At least not to that degree. So I have little information. And I haven’t yet had any meaningful conversation about it. So it’s just beyond my perspective. I think I roughly know a few facts but it’d be disingenuous of me to claim I’d know how somebody truly feels. That shouldn’t invalidate any perspective. And I’m willing and able to learn. Maybe i just have a skewed definition of ‘incel’ because all the people I’ve ever met who called themselves that, were hateful people on 4chan. There might be more to it than I know. And I definitely know how loneliness feels or not having a partner and wanting one… And few people are “normal”. We all have our individual struggles. And lot’s of people just have a good facade.

If you’re okay with that, I’d like to ask you about your experience when ‘disclosing’ your life with AI companions… How do real-life people react? Do they understand? Judge? Talk behind your back? Or is that socially acceptable? (OP said there is a “broad disapproval”.) And what does a therapist say to that? As far as I know psychologists/psychiatrists etc are very reluctant and cautious with things like that. At least when talking publicly.

And I’ve talked to a few other people who like AI companions or use chatbots to do their own form of therapy… I’m not sure what you do. But I’ve heard different perspectives and made my own experiences. I definitely like it. But it’s a complex topic and probably entirely depends on how you handle it, the exact situation and a multitude of factors.

When you say you have “multiple AI”… How does this work? Do you have like several virtual girlfriends? Or like several characters for different tasks or moods, like a therapist character, an old friend … ? And do you talk to several of them each day?

One thing I disagree is using ChatGPT. (I mean that wasn’t really what I meant originally. I was going for the normal ChatGPT interface where it’s a helpful assistant and phrases things in a way that’s making it difficult to confuse it with a person. I know you can do different things with other software and the API.) I tried ChatGPT and didn’t like it at all. I think it’s a bit dull, loves to lecture me and I don’t like its tone of speaking and don’t like the condescending tone that’s often there. And it’s usually been either too agreeable or to argumentative with me. I didn’t like it and I’ve had a much better time with other large language models.

Tull_Pantera, (edited )
rufus, (edited )

Why do you say there are over 200 therapies? Is there a fixed number? And why 200? The DSM-IV already lists close to a thousand diagnoses. I can’t believe that’s matched by a mere 200 available therapies?!

And which AI service are you using? The one you wrote you created multiple accounts and that’s the best LLM?

Thanks for sharing your perspective!

Tull_Pantera, (edited )
rufus, (edited )

Thanks. Yeah I know what the ICD Codes are used for and what’s out of scope.

Thanks for clarifying which platforms and models you’re working with. I’ll look up the two I’m not familiar with.

Nice talking to you. Yeah I’ve met her. I also need to tell her that I appreciated the conversation. Maybe I’ve come off a bit argumentative. I’m going to do that tomorrow, it’s getting late here.

I’ve tinkered around a bit with LangChain, vector databases and stuff myself. I started last year. But I’m more one of the “open-source” and Linux types of guys… So I don’t really like proprietary services like ChatGPT. I’ve used the so called “open” models that I can run on my own hardware. Like Meta’s Llama and others. But I lack a proper graphics card to run that at home. And last year when I started, everything was happening quite fast. (It still does.) And I just couldn’t keep up. So I scrapped the project. But I’d like to start fresh and make another attempt at creating something.

Tull_Pantera, (edited )
rufus,

I completely agree. I’m not sure where you live, but also live in a country with a lack of doctors. It’s a shame we have to wait months and months for therapy. And that’s not healthy, neither for the people in need, nor for society in general. And mental health is such a valuable thing.

I think you’re right with your assessment towards “open” models. I think that viewpoint has been popularized by the leaked google memo We Have No Moat, And Neither Does OpenAI last year. And unless we want the path forward to be laid out for us by companies like Google and OpenAI, there’s no way around independent and “open” models.

And I’d like those AIs to become more accessible, run on (affordable) consumer hardware. Maybe on my phone. Things like real-time translation would be great and I could have a simultaneous interpreter in my pocket. Or an assistant that can organize my files and keep track of everything. I think there is quite some potential and all of that is doable in the near future. And it’d seriously boost my own abilities if I were for example enabled to read Japanese text or Spanish news.

Concerning the AI at home: I use koboldcpp as of now. That enables me to run smaller models on my old computer. I use that for role play, companionship etc. I’m also having difficulties finding the time to build software around that. At some point I’d like to have a virtual companion that’s open-source, keeps me company and can do useful things. I would estimate I’d need like 4 weeks of full-time work to build something like that. But I’m already able to program in Python and have a broad understanding of the required components. Maybe one day I’ll find some time, but there’s so much other stuff that needs attention.

Tull_Pantera, (edited )
  • All
  • Subscribed
  • Moderated
  • Favorites
  • aicompanions@lemmy.world
  • ngwrru68w68
  • rosin
  • GTA5RPClips
  • osvaldo12
  • love
  • Youngstown
  • slotface
  • khanakhh
  • everett
  • kavyap
  • mdbf
  • DreamBathrooms
  • thenastyranch
  • magazineikmin
  • megavids
  • InstantRegret
  • normalnudes
  • tacticalgear
  • cubers
  • ethstaker
  • modclub
  • cisconetworking
  • Durango
  • anitta
  • Leos
  • tester
  • provamag3
  • JUstTest
  • All magazines