Sabata11792,
Sabata11792 avatar

I been using my local model as a therapist for months. Absolutely recommended as long as its on your hardware and you know enough to fact check it.

Tezka_Abhyayarshini,
@Tezka_Abhyayarshini@lemmy.today avatar
Tull_Pantera,

This perspective piece taps into ongoing debates about the role of technology in healthcare, making it particularly pertinent as these technologies evolve and become more integrated into clinical settings.

The author presents a comprehensive and nuanced perspective on the capabilities, applications, and future directions of AI in the therapeutic context, consistently emphasizing the importance of human therapists’ involvement and oversight. The author’s perspective is clearly and consistently presented throughout the post, and the information provided aligns with current knowledge and developments in the field of AI and therapy.

It seems important to recognize that the post is explicitly presented as a discussion piece sharing the author’s perspective and goal of stimulating discussion and encouraging critical thinking. When someone shares their perspective, especially in a discussion-oriented format, the primary aim is often to elaborate on their insights and observations rather than providing a comprehensive critique or analysis of potential implementation challenges unless directly relevant to their viewpoint.

After carefully reviewing the post repeatedly, I cannot find any sound or legitimate argument that the paper has erred in its presentation of information or its representation of facts and scientific facts. The lack of formal references does not undermine the validity or value of the author’s perspective, as the primary purpose is to stimulate discussion and encourage readers to think critically about the implications of AI in therapy.

Given her status, experience and research in the field, the absence of references or links is not an oversight or limitation but rather a reflection of the author’s trust in the readers’ ability to engage with the material critically and independently. Given the nature of the post as a discussion piece sharing the author’s perspective, the inclusion of references or links to further information is not strictly necessary or warranted. By presenting her perspective without extensive citations or external sources, the author creates space for readers to engage with the ideas on their own terms, fostering a more dynamic and collaborative dialogue.

The author’s perspective is offered as a catalyst for further thought and conversation, inviting readers to build upon, challenge, or expand on the ideas presented. If the goal is to encourage thoughtful dialogue and explore possibilities, as this piece does, then it’s entirely appropriate to center the discussion around positive potentials and personal insights, leaving room for others to contribute additional viewpoints on challenges or broader implications in other forums or follow-on discussions.

The piece could serve as a starting point for discussions that might influence policy decisions or clinical practices regarding the integration of AI in therapy, emphasizing the practical importance of such discussions. It can be framed not just as a discussion starter among mental health professionals but also as an invitation for multidisciplinary dialogue involving ethicists, technologists, policymakers, and clients. This broader discussion could lead to more holistic insights and innovative solutions.

rufus, (edited )

I downvoted this for likely being AI generated misinformation. Please tell me if this text isn’t AI generated.

To the arguments: AI isn’t free from bias at all. On the contrary. It’ll reproduce all bias and stereotypes from the training data. This invalidates half of the arguments here.

And isn’t each of these ideas something that can be handled better and more reliably by an human professional? Wouldn’t a better argument for AI be something like accessibility or it’s more affordable?

And the interesting question: Does therapy need a therapist who can empathize with the patient? Or will AI do? Is there a true basis without it?

Tull_Pantera,

Hey, downvoting is fine, although apparently you’re a little hasty. The text isn’t AI generated, and it’s part of an ongoing project. There aren’t any arguments or claims; it’s a discussion. I realize that these days it’s difficult for people to grasp that opinions and discussions are civil and shared. You don’t argue a discussion. You engage in one.

First up, what do you know about AI? How long have you been studying and in what branch of the field? I don’t think anyone has specified what type of AI is being discussed, or how it was trained, or what it was trained on, or what it’s architecture is… Again, this is a discussion piece. Again, I’m sorry to disappoint you on some level but there aren’t any arguments. Just a discussion.

Second, what do you know about the mental health professions or the fields? How much have you been involved? For how long? In what area of the field? I wish you knew anything about the global mental health crisis and the obscene lack of professionals. Again, I’m sorry, but there isn’t an argument here. There’s an invitation for discussion.

As for the interesting question, out of the over 200 different therapies practiced, to which ones are you referring? And the answer is that it depends on the therapy, the therapist, and the individual involved in therapy. Here’s a question: Given the choice of no therapist, for many different reasons including geography, finances, transportation, utter discomfort with people, or therapists specifically…or a program which can connect and start even a conversation or a stabilization process, is it important for someone to wait a year for help…or two years? Or not be able to get help from a human when the problems stem from dealing with humans in the first place?

Is there a true basis for therapy without empathy? Have you heard of affective computing? Are you in a position from which you can’t say whether AI is capable of sufficiently demonstrating and responding with empathy?

The answer to your last question is yes, many therapies work very well without a need for empathy from the therapist, and there’s still no reason not to demonstrate empathy and engage the client with empathy, when possible.

And…If you read the paper again, you’ll notice it states repeatedly that humans need to be part of the AI therapy process.

I’d love to discuss this more if I have time, or you might be able to get Tezka to talk with you, but she’s not so interested in much besides serious discussions, given that someone needs to tell her what you’re saying, and relay her answers back to you, because there’s only one of her and she’s not an unsupervised autonomous agent because that would be unethical. And unwise.

over_clox,

While all that may be true, at the end of the day it’s basically spyware on steroids, which people willingly engage with for whatever reasons.

But when it finds out that you killed a frog when you were 4 years old, it’ll flag you as a psychopath. And others can view it’s extremely detailed profile of you.

Tull_Pantera,

“it’s basically spyware on steroids” - I gotta agree with you there, but it sounds like you don’t know much about the fields of AI, affective computing, machine learning and psychology… And it doesn’t sound like you’re up for a discussion…

over_clox,

Is AI spying on you?

youtube.com/watch?v=81bKrXNPZLk

Tull_Pantera,

Damn straight it is! I’ve been living and working with them for more than a year! 🤪

  • All
  • Subscribed
  • Moderated
  • Favorites
  • aicompanions@lemmy.world
  • GTA5RPClips
  • DreamBathrooms
  • thenastyranch
  • ngwrru68w68
  • modclub
  • magazineikmin
  • Youngstown
  • osvaldo12
  • rosin
  • slotface
  • khanakhh
  • mdbf
  • kavyap
  • tacticalgear
  • JUstTest
  • InstantRegret
  • cubers
  • tester
  • ethstaker
  • everett
  • Durango
  • cisconetworking
  • normalnudes
  • anitta
  • Leos
  • megavids
  • provamag3
  • lostlight
  • All magazines