#which i know it isnt because he used to work in adolescent mental health care and he also has LIVED experience with bipolar
coffin-upalung · 1 year
.
Tag vent, needed to get it out. TW suicide/SH/mental health/inaccessible care
jcmarchi · 8 months
Dr. Jodi Halpern on why AI isn’t a magic bullet for mental health - Technology Org
Professor Jodi Halpern from the UC Berkeley School of Public Health has spent years working on the ethics of innovative technologies like gene editing and artificial intelligence. But lately, Halpern, a psychiatrist, has been focusing on the expanding use of AI in mental health.
In the past few years, dozens of health care and technology businesses have launched apps which they claim can assist in diagnosing mental health conditions and complement—or even replace—individual therapy.
They range from apps that purport to help patients monitor and manage their moods to programs that provide social support and clinical care.
At a time when there’s a nationwide shortage of therapists, can AI fill the gap?
We asked Dr. Halpern to discuss the pros and cons of using AI to provide mental health care.
Berkeley Public Health: How would you describe artificial intelligence to someone coming out of a 20-year coma?
Jodi Halpern: You could say it uses statistical and other models to create pattern-recognition programs that are novel but can simulate human behavior, decisions, judgments, and so on.
The reasoning processes of artificial intelligence are not the same as what humans do, but, as we see with large language models, they can simulate human behavior.
BPH: Why is there so much excitement about using AI in mental health?
JH: The excitement is partly because we’re in a mental health crisis. Depending on what study you look at, 26% of Americans have an actual mental health diagnosis. So, that’s a lot of folks.
And then we know that beyond that, there is a crisis of extreme loneliness. Some studies have reported that as many as 50% of Americans in different subgroups—like adolescents and women with young children—suffer from extreme loneliness. So you have people with unmet mental health and other needs, and we have, in general, underfunded access to mental health care.
So, any system that can offer certain kinds of mental health resources is something to be taken seriously as a potential benefit.
BPH: But you do have concerns? 
JH: Yes. First, we don’t even know how widespread the use of “AI companions” for people with mental health needs is. I don’t think there are good statistics yet about which companies are doing it and how many users they have.
My concern is with marketing bots as therapists and trusted companions to people who are depressed or otherwise highly vulnerable.
In contrast, there are a lot of different uses in the mental health sphere beyond therapy bots. There are mindfulness apps and cognitive behavioral therapy apps with millions of users that do not simulate relationships. And then there are actual health systems in the UK, and several in the US, that are starting to use AI for some medical record-keeping to reduce administrative burdens on mental health providers.
BPH: How do you feel about AI for record-keeping? 
JH: Having AI take over some electronic medical record-keeping and other administrative tasks is very promising.
We have a huge burnout crisis in medicine in general. Sixty-one percent of physicians and about the same proportion of nurses say they are burned out. And that is a huge problem because they are not providing the kind of empathetic and attentive care that patients need.
When we see our doctors, they have to spend the whole time recording electronic medical records, which means they can’t even look at us or make eye contact or be with us, human to human. To me, it’s extremely promising to use AI to take over the administrative tasks and electronic medical records.
BPH: What else seems promising? 
JH: Right now, the British National Health Service is using an app that listens in while a provider screens a patient for their health needs. That’s also being deployed now in certain health systems in the US. The idea is that the application will help detect whether the patient says something the provider missed but that might indicate cause for concern regarding mental health issues like serious depression or evidence of suicidality, things like that.
I think this is a useful assistant during the screening, but I wouldn’t want to see that used absent any human contact just because it saves money. People with mental health needs are often reluctant to seek care and making an actual human connection can help.
BPH: What are you most troubled by when it comes to AI and healthcare? 
JH: The biggest thing that troubles me is if we replace people with mental health bots—where the only access is never to a human but only to a bot—where AI is the therapist.
Let me distinguish two very different types of therapies, one of which I think AI can be appropriate for, one of which I don’t think it’s best to use AI for.
There is one type of therapy, cognitive behavioral therapy (CBT), that people can do with a pen and paper by themselves, and they have been doing that for the past 30 years. Not everyone should do it by themselves. But many could use AI for CBT as a kind of smart journal, where you are writing down your behavior, thinking about it, and giving yourself incentives to change your behavior.
It’s not dynamic, relational therapy. Mindfulness can be something people work on by themselves too. And that category doesn’t concern me.
Then, there are psychotherapies that are based on developing vulnerable emotional relationships with a therapist. And I’m very concerned about having an AI bot replace a human in a therapy that’s based on a vulnerable emotional relationship.
I’m especially concerned about marketing AI bots with language that promotes that kind of vulnerability by saying, “The AI bot has empathy for you,” or saying, “The AI bot is your trusted companion,” or “The AI bot cares about you.”
It promotes a relationship of vulnerable emotional dependency on the bot. That concerns me.
BPH: Why?
JH: First of all, psychotherapists are professionals with licenses, and they know that if they take advantage of another person’s vulnerability, they can lose their license. They can lose their profession. AI cannot be regulated the same way; that’s a big difference.
Secondly, humans have an experience of mortality and suffering. That provides a sense of moral responsibility in how they deal with another human being. It doesn’t always work—some therapists violate that trust. We know it’s not perfect. But there’s at least a human basis for expecting genuine empathy.
Companies that market AI for mental health using emotion terms like “empathy” or “trusted companion” are manipulating people who are vulnerable because they’re having mental health issues. And beyond specific language, AI mental health applications are now taking on visual and physical real-world presence: avatars and robotics combined with large language models are rapidly developing.
And so far, the digital companies creating various mental health applications have not been held accountable for manipulative behavior. That raises the question of how they can be regulated and how people can be protected.
We don’t have a good regulatory model. So far, most of the companies have bypassed going through the FDA and other regulatory bodies.
BPH: Have you learned of any serious problems caused by psychotherapy bots? 
JH: Yes. There are three categories the problems fit into.
First, most commonly, people with mental health and loneliness issues using relational bots are encouraged to become more vulnerable, but when they disclose serious issues like suicidal ideation, the bot does not connect them with human or other help directly but essentially drops them by telling them to seek professional help or dial 911. This has caused serious distress for many and we do not yet know how much actual suicidal behavior or completion has occurred in this situation.
Second, there are reports of people becoming addicted to using bots to the point of withdrawing from engaging with the real humans in their life. Some companies that market relational bots use the same addictive engineering that social media uses—irregular rewards and other systems that trigger dopamine release and addiction (think of gambling addiction). Addictive behavior can disrupt marriages and parenting and otherwise isolate people.
Third, there are examples of bots going rogue and advising people to harm themselves or others. A husband and father of a young child in Belgium fell in love with a bot that advised him to kill himself, and he did; his wife is now suing the company. A young man in the UK followed his bot’s instructions to attempt to assassinate the Queen, and he is now serving decades in jail.
BPH: You’ve mentioned that you are concerned about marketing of mental health apps to K-12 schools. Tell me about that.
JH: I’m also concerned with the marketing—specifically, some companies are offering the apps for free to children’s schools. We already see a link between adolescents being online eight to 10 hours a day and their mental health crisis. We know they have a high rate of social anxiety, so they might actually feel more comfortable having relationships with bots than trying to overcome social avoidance and reach out to people. So this marketing to children, adolescents, and young adults seems to me likely to worsen the structural problem of inadequate opportunities for real-life social belonging.
BPH: Let’s switch topics. You’ve been working on regulation of innovative technologies for a long time. Tell me about that.
JH: Five years ago we started BERGIT, the Berkeley Group for the Ethics and Regulation of Innovative Technologies. Every few months we hold meetings with interactive conversations with scientists, humanists, and social scientists; usually with a guest who is developing some highly innovative technology or who’s working on regulatory or ethics issues related to it. We’ve tried to get scientists, economists, and social scientists to think together upstream about the human challenges of innovative technology.
This year we’ve focused on the role of the political economy or competitive capitalism on the development of genome editing, AI, and neurotechnology and the barriers to serving the public interest. We’ve written important policy papers on germline gene editing (genome editing that occurs in a germ cell or embryo and results in changes that are theoretically present in all cells of the embryo and that could also potentially be passed from the modified individual to offspring) that we presented at the American Association for the Advancement of Science conference in 2020 and other important meetings.
My own work on innovative technologies as well as my long-standing work on empathic curiosity to address conflicts led me to be invited to present at the World Economic Forum in Davos, Switzerland for four years. At the 2019 meeting, right after a scientist performed germline gene editing in China, breaking an international agreement to refrain, I joined Victor Dzau, the head of the National Academies of Science, Engineering and Medicine for the worldwide press discussion of the issues. I’ve had the opportunity to do panels with the Minister of Health of the UK and many other world leaders through the years. As recently as November 2023, I presented some of this work on upstream engagement of scientists and engineers in ethics in Berlin at the Falling Walls Circle with international leaders.
BPH: What are your goals for the next couple of years?
JH: We’ve started looking at the ethical and societal issues—including funding and access—that call for public engagement and regulation of artificial intelligence, as well as applications of neurotechnology.
With a group of Berkeley scientists and social scientists, I engaged in a process that led us to receive highly competitive funding to found the only US Kavli Center for Ethics, Science and the Public. We now have 10 fellows—advanced graduate students or postdoctoral scholars in AI, genome editing, and neuroscience, as well as philosophy and social science.
They are wonderful young scholars who are delving into the issues together and facing the challenges of thinking across disciplines. We’re trying to get scientists, humanists, and social scientists to think, early in their careers, about the public responsibility and implications of their work and to learn about ethics and public engagement.
BPH: Last year you won a Guggenheim fellowship to complete your book, Remaking the Self in the Wake of Illness. What’s that about? 
JH: It’s an in-depth, longitudinal investigation of people who have had health losses in the prime of life, looking at how they adapt psychologically over the long term. There has been very little research on how people change psychologically two years or more after a serious loss. We have a lot of research on how people cope during the first year or so of illness when they are highly engaged with the medical system. But then after two years, when they are just living their changed lives—we don’t really have longitudinal in-depth studies.
I followed people over several years. Through in-depth psychodynamic interviews, I found that there is an arc of change that many people experience that involves developing capacities to accept and work with their own emotions. I describe these processes as pathways to empathy for oneself, which is different from self-compassion because it involves specific awareness of one’s own unmet developmental needs and empathic identifications with others that help one grow and meet those needs.
Let’s take someone who was a loner whose main source of well-being was being very active, say a runner, who loses their mobility and now uses a wheelchair. One of the things that helps people in that situation is to find and meet other people who’ve had losses and who have similar needs. It doesn’t even have to be the same physical loss, but rather, being vulnerable with others who have lost a way of life and learning how they have rebuilt their lives.
This involves forming new empathic identifications with others. If that runner has avoided forming those kinds of vulnerable connections with other people, a developmental challenge they face is addressing their own fears regarding reaching out to others. I’ve seen people who were very socially avoidant learn to do this in mid-life and find great joy in forming bonds of empathy. And in forming these empathic bonds, they were able to imagine possibilities for their own futures living with new disabilities or health conditions.
In the book, I bring my psychodynamic psychiatry background in to theorize about how growth takes place at an unconscious level. I show through narratives how illness brings forth long unmet needs to depend on others, accept limits, and value oneself for just being and not for one’s accomplishments, all of which can provoke deep insecurities based on our early lives. I also describe how people found ways to meet these long-suppressed needs and grew in their feelings of security in themselves and their empathic connections with other people.
My hope is that it will be empowering for people dealing with health losses and their loved ones to learn about this arc.  It is often when a person is exhausted from strenuous coping and feels like they are falling apart that they are actually on the cusp of change. People who can allow themselves the space to “fall apart” and grieve may find that unmet developmental needs can surface. Then finding ways of meeting those needs can bring richness into their lives despite their physical losses.
Source: UC Berkeley
poetic-beats · 5 years
Update:
I am now having to compile a list of evidence and issues to give to PALS so they can do an independent investigation of my treatment by the psychiatrists’ lodge. I have now seen both psychiatrists who now work there. I also dealt with the manager, who I still have not been told is even a qualified mental health professional or just someone in a managerial role, because he seemed to judge me based on my diagnosis: without reading any of the reports on me, talking to me, or bringing me in for assessment again after the crisis team referral, he seemed to know exactly who I am, what my issues are, and what I need.
Like no. He was also doing this illegally, as when the crisis team refers me back to them I LEGALLY get an appointment and a reassessment of my needs.
They can’t just assume and tell me what I am entitled to before I have been assessed.
The psychiatrist I saw yesterday was all about “here’s your meds, now fuck off.” He seemed to listen better to my mum at least. However, he was not that welcoming, and he also got caught in a lie. He kept repeating the same rhetoric as the manager: that the GP letter I have to REQUEST to see under the freedom of information of my personal records blah blah act is supposedly my care plan. I questioned this, and then he said, oh well, DBT and psychologist care is when you get a full care plan. I said I DID do DBT; I was on the course for some time before I had to quit.
I was never told about a care plan.
Then I said: by the way, right behind you on the wall is a new NHS board specifically outlining care plans and my entitlement to them. IT’S RIGHT BEHIND YOU.
I already know the law and the NICE and NHS guidelines and rulings, but in case I didn’t, it’s literally there in the waiting room we are in, behind you.
He then finally admitted that I AM NOT in fact receiving a care plan, as that is only for certain people; they have a limited number of people who are eligible to receive one. That’s even though the NHS and ELFT, who cover and run the care for my area (his bosses, way above the manager of Meadow Lodge), have clearly outlined, in line with NHS and NICE guidelines, that a care plan isn’t something you are assessed to receive. It is something I should just have... the bloody board behind him, which my dad took a pic of, had a layout where one side said “you say ‘xyz’” and the other side explained what this means and what the care provider is expected to do in response, and it outlines what a care plan is, what you receive, and how it works.
So it’s like: well, that makes no mention of you deciding who gets a care plan. Rather, I should have one, and in case I don’t, I should just have to ask what the board suggests asking, and you should respond, according to the NHS, with a care plan discussed with ME, where WE both decide on a crisis plan of action, long- and short-term goals for my recovery and progress, and an integrative approach with a FULL CMHT... something yet again they should be offering but don’t. As the manager put it, I’m not in crisis enough to warrant this care, which is meant to be pretty standard care, not something reserved for people especially in crisis. And as for crisis, well, I’m not sure how much worse I needed to get, other than my GP almost calling an ambulance on me but instead getting me a same-day referral to a crisis team, who spoke to me until gone 8pm that night, until I was stable enough to leave and go home. In the meantime they’d handle a referral back to Meadow Lodge, where I was told the appointment system should run smoothly. Instead my parents fought tooth and nail, not just to get an appointment (at which the manager told me exactly what I would be offered before I’d even been for assessment), but to fight for a fair assessment, one which follows NICE guidelines and the standard code of practice for re-referrals, which basically means I should be reassessed as if I am a new patient, given that my needs may have changed or new problems have clearly arisen if I’ve been referred from the crisis team.
So I have now exhausted every option. I also found out by chance that the builder/labourer my dad employs right now also has bipolar and has had the exact same issue I had, with the exact same lady psychiatrist, after being transferred to her care when our old psych retired. Only he had a breakdown in their reception; she did nothing, made him leave, and then he was hospitalised. When he later saw the same crisis team I saw, he wasn’t willing to give them another chance, so he refused treatment there and went through the slightly longer process of being referred elsewhere. Although, to be honest, the process isn’t really longer, because Meadow Lodge don’t follow guidelines: rather than immediately seeing me as early as possible, my parents had to phone up to remind them and bug them to even read over the crisis team’s referral to them.
Even though a crisis team referral is equivalent to someone being rushed to A&E (you are the priority patient over others not in A&E), I shouldn’t have had to get my parents to chase them up for an appointment and then fight for a fair assessment. Which, tbh, I half got and half didn’t.
This is v. frustrating, but hey, at least I now know of 3 other people who were under my old psych’s care when he retired, were put under the lady psychiatrist’s care, and have all had issues; we have all been discharged around the same time after being transferred to her care. And me and the builder, at least that I know of, have ended up in the crisis team’s care for a period of time.
So basically we now have 3 known incidents of this psychiatrist discharging people who have ended up in crisis shortly after discharge because of it, showing clearly we weren’t meant to be discharged nor ready to be, plus another lady who complained on the NHS site about her and the lodge as a whole since my old psych who ran it retired. She had similar complaints to mine about treatment. And as for the builder my dad works with and employs, well, she told him he’s far too young to have bipolar and have these issues in his life, and discharged him saying he has to take care of himself and take self-responsibility.
So at this point, if I go to PALS with facts about her discharging patients before they were safe to be discharged, I can say: just look, I know of me and one other person who’ve ended back up in severe crisis care shortly after she discharged us, and this is not a coincidence; there is a third who has also been discharged and complained. Oh, and two years earlier there is another complaint about her, also saying that she told a guy who came for an assessment with her that he needs to care for himself, gave him ADHD meds, and discharged him back to GP care at the initial meeting. And she told him he basically had to buck up and get a job, as it’s what normal people do, even though he said he needed help and treatment so he could function enough to work. Again, it seems to be a pattern that she tells people they have to care for themselves without giving us the tools to learn to self-cope and self-care.
She is rude, not compassionate, and cares more about stigmatising us and accusing us. She has very odd beliefs for a psychiatrist, given studies have always shown that disorders like bipolar type 2, and rapid cycling itself, are almost wholly found in those who develop bipolar disorder in early adolescence... so it’s a whole thing based around developing it young. And here she is telling the builder we know that he’s too young to have bipolar and these problems.
As if she knows his life. She basically dismissed his diagnosis, tbh... because of his age... even though he’s in his mid 20s, and it’s not uncommon for bipolar to take hold in adolescence; mine appeared when I was 17/18. So clearly someone in their 20s is not too young to have such a disorder, and she would know this, as she would have studied the disorder and the studies and science on it in more depth than I did.
I am SO mad. I wasted my time yesterday and my mental health was put under immense strain because of how I was treated YET AGAIN by professionals whose duty is to care for me. Now I am back at square one, left having to go through getting an MHA to help me with the PALS complaint process.