Equity and Access in Digital Mental Health: The Role of Privacy, Safety and Ethics
Video Transcription
Hello and welcome. I'm Dr. John Torous, the Director of the Digital Psychiatry Division at Beth Israel Deaconess Medical Center and a technology expert for SMI Advisor. I'm thrilled that you're joining us for today's SMI Advisor webinar, entitled Equity and Access in Digital Mental Health: The Role of Privacy, Safety and Ethics. Next slide, please.

SMI Advisor, also known as the Clinical Support System for Serious Mental Illness, is an APA and SAMHSA initiative devoted to helping clinicians implement evidence-based care for those living with serious mental illness. Working with experts from across the SMI clinician community, our interdisciplinary effort has been designed to help you get answers to the questions you need to care for your patients. Next slide, please.

Today's webinar has been designated for one AMA PRA Category 1 credit for physicians, one CE credit for psychologists, and one CE credit for social workers. Credit for participating will be available until December 15th of this year. Next slide, please.

Slides for the presentation are available to download in the webinar chat; you can select the link to view them. Next slide, please.

Captioning for today's presentation is available. Click Show Captions at the bottom of your screen to enable it, then click the arrow and select View Full Transcript to open captions in the slide window. Next slide, please.

For question and answer, feel free to submit your questions through the presentation by typing into the question area in the lower portion of the control panel. We'll reserve about 10 to 15 minutes for question and answer, which I will moderate, so feel free to put them in at any time and we'll handle them at the end. Next slide.

Now I'm so happy to introduce the faculty for today's webinar, Dr. Nicole Martinez-Martin, who is really a leader in the space we're going to talk about. She's an assistant professor at the Stanford School of Medicine Center for Biomedical Ethics. She has served as a principal investigator for research projects examining ethical issues regarding machine learning in healthcare, digital health technology, digital contact tracing, and digital phenotyping, and she has examined policy and regulatory issues related to privacy and data governance, bias, and oversight of machine learning and digital health technology. Her career award, funded by the National Institute of Mental Health, focused on the ethics of machine learning and digital mental health technology. Her recent research focuses on issues of bias, equity, and inclusion, especially around machine learning and digital mental health and their social implications. So I think we're so lucky to have, I would say, the world leader on these topics with us. With that, I'm going to hand it over to you, Dr. Martinez-Martin. Thank you so much for joining us.

Thank you so much for this opportunity to speak on these issues, and thank you very much to Dr. Torous for his leadership in this area, which really has been a guidepost in a lot of ways. I don't have any conflicts of interest to report. As Dr. Torous said, I am focusing on issues of equity and access today, including how privacy and equity issues intersect for digital health data, and looking at practices that support equitable applications of digital mental health tools. And again, thank you for this opportunity to talk about these issues.
So, digital mental health. I'm putting this up because over recent years there has been such massive investment in digital mental health, particularly in the private sector and industry, but also in research institutions. Over and over, this slide really encapsulates the claim that digital mental health can improve the effectiveness of therapy, make therapy more accessible, and make diagnosis and treatment more effective and more accessible. These are the touchstones that get talked about as the goals when people describe this revolution in digital mental health.

So what are the potential benefits we're hoping to see? Options for people to connect to therapy and care; the ubiquity of smartphones, which is meant to allow psychiatrists and therapists to manage a broader pool of patients; and improved diagnosis. In part, and this goes to the privacy elements, that comes from collecting more data, from people out in the wild, as it were, rather than just at the moment of crisis or the moment of coming into the office, with the goals of improving treatment and access overall.

I'll be talking a lot about apps today in my examples, but digital mental health tools can also encompass tools that use AI and predictive analytics, as well as some aspects of the digital platforms in use today to deliver therapy. There is, of course, a whole suite of issues that often gets talked about in the ethics of AI and digital health; privacy, bias, and fairness are the ones I am concentrating on most today. But I am putting this slide up because these are often intermingled issues. When we talk about privacy, things like consent and transparency are often part of that: What have you consented to? Are you being transparent about the data being collected? And bias and fairness are intermingled with safety and accountability: Who do these tools work for? Is there accountability when they're not working, or not working for specific populations? So I wanted to take a moment to acknowledge this broader group of issues, even though I'll be focusing more on equity, access, and fairness, as well as privacy.

And of course the context, as you are all aware, is the pandemic. There had already been a lot of investment and a lot of thought that digital health and AI were the coming wave in psychiatry and for addressing mental health needs, and that shift was accelerated during the pandemic. On a practical bent, that came from expanded options for reimbursement for tele-options, lower liability for mental health professionals in case of privacy breach, and a number of mental wellness and mental health apps for which this was kind of their moment.
They either became free for a period of time, or they lowered their prices, or in some cases they were taken up by cities or supported by employers in order to support mental health during the pandemic. There were, of course, positives: people were still able to access mental health care, and it really underlined a number of what had been the hoped-for positives of digital mental health care. People found that they preferred some of these remote options, sometimes because they were less burdensome on their time or on other resources.

But at the same time, it also highlighted some questions about the digital divide and some potential problems for access. When it was said that digital mental health could improve care, these questions really came into focus: improve care for whom? For the people whom mental health care had already been able to serve, or was it truly expanding that access?

Some of the issues that got highlighted, particularly with the pandemic and the hoped-for expansion of digital mental health, were differences in infrastructure. Yes, even among minoritized groups, at least 80% of adults had smartphones. But there were still other types of infrastructure that caused problems for access, such as whether people had high-speed connections. People in more rural areas still had trouble accessing high-speed internet or the types of connections that some of these apps needed. Marginalized, low-income groups were also less likely to have access, because more people among those groups may be accessing care through community mental health, which, as always, had fewer resources and less infrastructure.

It also really highlighted the question of whether digital mental health has the resources to help people with severe mental health issues. This was for a number of reasons, and I'll touch on some of them more. Generally speaking, because of the markets or target populations they were seeking, these apps were often directed more toward people with moderate or mild mental health issues, in some cases for safety reasons or otherwise. It also highlighted that some of the things therapists or psychiatrists need, touching base and having a broader physical picture of how someone is doing, were harder to deliver through these digital mental health apps, along with questions about what kinds of resources were needed. So it highlighted that divide as well: it remained the case that people with more severe mental health needs were not as well served through these digital mental health services.
And there are bigger issues, and this remains an issue: because so many of these apps and digital tools are app-specific or tied to a particular institution or company, the question of how you coordinate resources among different people, even between people who may not have many resources, or people with more mild or moderate mental health needs versus people with higher needs who require more resources, that larger ability to coordinate between them at a higher level is still something that has not yet been addressed in digital mental health.

You also have examples such as Reno, Nevada, which decided to use public funds to support therapy through a mental health app. That showed some of the shortcomings of this approach: local therapists, who may be able to mobilize more resources and have more understanding of local resources and how to serve people, were passed over in favor of the app, and once the period of time that the app was being used was over, the people who had been using it didn't have continuity of service. And as always, there are questions of quality of service, concerns that people who have to receive services through digital means are being put in a position of receiving lower quality or less effective services.

I want to underline here that a number of these issues have been problems for mental health services for decades. But this is meant to highlight that in moving toward digital mental health, the concerns over how we address those equity issues, and whether there are ways and opportunities to address them, are very much there.

One big area, both in terms of how these tools are developed and how they may be used, is the concern over bias. I'm focusing for a moment on algorithmic bias because of the way it highlights the issues, but these concerns can be found in different forms: research for digital mental health is often still done primarily among more affluent or white populations, in ways that raise concerns about bias, whether algorithmic bias or simply bias in the research populations that determine who these tools work for. The overall idea is that when tools are developed using data that comes from a limited set of populations, there end up being concerns over who these tools are developed to work best for, or to work for at all.

So bias is a systematic error. It leads to differences in effect or association that come out in those algorithms. The one I really underline here is the concern about unfairness: that these systematic errors lead to certain groups or populations being systematically underserved, or not served at all, by the tools being created. So there are system-level concerns about this.
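To make the idea of systematic, group-level error concrete, here is a minimal, illustrative sketch of the kind of fairness audit a development team might run, comparing false negative rates across demographic groups for a hypothetical screening model. It is not taken from the webinar or from Dr. Martinez-Martin's research; the column names, threshold, and data are all assumptions made for illustration.

```python
# Illustrative sketch: auditing a screening model for group-level error disparities.
# Assumes a DataFrame with hypothetical columns: "group", "y_true" (1 = needs follow-up),
# and "y_score" (model risk score). None of these names come from the webinar.
import pandas as pd


def false_negative_rates(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """Return the false negative rate per demographic group.

    A systematically higher FNR for one group means the tool is more likely
    to miss people in that group who actually need care -- the kind of
    group-level unfairness described above.
    """
    df = df.copy()
    df["y_pred"] = (df["y_score"] >= threshold).astype(int)

    def fnr(g: pd.DataFrame) -> float:
        positives = g[g["y_true"] == 1]
        if len(positives) == 0:
            return float("nan")  # no positive cases in this group
        return float((positives["y_pred"] == 0).mean())

    return df.groupby("group").apply(fnr)


if __name__ == "__main__":
    # Tiny synthetic example purely for illustration.
    data = pd.DataFrame({
        "group":   ["A", "A", "A", "B", "B", "B"],
        "y_true":  [1,   1,   0,   1,   1,   0],
        "y_score": [0.9, 0.7, 0.2, 0.4, 0.3, 0.1],
    })
    print(false_negative_rates(data))
    # Group A's true positives are caught at threshold 0.5; group B's are missed,
    # i.e. a systematic error concentrated in one group.
```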
I want to take a step back for a moment and look at the field of psychiatry and the medical profession. You have, of course, obligations for patient well-being and social benefit in how medical research is conducted and how tools are developed within the medical profession. One of the issues that comes up in digital mental health is that you have a lot of interdisciplinarity, which can be a great thing, bringing together different people's expertise to create digital mental health tools. But you also have people coming from rather different professional standpoints about what their responsibilities are and to whom they are responsible, which raises concerns, even in the development of mental health tools, about how these wider concerns about quality for patients and bias end up being served.

This is not meant to call out data scientists, but it is meant to note that, professionally, there are differences. In qualitative studies of this, data scientists' views of accountability differ: because there is often a more distributed responsibility for developing digital health tools, who is responsible for what is sometimes not addressed, especially when it comes to issues of bias or who these tools are going to work for downstream. They may have professional obligations focused on things like having high-quality data, rather than the same professional obligations toward serving patients with a particular concern for quality. All of this is to say that, from development onwards, there are things to be concerned about in terms of how these tools serve different populations, and there are practices, even in the development phase, that can better support responsibility toward protecting patient data and toward developing tools that are accessible to different populations.

And with this question of bias, it's worth remembering that the data used to develop and test digital health tools is always a representation in some way. Decisions are always being made about whose data is selected, where it comes from, and what data gets excluded, and those decisions shape who these tools are useful for, or most useful for.

Just a couple of examples. Common data sets used for AI in healthcare, the data sets that often get used for training or developing digital mental health tools, come from randomized controlled trials or electronic health records, the larger databases available from different healthcare settings. But in each case, there are ways these data sets are limited. For randomized controlled trials, there has been more and more attention in recent years to how certain research institutions and urban areas are much more likely to host them, and even which patients get selected for inclusion or exclusion can end up meaning exclusion of minoritized and low-income populations.
And that needs to be understood: when tools are developed using these data sets, there can be limitations in which populations they end up serving and working for. The same goes for electronic health records. It's been found with EHRs that there is often missing data; for example, people who may not have adequate health insurance may be more likely to lack continuity in their primary care provider, or even in which institutions and doctors they go to, which then affects the quality of the data that's in the EHR and how that data can be used to inform digital health tools that rely on it for training or other purposes.

In terms of data sets, I think in recent years there has also been a lot more attention to the systematic ways that Black patients are underserved in the health system, and to how the tools used with different patients can affect the quality of data. A really good example is the pulse oximeter, where it was found that pulse oximeters do not work as well on people with darker skin. As it says here, that can translate into as many as one in ten inaccurate readings, which are more likely to happen to Black patients. That reading affects not just the pulse oximeter data in the records; it can also affect physician treatment of these patients. During COVID, patients with darker skin tones may have had to be systematically sicker by the time they were hospitalized because of pulse oximeter readings being different for people with darker skin. So there is a cascade effect in how the data from these records, which informs these different devices, ends up serving different populations.

An example within psychiatry is the over-diagnosis of Black and Latinx males in particular for schizophrenia, and how that over-diagnosis, as it becomes part of record systems, can then affect genomic research and digital health research done on these populations in more systematic ways. These are much bigger questions about how data informs bias within these tools, and how that might be addressed in order to have tools that better serve different populations.

With digital mental health tools, of course, there is a lot of excitement over digital phenotyping and other ways of gathering data from smartphone sensors, wearables, and the like, and how that might be used to better understand different psychiatric conditions and to develop tools that improve psychiatry. But there is still a lot to understand. This point came from a paper from Dr. Torous: there is a real need to better understand who is using the digital tech and who is contributing to the sensor and digital biomarker research, so that you can also better understand the limitations of these different tools. And while race provides a number of examples of how sensor data or different tools may be used or distributed differently across populations, the same can come up with gender and disability.
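As a concrete illustration of the kind of data-quality check this implies, here is a minimal sketch, not taken from the webinar, of how a team might audit an EHR-style table for representation and missing data by demographic group before using it to train a digital mental health tool. The column names ("group", "phq9_score", "followup_visit") and the data are hypothetical.

```python
# Illustrative sketch: auditing an EHR-style table for representation and
# missingness by demographic group before it is used for model training.
import pandas as pd


def representation_report(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Summarize, per group, the share of records and the fraction of
    missing values in every other column.

    Large gaps in either number suggest the resulting model may work less
    well for the under-represented or poorly-documented group.
    """
    share = df[group_col].value_counts(normalize=True).rename("share_of_records")
    missing = (
        df.drop(columns=[group_col])
        .isna()
        .groupby(df[group_col])
        .mean()
        .add_suffix("_missing_frac")
    )
    return pd.concat([share, missing], axis=1)


if __name__ == "__main__":
    # Tiny synthetic example purely for illustration.
    ehr = pd.DataFrame({
        "group":          ["A", "A", "A", "A", "B", "B"],
        "phq9_score":     [12,  8,   15,  9,   None, 11],
        "followup_visit": [1,   1,   0,   1,   None, None],
    })
    print(representation_report(ehr))
    # Group B has fewer records and far more missing fields, so a tool
    # trained on this table is likely to serve group B less well.
```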
There has been a lot of attention to how people with disabilities can be underrepresented in a number of data sets, which may include the social media data sets that get used to inform these digital mental health tools. The other side of this is understanding that these tools, because of design elements, may be taken up more by, or targeted in stereotypical ways toward, people according to gender or age, in ways that need to be better understood, both in terms of what kind of data is coming in and in terms of who may end up being excluded or included when it comes to using these tools.

For example, apps that are targeted at women tend to have more stereotypically female designs, including pink colors, hearts, and flowers. And while we can call that design stereotypically female, a number of these apps for women also ignore diversity: they may depict white, thin, young, middle-class women and be targeted at that group, without noting the women who fall outside of it, whether in terms of whose data is collected from diverse groups or in terms of understanding how these apps may be biased in a way that excludes those groups. In developing these apps, it matters to understand the ways this design language can really have an impact on who ends up using them and who ends up being included.

In my research, I have done a number of semi-structured interviews with developers as well as clinicians and users; with each group I talked to about 50 people. Among the digital mental health developers, I talked to people with a range of expertise: computer science, mental health, psychiatry, app design. I've also spoken to clinicians, psychiatrists and psychologists, and to users about these different aspects.

From the developer side of digital mental health, I've already talked about things like gender and race when it comes to bias in digital mental health tools. One thing I found really kind of funny and interesting in talking to several of these people was hearing that their team was primarily comprised of people from the Bay Area; that also goes for some other teams I've talked to, which are mainly comprised of people from these tech-savvy areas, whether East Coast or West Coast, and what that means in terms of what kinds of questions they ask and what they want their tools to do.
And certainly, as I was talking to digital mental health developers around Silicon Valley, a number of assumptions really came up: assumptions about just how much people love sharing data with their phone or love using different apps, and how ready they were to use those tools in particular ways. Those assumptions perhaps reflected the experience of people living in Silicon Valley, who are used to being around other people who use their apps in certain ways, and they affect what kinds of apps get developed and, again, who those apps are actually going to work for or be usable by. Here I mean "work" both in the sense of being effective for certain types of treatment and in the sense of who is going to want to use them, who is going to feel able to use them and have access to them.

Headspace was an app that came up a number of times when I talked to clinicians. On the one hand, some said that instead of just showing, as one person put it, white women or people who look affluent, its cartoony characters are meant to represent a diversity of people. I did have a number of people say they were able to recommend Headspace to some clients for whom English was a second language, because there was more usability in not needing as much language to navigate through the app. But at the same time, those same pictures bring up issues: the outdoorsy imagery might not feel accessible to some groups and may make mindfulness feel attached to people who have a certain affluence or do certain types of activities. So while I heard things about Headspace that spoke to its accessibility, for example in terms of the language needed to use it, it also came up that Headspace, like some of these other apps, may signal through its design in ways that made some people feel these were not the apps for them. This particular quote came from a psychiatrist working in an urban area whose client base included many Latina and Black clients.

As a side note, many of the mental health professionals I talked to would use meditation apps as their benchmark: this is an app with a specific goal in mind, working on meditation or mindfulness, that can work for a number of my clients. Part of their calculation was also that they didn't think a meditation app was likely to harm someone, whereas there were apps they'd heard of that they had more questions about, for example whether doing CPT exercises through an app carried more potential for harm.
So meditation apps were the apps I heard about the most, because as different clinicians made this calculation of what they felt comfortable recommending, meditation and mindfulness apps were the big ones, but with this caveat about who these apps seemed most accessible for. The design of them, and the language around the mindfulness apps, meant that more low-income, minoritized clients felt that maybe this app wasn't meant for them, and it took more convincing and talking through to see if they were willing to try it.

In talking to consumers, I noticed that this came through as well. For some of the mindfulness apps, the language around them, which was more of a yoga and wellness language, was something some people didn't connect to as much. Or it was simply about the usefulness of the app: the built-in ways the app kept sending alerts to keep people engaged ended up being more stressful for some people. So there were design elements that could affect accessibility or usability in these different ways. In terms of developing the apps, there are questions about how people even formulated their research question of what they wanted these apps to be able to do, and whether that maps onto how people actually use the apps in ways that support treatment and recovery.

Another example that came up repeatedly, both from some consumers and from counselors and therapists in rural areas, involved men who were in recovery from substance use. The remote aspect of getting therapy did not always keep these men involved in therapy, and there was also a gap in whether the way these apps were designed and used was something they could feel a connection to. Going in person, seeing people face-to-face, seeing other men like them really had a therapeutic aspect, and trying to do that type of work through digital mental health tools presented more challenges and barriers to getting people to opt in, both because of design and because of the lack of that feeling of connection to other people.

As I said before, many of the clinicians were most willing to use meditation apps because they viewed them as least harmful while still building a useful skill. The most concern was expressed over using apps, or these more remote options for therapy, with people with severe mental illness, where clinicians said that being able to see people's body language, and having that fuller ability to interact with people with severe mental illness as part of the therapeutic process, was where they had concerns.
One concern that came up in particular, not just from the clinician angle but also from consumers and developers, was what happens when people are in crisis. On the developer side, especially in private industry, there's not much sharing of information about how apps are developed, and developers wondered how to build a system that can actually be responsible and accountable if people come to an app in crisis. Many apps now have a disclaimer saying, if you're really in crisis, call these national numbers, but there was a real feeling of worry among developers that that's not enough; that's not really a system. Clinicians had this concern as well: if we're having people connect to apps more, there should be more of a way for people to be connected to help if they're having a crisis. And even from the consumer point of view, when they've interacted with these apps and just seen "call this national hotline," there was a feeling that that's really not enough. So these are big concerns and bigger questions, about access and who these apps work for, but this issue of what happens when people are having an acute crisis is a bigger question.

When it came to developers, there were also these issues of who they imagined their users were. Many of the app developers I talked to imagined that their users were most likely going to be young people, especially young men, because the apps were designed in ways where they thought the gaming or visual aspects would be most appealing to young men or young people in general. And then, for a number of these apps, they found out that the people who ended up feeling most comfortable and most interested in using them were women from their thirties into their fifties, which left some question marks. That really speaks to a need to look into and test, in these broader ways, who these apps are drawing in, where the data is coming from and who it's coming from, but also to note which groups may be left out in some way, whether it's, as I mentioned before, older men feeling less connected when it comes to using apps, or young people being more likely to use them only for certain specific purposes. This came up a lot: there was sometimes a disconnect between who developers thought was most likely to use these apps and, once they were finally released, who they ended up serving and in what ways.

I also spoke to some developers who were trying to develop apps to serve underserved communities; in this case, it was a maternal mental health app. What they found was that they had trouble finding the right kinds of data among existing research databases, data that was usable for training and for informing how they were going to develop the app.
They also found that the monetary aspect was a little more difficult to navigate. If they were looking at a population that was disproportionately insured through Medicaid, what did that mean for who would be able to access their app? They would need to think through whether there were ways to make the app free in some form. It really came through, in terms of research and funding support, that there are certain groups where there needs to be more attention to having data, and having tools that can be adapted and funded, to serve minoritized populations, because there are a number of layers in how apps get developed that make it more difficult. Whether it's getting government funding, or the fact that because these apps target smaller populations there are fewer private companies interested, so getting private funding becomes more difficult. And in terms of how people access apps, there are issues of insurance and being able to afford apps. These all end up being thorny issues, even for people who are specifically trying to develop apps that serve underserved populations.

And this data question, both what data is going into building these apps and how you can use data to serve different populations, has a flip side, which is privacy. On the side of bias and fairness, there is often a desire to support getting more data and having more research done for underserved populations. Privacy is a constant issue with data and mental health apps and tools, but it ends up being a thorny question, because not only is there a need to collect data from marginalized groups in order to have tools that better serve these groups, but collecting data can also raise equity concerns, since data collection may affect different groups differently.

Here are some examples of the ways that different digital tools have shared data: therapy apps that are failing privacy checkups; apps that say they protect data or ask for consent before sharing but have been found to share data anyway; and the difficult case of Crisis Text Line, which turned out to be sharing data with a for-profit AI spinoff. There have also been more findings in the past year about a number of apps, for example hospital scheduling and calendar apps, that turn out to contain a Meta Pixel, a tracker, where the institution using the scheduling or health app may not even be aware that there are trackers in it that share information downstream; some have been found to share information with Facebook or Google.
I say that because there has been more attention in the last year toward how to create accountability in this area, but it also underscores how many areas there still are where data leaks through. Even when you're not talking about a for-profit mental health app, there may be ways that different apps used in a hospital system or elsewhere unknowingly contain trackers, and again, these data practices can impact different subpopulations differently. Time and again, in looking at how data gets used, data may be de-identified and sold to data brokers for commercial purposes, where it can affect people's employment, mortgages, and educational opportunities in a number of ways. It's been found that those kinds of data practices, even with de-identified data, can still affect people at the group level, and they disproportionately have negative impacts on minoritized populations, whether Black or Latinx, who may end up getting lower scores for mortgages or otherwise because of the way the data eventually gets used. The past year, unfortunately, after the Dobbs decision overturned Roe v. Wade, also really highlighted the ways that, for women or for people who can become pregnant, their private data can be used in ways that were not foreseeable when they used an app. These are all areas underscoring that there are group-level harms and social implications from the way data gets used in these different apps, impacts that can affect different groups differently, and that should be part of the calculation of how we use apps that will gather sensitive mental health information.

So there have been different approaches at the privacy level; there are still a lot of concerns, but definitely more approaches from the FTC for addressing how data may get used by these apps. There is also still a need for more diversity among researchers and developers, and for including patients from a variety of different populations in the process of deciding what is needed in an app and what excludes or includes people in using these apps. Involving community stakeholders is important in developing the apps, but I think it can also be very important as mental health professionals and others look at which apps they may want to use, and how apps may get used with different clients: really looking at information from different stakeholders and communities about what these impacts are, and how to include different communities in the benefits that can come from digital mental health, at the development stage but also at the use stage, in how mental health professionals use those apps or otherwise. These are some of the resources used during this presentation, and I thank you very much again. I look forward to your questions.

No, thank you. Thank you so much, Dr. Martinez-Martin. That was wonderful. You covered so much ground so eloquently; this is a large topic you took on.
Before we shift into the question and answer, I just want to remind everyone that SMI Advisor is also accessible via your mobile device, which is perhaps appropriate for this talk. You can use the SMI Advisor app to access resources, education, and upcoming events, and to complete mental health rating scales; the data is not stored, it's private. And you can submit questions directly to our team of SMI experts. You can download it now at smiadvisor.org/app. So maybe we'll jump to the next slide, please. And sorry, we'll go one more slide forward. And maybe we'll do a couple of questions; we have some good ones in the chat and some that came in to me privately.

One question was about older adults as a group: there are some concerns that they may not be tech savvy, and we have some people saying, no, they actually know more. It's a very open question, but what do we do to make sure that all ages are able to access and use these services? Uh-oh, it may be muted. There we go, yeah.

No, that's a really good question. And what started off that question points to a real tension, in that there are assumptions about the older group not being as tech savvy, but there are also a number of instances suggesting that it can depend on what age range you're talking about, just how elderly, or otherwise. So I'd say a couple of things. It really goes again to these questions around research: I think there still needs to be, at a basic research level, more research about which groups have different access and different challenges when it comes to digital mental health, because there are still a number of question marks around that, and that needs to be understood better in order to understand how to tailor digital mental health practices to serve these different groups. And again, I'll also go back to this: what would be really useful is a bit more coordination, not just app to app, this app works, this app doesn't work, but more overall coordination, whether at an association or organizational level, of looking through what practices with different populations can help make sure those populations get served, without making assumptions ahead of time that grandma can't use these, and just understanding that better.

It makes me think again, given this is an SMI Advisor webinar, of our digital navigator program. For anyone interested, I would just Google digital health navigator, where we have some trainings up. I think that's a potential role where we can offer training and support to help everyone have a level playing field, but these are complex issues. One interesting related question is about the science of testing these new devices and apps: there are existing biases in the healthcare system, and when we design and test these tools with different populations, how do we actually separate where the technology is working from the bias that already exists? How do we even tease out sources of bias? I'll try to do justice to this, again, an extremely complex question, and we'll only give you one minute to answer. I mean, people are wrestling with this question right now.
So, some of it is just a need for transparency at this moment, a little more transparency. For particular devices, more transparency about, and to me this matters, who is developing it, who's involved in developing it, but also the data sets used and all of that. So even just making sure there's more transparency around that, at the data bias level. And this is where I think there's an opportunity: using these tools, or having ways to navigate these tools, that pays more attention to the inequities. Lots of people know the various inequities in the system, but we should put more attention into mapping that out, so that you can look at the limitations at the different levels: if we're going to use this tool, then we also know that community mental health centers are going to have these limitations with it, or that this population may have less infrastructure. So things that are at least at an initial level of transparency, and mapping them out so that people can hopefully navigate that better, is the short answer I would give to that.

It's a complex answer, certainly, and I think there are so many questions that we don't have time for. We may put in the chat that APA has an App Advisor project that gives guidance on what to look for in digital technologies. It doesn't recommend or endorse any particular app, but it has some good principles that could be applicable for beginning to do informed clinical decision-making. So we'll try to put the App Advisor link in, but let's actually move on to the next slide, because I want to make sure we end on time and we have a couple more things to share that may be relevant.

So, if you have more questions, and that's very relevant here: for any of the topics covered in this webinar, ones you would like to talk more about with a client or with a clinician, or to learn more about, you can post a question on SMI Advisor's discussion board. It's an easy way to network and share ideas with other people who took part in the webinar; again, you've raised so many good questions that we don't have full answers to. And if you have a question about the webinar, or really any other topic related to SMI, you can get answers within one business day from our team of SMI experts. The service is available to all mental health clinicians, peer support specialists, administrators, or really anyone else in the mental health field who works with people with SMI. It's completely free and one hundred percent confidential. We'll jump to the next slide.

As all of you have probably been able to tell, there are a lot of great resources on the SMI Advisor website, and they're all free. We have some older talks on ethics; what we have and the resources in it keep evolving, but we certainly have different ones to look at. If you're interested in apps, in virtual reality, or in open notes, we have information there to learn from, and there's more to do. We'll jump to the next slide, please. It's certainly important that we help everyone obtain their credit. To claim credit for participating in today's webinar, you need to have met the prerequisite attendance threshold for your profession. After the webinar ends, please click continue to complete the program evaluation; it's pretty fast and easy, and the system will verify your attendance for claiming credit.
It can take up to an hour and varies based on web traffic, but usually it's pretty fast, and we haven't had any issues with it, so everyone can easily get the credit that they need. And then we'll jump to the next slide. We always have new webinars coming: on October 20th, we have Dr. Jonathan Meyer bringing some new thoughts on lithium, what it can and can't be used for, and what is new in the evidence about its effects on suicide prevention and what it does not do. So with that, I want to, of course, thank Dr. Martinez-Martin and thank everyone for tuning in, and please post any questions on the forum or through a consult. So thank you everyone. Bye. Thank you all very much.
Video Summary
Dr. Nicole Martinez-Martin, an assistant professor at Stanford School of Medicine, discussed the topic of equity and access in digital mental health during the SMI Advisor webinar. She highlighted how digital mental health tools have the potential to improve access to care and make therapy more effective. However, there are challenges in ensuring equity in these tools. These challenges include issues of bias and fairness in algorithmic design, limitations in infrastructure and access to technology, and concerns over privacy and data governance. Dr. Martinez-Martin emphasized the need for more research into the use of digital mental health tools among underrepresented populations and for greater transparency and accountability in the development and use of these tools. She also discussed the importance of involving community stakeholders in the development and evaluation of digital mental health tools. Overall, Dr. Martinez-Martin highlighted the need for greater awareness and consideration of equity and access issues in the design and implementation of digital mental health interventions.
Keywords
equity
access
digital mental health
algorithmic design
privacy concerns
underrepresented populations
transparency
accountability
implementation
Funding for SMI Adviser was made possible by Grant No. SM080818 from SAMHSA of the U.S. Department of Health and Human Services (HHS). The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, SAMHSA/HHS or the U.S. Government.