Meeting the Challenges of Security and Privacy When Using Digital Technology Tools
Presentation and Q&A
Video Transcription
Welcome, I'm Tristan Gorrindo, Deputy Medical Director and Director of Education for the American Psychiatric Association. I am pleased that you are joining us for today's webinar, Meeting the Challenges of Security and Privacy When Using Digital Technology Tools. The Clinical Support System for Serious Mental Illness is an APA and SAMHSA initiative devoted to helping clinicians implement evidence-based care for those living with serious mental illness. Working with experts from across the SMI clinician community, our interdisciplinary effort has been designed to help you get the answers you need in caring for your patients. Now, without further delay, I'd like to introduce today's webinar faculty member, Dr. John Torous. Dr. Torous serves as the technology and telemedicine expert for the SMI Adviser initiative. He is the Director of the Digital Psychiatry Division in the Department of Psychiatry at Beth Israel Deaconess Medical Center, a Harvard Medical School-affiliated teaching hospital, where he also serves as a staff psychiatrist and academic faculty member. Dr. Torous, thank you so much for taking the time to lead today's webinar.

Thank you very much for the kind introduction, Dr. Gorrindo, and thank you all for listening in. Today we're going to talk about using technology with patients with SMI, focusing especially on smartphone apps, though we will cover other technologies as well. And we're going to focus on safety, but on a different side of safety than we usually think about. We are used to weighing medical risks and benefits; with technology there are new benefits but also new risks, and many of those risks center on digital data and privacy, as we'll cover. I have no relationships or conflicts of interest related to the subject matter in this presentation.

To give you an outline of what we're going to cover: first, a broad overview of how safe health and SMI clinical data really are, especially in a mobile digital format. Second, what does HIPAA apply to in this new digital world of smartphone apps and technologies? Third, we'll look at the FDA's current stance on regulating these apps, with a focus on apps for SMI: when is a given smartphone app regulated, and when is it not? And finally, what efforts is the FDA currently piloting that you can offer feedback on and have a role in shaping?

So the first question is: how safe is this data? If your patient brings you a smartphone app, if you're recommending a fitness tracker, or if someone asks whether they should be using a commercial product, what do we know about that data and where it is going? This question came to attention in part through a study in diabetes. Many of our patients with SMI have diabetes as a comorbidity, and it can be very prevalent, but this study simply looked at diabetes smartphone apps on Android. The researchers showed that the privacy protections in place for patients using these apps were very poor. In their table of privacy policy provisions for the 41 apps reviewed, only 19 percent of the diabetes apps even had a privacy policy telling users what happens to their data. For the other roughly 80 percent, that information simply wasn't offered.
If you look further in that table, only 22 percent of these diabetes apps promised not to sell people's information, and that's a little concerning. So what does that tell us, and what do we know about apps for mental health, for SMI, for neurocognitive diseases? With Dr. Ipsit Vahia at McLean Hospital, we actually printed out the privacy policies of smartphone apps. To make a point, we asked: for what disease state will people have an especially hard time understanding the terms and conditions they agree to in an app? Dementia seemed a good proving case, because people with dementia may have cognitive impairment that makes it hard to work through the fine print of an app's privacy policy. There is a lot of data in that table, but the number most worth drawing your attention to is the 4 percent of apps whose policies explicitly say they may sell data in a merger or acquisition. The other 96 percent simply don't address what happens to your data if the app ever changes hands, if it gets bought or sold, and silence is no guarantee of protection. What this really tells us is that a lot of the data people put into apps is not under their control: it can be shared with marketers and business partners, sold in mergers and acquisitions, and disclosed to various parties. The data goes to a lot of different places, and a lot of different people may be looking at it. We don't have a good picture of where this data flows, but it often does not stay with just the party that collects it. (A small illustration of screening policy text for these kinds of provisions follows at the end of this passage.)

If this seems abstract, I want to show two recent headlines. The first, from the Washington Post, reads, "US soldiers are revealing sensitive and dangerous information by jogging." What happened in this story was that soldiers were exercising while wearing fitness trackers, and those trackers have some idea of where you are; they send data, including GPS data. That data turned out to be fairly easy to look at and plot: you could almost look at a military base and say, this is where people are, this is actually where the soldiers are. So even a consumer device, like a wearable some of you may be wearing, can give away data. A second, pretty concerning article came out on November 21, 2018, when NPR looked at CPAP machines. We know many of our patients with SMI have obstructive sleep apnea and use CPAP machines. What this story revealed was that CPAP machines automatically send data to insurers about whether you are adherent, whether you wear the machine at all. If you're not wearing the CPAP machine, or are using it differently, the insurer gets notified and can say it will no longer cover it. So beyond the somewhat abstract privacy policies we walked through, you're seeing the real-world implications come up every day, from location information being disclosed to, in this case, patients with obstructive sleep apnea who, as the article explained, were not well informed that their CPAP machine was in essence spying on them. In some ways, all of us are exposed to this digital data and the issues around it.
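For readers who want a feel for how a screen like this could work in practice, the short Python sketch below scans a privacy policy's text for the kinds of provisions coded in the studies above. To be clear, this is purely an illustration added here, not the coding instrument used in any of the published reviews, and the phrase list is a hypothetical starting point.

```python
# Minimal sketch: screen a privacy policy's text for red-flag provisions
# like those discussed above. Illustrative only; the phrase list is a
# hypothetical starting point, not a validated coding instrument.
RED_FLAG_PHRASES = {
    "may sell": "data may be sold",
    "third part": "data shared with third parties",  # matches "third party/parties"
    "merger or acquisition": "data may transfer if the app changes hands",
    "advertis": "data shared with advertisers or marketers",
}

def screen_policy(policy_text: str) -> list[str]:
    """Return a human-readable flag for each red-flag phrase found."""
    text = policy_text.lower()
    return [flag for phrase, flag in RED_FLAG_PHRASES.items() if phrase in text]

if __name__ == "__main__":
    sample = ("We may sell or transfer your information to a successor "
              "entity in the event of a merger or acquisition.")
    print(screen_policy(sample))
    # ['data may be sold', 'data may transfer if the app changes hands']
```

Of course, the simplest check comes even before this: whether the app offers any privacy policy at all, a point we return to below.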
This is a screenshot of a short personality test that was offered in part through the University of Cambridge. Some of you may have seen it; it is no longer on the internet, because this was the gateway to Cambridge Analytica. In early 2018, when it came out that Facebook had allowed Cambridge Analytica access to a large number of people's profiles, it happened in large part through this test: people took the personality quiz and unwittingly, unknowingly gave up their information. So we're getting the idea that there are some concerning aspects to all of this, and the message is that we want to be cautious. Not that we shouldn't embrace technology and be excited about it, but we should be cautious.

A bellwether event worth noting happened in 2015. The National Health Service in the UK had an app library: if you were interested in finding a better app, the NHS said, go to this NHS website and its app library, where we have pooled the better apps for you. A number of these apps involved depression; there was a variety of conditions, but several targeted depression. Then researchers actually examined the apps the NHS had endorsed. The title of the resulting article is "Unaddressed Privacy Risks in Accredited Health and Wellness Apps: A Cross-Sectional Systematic Assessment." Kit Huckvale went through the apps the NHS had deemed safe and protected and showed they actually weren't. You can see the headline below: "The problem with most NHS-recommended mental health apps: there's no evidence they work," and Huckvale also showed there was very little evidence they were keeping your data safe. The NHS had very good intentions and was trying to do a good job, but it is very hard to keep up with the security aspects of these apps. The slide on the right, the picture of a cloud, is just to remind you that there is really no such thing as "the cloud" per se; it's just someone else's computer that you may be storing data on.

And if you're wondering why there is all this interest in pulling health data out of apps, why anyone cares, why people want location information, why a CPAP machine wants to collect data on you: there is a very interesting read in Scientific American, probably worth anyone's time, called "How Data Brokers Make Money Off Your Medical Records." There is a whole industry built on legally piecing together bits of medical information. And you can imagine that a smartphone often holds far more data about a person: your phone knows where you are, whom you called, how many times you checked it. That is new information that in some ways will be very useful for helping patients with SMI, but it could also be very valuable to marketers, to insurers, to different organizations and groups. So what do I tell my own patients? Patients with SMI come to me and say, what do you think of this app?
And the first thing I look at, especially if the app is free, is: what is the true price of this app? Is your data being sold on the back end, and is that what pays for it? As we'll get to later, there is nothing inherently wrong with selling your own data if you understand that and make an informed decision about it. What you don't want is for it to happen without people realizing it.

That brings us to the current regulations. What is HIPAA doing for us in this space? When does it apply? Certainly, when we see a patient with SMI, all of us listening know we are bound to protect that person's privacy. The reason people disclose sensitive information to us is that they know we will keep it secure and private, and there are various laws and regulations around that. So why does it seem so different in the examples we've just talked about?

To review, HIPAA applies to "covered entities," which are defined as three things: health plans, healthcare clearinghouses, and, most relevant to this conversation, healthcare providers who electronically transmit any health information in connection with transactions for which the Department of Health and Human Services has adopted standards. The HITECH Act then widened the scope of privacy and security protections under HIPAA and increased the potential legal liability and fines for noncompliance. Without getting into the details of the legislation, it's important to ask: what is a smartphone app? Is it a health plan? No. Is it a healthcare clearinghouse? No. Is it a healthcare provider? We're going to get into how apps really don't fall under any of these buckets of our central federal privacy laws. Because apps say they are not healthcare providers, even though they sometimes look like it, they are in some ways able to get around a lot of the regulation all of us have come to expect. You would almost say, of course, if an app claims to offer medical services and to help people with SMI manage their symptoms, that sounds like healthcare, and the app sounds like a healthcare provider. We're going to talk through the nuance, but the broad question to keep in mind is: is an app a healthcare provider? (And I don't know about you, but spelling HIPAA, one P or two, can be tricky; the HIPAA hippo on the slide can help you remember.)

So when an app says, we're actually not a healthcare provider, we're not providing healthcare services, a lot of the privacy protections that patients with SMI and their families expect don't really apply. In the absence of HIPAA, because the app is saying the law doesn't apply to us, what governs what an app can or cannot do with your data? It's really the privacy policy the app offers you. That begins to determine what happens to your data, and it becomes the next default to consider. I took this screenshot off the internet: "Mobile app privacy policies for app developers. You can create a free custom privacy policy for mobile health apps in just minutes."
And it says they have generated over half a million privacy policies. From that one picture alone, you get the idea that many app developers, not all, and some genuinely do take privacy seriously, are not taking these privacy policies very seriously. We recently published a review paper in BJPsych Open, out just last week, looking at apps for anxiety, schizophrenia, depression, diabetes, addiction, and hypertension. The row I want to focus on, the second one, is presence of a privacy policy: we simply checked whether each app offers a privacy policy at all. What I've highlighted in red is schizophrenia. We went through some of the top apps on the Apple App Store and some of the top apps on the Android store, covering both iPhones and Android phones, and half of the apps for schizophrenia did not offer a privacy policy at all. They weren't telling users anything about what happens to their data. That is a very clinically actionable red flag for you, or for anyone looking at an app: this is most likely an app we don't want to use, because we really don't know what's happening to the data. For depression, 85 percent had a policy, but that still leaves apps that don't offer even this bare-bones document. We'll get into what these privacy policies actually say and do, but if there is no privacy policy at all, and that's easy to check, it is certainly a red flag that you probably don't want your patients with SMI, or really anyone, using that app.

And, as we alluded to earlier, what about HIPAA? I took an example from Headspace's privacy policy; we're not commenting on Headspace as an app at all, just using its policy as an example. Headspace's policy says: Headspace is a provider of online mobile meditation content in a health and wellness space. We are not a healthcare or medical device provider, nor should our product be considered medical advice; only your physician or other healthcare provider can do that. And, going toward the bottom, Headspace makes no claims, representations, or guarantees that the product provides a therapeutic benefit. When you look at this language, and this is legally binding language, Headspace is saying, in very clear and explicit terms, we do not fall under the bucket of a healthcare provider; we are in a health and wellness space. That means different rules and regulations apply to them, and they can follow different standards than what our patients may expect, or what we expect at first glance. The information is there; it can be buried, but it is disclosed. What's interesting is that the Department of Health and Human Services wrote a very nice report, its title is in the pink circle in the middle of the slide, on what the lack of HIPAA oversight means for many mHealth apps and wearables, and on what happens when a privacy policy, rather than HIPAA, our national federal standard, determines what happens to your patient's data: where it goes, who can access it, how it's sold.
Health and Human Services wrote that there are differences in individuals' rights to access their data; differences in the security standards applicable to data holders and users; differences in understanding the terminology about privacy and security protections; differences in the reuse of data by third parties; and inadequate collection, use, and disclosure limitations. This is now an almost three-year-old report, but it really highlights the risks that arise when these products sit outside of oversight. And it's tricky: as we discussed in the prior webinar, which you can find on the SMI Adviser website, there are a lot of these mental health apps out there. They look like mental health apps and claim to offer services relevant to people with SMI, but are they really mental health apps, or are they health and wellness apps? Where is the line between something that needs to offer these protections and something that doesn't?

That brings us to the FDA's evolving role in regulating digital technologies for SMI care. Should these apps be regulated? Section 201(h) of the Federal Food, Drug, and Cosmetic Act defines a device as something intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body. That is a very broad scope for what the FDA considers a medical device, something that falls under its jurisdiction and must follow all these federal rules. You can imagine that, depending on how you read it, apps that try to help people get diagnosed, mitigate their symptoms, or obtain treatment might well fall under this definition. What the FDA has said for mobile medical apps is that only a subset will be the focus of its regulatory oversight. The key word is focus: as we discussed in the prior webinar, if there are 10,000 mental health-related apps available today, it is very hard for the FDA, or anyone, to keep up with 10,000 apps that are constantly updating and changing. Later in this presentation we'll cover the new approach the FDA is taking. For now, the FDA has said: there are a lot of mobile apps out there; we don't have the resources to look at every single one, though you can report and flag apps as concerning; so we are going to focus on the apps that need the most oversight. The first things the FDA will pay attention to are apps that make the smartphone into a regulated medical device by performing patient-specific analysis and providing patient-specific diagnosis, treatment, or recommendations, that is, apps that gather personal information and tailor recommendations to the individual.
The FDA said it will look at those apps, and they are at the top of the list for regulation, as opposed to an app that simply says, people should exercise; exercise is good for everyone. A second category the FDA will look at extensively is apps that are an extension of one or more medical devices. That relates to a third category: apps that transform a mobile platform into a regulated medical device by using attachments, display screens, or sensors, or by including functionality similar to that of a currently regulated medical device. So if you have something that turns your smartphone into an EKG: we recently saw Apple work with the FDA to get the Apple Watch cleared as a medical device for collecting heart rhythm data that may be clinically relevant. These are the things the FDA is most interested in.

What becomes especially relevant for those of us working in SMI is what the FDA says it will not focus on as much. For certain mobile apps, the FDA intends to exercise something called enforcement discretion, meaning the FDA does not intend to actively enforce the medical device requirements for those apps. The key phrase is "does not intend to actively enforce": everything we discuss could theoretically fall under the FDA's scope, but the FDA has said that, for now, it will not be enforcing the requirements for these categories. It could, and we'll see examples where it has. There are seven categories of enforcement discretion, and we'll go over each in detail, so don't worry if it's a lot. The first category, where the FDA sees lower risk and will not be actively enforcing, is apps that help patients self-manage their disease or condition without providing specific treatment or treatment suggestions; we'll go into examples soon. The second is apps that provide patients with simple tools to organize and track their health information. The third is apps or programs that provide easy access to information related to patients' health conditions or treatments. The fourth is apps that help patients document, show, or communicate potential medical conditions to a healthcare provider. The fifth is apps that automate simple tasks for healthcare providers. I'll lump six and seven together, though they are separate: apps that let you access electronic health records and their data, and apps that transfer, store, convert, or display data, for instance if you want to use your iPad to display information from your medical record.

This also raises the point that other low-risk devices, while exempt from pre-market review, are still subject to general controls, including registration and listing, good manufacturing practices, and adverse event reporting. So just because something falls into one of these buckets does not mean it is completely exempt, completely off the hook. I don't want to leave the impression that the FDA is saying, if it falls into these buckets, nothing matters, you can do whatever you want, there are no rules and regulations. There still are.
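To make the two-tier triage we just walked through concrete, here is a toy Python sketch of the logic. The categories paraphrase the FDA's published mobile medical apps guidance, but the data structure, field names, and function are a simplification added here for illustration, not an official determination tool.

```python
from dataclasses import dataclass

@dataclass
class AppProfile:
    # Fields paraphrase the FDA's published criteria; purely illustrative.
    patient_specific_diagnosis_or_treatment: bool  # tailored analysis or advice
    extends_or_transforms_medical_device: bool     # e.g., phone-as-EKG attachment
    general_self_management_or_tracking: bool      # coaching, symptom logs, etc.

def triage(app: AppProfile) -> str:
    """Rough two-tier triage mirroring the guidance categories above."""
    if (app.patient_specific_diagnosis_or_treatment
            or app.extends_or_transforms_medical_device):
        return "focus of active FDA oversight"
    if app.general_self_management_or_tracking:
        return "likely enforcement discretion (general controls still apply)"
    return "likely not a medical device at all"

# A simple mood or symptom tracker would land in enforcement discretion:
mood_tracker = AppProfile(False, False, True)
print(triage(mood_tracker))
```

The real determination is, of course, far more nuanced, as the GPS-alert examples later in this presentation show.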
To give an idea of which apps fall into enforcement discretion, especially as they are relevant to patients with SMI, I want to walk through the examples, using language from the FDA's own guidance. First: apps that coach patients with conditions such as cardiovascular disease, hypertension, diabetes, or obesity, and promote strategies for maintaining a healthy weight, getting optimal nutrition, exercising and staying fit, managing salt intake, or adhering to predetermined medication dosing schedules by simple prompting. You can imagine an app offering general health tips; this would be very relevant to many of our patients with SMI who have these medical conditions, as we try to help them reach a healthy weight, eat well, manage their diet, and take their medication. These apps fall into enforcement discretion and are currently deemed low risk by the FDA.

The second category is apps that provide simple tools for patients with specific conditions or chronic diseases to log, track, or trend their events or measurements and share this information with their healthcare provider as part of a disease management plan. In essence, these are symptom trackers: if you want a patient to track their sleep, to track hallucinations in schizophrenia, to track mood, or to track whether they are having a manic episode in bipolar disorder, those symptom trackers fall under this enforcement discretion.

The third bucket is apps that use a patient's diagnosis to provide a clinician with best-practice treatment guidelines for common illnesses or conditions, that is, more clinician-focused apps that present guidelines. In essence, that almost describes the APA's DSM app: the DSM app doesn't tell you what to do, but it certainly gives you information on the diagnosis of different conditions. I have a screenshot of one app, and again, we don't endorse any particular app, but Epocrates is a popular app people use for checking drug-drug interactions or allergies.

A fourth category of enforcement discretion is apps that serve as video conferencing portals specifically intended for medical use, to enhance communications between patients and healthcare providers, that is, using a HIPAA-compliant platform for video conferencing, or apps specifically intended for medical use that utilize the mobile device's built-in camera or connection. If you're using an app to communicate with patients, you cannot use platforms that don't encrypt or secure people's data; if you're using something like Google Hangouts or Skype, enforcement discretion is not going to protect you. It is the programs that let you talk to your patient through a secure channel that this applies to: if you're communicating through an app on a secure platform, that falls under enforcement discretion.

The fifth is general medical calculators, things that help you calculate body mass index, the Glasgow Coma Scale score, or stroke scales.
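As a concrete instance of that calculator bucket, here is a minimal body mass index function. The formula, weight in kilograms divided by height in meters squared, is the standard one; the code itself is just an illustrative sketch of how simple these enforcement-discretion tools can be.

```python
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """Standard BMI: weight in kilograms divided by height in meters squared."""
    if height_m <= 0:
        raise ValueError("height must be a positive number of meters")
    return weight_kg / height_m ** 2

# Example: 82 kg at 1.75 m gives a BMI of about 26.8.
print(round(body_mass_index(82, 1.75), 1))  # 26.8
```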
The sixth bucket of enforcement discretion is apps that provide patients and providers with access to health records, or enable them to gain electronic access to health information stored in personal health records or EHR systems. Recently, some of you may have noticed that at some hospitals, people with Apple's iPhone can potentially get access to some of their medical records if their hospital participates in the program; that falls under this enforcement discretion bucket as well. The seventh is mobile apps used as a secondary display to a regulated medical device, when the app is not intended to provide primary diagnosis, treatment decisions, or active patient monitoring. That may be a more specialized use case, sharing information across different displays and screens, but you get the idea: a lot of enforcement discretion covers things like tracking mood, tracking symptoms, and offering general recommendations of best practices for SMI care.

I want to pull out some language the FDA wrote that applies directly to psychiatry and SMI. The conditions here are the FDA's own examples, but you can imagine this applying equally well to schizophrenia, bipolar disorder, and depression. The FDA wrote: mobile apps that help patients with diagnosed psychiatric conditions maintain their behavioral coping skills by providing a "skill of the day" behavioral technique, or audio messages that the user can access when they experience anxiety. That is something the FDA does not want to actively regulate right now. The FDA also lists mobile apps that provide periodic educational information, reminders, or motivational guidance to, say, smokers trying to quit, patients recovering from addiction, or pregnant women; those also fall under the enforcement discretion bucket. What's interesting is the FDA also lists mobile apps that use GPS location information to alert asthmatics to environmental conditions that may cause asthma symptoms, or to alert an addiction patient, they write "substance abuser," when near a pre-identified, high-risk location. So the FDA is saying that if location information from the phone is being used to help a patient learn about environmental risk, that currently falls under enforcement discretion. They also list mobile apps that use video or video games to motivate patients to do their physical therapy exercises at home; you can imagine the same approach helping patients become more behaviorally activated, and that too falls under enforcement discretion.

So the FDA has written a lot about enforcement discretion and given us buckets to categorize things into. But in some ways it can be tricky to interpret: a GPS location alert is offering patient-specific information. If an app is warning me that I am approaching a location that is a trigger for me, that is personal information producing a patient-specific recommendation, which doesn't quite match the initial framing of enforcement discretion as covering apps that offer only general information.
I think it is partially just difficult to draw lines in the sand between apps that will be higher risk and more heavily regulated and apps that won't. In that last category, I forgot to read one example: mobile apps that display, at opportune times, images or messages for a substance abuser who wants to stop their addictive behavior. The point is that enforcement discretion does not mean these apps are unregulated; it means the FDA is saying, for now, we are very busy, we have a lot to work on, we can't look at everything, and these are the ones we think are lower risk. It also doesn't mean the apps are exempt from other FDA requirements; as I said, just because you're using an app to video chat with a patient, the video chat still has to be encrypted.

We actually did a review with some lawyers at UC Hastings and some faculty at UCSF in California, trying to see whether there have been any lawsuits around this: apps taking too much information from patients, apps offering wrong advice, apps missing important clinical information. Has anyone in the SMI space filed a lawsuit? We didn't find any yet. Given that these laws are a little ambiguous, as you just saw when we went through them, and given the potential for harm and the amount of patient data being taken, I imagine we will unfortunately see some adverse events or confusion around the law, and there will need to be clarity. But as of our 2018 search, there were no active lawsuits.

I do want to bring up an older story, from 2016, so now three years old. Lumosity, whose ads about brain training some of you may have heard, was essentially claiming that if you used Lumosity you could improve your cognitive function. I bring this up because they settled with the FTC, not the FDA. As you can see from the headline, the FTC went after Lumosity over deceptive advertising charges. There is an informal understanding right now that the FDA is focused on bringing apps to market, making sure there are rules for apps getting out into the world, while it is actually the FTC that may currently be doing more of the post-market work: if an app is already out there, people are using it, and it is making false claims and overextending its boundaries, it may be the FTC that looks at it. That is certainly how Lumosity came to be fined $2 million. So the FDA is not the only regulatory body here; there is also the FTC.

And the FDA is constantly updating much of this, because the space keeps changing. One question that comes up is: what if my smartphone app does many things that are not a regulated medical device, but has one function that is? The FDA believes the same principles apply to the assessment of all multiple-function products that contain at least one medical device function. You can't say, this app does 99 percent non-medical things and one medical thing, so because the majority is non-medical it shouldn't be regulated. People keep raising interesting new issues and points, and the FDA is doing a very good job keeping up with the dynamic app space.
I really want to bring your attention now to a screenshot of the FDA's website for the Digital Health Software Precertification Program, which I'll call the pre-cert program. This is the FDA's current answer to much of the confusion about regulatory discretion: which apps are regulated, which aren't, what you have to do and what you don't. We talked in the first part of this presentation about how a lot of your patients' data may be leaving these apps and going to third parties without us fully knowing what is done with it, because these apps fall outside HIPAA and other privacy laws. The FDA is saying it is really hard to keep up with all the apps, and enforcement discretion helps it sort which apps are higher risk and which it will not regulate. The FDA's current thinking, the pre-cert program, is still in its pilot stages, but it really is the FDA saying, let's rethink how we evaluate software as a medical device, SaMD. In the context of this presentation, software as a medical device can be read as smartphone apps, though it has a broader scope. We're going to quickly run through how the pre-cert program looks. It's important to know it is in the pilot stages right now: the FDA is actively piloting it and always soliciting feedback, and the American Psychiatric Association has been providing feedback through its committees, as have other medical groups. So let's run through the potential new model the FDA may use to regulate these products.

The FDA says: let's categorize SaMD, and again, for this purpose we'll call it an app, by the possible risk related to it, on a one-through-nine scale. An app that just tries to inform clinical management in a non-serious situation, saying it's important for everyone to exercise, would probably be risk level one. Risk level nine would be an app trying to treat or diagnose something critical, say, a patient who is acutely suicidal. The FDA is trying to say: there are a lot of apps, so let's divide them by what they do, how significant the information they provide is, and how serious the condition they address is. (A hypothetical tabulation of this one-through-nine grid follows this passage.) To quote the FDA: the FDA envisions that the future regulatory model will provide more streamlined and efficient regulatory oversight of software-based medical devices, again, like apps, developed by manufacturers who have demonstrated a robust culture of quality and organizational excellence, and who are committed to monitoring real-world performance of their products once they reach the US market. So beyond looking at the risk of the app, the FDA then wants to look at who is building it. If the American Psychiatric Association is building the app, if a hospital is building the app, whoever is building it, the FDA wants to look at five principles: commitment to patient safety, product quality, clinical responsibility, cybersecurity responsibility, and a proactive culture.
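One way to read that one-through-nine scale is as a grid crossing the seriousness of the healthcare situation with the significance of the information the software provides, the framing the FDA draws from the International Medical Device Regulators Forum's SaMD work. The tabulation below is a hypothetical rendering of such a grid added for illustration, not the FDA's official scoring.

```python
# Hypothetical 3x3 rendering of the 1-9 risk scale described above.
# Illustrative only -- not the FDA's official scoring.
SITUATION = ["non-serious", "serious", "critical"]
SIGNIFICANCE = ["inform clinical management", "drive clinical management",
                "treat or diagnose"]

def risk_level(situation: str, significance: str) -> int:
    """Map a (situation, significance) pair onto a 1-9 risk level."""
    return SITUATION.index(situation) * 3 + SIGNIFICANCE.index(significance) + 1

print(risk_level("non-serious", "inform clinical management"))  # 1: general exercise tips
print(risk_level("critical", "treat or diagnose"))              # 9: e.g., acute suicidality
```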
The FDA is still defining exactly what these five principles mean, but in essence it wants to certify the app developers themselves. The next slide is a screenshot, a little blurry, of the FDA's model, and with the two pieces I just described, it makes sense. Level one is excellence appraisal and certification: you as an app developer, your hospital, your organization, your private practice, your software company, get certified as a developer. Step two is review determination: the FDA looks at the risk, on that one-through-nine scale. If you are a pre-certified developer and your app is low risk, you take the blue arrow and get a streamlined, pretty fast review. If the app is higher risk, you still get a streamlined review, but then you really go into real-world performance monitoring. What the FDA is proposing, and the details are still emerging, is to gather data from the real world: is the app helping people or hurting them? Based on that data, the FDA may say, we need to review this again, we need to pull the app, we need to check it again, or it's okay. In essence, the FDA is saying it is very difficult to keep up with all the health apps out there: if there are over 10,000 related to mental health, there are probably over a quarter million across all of health, and that is certainly too many to keep an eye on individually. So this is a model that puts the regulatory burden partly on the people making the apps and less on the apps themselves.

This is only a proposal, but there is a lot of momentum and interest around it at the FDA, and they are currently piloting it, so it is a model we are going to hear about for apps, and apps for SMI would certainly fall under it. The slide shows the FDA's 2018 timeline and the progress they are making; at the far right are the next steps planned for 2019. It is now 2019, and they are actively piloting the program with select companies, including Apple and some smaller startups, to see how it works in practice. There is a chance this becomes the new way that software as a medical device, including these apps, is regulated. There will certainly be more debate, but it is important that we as the SMI community have input into how this new regulatory system around these apps emerges.

I also want to bring up the American Psychiatric Association's app evaluation framework. The framework doesn't say any one app is good or bad; it is not about certification, and it is not related to the FDA. But while the FDA develops its processes and we learn what the new law of the land will be, I think our APA framework offers some pretty good, practical, hands-on help for having an informed risk-benefit discussion about using these apps with patients with SMI. It is on the APA's website.
We link to it from our SMI Adviser website as well. The point is, we talked a lot about risk in this presentation. We talked about the importance of looking at the privacy policy, something you may not be used to doing when discussing treatment interventions: when you're providing counseling, you don't usually think about whether data is being taken, who controls that data, or whether this is a regulated medical device. This is new for SMI care, and everything we talked about today fits into risk-benefit assessment. We'll have future presentations on evidence, on making sure these tools work and make sense; on usability and adherence, on how to help patients with SMI stick with these tools and use them for maximum benefit; and finally on meaningful data sharing, on integrating these apps and technologies into your treatment plan and your clinical visit so that you're not fragmenting care, but improving your therapeutic alliance and using these tools to help patients get better. The FDA's plan is still in motion, but I do think our APA app evaluation framework offers a very nice model for guiding an informed decision-making conversation with patients. Thank you for your attention and for tuning in.
Video Summary
In this video, Dr. John Torous discusses the challenges of security and privacy when using digital technology tools in the context of serious mental illness (SMI) care. He explains that while digital technology tools can offer new benefits in the field of mental health, they also present new risks, particularly in terms of data privacy. He discusses a study that found poor privacy protection in diabetes smartphone apps and highlights the lack of privacy policies in many mental health apps. Dr. Torous then describes the current regulatory landscape, explaining that the Health Insurance Portability and Accountability Act (HIPAA) generally does not apply to smartphone apps, because most apps are not covered entities, and that the FDA is primarily focused on regulating apps that act as medical devices or have high-risk functionality. He also mentions the Federal Trade Commission (FTC) as another regulatory body involved in overseeing app privacy and advertising practices. Dr. Torous introduces the FDA's Digital Health Software Precertification (Pre-Cert) program, which aims to provide more streamlined regulatory oversight for software-based medical devices, including smartphone apps. He explains that the program involves categorizing apps by risk level and assessing app developers' commitment to patient safety and product quality. Dr. Torous concludes by highlighting the importance of weighing the risks and benefits of using apps in SMI care and points to the American Psychiatric Association's app evaluation framework as a resource for making informed decisions with patients.
Keywords
Dr. John Torous
security and privacy
digital technology tools
serious mental illness care
data privacy
HIPAA
FDA regulations
app evaluation framework
Funding for SMI Adviser was made possible by Grant No. SM080818 from SAMHSA of the U.S. Department of Health and Human Services (HHS). The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, SAMHSA/HHS or the U.S. Government.