How Do We Know What Works? Understanding Evidence-Based Practice and Evidence-Based Medicine in Mental Health Services
Lecture Presentation
Video Transcription
Hello and welcome. I'm Dr. Benjamin Druss, a professor of Health Policy and Management and Rosalynn Carter Chair in Mental Health at Emory University, as well as a member of SMI Advisor's Clinical Expert Team. I'm pleased that you're joining us for today's SMI Advisor webinar, How Do We Know What Works? Understanding Evidence-Based Practice and Evidence-Based Medicine in Mental Health Services. SMI Advisor, also known as the Clinical Support System for Serious Mental Illness, is an APA and SAMHSA initiative devoted to helping clinicians implement evidence-based care for those living with serious mental illness. Working with experts from across the SMI clinician community, our interdisciplinary effort has been designed to help you get the answers you need to care for your patients. Now I'd like to introduce you to the faculty for today's webinar, Dr. Sandra Resnick. Dr. Resnick is an Associate Professor in the Department of Psychiatry at Yale University School of Medicine, Deputy Director of the Department of Veterans Affairs Northeast Program Evaluation Center in the Office of Mental Health and Suicide Prevention, and Editor of the Psychiatric Rehabilitation Journal, a journal of the American Psychological Association. Dr. Resnick is an expert in evidence-based practices for individuals with psychiatric disabilities and is passionate about data literacy and helping others understand how and how not to use data. Dr. Resnick, thank you for leading today's webinar. Well, thank you so much for having me. This is a real treat to be here and to talk about something that I am really quite passionate about, which is helping to demystify these terms for people who have been hearing them maybe your whole career and might not be entirely sure all the time what they mean. So with that, I will go to the next slide and just say that in terms of disclosures, I have no relationships or conflicts of interest related to the subject matter of this presentation. So the learning objectives for this webinar are really to discuss how evidence is used as part of routine clinical decision-making and in policy decisions. We're going to talk about the different levels of evidence generated by research in mental health services. And really, this is about explaining the processes by which mental health practices are determined to be evidence-based, and some related information about evidence-based medicine, measurement-based care, and some ways of thinking about evidence. So what I'm hoping is that you might be able to talk to your clients about evidence, or if you're an administrator, talk to your staff about evidence and why it's important, but also understanding, even though I'm a big, big believer in using evidence to make decisions, that client preferences really matter, and that in the absence of strong preferences, I'm hoping to convince you that you might choose an evidence-based practice when those preferences don't exist. So why should we care about evidence? Well, I think that's really what this whole webinar is about and what I'm going to try to prove to you. But to start us off, here's a quote from Bob Drake in 2005, and I really like this because he says, the mental health field has a moral and ethical obligation to learn from the experiences in other areas of medicine and to adopt the philosophy and practices of evidence-based medicine. And I really believe that. I do believe that it is a moral and ethical obligation. 
But with any obligation, it's important to understand this obligation fully so that we can act upon it. So let's start with some of the basics. What exactly is an evidence-based practice? And I like to start here, even though maybe many of you in the audience feel like, oh, I know the answer to this. But when I teach my fellows here at Yale, what I often find is I ask this question and they are like, yeah, I know. And then they start to answer and they realize they're not exactly quite sure. And that's really, really common. Most people haven't given a lot of thought to what an evidence-based practice really is. And I think some of us have this image of tablets coming down from on high with our evidence-based practices fully described upon them. But that's really not quite how it works. So evidence-based practice, first of all, you have to have a practice, something that's really a well-defined model of practice that you can describe, replicate, and that you know when it's happening. So for many of our practices, those are described in treatment manuals or practice manuals. For some, there may be core components that are really clear that you know that when this component happens, this is part of this practice. And in the practices that I work with the most, we usually have fidelity scales so that if we went and did a site visit at a program, we would be able to see whether or not the practice that is described is really faithful to this practice in real life. And that's another way to know when a practice is a defined model. And then, of course, the evidence part. There must be some documented effectiveness. And this might be compared to another practice. It may be showing some improvement in a measurable outcome. And ideally, this is also clinically relevant. So you want to make sure that when you're thinking about an evidence-based practice, that it's something you can define very well and that there is documented effectiveness across these different areas. Well, so then why should we use evidence-based practices? So when you think about resources and being a good steward of resources, you want to think about what's going to make the biggest bang for your buck. And also when you're thinking about being a good steward for the client that's sitting in front of you, you want to think about what's going to work. You don't want to put people through unnecessarily long treatments if you've got something that works. So an evidence-based practice oftentimes can be an easy place to start because, for the average person in these studies, and we're going to break down some of the lingo around the research studies in a minute, we've shown that this practice is generally more effective than something else. And then similarly, in terms of resources, if you're a policymaker, you're a clinic administrator, and you've got limited resources, and I know so many of you have quite limited resources, it may be a way of helping you funnel those resources into the thing that's going to make the biggest impact. And so sometimes understanding the evidence can help you make decisions about the best way to invest resources. So here's the big one. Who determines when a practice is evidence-based? So if they're not tablets that come down from on high, that means that there must be some people who are making these decisions. And I like to think that any time you've got people involved, it's not always exactly as straightforward as one might think. 
So a lot of times, an academic expert might be the one, or groups of academic experts might be the ones to talk about when a practice is evidence-based. So sometimes you might find these academic experts coming together to create treatment guidelines, and we're going to talk about some of those in a minute. But the idea here is that you get a group of people together, and they look at the literature, and they synthesize it for you, and they come up with some treatment guidelines to help you. Another example is consensus treatments. So when academic experts come together, and they come to some sort of agreement about what they think is happening based on the evidence and what's really rising to the level of being able to be called evidence-based. But in our systems, policymakers certainly have to make this decision all the time. They have to decide for themselves, given the resources we have, what are we going to invest in, and where is the evidence for our population, for who we're seeing in our community? What's really evidence-based given our population that we're serving right here? I would also say that individual practitioners have to make this decision. And as we walk through some of the ways to think about these decisions, you'll see that, you know, when you've got someone in front of you, you have to make a decision about what are you going to do with this client or with this group of clients. And so you have to decide for this population, what's going to work, and where is the evidence? So ultimately, I really believe that you're the one who gets to determine when a practice is evidence-based. It doesn't mean that we can't use experts and other resources to help us make those decisions. But what I really want, if I get nothing else across today, is to encourage you to be critical consumers of evidence and to not just assume that when this term, evidence-based, is bandied about, that somebody else is going to know better than you. Because we all have to make decisions, and the evidence is flawed. And so understanding that the evidence is flawed and understanding how this process happens hopefully will allow you to feel a little bit more confident when somebody comes up to you and says, well, this is an evidence-based practice, to be able to ask some questions to make sure that you feel confident in that. So I want to talk now about two different types of evidence. Most of us, when we hear the word evidence, we think about scientific evidence. So scientific evidence is empirical, which means that it is collected. And scientific evidence is collected systematically, purposely, with a plan. So you're thinking about hypotheses. You're setting up your research plan. You're making sure that your research design is going to support answering the questions that you have. You're thinking about all of the different risks of bias and working hard to put together this plan to get your scientific evidence. And so it's striving to be objective and as free of bias as possible. So that's what scientific evidence is. But there's also non-scientific evidence. Non-scientific evidence is personal experience or observation. It may be anecdotes. It could be intuition. Or it could be assertions by authorities. So here, there may be a little bit more gray. So we've all had the experience, I'm sure, of going onto some websites and seeing some assertions by somebody who's declared themselves an authority and thought, this is a bunch of bunk. I'm not going to believe this. 
This is not good evidence. I don't believe in this. But there is some real value to other kinds of non-scientific evidence. So we know that intuition, for example, oftentimes intuition is not just a random sort of mysterious, well, it feels mysterious, but it is oftentimes actually a lifetime of experience and observation that you've synthesized yourself and your intuition is telling you that something is not quite right or something should be a certain way. A great example of this is experienced nursing staff on inpatient units. So I've heard from friends who are attendings that when they have an experienced nurse who comes up to them and says, you know, this person just doesn't look right. They have learned to take that very, very seriously. It's not scientific evidence. They're not saying a lab value is out of order. They're not saying, you know, that their monitors are going haywire. But they're just saying, there's something not quite right here. And experienced nurses can know even if they can't quite describe it. Another example that I hear a lot about is in parenting where I hear my friends talk about their kids and say, you know, they're telling me there's nothing wrong, but I know my kids, and there's something not quite right. It's sort of the same thing. You're using your personal experience or your observations. It's not the same as scientific evidence. It's subjective. It's subject to bias. Sometimes those things are wrong. So we have to make sure that we're judging them in the context in which they come up. So for example, that fake website or that authority who isn't really an authority, we want to make sure that we're not taking non-scientific evidence just on face value. But my point here is that these two types of evidence are different. And they should both be used when you're thinking about what evidence is. So this is one of the examples of what we call a research pyramid. And this starts at the bottom with what is generally considered to be the weakest type of evidence, expert opinion and case reports, and moves all the way up through the continuum of research evidence to the top with meta-analysis. So just to sort of go through some of these in a little tiny bit of detail. So case reports can be really interesting because they might be alerting people to really new phenomena that we just haven't heard of before, and it's still experience. So it may not be systematically collected depending on how it's done, but it may help generate thoughts and ideas. Case-control studies, cohort studies, these are what we call quasi-experimental. So you're able to put some amount of thought into reducing some amount of bias here with these kinds of studies, but there are still lots of threats to your research design here that might make interpretation of what's going on a little bit harder. Then moving up to randomized controlled trials. This is when you will take people into the study and you're going to assign them to different arms of the study or different treatment conditions. Sometimes you have a waitlist condition, sometimes you might have a placebo condition, but they are randomly assigned. So the idea is that by doing that, you're eliminating some of the bias that might come from the people themselves, because there's no rhyme or reason to how they're getting assigned to these different conditions. And so when you compare how people do at the end, you've reduced some of that kind of bias that can come from the specific people. 
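To make that idea of random assignment a little more concrete, here is a minimal sketch, not from the webinar itself, that simulates hypothetical participants with a made-up baseline severity score. When people self-select into a treatment, the two groups already differ before any treatment is given; when a coin flip assigns them, the groups start out roughly equal, so an end-of-study difference is more plausibly due to the treatment. The scores, probabilities, and variable names are all illustrative assumptions.

```python
# A minimal, illustrative sketch (not from the webinar): why random assignment
# reduces bias that comes from the people themselves. Severity scores and
# probabilities are made up for illustration only.
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical participants, each with a baseline symptom severity score (~0-30).
severity = [random.gauss(15, 5) for _ in range(1000)]

# Self-selection: milder participants are more likely to opt into the new treatment,
# so the two groups already differ at baseline before any treatment happens.
chose_treatment, chose_control = [], []
for s in severity:
    (chose_treatment if random.random() < (30 - s) / 30 else chose_control).append(s)
print("self-selected baseline means:",
      round(mean(chose_treatment), 1), "vs", round(mean(chose_control), 1))

# Random assignment: a coin flip decides the arm, so baseline severity is about
# the same in both arms, and an end-of-study difference is more plausibly due
# to the treatment rather than to who ended up in which group.
randomized_treatment, randomized_control = [], []
for s in severity:
    (randomized_treatment if random.random() < 0.5 else randomized_control).append(s)
print("randomized baseline means:",
      round(mean(randomized_treatment), 1), "vs", round(mean(randomized_control), 1))
```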
Systematic reviews and meta-analyses are ways of taking all of the research literature and trying to consolidate them into synthesized information to help us better understand the research as a whole. So systematic reviews will have ideally some hypothesis about what they're looking for, some clear statements of what they're looking for in the research literature. They'll look at the research literature and grade it on its level of rigor and make recommendations about what the combined literature says. And the meta-analysis is sort of the strongest in this sense in terms of reducing bias and trying to keep objectivity by taking the data from all of these studies and putting them together and synthesizing the data itself as if it were just one big data set and looking at the meta-analysis of all of these studies together. And so this is generally the way that we have traditionally thought about research evidence and thinking that up at the top gives us the strongest support for something when you can have a meta-analysis that says this treatment is effective or this works better than something else. Whereas down at the bottom, if you just have a case report of something working, or quasi-experimental studies, you may not be as sure that it's the treatment that's actually doing something. It could be something else about how the treatment was applied, about the people, about the environment. So the more rigor you put into the study, the higher you go up, the more the assumption is that it's actually the treatment that makes the effect as opposed to something else that's going on. So why not just use scientific evidence? Well, as I was talking about a little bit before, clinical expertise is important. We know a lot when we've had long careers and we've seen lots of people. We know things, and I don't ever want to imply that clinical expertise doesn't matter. I also think that client preferences are really important. We need to really think about what our clients want, talk to them about their treatment options, help them to understand what the treatments look like, what they mean, what can be expected, what the outcomes are. Hopefully there are choices, and if there are choices to then help the clients understand the different ramifications of them so that they can make an informed decision. So it's important to use your clinical expertise and client preferences in concert with the evidence as much as possible. I think in some ways, evidence is also really complicated, right? So it's hard to get your head around sometimes why things are the way they are. It's hierarchical. So as we just saw in that triangle, it's got all these different kinds, and we're saying that some kinds are better than others, but maybe there isn't research at the top of that pyramid, and there's only research at the bottom, and maybe there's a lot of research at the bottom and nothing at the top, so what do you do? Another big problem is generalizability or representativeness. So a lot of our treatment interventions are tested on people with a particular diagnosis or with a specific problem. What if the person sitting in front of you doesn't have those diagnoses or problems that were tested in that study? How do you generalize when the study was for a different population and there isn't a study of that intervention for the person sitting in front of you? We also know that clients in the real world are complicated. 
They often don't just have one diagnosis or one problem that they're working on. And so finding a research study that's really looking at the myriad issues going on with the client in front of you is often difficult. Another issue is diversity. So we treat all sorts of people, but our research pool isn't always as diverse as the people we're working with. So thinking about ethnic diversity or sexual orientation or physical disability, all of the ways that people can be diverse and interesting don't always get represented in the research studies that we publish and that we read. As I mentioned, it's possible that it doesn't exist, that there isn't scientific evidence for you to understand what is best for the person that you're working with, whether it's a particular condition or a specific treatment or practice. Maybe it's an emerging practice and there just isn't a good research literature on it, but you've been using it and you think that there's some value. And we're going to talk a little bit about how you can use evidence in those cases in a minute. And then finally, it depends on what you can measure. So when you do research studies, you have to measure things in order to see if they change or to see if conditions are different from one another. And you have to have a definition of something in order to measure it. So that's a good starting point, right? So consensus definitions, where everyone comes together and agrees that we know what something is. So diagnosis, for example, there are consensus definitions for different diagnoses and we have them in the DSM. And then they have to be operationalized in a way that you can describe it and then measure it in some sort of reliable way. So, for example, how would you know if someone was angry? My guess is if you came across an angry person, you would very, very easily be able to identify that person as angry. But if you had to come up with a way of reliably measuring and describing and operationalizing and defining what anger was, it gets a little trickier. These are mushy concepts. And here, I always like to talk about recovery, because we always believe in recovery and we want to have recovery oriented services and we want to be able to measure recovery in the people that we work with. So a lot of times when I talk about evidence-based practices, the question I get is, well, what are the evidence-based practices that support recovery? And when you start digging into the literature, it's not always clear if there's an answer to that, because we have a lot of definitions for recovery. And I would say that for the most part, as a mental health field, we really don't have a strong consensus definition that's being used in the research literature. We have some nice definitions that can be used clinically. So, for example, cancer certainly has a very nice definition that is well publicized and operationalized. But in terms of reliable measurement systems and knowing whether or not treatment is effecting change on that definition, we don't really have all that much going on yet. And a lot of the issues are because there's no consensus definition in the research community. We've got lots of definitions and lots of measures and lots of people trying to measure in lots of different ways. So remember that pyramid where you had at the top the synthesis of all of these studies? If you've got multiple ways of measuring something like recovery, it's really hard to put them all together because they're all measuring different things. 
And it makes it much harder to draw conclusions about what's effective on quote unquote recovery when there's really 20 different ways that people are defining recovery or measuring recovery in the literature, for example. So this is another limitation sometimes of our evidence base because it isn't always entirely clear that the things that matter to us as clinicians or more importantly, the things that matter to our clients are well-measured in our research studies. So this isn't to say it's all hopeless. We certainly do have measures of recovery. We have measures of functioning. We have measures of quality of life. We have all sorts of great things that we can study and that people are studying all the time. It's just, as I said, that until there's clear consensus, it's really hard to synthesize and sometimes to put together into meta-analyses, which is how we come back to the need to understand all of these concepts to help you make decisions when you look at the literature, so that you can see what's missing, what's not missing, and draw your own conclusions. So what is evidence-based medicine? So I should also say here that this is the way that I interpret how these terms are used in the literature, that an evidence-based practice is a practice, a particular intervention, that has evidence behind it, and that what follows is what I see in the literature as evidence-based medicine. I just wanna alert you to the fact that sometimes people use these terms interchangeably or they use them in the opposite way. And so it's perfectly okay when somebody uses this term to ask them, what do you mean by that? Because it may be different than what I'm talking about here. So evidence-based medicine was a term that was coined by Sackett in 1996 and this is a seminal paper. And so since it's so important, I just put this quote here and I'm gonna read it to you right now so that you can hear the whole thing. So evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. So the individual patient, the person sitting in front of you. Integrating individual clinical expertise with the best available external clinical evidence. So the internal, your expertise, which we were just talking about was so important, together with the external evidence that's out there. So by individual clinical expertise, we mean the proficiency and judgment that individual clinicians acquire through clinical experience. Expertise is reflected in many ways, but especially in the more thoughtful identification and compassionate use of individual patients' predicaments, rights and preferences in making clinical decisions about their care. So in some ways to me, evidence-based medicine is putting all of these important things together and stating, you know, sometimes we hear, you know, I don't wanna do an evidence-based practice because I don't believe in cookbook medicine. Well, evidence-based medicine is really the opposite. It's about making decisions that are based on experience, that are based on preferences, but are guided by evidence and using the evidence in order to help you best understand what's gonna work for somebody and to use those treatments in the best way possible for the person you've got sitting in front of you. So what is measurement-based care and why does this fit into this whole picture? 
Well, measurement-based care, at least in the Department of Veterans Affairs where I work, we've defined measurement-based care as a three-part clinical process, and we use this handy three-part catchphrase to help people remember all three parts, and it's collect, share, and ask. So in collect, you have clients complete reliable, validated, clinically appropriate, and ideally self-report measures starting right in the beginning of their care repeatedly in order to help track progress. And then share, results from the measures are immediately shared and discussed with the client and other providers. So right after you've collected this information and you get it back, you sit down with the client and you go over it and you discuss what's going on and have a conversation about what that measure might mean. And other providers, if you're working in a team, it's a great way to share information about the client you're working with as part of a team so that everybody is quickly brought up to speed about what's going on. And then ask. Together, providers and clients use these measures to develop treatment plans, to assess progress over time, and to really shape, inform, and share decisions about changes to the treatment plan over time. So the idea here is that you have a couple of key things that you might be monitoring in addition to all of the other ways that you might assess client progress and what's going on with somebody, but that you have a couple of clear quantitative data points that you can track over time and you can maybe even graph over time and you can see how somebody's doing. And so here, how does this fit into all of this? Well, I was just talking about how you might not always know exactly what's going to work for a client. Or I mentioned the emerging practices. You've been trained in something that's an emerging practice and you think it has some potential, but you recognize that the evidence base might not be as strong as something else. So for me, and this is how I was trained back in the day, I want to prove to myself that the treatment that I'm providing is working. And so by having somebody complete their own outcome measures throughout treatment, it helps convince me that the treatment's working. There are other really nice benefits to this and since this is not a measurement-based care talk, I don't want to go on for too long about the wonders of measurement-based care, although it is certainly something that I believe in really strongly and am working to share with as many people as possible. But it also helps engage clients in the process. It helps you to make decisions about what's going on together as a collaborative team, you and the client, or if you're working in a team-based approach or a larger practice, allows you together to talk about what's going on with the client and then to use as a springboard or a starting point to say, okay, if things aren't going well, it looks like this isn't improving the way that we had hoped it was, what's next? What should we do differently? Do we add something? Do we take something away? Do we make something more frequent? So it allows you to collect your own evidence for the treatment that you're providing in a very different way than asking somebody how you're doing. And I mentioned in the collect part that we often stress the use of patient-reported outcome measures as opposed to clinician-administered. And again, this is because you're getting a different kind of data point. 
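As a rough illustration of the collect and ask steps just described, here is a minimal sketch, assuming a made-up tracker rather than any VA tool or validated instrument: it records a client's self-report score at each visit and flags when recent visits show little improvement, which is the cue to sit down together, share the trend, and ask what should change. The measure name, window size, and improvement cutoff are hypothetical, not clinical guidance.

```python
# A minimal, illustrative sketch of the "collect" and "ask" steps of
# measurement-based care. The tracker, window size, and improvement cutoff are
# hypothetical assumptions, not the VA's implementation or clinical guidance.
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutcomeTracker:
    measure_name: str                      # e.g., a self-report symptom measure
    scores: List[int] = field(default_factory=list)

    def collect(self, score: int) -> None:
        """Record the client's self-reported score for this visit."""
        self.scores.append(score)

    def needs_discussion(self, window: int = 3, min_improvement: int = 2) -> bool:
        """Flag a conversation if the last `window` visits show little improvement.

        Assumes lower scores mean improvement; thresholds are illustrative only.
        """
        if len(self.scores) < window + 1:
            return False  # not enough visits yet to judge a trend
        baseline = self.scores[-(window + 1)]
        latest = self.scores[-1]
        return (baseline - latest) < min_improvement

# Example: scores plateau over several visits, so the tracker suggests it is
# time to share the trend with the client and ask what should change.
tracker = OutcomeTracker("self-report depression measure")
for visit_score in [18, 16, 15, 15, 15]:
    tracker.collect(visit_score)
print(tracker.needs_discussion())  # True -> discuss adjusting the treatment plan
```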
There's bias in all sorts of different ways depending on how you collect data. But if you are already meeting with a client and you're asking them how you're doing, they may want to please you. They like you. They want to say, you know, this has been a good process for me. And so they may not be as honest as they might be when they sit in the waiting room and complete a measure independently. You might get different information. So this process can often be used even with an evidence-based practice to ensure that the treatment is working the way you hope it is. But the idea, again, is combining different ways of knowing, different kinds of evidence, using quantitative and qualitative data together in order to ensure that the treatment that you're providing to your client is maximized. All right. So I get it. It's a lot of pressure, and you guys are busy. It's really, really hard. You're likely not going to be able to sit and read journal articles every single day to stay up on the literature. So what do you do? Well, you should use some resources and trusted experts. So I talked about clinical guidelines and treatment recommendations and research syntheses and independent literature reviews. It's okay. It's certainly okay to use those resources. I think the point in sort of leading up to this and to explain all of the specifics is just to point out that, again, people are behind this. These are people who are making these decisions. And people have their own biases, and they have their own ways of thinking about things. And so even a meta-analysis is subject to some amount of bias. Clinical guidelines may be subject to some amount of bias. And so the point is not that they're not useful and that they're not helpful guides and that you shouldn't believe in them because I do believe in them, and I don't want to give you the sense that they're not to be trusted. But the point is that oftentimes we, especially those of us who may not have as much experience with research or as much experience reading the academic literature, we might feel a little nervous about questioning certain things if they don't seem exactly right, you know, if something just hits you a little bit off, but you're afraid to ask. My point here is not to say these aren't important resources, but to say that there's variability and that it's okay to ask questions and to be good consumers of this data. So I just want to point out one example of a really nice research synthesis to help make decisions about what is evidence-based. And this is the Schizophrenia PORT, the Patient Outcomes Research Team. And their mission was to improve the quality of medical care by reducing variations in care by promoting the adoption of treatments supported by strong scientific evidence. And you can see the reference down there. So the Schizophrenia PORT was originally in 1998, and they did updates in 2003 and 2009. And their foundation for coming up with these recommendations was to do a full research synthesis based on the literature that was available at the time, combined with expert opinions. So finding experts in the field and together going through this research and determining what the recommendations should be. So the last set of recommendations had 16 recommendations in psychopharm, eight psychosocial recommendations. They came up with some key elements and target populations. And I say that it's considered conservative. And the reason why I say this is because they're grading the evidence. What's strong? What really rises to the top? 
That's what these experts were doing. They're looking at the research, and they're trying to make decisions about where the strongest evidence is. And when I looked at this, I thought, wow, they're really conservative. They're really making sure that most of this research that they're evaluating is really at the top of that pyramid, that the studies are strong, that they're rigorous. Whereas another group of people might have had a slightly more liberal take, and they might have included more things because they were allowing lesser grades of research to enter into their recommendations. And so this is one example of how groups of experts can look at the same research data and maybe make slightly different decisions about what rises to the top. So I believe that the Schizophrenia PORT was a pretty conservative process, which means that the things that they recommended have very, very strong scientific evidence to support them. And so here are the psychosocial recommendations from the 2009 Schizophrenia PORT, and these are hopefully all services that are familiar to you, so Assertive Community Treatment, or ACT teams, the IPS model of supported employment, Skills Training Approaches, Cognitive Behavioral Therapy, Token Economy, Family-Based Services, which are wildly effective, Integrated Interventions for Alcohol and Substance Use Disorders, and Interventions for Weight Management. All right, so I'm going to walk you through two examples, really, of how to think about this, or at least how I think about it, two examples in particular that I found really interesting when I think about my career and I think about the application of evidence to help people. So the first example I'm going to use is supported employment. So anybody who knows anything about me knows that I am a really, really, really big believer in supported employment, and I say that to get my bias out there early so that you know where I'm coming from. I've been involved in studies of supported employment. I've been involved in promoting it in the VA. So I'm all for supported employment. It's a thing I believe in, and that's another reason to always question evidence, because you want to make sure that people are clear about their biases when they are presenting their findings. So that's my bias. So what am I talking about? Well, in particular, I am talking about the IPS model of supported employment, and this is a... The reference here is to one of the first papers in 1994 showing the effectiveness of the IPS model of supported employment from Community Mental Health Journal. And what supported employment does, in case you are not familiar with it, is the goal is to assist people, mostly people with severe psychiatric disabilities, who have competitive employment as their goal. So I mentioned in the beginning that to be an evidence-based practice, it has to be a practice. Well, it is a very well-defined practice. There's a manual. There's a fidelity scale. There are principles. You know when you go into a place that's doing it, you can identify it, and when you go to a place where they're doing something else, you know it's not IPS. And there are many, many randomized controlled trials, and most of these are compared to brokered or transitional models of employment. Brokered meaning they're not integrated with the mental health care, or transitional meaning that they have a stepwise approach where they may not start in competitive employment at the beginning, but working in some kind of transitional workforce. 
So here's the question that I'm going to ask you and have you think through as I show you these next three charts. Which is better, the IPS model of supported employment, or something called the Diversified Placement Approach, which was at the time of the study that I'm going to talk about, a very leading model of vocational rehabilitation that was used at Thresholds, which is a really tremendous psychiatric rehabilitation agency in Chicago, Illinois. So I was involved in helping to set up this study when I was in grad school almost 20 years ago, and so I'm going to use this as an example of how to think a little bit about some data. So I should mention Gary Bond here is one of the leading researchers in supported employment, and I was privileged to have him as my grad school mentor, which is how I got involved in some of these studies. And so I steal his slides as much as I possibly can. So this is one of his, and this is a slide showing the 23 randomized controlled trials of supported employment, where the black lines are the rates of competitive employment for the IPS arm of the randomized controlled trial, and the colored ones are for the control conditions, which are either brokered or transitional models. And you can see across many years and many locations that those black lines are higher than those red lines. So if your goal is to get competitive employment, then I would say that the evidence is pretty strong that you're going to want to be in an IPS program and that you're not going to want to be in a different kind of work program. So here's a slide from that study I mentioned, the Thresholds study, where they went to Thresholds and they randomly assigned some people to the IPS model of supported employment and randomly assigned some people to get the DPA model that Thresholds was using, which is often a transitional approach for some parts, with lots of different kinds of vocational approaches. But in general, people made their way through this transitional model, starting maybe with some sheltered work or transitional work and working their way up to competitive employment. So what this chart is showing you is the blue line on the top shows the monthly rates of competitive employment for the IPS model versus the DPA model. So you can see very clearly in blue and white that if you want to work competitively, you're going to want to be in the IPS arm because that's clearly going to get you competitive employment. But here are the monthly rates of paid employment. And you'll notice that they're awfully similar and that in the beginning, if you're in DPA, you're getting paid at higher rates than the IPS model. And it was significant in months eight and nine. And if I'm remembering this study correctly, I believe that across the study, if you were in the DPA arm, you actually made more money than you did if you were in the IPS model, the IPS intervention. So if your goal is competitive employment, it's pretty clear. But if your goal is to get paid and that's what your preference is, if that's what's right for you at that point, you might actually not want to be in a supported employment program, in an IPS program. And remember, here's me. I believe in this IPS stuff. This is something I've devoted 20 years of my life to. But you got to admit that if getting paid is your goal, that might not be it. Now, of course, these things are very complicated and people have lots of different situations. 
And we know, in fact, actually, that in many cases, IPS has better long-term outcomes as well. And so you would want to talk to your clients about, well, in the short term, you might get paid more, but in the long term, it might pay off. But again, this has to be a discussion, assuming you have the choice of different interventions to help your client choose. And I recognize that in some communities, there may not even be choices to have this conversation. But if you're lucky enough to be in a place with resources where you have choices, this is where preferences really do come in. And understanding the differences between the conditions or between the interventions can sometimes really help in setting the stage for what's going to work and expectations about effectiveness. So very quickly, this last example is psychodynamic psychotherapy for people with significant psychiatric disabilities. So at least when I was trained, I was told that you never, ever, ever would do psychodynamic psychotherapy for someone with schizophrenia. And the literature these days is a little bit more mixed, but there's a lot of people and a lot of review articles that would say it's a really bad idea. And so that's what I had in my mind when I was reading Ellen's book, where she talks about her own treatments and how unbelievably helpful psychodynamic psychotherapy was for her. And so for me, that was a really good reminder that even when the evidence might really strongly conclude something for a group of people, there are always individual differences. And we don't always know exactly what's right for everybody, because the evidence is dealing with averages more than it's dealing with outliers. And so there's likely going to be exceptions, even when the vast majority might benefit. I also, as I was thinking about this, came across some sort of seminal quotes. So here's one from Spalding and Milton, talking about the great Frank who concluded that the unifying role of the therapeutic relationship and other common factors is to instill in the client a sense of hope and the expectation that things can and will change for the better. And so much of what we do in recovery oriented services is instilling that sense of hope. And so sometimes we can get at that in different ways. And so when the client in front of you has a strong preference, still think about what these common factors are and work to make sure that the treatment still provides the sense of hope and expectation that things can and will change for the better. And then I think I will leave you with this really nice quote from Sue Estroff. The Western scientific community has persistent problems with tolerating complexity, multi-causality, non-linearity, and whatever cannot be measured. So in some ways, this is a reminder, going back to what I was talking about in the beginning, that as people have to make decisions about resources, in the absence of strong preferences from clients we really are going to use our resources best when we choose evidence-based practices. But we also have to recognize that our world is pretty complex, has lots of things going on, oftentimes changes in non-linear ways, and that our scientific evidence isn't always quite in line with the complexity that we have in front of us. And so we have to view this evidence with that lens and try to maximize what we know and then have a real good appreciation for what we don't. 
So getting back to this, I hope that I've convinced you that talking to clients and providers and understanding evidence is really important, but that preferences do matter and that we shouldn't let evidence always be our sole guide. But in the absence of strong preferences, choose an evidence-based practice if it is available. And thank you. I appreciate the time.
Video Summary
In this video, Dr. Sandra Resnick discusses evidence-based practice and evidence-based medicine in mental health services. She emphasizes the importance of understanding the different levels of evidence generated by research, as well as the role of clinical expertise and client preferences in decision-making. Dr. Resnick notes that evidence-based practice involves having a well-defined model of practice with documented effectiveness compared to other practices. She also highlights the value of both scientific evidence and non-scientific evidence, such as personal experience or observation, in informing decision-making. Dr. Resnick discusses how evidence-based medicine involves integrating individual clinical expertise with the best available external clinical evidence to make decisions about patient care. She also introduces the concept of measurement-based care, which involves collecting and discussing patient-reported outcomes to inform treatment plans and assess progress over time. Dr. Resnick acknowledges the limitations of evidence, including its complexity and variability, and encourages viewers to be critical consumers of evidence and to consider different types of evidence in their decision-making process. She provides examples of evidence-based practices and discusses the challenges of applying evidence in real-world settings.
Keywords
Dr. Sandra Resnick
evidence-based practice
evidence-based medicine
mental health services
levels of evidence
clinical expertise
client preferences
measurement-based care
patient care
Funding for SMI Adviser was made possible by Grant No. SM080818 from SAMHSA of the U.S. Department of Health and Human Services (HHS). The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, SAMHSA/HHS or the U.S. Government.