How Implementation Science Can (Help) Solve Longstanding and New Challenges in Behavioral Health Services Delivery for Persons with Serious Mental Illness
Presentation And Q&A
Video Transcription
Hello and welcome. I'm Dr. Amy Cohen, Program Director for SMI Advisor and a clinical psychologist. I'm pleased that you're joining us for today's SMI Advisor webinar, How Implementation Science Can Help Solve Longstanding and New Challenges in Behavioral Health Services Delivery for Persons with Serious Mental Illness. SMI Advisor, also known as the Clinical Support System for Serious Mental Illness, is an APA and SAMHSA initiative devoted to helping clinicians implement evidence-based care for those living with serious mental illness. Working with experts from across the SMI clinician community, our interdisciplinary effort has been designed to help you get the answers you need to care for your patients. And now I'd like to introduce you to the faculty for today's webinar, Dr. Heather Gotham. Dr. Gotham is a licensed clinical psychologist and clinical associate professor at Stanford University School of Medicine. She is the director of the Mental Health Technology Transfer Center Network Coordinating Office. That's the MHTTC-NCO, a SAMHSA-funded national network of training and technical assistance centers. Dr. Gotham's research focuses on implementation science, including factors affecting implementation and the training and education of healthcare providers across a range of evidence-based practices for adolescent and adult substance use and mental health disorders, co-occurring mental health and substance use disorders, and screening, brief intervention, and referral to treatment. Heather, thank you for leading today's webinar. I have no disclosures. Thank you, Amy, for that introduction. I'd like to start by acknowledging Dr. Mark McGovern, my colleague at Stanford and PI on the MHTTC Network Coordinating Office. I am excited to be here talking about one of my favorite topics, implementation science, implementation geek that I am. I really think that this newish field has some exciting implications for the practice of mental healthcare, and especially for people with serious mental illness, who have such urgent service needs. There are three learning objectives for this talk. One is to review some of the explanatory frameworks that we find in dissemination and implementation science and how they relate to the work that all of you do with people with serious mental illness. We'll also talk about the difference between clinical interventions and implementation strategies, which are the activities that are used to help clinics implement the interventions themselves. And we'll talk about how to differentiate patient-level from implementation-related outcomes. I hope that all of these will help you feel more comfortable reading and engaging with D&I research and considering how you can use it to change your practice. I'm going to start by talking briefly about four specific longstanding and new challenges in providing behavioral health services for people with serious mental illness. So one longstanding challenge is related to a lack of access to services. And this graph from SAMHSA's National Survey on Drug Use and Health shows very high rates of unmet need for treatment that have increased over the past decade, especially in younger adults. So if you look at the orange diamond line that goes across the chart, you'll see that by 2019, overall, about 48% of people aged 18 and older with serious mental illness reported needing but not receiving treatment in the past year. Younger adults had even more unmet need, 63% of those aged 18 to 25. So that's the blue squares.
They didn't receive needed treatment, compared to 48% of people aged 26 to 49 and 35% of people aged 50 and older who didn't receive needed treatment. So a huge, huge treatment gap. Another long-term challenge in behavioral health service delivery has to do with racial disparities. So here we're looking at a review from Maura and Weisman de Mamani from 2017, and they found disparities in diagnosis where racial and ethnic minorities were either over- or under-diagnosed with SMI. In addition, minority groups typically had less access to services, used services less, and were provided effective treatments less often than white and other populations. These disparities have their roots in systemic racism that runs through the development of the health and mental health care systems in the U.S., and it's thankfully something that has seen increased attention and, hopefully, resolution. So in addition to lack of access and disparities in access to treatment, other longstanding challenges relate to lack of access to effective treatments. This slide from the recent Interdepartmental Serious Mental Illness Coordinating Committee, or ISMICC, report presents information gathered by SAMHSA showing that only a fraction of people who receive services through state mental health systems receive effective services. About a third received medication management, only a fifth received illness self-management, and one in 10 received dual diagnosis treatment. Very few people received other effective treatments such as assertive community treatment, supported employment, and supportive housing. A fourth set of challenges relates to patients being exposed to evidence-based treatments, but ones that aren't provided or delivered effectively. This slide shows one of the early seminal studies of treatment fidelity, meaning whether a treatment is delivered as intended, and it shows that a lack of attention to how an effective treatment is implemented leads to negative patient outcomes. So in this study by Greg McHugo and colleagues, they were looking at differences in patient outcomes by whether an assertive community treatment (ACT) team was delivered with high fidelity to the model, or what we'd call strong implementation, meaning the ACT team had all the components, or provided close to all the components, of assertive community treatment that were in the original model, versus a low fidelity or weak implementation ACT team. We see some significant differences in patient outcomes. In the high fidelity or strong implementation teams, 15% of patients dropped out of treatment, versus double that in the weak implementation teams. Rates of substance use remission were much better in the high fidelity or strong implementation teams, and hospital admissions were much lower in patients who were served by the high fidelity or strong implementation teams. So this is one of the first large studies that really looked at this idea of fidelity in relation to patient outcomes, and what we see is that if you implement an evidence-based practice without fidelity to the original model, then the treatment will not work as well as in the original research. I think the key here is that there are certainly times when adaptations need to be made and should be made and can be made, but that's something that also needs to be measured. So all of these previous examples of challenges are what we'd call dissemination and implementation gaps.
Implementation gaps occur when we have an evidence-based practice but don't implement it, or when we have effective interventions but don't pay enough attention to how they're implemented. So in that previous slide with the ACT example, the issue, of course, wasn't that ACT isn't effective; it's that the way it was being implemented with those weak implementation teams wasn't as effective, and that's an implementation problem. This is also called voltage drop. When an effective intervention is implemented in an inconsistent or insufficient way, then you'll see that the outcomes in patients are not as high or as positive as they were in the original research: voltage drop. With those implementation gaps in mind, let's turn to some basics about dissemination and implementation science. We'll talk about definitions, frameworks, strategies, and outcomes, with some examples of SMI-related research studies. Here are some common terms used in D&I science. Earlier in the history of this new science, there was a good deal of controversy and what some referred to as a Tower of Babel related to terms, frameworks, and theories, which happens with any new science. At this point, there's a general consensus about a number of the terms. Diffusion refers to a passive or uncontrolled spread of a new intervention. It's just what happens when a new intervention is developed and starts to be talked about or spread through word of mouth. Dissemination refers to a range of specific approaches that are used to spread the evidence-based intervention to a targeted audience via very specific channels and very planned strategies. So dissemination and diffusion, you can see the difference. Dissemination is very planned. This might include publication of articles, development and mailing out of clinical guidelines or training manuals, and the spread of basic information and awareness raising, such as through webinars. Implementation is really a process of putting to use or integrating the evidence-based interventions within a setting, and there is a specific set of strategies that can be used to really push or accelerate the implementation of research into practice. And finally, sustainment is the process of maintaining or continuing the intervention beyond the more active implementation. We know that even after a new innovation has been used in clinical practice for some time, it still needs some care and feeding, meaning that resources and other activities need to happen in order to keep clinicians using it and keep it moving in the organization. D&I research is the scientific study of processes and factors associated with the successful integration of evidence-based interventions. It has to do with how you get evidence-based practices into routine settings so that more people can receive the best care. And then over time, how do you keep the practice in place? As I said, D&I science is relatively new. It draws on research from many fields, including public health, communications, marketing, and evidence-based medicine. The first work was done in the 1930s and 40s in Iowa related to the diffusion of hybrid seed corn. So the research literally began in the field itself. It's important to understand that D&I science is not the same as process or quality improvement. We get that question a lot. Improvement science is really important, and it's aimed at the quality, safety, and value of healthcare, whereas implementation science looks at the uptake and use of evidence-based interventions.
The graphic here is an adoption curve that was popularized by Everett Rogers, one of those researchers out in the fields of Iowa. Originally, he developed this curve showing that with any innovation, at the beginning there are some early adopters, and then the innovation becomes more popular, more people start using it and adopting it, to the point where you just have some laggards at the top of the curve who are the last people to choose to adopt an innovation, and then it reaches a plateau. This curve was popularized some time ago now, but still fairly recently, by Malcolm Gladwell in his book, The Tipping Point. Any emerging or new science needs rigor and reproducibility in order to become a science. During the past 20 to 30 years in implementation science, this has led to the rise of many theories, frameworks, and models. Everyone seems to have their pet model or theory or framework about how the world works. We're going to talk about three different types of theories or frameworks that can help us understand and make sense of implementation science, again, with a focus on how it can be helpful for you and your clinical practice. For each of these, I'll give an example of one of the frameworks, and then we'll talk about a specific SMI-related study that uses one of the frameworks. Determinant frameworks focus on trying to understand what the barriers or enablers of implementation are, what things affect whether or not an evidence-based practice gets implemented. You can think about your own experience, and likely you've participated in examples of failed implementations, when people tried to make changes and they didn't stick, and I'm sure you have ideas about what the barriers or enablers were there. Maybe it didn't fit with the clinical practice. Maybe it took too long. Maybe the patients weren't interested in it. One example of a determinant framework is called the CFIR, the Consolidated Framework for Implementation Research, developed by Laura Damschroder and colleagues about 10 years ago. This is now the most widely used framework. It posits that there are barriers and enablers of implementation related to five different domains. If you think about whether or not an innovation is likely to be implemented, you can consider the factors related to individuals, so individuals involved in the implementation, meaning the clinicians or staff involved in providing the service. There are factors related to the inner setting, for example, the characteristics of a clinic. There are factors in terms of the outer setting, which might be the hospital or system level or a wider national level, such as billing or licensure rules. There are characteristics related to the intervention itself and whether or not it's adapted for a specific population, and then characteristics related to the process, or how the implementation works. Let's turn to a real example of an implementation study that uses the CFIR model. So this study by Cabassa and Stefancic looked at a peer-led healthy lifestyle intervention for people with SMI in supportive housing. The researchers conducted qualitative interviews with the decision makers in supportive housing agencies and used the CFIR model to help identify what contextual factors were perceived to affect the implementation of the intervention. They found factors related to two of the CFIR domains.
So for the outer setting that we just described, those system-level factors related to funding for the program, how marketable it was, policies and regulations, and changes that needed to be made within the agencies in order to provide this. And then in the inner setting, there were two sets of factors, at the organizational level and the client level. So at the organizational level, there were potential barriers and facilitators related to leadership support for the program, whether people felt that this peer-led healthy lifestyle intervention fit with the organization, whether the staff bought into it and were enthusiastic about it, and how much burden it would be administratively. And then at the client and patient level, there were perceived factors related to whether the intervention could be adapted to clients' needs, whether it would meet clients' needs, and whether clients would buy into it as well. A second type of framework is a process framework. These explore the stages or phases of implementation of a new practice. So one example is the EPIS model. Greg Aarons and colleagues developed this model, and it suggests that any implementation of a new innovation or evidence-based practice goes through four stages or phases. So exploration refers to awareness of a need within the patients or consumers or community, or a need for a change in practice, and it includes the decision to adopt a new practice. In the preparation phase, tasks need to be done to get ready to implement the new practice, such as staff training, building a community coalition, changing protocols and practices, and gaining internal and external support. In the implementation phase, that's where you're beginning to provide or use the new practice, putting the program in place. And then the sustainment phase is about maintenance of the practice over time, hopefully delivering it with fidelity. I find these process frameworks to be really helpful in setting expectations with clinic and program staff and leaders when you're implementing a new practice. When you're faced with a new project, it's really helpful to explain the practice change using the stages and to help people really understand the roadmap for how the implementation will play out. Another stage model is by Dean Fixsen and Karen Blase, and they break that implementation phase down into two pieces. There's initial implementation, where things aren't going so smoothly, right, once you start a new program, those first few weeks or months. It's like the awkward teenager phase. People are unhappy, staff are unhappy, clients might not be happy with the change. But when you're in full implementation, then the program should be humming along. And just explaining that to staff and to leadership, that, hey, there's going to be this awkward phase, and then we're going to work through it, and that's going to be fine, can really help in terms of setting expectations and managing change. Here's an example of a study that uses not the EPIS model, but a phase model. The study by Molly Finnerty and colleagues looked at implementation of a web-based shared decision-making system in two specialty mental health clinics. The researchers looked at which implementation strategies, which are methods to help implement the intervention, would be useful at each stage. And the graphic shows what activities were engaged in at each of the stages.
Interestingly, too, the study also used the CFIR to look at barriers and facilitators. And you'll see when you read implementation science research that studies will often use multiple of these constructs that we're talking about. The third type of framework within implementation science is evaluative frameworks. And these frameworks get at how one can know whether a new practice, and how it's being implemented, actually works. One example of an evaluative framework is called RE-AIM. RE-AIM is a model for evaluating an intervention or program or practice and its outcomes, as well as the effect of the implementation strategies, which we call implementation outcomes. So the RE-AIM model has five pieces. Reach refers to the number, proportion, and representativeness of individuals who are willing to participate in a given initiative or program. So do the patients who need and want it get it? Is the target population receiving the intervention? Effectiveness looks at the impact of an intervention on individual outcomes. So this is what you typically think would be measured in an effectiveness trial. Does the intervention work? And this includes potential negative effects as well, along with quality of life, cultural, and economic outcomes. Adoption refers to the absolute number, proportion, and representativeness of settings and intervention agents, or clinicians and staff, who are willing to initiate a program. So this is more of an implementation outcome. Are staff and programs actually taking up the intervention? Implementation refers to the intervention agents', again, the staff's, clinicians', and clinics', fidelity to the various elements of an intervention's protocol. This includes consistency of delivery as intended, adaptations made, and the time and cost of the intervention. So is the intervention being delivered properly? And then maintenance includes two pieces. At the individual or intervention effectiveness level, what are the long-term effects of a program on participants' or clients' outcomes, usually six or more months after the most recent intervention contact? So how do people continue to do over time? And then at the organizational level, or the implementation level, maintenance is the extent to which a program or policy becomes institutionalized or part of routine organizational practices and policies. Is the intervention delivered over the long term, and how? And we know that often we'll have an implementation of a new evidence-based practice where, at the beginning, people really use the model with fidelity, and then over time there's drift. So these maintenance outcomes really get at that idea of drift, or things continuing to be delivered as they're intended. Here's an example of an SMI-related study using the RE-AIM framework. Mascayano and Lisa Dixon and colleagues looked at the implementation of the OnTrack New York early psychosis program in 21 sites, and they examined five-year outcomes. So again, RE-AIM is the framework by which they organized the outcomes. We see that the reach included about 1,200 patients enrolled over five years. In terms of effectiveness for the patients who were enrolled, they saw that education and employment increased significantly, hospitalization rates decreased, and general functioning improved. For adoption, 98% of clinicians reported comfort in implementing the practices and approach covered by the training that was provided as part of implementing the program.
In terms of fidelity, through fidelity ratings, they found that all sites met almost all the domains of the coordinated specialty care model that is OnTrack New York. And in terms of maintenance outcomes, they reported on the importance of having a viable financing model in order to continue providing the service. They didn't have any post-discharge patient outcomes to examine. So again, the RE-AIM framework is really important because it focuses not just on the patient outcomes, but also on the implementation outcomes. If you're only looking at patient outcomes, you're really only getting half the story. So if you recall the implementation gaps, the patient outcomes don't tell you what intervention was really received. And so you need this other half, the implementation outcomes, such as fidelity, in order to really understand what intervention the patients received. And then you're better able to say, okay, the patients really were receiving a high-fidelity version of OnTrack New York, and so here are their outcomes. You're better able to contextualize those outcomes, or to understand if something's not working; maybe it's because the program is being implemented at a really low fidelity level. So as a bit of a summary, we've talked about terms, and we've talked about some of the models. Geoff Curran developed this teaching tool to try to make implementation science simple. So let's start with positing that the intervention or evidence-based practice, such as one for people with serious mental illness, is the thing. So the intervention or practice is the thing. Effectiveness research, where you're looking at patient outcomes, looks at whether the thing works. Again, the thing is the evidence-based practice. D&I research looks at how best to help people or places do the thing. What can we do to accelerate the use of the thing, the intervention? Implementation strategies, which we're going to talk about next, are the stuff we do to try to help people or places do the thing, do the intervention. And then implementation outcomes are how much and how well the clinicians and the programs do the thing, do the intervention. Let's talk a little more about implementation strategies. So these are activities, actions, and causal agents that we engage in in order to scale up or scale out or sustain an evidence-based practice. As an example, I'm part of the MHTTC network, which is a training and technical assistance network funded by SAMHSA, similar to SMI Advisor. The training and technical assistance interventions or activities that we do as part of the MHTTC network, or this webinar and other activities that SMI Advisor does, can all be considered implementation strategies. These are strategies, training and technical assistance interventions, that we're providing in order to get folks to implement evidence-based practices. Just as a point of clarification, scale up refers to efforts to expand an evidence-based practice to similar settings and populations as those included in the original research. Scale out refers to efforts to expand the intervention to different mental health program settings and subpopulations, so scale up versus scale out. Scale out is a newer term. Unfortunately, the current state of research related to implementation strategies is unclear.
We don't have a lot of precision and documentation about how we report and deliver implementation strategies, so that's an area where implementation science needs to do some work in-house to make things clearer. As we're thinking about how to accelerate the implementation of an evidence-based practice and get it used in the clinic, there are a number of implementation strategies that can be brought to bear. Again, if you think about those implementation efforts that you've participated in, most likely they included training. That's a big one. Everyone says, okay, we're going to implement a new practice, we need training. I think, unfortunately, sometimes we miss some of the other strategies, very specific strategies, that need to take place in order to get something implemented. Research by Byron Powell and colleagues has classified these implementation strategies into six main categories: plan, educate, finance, restructure, manage quality, and attend to policy. If you think back to the EPIS model, the strategies listed here can be used across the four stages of implementation, although some are more likely to be used during certain stages. Let's look at an example. Planning strategies can include building buy-in within the organization or developing relationships necessary for successful implementation, both within the organization and outside the organization. So, for example, building a relationship with an opioid treatment program might be important for a community mental health center that's interested in implementing co-occurring disorders treatment. Quality management strategies include developing data systems or reports that will allow implementation teams to monitor how the evidence-based practice is being implemented and whether patient outcomes are improving. So, for example, the same community mental health center implementing co-occurring disorders treatment might track how many of their patients are being assessed for opioid use disorder, and whether patients who are prescribed buprenorphine to treat an opioid use disorder are being retained in care (a small sketch of this kind of tracking follows below). How do you choose which implementation strategies to use when you're trying to implement an evidence-based practice? How do you know that you have enough, or which ones would be best? If you're the clinic or the practice, you might ask yourself, well, what kind of help do we need to implement this? Unfortunately, again, this is the part of the science that's been fairly imprecise up to now. I think historically we've all used the train and pray approach, where we send people to a one-hour or one-day or two-day or five-day or whatever training, and we expect and hope that they learn something and that they're going to come back and magically transform the clinic to use this new evidence-based practice. I think that's pretty old thinking. One of my MHTTC colleagues calls this the we pretend to train you and you pretend to change approach. The other three graphics on the slide are all fairly similar: throwing everything, including the kitchen sink, at the problem, meaning using a variety of different implementation strategies to make sure that some of them stick; having a one-size-fits-all approach, where regardless of the setting or context of the implementation we use the same set of implementation strategies, like training and coaching and consultation, because those are standard; or the it seemed like a good idea at the time approach.
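As a purely illustrative aside on that quality management example: the sketch below shows one way an implementation team might compute the two monitoring metrics mentioned above from routinely collected records. It is a minimal sketch, not a tool referenced in the webinar; the record structure, field names, and the 90-day retention window are all assumptions made for illustration.

```python
# Minimal sketch of implementation monitoring metrics for a co-occurring
# disorders program. Hypothetical data model; not from the webinar.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class PatientRecord:
    patient_id: str
    oud_assessed: bool                    # was the patient assessed for opioid use disorder?
    buprenorphine_start: Optional[date]   # None if buprenorphine was never prescribed
    last_visit: Optional[date]            # most recent treatment contact, if any


def monitoring_report(records: list[PatientRecord], as_of: date,
                      retention_days: int = 90) -> dict:
    """Return the share of patients assessed for OUD and the share of
    buprenorphine-prescribed patients still in care within a retention window."""
    total = len(records)
    assessed = sum(r.oud_assessed for r in records)
    on_bup = [r for r in records if r.buprenorphine_start is not None]
    retained = [r for r in on_bup
                if r.last_visit is not None
                and (as_of - r.last_visit).days <= retention_days]
    return {
        "pct_assessed_for_oud": round(100 * assessed / total, 1) if total else None,
        "pct_bup_patients_retained": round(100 * len(retained) / len(on_bup), 1) if on_bup else None,
    }


if __name__ == "__main__":
    # Made-up example records for demonstration only.
    records = [
        PatientRecord("a", True, date(2021, 1, 10), date(2021, 3, 1)),
        PatientRecord("b", False, None, date(2021, 2, 15)),
        PatientRecord("c", True, None, None),
    ]
    print(monitoring_report(records, as_of=date(2021, 3, 15)))
```

A report like this, run monthly, is the kind of simple data system the quality management strategy points to; the specific metrics and thresholds would come from a clinic's own implementation plan.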
What we want to move toward is something called precision implementation, which is similar to precision medicine, where we'd like to assess the context, the clinics, have an understanding of the barriers and facilitators at the clinics or the programs, and then be able to design and select strategies based on those barriers at the particular set of organizations or clinics. Then we'd need to evaluate the strategies to see which ones worked, including what sequence worked best, and then model what happens in terms of implementation outcomes and costs. There's some work being done to match barriers to implementation strategies. For example, with this type of targeted tailoring, you could imagine identifying a number of barriers at your agency, and then using relevant implementation strategies to try to overcome those barriers. For example, if it seems like there's a lack of knowledge regarding the evidence-based practice, then you'd want to use interactive education sessions, that training piece. If there seems to be a lack of motivation among the staff to change or do things in a different way, you might consider using incentives or sanctions. If the issue seems to be barriers related to workflow or time, you could do a process redesign in order to help the evidence-based practice fit better into the clinic. Here's an example of a national randomized trial that uses what we call adaptive implementation. In adaptive implementation, you're trying to better understand how much, or how many, or at what level of intensity implementation strategies are needed in order to get the job done, to get the evidence-based practice implemented. This is a study by JoAnn Kirchner and colleagues through the VA. They're looking to implement Re-Engage, which is an outreach program for veterans with SMI who've been lost to care. This was a big study; they looked at 158 facilities. Many implementation science studies are hampered a bit by the number of facilities or organizations that they're able to look at. Whereas in other kinds of research the patient is the unit of analysis, in implementation science research it's often the facility, the organization, the clinic, and you need quite a few of them in order to really be able to understand and examine outcomes across facilities or organizations. So this was quite a large study, 158 facilities. Again, in this adaptive implementation trial, they were looking at how much implementation support is needed. How many of those implementation strategies do we really need to use in order to get this new evidence-based practice implemented? They divided the sample, and some of the facilities received what's called the standard implementation strategies. This would be kind of the usual care, if we were thinking in terms of patient outcomes. The standard implementation included an implementation manual and some training and technical assistance. That's pretty standard. Then the other sites received those three pieces plus an enhanced external facilitation strategy, or bundle of strategies. External facilitation is a bundle of strategies that's very intriguing right now; there are quite a few studies ongoing about it. This external facilitation approach was developed, again, by folks through the VA. It includes an expert in implementation who facilitates the movement of the evidence-based practice into the facility, such as by helping the facility garner regional and local support, identifying barriers and facilitators, developing an action plan, providing feedback, and linking to resources.
Then they examined differences in implementation based on whether facilities received the standard package or the enhanced package of implementation strategies. They were also able to look at whether the effectiveness of that level of support varied by organizational culture or climate. And here, effectiveness refers to how well the evidence-based practice was implemented, as well as to the patient outcomes that were then examined. Now that we've reviewed some terminology, looked at a number of different models, and talked a bit about implementation strategies, you have some background in implementation science. So let's move in this last section to talk more specifically about how it can help you solve some of these longstanding and new challenges to service delivery for patients with SMI. First, let's go back to some of those systems barriers we looked at at the beginning, and specifically these gaps in care. So the gaps in care included lack of access to effective treatments, lack of access to effectively implemented evidence-based practices, and then racial disparities in access and treatment. Regarding lack of access to effective treatments, we can use D&I science to understand how to most efficiently implement new programs. As we just saw, we can look at what are the least resource-intensive implementation strategies that are needed. So how much outside help would your organization need in order to implement an evidence-based practice? This is especially helpful when we're thinking about scale up. If you're looking to scale up a new evidence-based practice across a state or system, how much outside help do you need? Because those efforts can be really resource intensive and costly. We can also look at what types of clinics or organizations respond to which implementation strategies, so that we're able to do that kind of tailored, targeted implementation with different types of clinics. In terms of lack of access to evidence-based treatments that are provided effectively, the research we discussed on fidelity is really helpful here. How can we best understand the limits to fidelity? For an ACT team, for example, knowing which components of an ACT team are the most important ones that need to be implemented in order to have a high fidelity model, and in order to have those really positive patient outcomes, versus which pieces of the evidence-based practice might not be as important. And so there's ongoing research about adaptation that's really important. Shannon Wiltsey Stirman here at Stanford has done some great work to help folks recognize that when you're going to adapt a program, it's important to be really mindful about how you do that, to document it, and to really understand whether you're going to be changing the function of the different components of the evidence-based practice. And then another piece here is tying this to ongoing quality improvement. So, again, continuing to measure the implementation as well as continuing to measure patient outcomes over time. And that will allow you to decide, oh, hey, we've drifted too far, we need to get back on track. You can use some quality improvement strategies at that point to get the providers back on track and providing the intervention with higher fidelity. And then, related to decreasing healthcare disparities, we can use D&I science to understand how to most efficiently implement new programs for specific populations.
And that can include monitoring how implementation is working for a specific population, or monitoring how implementation is working with providers from specific populations. So, you know, perhaps an evidence-based practice either doesn't fit well with the patient population, or maybe the evidence-based practice doesn't fit well with the provider population. And there's a really great recent article by Hendricks Brown and colleagues about implementation science and that kind of racial disparity, or mental health equity. Some other solutions that I think are really important, and a bit broader: one has to do with data. We really want to develop our public mental health system in the United States into a learning health system. A learning health system uses data to track gaps and their resolution, and D&I science can definitely help with that process. In terms of practice, I think we should be using D&I science at the national and state systems levels to implement evidence-based practices. So I think state systems need to consider this, and even a local care system needs to consider D&I science when they're looking at implementing a new practice. Training and TA are really important as well. So for example, the MHTTC network and SMI Advisor are big training and TA centers funded by SAMHSA, and trying to infuse D&I science into the work that we do, I think, is going to help move the field forward to close some of these gaps. And lastly, research. We want to accelerate the innovation development to implementation pipeline. There's a pretty long lag between when an innovation or a new evidence-based practice is developed through basic research and applied research and when, at the far end, we understand how best to implement it. That research pipeline is pretty long. One way to fast track that is hybrid effectiveness-implementation trials. So a last study slide here. This is a study protocol article that came out recently by SMI Advisor faculty member Alex Young and Program Director Amy Cohen, looking at the effectiveness of a medical home to improve the health care of people with SMI. These types of hybrid studies can really cut down the length of time in the research cycle that it takes to examine both the effectiveness of the intervention and the best ways to implement the intervention. So in this study they enrolled three clinics and divided them between usual care and SMI-PACT; they're all implementing PACT, but one clinic in particular was implementing the SMI-PACT intervention specifically. Then they enrolled patients with SMI and followed them over the course of a year. Typically, you'd only do patient outcomes, so, were patients in one kind of clinic or the other more likely to do better? The implementation piece that they've bundled into this study is also looking at implementation barriers and facilitators on the SMI-PACT side. That kind of fidelity monitoring will then help us look at, over time, if other programs are going to implement this type of medical home, how they can best do the implementation piece. So there are a number of different types of bundled hybrid trials, and this is one of the examples.
Finally, I want to end with some points very specifically about how D&I science can help you in your clinical practice. So here are some considerations. If you're thinking about implementing a new practice or a program, apply some of the basic D&I science that you just learned. You want to step back and think about the problem you're looking to solve. I think sometimes people rush to, oh, there's a new evidence-based practice, let's do that, without stopping and understanding that implementation occurs across those multiple stages, and you want to start with exploration. That means carefully reviewing the research on the evidence-based practice, including its implementation, to understand, is it really going to fit, is this the right thing? And even before that, I think you want to start with, what's the problem we're trying to solve with this new practice? Is there a gap, some kind of gap, going back to the gaps we talked about earlier? Are our clients not able to access the services? Are we not providing evidence-based practices? Are we seeing that our clients aren't doing very well in a particular area? You start there and then move forward toward, how can we solve that? Is there an evidence-based practice we can use to solve that? You want to put together an implementation team with people at all levels in your organization who are going to move the implementation forward, so it's not just reliant on, you know, the one person you send out to training, which was an old school model. And use an implementation plan, a very specific, written-down plan with SMART goals and objectives, a timeline, and people who are responsible. We know that that's the best way that things get done. Consider hiring an intermediary purveyor organization, or IPO, to help provide implementation support; that could be that implementation facilitator, that external facilitator we talked about earlier. And pay as much attention to the process of implementation as you do to the practice itself. I think it's also important to consider ourselves as all having a role related to D&I science. As clinicians with knowledge and training in D&I science and research, we can be better consumers of research on evidence-based practices. I think that's part of what my goal was today, to give you some of that background on the frameworks and models, so as you're reading the literature you can have a better understanding of what's going on and why they're looking at the particular outcomes or methods. We can also view clinical practice change with a larger lens. It's not just about provider behavior, it's not just that our clinicians aren't doing what they're supposed to do, it's certainly not that at all. We can look at what's the outer context, what are the policy implications here, what are the factors related to the patients, what are the factors associated with the intervention, because all of these things together reflect on how well an evidence-based practice is implemented and what the patient outcomes are. You can also assist in implementing evidence-based practices in your practice. I think, again, managing the expectations of leadership and staff about the change process, those implementation strategies we talked about before, is a really helpful and important piece. You can be the champion, the local opinion leader, and help move things forward. And then there's serving on quality improvement teams, because, you know,
certainly quality improvement and implementation science go hand-in-hand, especially over time. It's also imperative to include D&I science and research in our clinical training programs across disciplines, so that the next generation of clinicians and researchers can stand on our shoulders and eliminate those gaps in care for people with serious mental illness. Thanks. Heather, thank you so much for such an interesting presentation, and so thorough. I know the questions are coming in. Heather, I thought you and I could have a little bit of a chat, because we both have struggled to make, you know, change improvements in mental health, and we know it's not easy. I thought maybe we could talk for a minute about the importance of mixed methods, and talk a little bit about qualitative interviewing and the importance of that in helping us understand the implementation side. That's a great point, Amy. I think that, as we talked about, implementation is so complex, right? It's complex and it's complicated, so you have such a range of factors that might be affecting whether or not a new evidence-based practice even gets implemented and how people are responding to it. So it's extremely important to use those kinds of mixed methods strategies to understand what's really going on, and you might be really surprised by what you find out. So for example, I was involved in an implementation of screening, brief intervention, and referral to treatment, or SBIRT, in a medical surgical unit at a clinic, and we were looking at just trying to understand the screening piece, even, and seeing, if you look at the data, you'll see that people aren't getting screened. So it's really important at that point, for example, to understand what's really going on with the screener. So we know the screener's in the EHR, people have been told to do the screening, they've been trained on how to do the screening, we've talked to the nursing staff and they're awesome, and so what's going on? And then it's only through that qualitative research that maybe you can find out, oh, well, we did provide the training, but then there was a change, and it's the medical assistant, or it's someone, you know, up the food chain three steps earlier at the admissions level, who really sees that screen in the EHR, which is something that you wouldn't know if you just looked at the data that said people aren't being screened. So I think it's those kinds of things. The same thing with patients and patient preference. It's so important to include patients in your implementation team when you're planning to do an implementation, to include them, to keep them informed, and to find out from them how things are going. It's totally within the ability of researchers, if they get too far afield from patients, to develop something that patients don't find very acceptable. It might even be effective, but it might not be acceptable to patients, and so we really need to make sure we have that patient voice. That's an essential component, I think, to implementation science. Right, and I think what's different from other controlled trials, when you're doing an implementation trial, is, like you've mentioned several times, I just wanted to highlight the checking in. So it's not the kind of thing where you do an interview at the beginning and you do an interview at the end, right? Sometimes we check in periodically. Maybe we do a group visit, we visit the clinic, maybe we actually do
individual interviews, and we sort of say, how's it going? And like you said before, if adjustments are made, you're allowed to do that; you just want to make note of when it was done and why it was done. So it's not something like, on day one we say this is how this is going to go, and then we check in a year later. Instead, we know that what we end up with a year later as the method is very likely to be different than what we set out with, right? Exactly, exactly. Yep, it's a constant process of checking in, rechecking, and making sure that we really understand what's happening on the ground. Right. So in many ways, one of the things I've always liked about health services research and implementation science is the idea that it's just very, very applied and very reactive to the context in which we are implementing. The idea is not to come in and say, we're going to do it X way, everybody adjust, but rather to say, our idea is to do it X way, what do you think about that? You may say, well, that wouldn't really work, there's no way that the clerk could do that, or there's no way that the unions would be okay with the nurses doing X. You say, okay, how could we do it? And then you begin to measure that. Mm-hmm, yeah, and I think that's part of that importance I was mentioning near the end, about really starting with, what's the problem that you're trying to solve with a new evidence-based practice? Is there a gap? And again, I think people rush sometimes to thinking they need to implement something, and maybe it's even that they think they need to implement it because they're being told to implement it. But you still want to go back and do the first piece about the need and the needs assessment and understand why or how this can be helpful, because then you'll really have that grounding. Okay, maybe we're being told by Jacob or whoever, or by the state, that we need to implement this new evidence-based practice, but you really want to help understand, what's it going to solve for you? And that'll also help you build buy-in amongst patients, amongst your clinicians, amongst the system. One of the things that struck me in your talk that made me chuckle a little bit, thankfully my microphone was off, was this idea of we pretend to train you and you pretend to change. And it made me think a lot about some of the recent work in implementation science about sustainability, and that with all of this effort around implementation science, still at the end, when you pull out, there is maybe some drag back to the old way of doing things. And, you know, we thought a lot about that in one of the studies I did, and one of the things that we ended up doing, which was sort of an interesting task that I had never done before, was that the clinic at the end decided to rewrite some of the position descriptions to better include the new roles that had been developed as part of the implementation. So it was really interesting to think about the fact that they said, really, the way to solidify this in our minds is to change position descriptions, so that people understand this is ongoing and it can be part of their annual reviews whether they're doing these things. And it was a really interesting exercise at the end of an implementation, to think about that kind of permanence to support sustainability. I don't know if you've done anything around sustainability in any of your projects? Yeah, I think there's a couple of parallels here. So one is what
you're talking about, or what Fixi, I just combined their names, Fixsen and Blase, would call implementation drivers. They have a great implementation monograph from 2005, but it is not dated at all, it is still quite applicable. And in their model, right, everyone's got a model, and they're all good, right? So Fixsen and Blase's model, I love it because it's really applied, and they talked about implementation drivers, which are kind of the things that you can take, so you can be at your clinic and say, okay, these are the things we need to pay attention to. And one of those pieces is about how you're hiring staff for your evidence-based practice, and I think that's part of what you're getting at. As you're hiring staff, you need to consider, what's the role, what's the job description, is this someone who's interested in change, is this someone who's going to be flexible and adaptable, is it someone who is really open to learning new things? And so there are pieces, implementation drivers like that, that I think, again, are really applied. So it's a great model if you're the clinic that's doing the implementation. On the sustainment side, there are a number of new studies, I think, and pieces of sustainment that people are really starting to examine: what are the components of sustainment? I think you just hit on one of them, and again, that Fixsen and Blase implementation drivers piece, although it's not too specific to sustainment, I think can also hit on some of those things. How are we going to keep this alive over time? What resources are needed? And that has to do with personnel, it has to do with funding. I think sometimes administrators might think, well, we've got it up and running, we're good, and may not pay as much attention to, you know, the clinical supervision that needs to go on over time with some evidence-based practices, or you're going to get that drift, or there are still fidelity pieces that need to happen. Goodness knows there's a lot of turnover in our field, and so having a very specific plan for staff turnover, are we going to get new staff trained in a timely way, who's going to be providing that training, what are cost-effective ways to get that training, that's a huge piece of it as well. Mm-hmm. I was just thinking of an example from that SMI-PACT study that you presented, where we were really trying to, you know, get more people to have some preventive health care, so people with SMI who were gaining a lot of weight. And we knew that people at the clinics were dying from the same things the rest of us are dying from, heart attacks, diabetes, etc., that they were not dying from psychosis per se anymore. And, you know, the world knows this, and people were trying to really monitor these side effects, but we still were having a lot of problems with diabetes, weight, obesity, exercise with this group. And so we decided to compare having integrated care, where they would get their physical and preventive care in primary care and would have a psychiatrist as a consultant, so these were people with serious mental illness who were maintained on meds but were not having visits to psychiatry more than every quarter. So we said, let's move that care over to primary care and have a medical home for this group, a team in primary care that's really going to be focused on and comfortable with this population, to really focus on their physical health but also be able to renew their meds with some psychiatry input, versus
people who were having non-integrated care, where they were still going to specialty mental health and physical health. And one of the things that was really interesting about that project was a surprise result that we got, which was about psychosis actually decreasing in the group that had the integrated care. So why was this surprising? It was super surprising because these people were no longer going to specialty mental health; they were getting their psychosis care from their primary care physician, with consults to the physician from psychiatry. So we were like, how can that be? You know, as mental health professionals, how can that be, who's going to steal our jobs? Yes, exactly. But what we realized was that communication actually improved between specialty mental health and primary care in the newer model, because the consultation from psychiatry was, might I say, mandated as part of it. So every week psychiatry was checking in with primary care and saying, who's coming in, what's going on, how are you feeling about their symptoms? Whereas in the non-integrated care, although they had both psychiatry and primary care, they were not talking to each other. Mm-hmm. And so what you were finding was people were going to primary care, and they were saying, you've got to lose weight, and these medications are part of the reason. And even though primary care, of course, was not saying to go off those medications, sometimes the clients were confused, right? And so the integrated care actually changed the standard of practice of communication, which is not what we thought the problem necessarily was, but as we did the qualitative interviews, we realized primary care was saying, in the integrated model, oh, I'm talking to psychiatry much more, I have a set time with them every week, and also I know I can call them anytime, and I know who to call. And it really was an interesting example of the power of interviewing stakeholders, of being open to barriers that you may not have been aware of. To me it was a very powerful example of systematically looking at the implementation and being open to the kinds of changes that might be needed. So it was an interesting project in that way. Mm-hmm, yeah, that's a great example, Amy, of all the things we've been talking about, really the power of implementation science strategies and methodologies to dig down deeper into those clinical outcomes and really understand where they're coming from. Right. We'll wrap up now with our Q&A. I've so enjoyed talking with you this morning, Heather.
Video Summary
In this video, Dr. Amy Cohen, Program Director for SMI Advisor, discusses the importance of implementation science in addressing challenges in delivering behavioral health services for those with serious mental illness (SMI). Dr. Cohen introduces Dr. Heather Gotham as the faculty for the webinar and lists the learning objectives for the session. Dr. Gotham explains the role of implementation science in mental healthcare and the implications for individuals with SMI. She discusses various challenges in behavioral health services delivery, such as lack of access to services, racial disparities, and lack of access to effective treatments.

Dr. Gotham emphasizes the importance of understanding explanatory frameworks in dissemination and implementation (D&I) science and how they relate to clinicians' work with individuals with SMI. She highlights the difference between clinical interventions and implementation strategies, as well as how to differentiate patient-level from implementation-related outcomes.

The transcript also explores the gaps in care and the role of implementation science in addressing them. Dr. Gotham discusses the use of determinant frameworks, process frameworks, and evaluative frameworks in understanding barriers and facilitators to implementation, as well as in evaluating interventions and assessing implementation outcomes.

Dr. Gotham describes different categories of implementation strategies, such as planning, educating, financing, restructuring, managing quality, and attending to policy. She highlights the need for precision implementation, tailoring strategies to specific contexts and barriers.

The video concludes with a discussion of the importance of mixed methods research and qualitative interviewing in understanding implementation processes and addressing challenges. Dr. Cohen and Dr. Gotham discuss the importance of sustainability and of hiring staff with the right skills and attitudes for implementing evidence-based practices. They also talk about the need for ongoing communication and collaboration between different healthcare settings to improve patient outcomes.

Overall, the video emphasizes the role of implementation science in addressing challenges and improving behavioral health services for individuals with serious mental illness. It provides insights into various frameworks, strategies, and outcomes relevant to implementation science in mental healthcare.
Keywords
implementation science
behavioral health services
serious mental illness
SMI Advisor
Dr. Amy Cohen
Dr. Heather Gotham
dissemination and implementation science
clinical interventions
implementation strategies
implementation outcomes
Funding for SMI Adviser was made possible by Grant No. SM080818 from SAMHSA of the U.S. Department of Health and Human Services (HHS). The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, SAMHSA/HHS or the U.S. Government.