The Early Psychosis Intervention Network (EPINET) ...
Presentation and Q&A
Video Transcription
We have really a terrific panel today that's going to be talking about a terrific project: Drs. Susan Azrin, Abram Rosenblatt, Howard Goldman, and Diana Perkins. Dr. Susan Azrin has been introduced a couple of times today, I think. She's the Unit Chief in the Early Psychosis Prediction and Prevention Research Unit at the NIMH and has been instrumental in the development of EPINET from its original inception through its implementation. She's a great colleague and has been terrific to work with. Abram Rosenblatt is the Principal Investigator for the National Data Coordinating Center at Westat. I'm wondering if I should be saying advance the slides. Next slide, please, which should be Abram. I've known Abram for many, many years, and much of his work has been around systems-of-care interventions in the children's mental health space; now he's moving more toward transition-age youth. Prior to joining Westat, Abram was a professor at the University of California, San Francisco. I've also had the great pleasure of knowing and working with Howard Goldman for many, many years. Howard is probably well known to the field. Pretty much every important research demonstration program that has been completed in the United States has had Howard's involvement, and he's been a leader in the field for many, many years. Finally, Diana Perkins, whom I've not known for as long. Diana is a professor of psychiatry at the University of North Carolina and has done a great deal of work in early intervention. She heads a clinic there around early intervention in psychosis and has been part of the NAPLS research project, a $2.9 million, five-year endeavor around early identification of people early in the course of illness. So it's a terrific panel and a very important topic. We're all very excited about EPINET. Now I'd like to turn it over to Susan, who will give us the NIMH perspective.
Oh, I need to do disclosures. I'm sorry. Drs. Rosenblatt, Azrin, and Goldman have nothing to disclose, and Dr. Perkins discloses a couple of projects she's working on that have pharmaceutical industry sponsorship. So with that being said, I think I've covered my bases. Susan? Great. Thank you, David. Maybe I'm supposed to describe the learning objectives as well. Go ahead, Susan. Thank you, David. Thank you so much for those introductions. And hello, everyone. My talk with you today is about a science-to-practice success story that you are all a part of and how it created the momentum for a learning health system for early psychosis. Next slide, please. So I apologize in advance for any ill feelings my U.S. state map slides may induce, contrary to the positive mood I'm trying to create here, but something remarkable has occurred in the U.S. in the care available for people with early psychosis. Next slide, please. In 2008, we had a dozen community-based early psychosis programs, with Oregon and California blazing the trail. Next slide, please. Now, just 12 years later, the U.S. boasts more than 360 evidence-based early psychosis treatment programs, at least one in every state and in four U.S. territories. And the numbers here do not refer to electoral college votes but to the number of early psychosis programs in that state. So the larger the circle, the more programs in that state. And these programs have all self-identified as early psychosis treatment programs in the EASA National Early Psychosis Directory, which is available online. Next slide, please. So to paraphrase the Talking Heads, how did we get here? What accounts for this rapid expansion of early psychosis treatment programs in the U.S.? Next slide, please. So first, the science. Many of you are familiar with NIMH's RAISE, the Recovery After an Initial Schizophrenia Episode research initiative, launched in 2008.
But just to recap, NIMH supported two research studies, John Kane's RAISE Early Treatment Program study and Lisa Dixon's RAISE Connection study. They each tested a team-based, multi-element treatment approach for early psychosis that we termed coordinated specialty care. The RAISE studies demonstrated that coordinated specialty care programs for early psychosis are more effective than care typically available in communities, that they're feasible to deliver, and that they're scalable in the United States. And then Vinod Srihari's STEP trial came to similar conclusions. Next slide, please. So as RAISE findings became available, enthusiasm grew, and many, many U.S. actors partnered to create a national momentum around coordinated specialty care. Several agencies, including SAMHSA, CMS, and ASPE, and stakeholder and advocacy groups like Mental Health America, One Mind, Strong 365, and NAMI added their support. Mental health professional and policy leader organizations, such as the National Council for Behavioral Health, SMI Adviser, and the National Association of State Mental Health Program Directors, got involved. And PEPPNET has helped stimulate a large learning community of people interested in training, supervising, and optimizing coordinated specialty care. In addition, mental health block grant funds specifically for early intervention in psychosis became available, providing funding for new programs and training. So these partnerships among a broad coalition of stakeholders fueled a huge expansion of evidence-based early psychosis treatment programs in the U.S. Then in September of this year, the American Psychiatric Association published its practice guideline for the treatment of patients with schizophrenia. And the APA now recommends that patients with schizophrenia who are experiencing a first episode of psychosis be treated in a coordinated specialty care program.
The bottom line is that coordinated specialty care has now become the standard of care for early psychosis. The science has truly been transformed into practice in the work that you all are doing every day. Next slide, please. We've made much progress, but much remains to be done. How can we leverage and empower this coalition of early psychosis care stakeholders and the hundreds of coordinated specialty care programs nationwide? Next slide, please. Our answer: EPINET, the Early Psychosis Intervention Network, a learning health system for early psychosis. The mission of EPINET is to accelerate advances in early psychosis care, recovery outcomes, and scientific discovery through a national early psychosis learning health care partnership. I've talked with many of you and know there's a lot of curiosity about EPINET and what EPINET might have to offer you. And that's what I'm going to talk about now. Next slide, please. EPINET is built on the principles of continuously learning health care. Continuously learning health systems strive to provide the best care possible, to measure the results of that care, to use those data to improve the services and the quality of care, to examine the variation in the care delivered and in the outcomes of that care, and to use those data to launch new research. So EPINET is a continuously learning health system for early psychosis with coordinated specialty care at its heart. Here you see the coordinated specialty care, or CSC, model, which is familiar to many of you, and its primary treatment components delivered within a shared decision-making framework. The young client is always at the center. The CSC programs in EPINET represent a variety of CSC models that you may also be familiar with, including NAVIGATE, OnTrack, FIRST, EASA, and PIER, among others. Next slide, please. EPINET's methods and culture are based on the continuously learning health system model outlined here.
The gist of it is measurement-based care, a unified informatics approach to study variation in care and outcomes, rapid sharing of tools, data, and practices, and fostering a collaborative research culture. And I'll say more about each of these shortly. Next slide, please. So NIMH has funded eight EPINET hub-and-spoke regional networks. Some are state-specific, and others span multiple states. We have EPI-CAL in California, EPINET Texas, EPINET Minnesota, OnTrack New York, and LEAP, which is in Massachusetts. We have EPINET AC in six states: Indiana, Tennessee, Louisiana, Michigan, New York, and Illinois. We have Connection LHS in Maryland and Pennsylvania, and we have ESPRITO covering four states: Oklahoma, Michigan, South Carolina, and Florida. Next slide, please. So each of these hub-and-spoke networks will conduct at least one embedded research project that addresses a pressing need in early psychosis care. You see these research topics here: reducing the duration of untreated psychosis, preventing suicide, the effectiveness of telehealth-delivered CSC, enhancing treatment engagement, increasing medication adherence, reducing substance use, improving cognition and motivation, and determining optimal doses of the initial CSC based on patient subgroups or characteristics. Additionally, in line with the principles of a continuously learning health system, the data routinely collected in the course of care delivery and the findings on practice variation that emerge will serve as engines of scientific discovery for new research projects. Next slide, please. EPINET is, in essence, a network of networks. The eight hub-and-spoke networks are connected by the EPINET National Data Coordinating Center, led by Abram Rosenblatt at Westat.
The eight hubs will send data to the EPINET National Data Coordinating Center, or ENDCC, as we like to call it, and the ENDCC will aggregate the data and submit it to the NIMH data archive, where the data will be made publicly available. So the process itself is a bit more complicated than that, and Abram will be able to tell you more. Right now, EPINET consists of 105 CSC clinics in 16 states. Next slide, please. A major goal of EPINET is to capitalize on the large amounts of standardized patient-level data collected on patient characteristics, treatments, and outcomes. These large data sets will allow us to study personalized treatment, conduct randomized quality improvement projects, and rapidly pilot new approaches and evaluate rare events, such as suicide, with statistical power. Next slide, please. An important aspect of EPINET is creating a culture that values measurement-based care and communication among patients, clinicians, and stakeholder communities. Leadership and appropriately aligned incentives will be key to supporting that culture. Next slide, please. EPINET represents a national early psychosis learning healthcare partnership, and that partnership involves the larger early psychosis care community and any CSC program interested in measurement-based care. EPINET will make early psychosis care practice tools, data, learning, and best practices available to the public through the National Data Coordinating Center, and Abram and Howard will say more about that as well. So we are so pleased to have this chance to talk with you about EPINET. How can EPINET be of value to you? Please let us know. Next slide, please. Standardized assessment tools are essential to carrying out measurement-based care and fulfilling the goals of a continuously learning health system. 
So I'm happy to say EPINET has just recently established its core assessment battery, and this battery is now available to the public, and, you know who, Abram and Howard will also be able to tell you more about that. And Diana will share her thoughts on using measurement-based care and the core assessment battery in her North Carolina early psychosis treatment program network. So in closing, I hope everyone feels a personal sense of accomplishment at being part of this science-to-practice success story, providing the most effective early psychosis care to clients and families in your community and the best opportunity for recovery. And I hope you'll want to connect with EPINET and our extended learning community and check out the new core assessment battery that you'll hear more about next. Next slide, please. And I want to acknowledge the many people who have made EPINET possible: of course, the National Data Coordinating Center, the regional networks, my colleagues at the National Institute of Mental Health, in particular Dr. Robert Heinssen, who provided the vision for EPINET, and our colleagues at SAMHSA. Thank you, Susan. Susan, by the way, gets the award for doing the most of anyone. She dashed from the last session to our session, virtually. She didn't have to physically run, but I think mentally it required a lot of effort. So thank you for doing that, Susan. So I'm going to be talking with you about the role of the EPINET National Data Coordinating Center, and I'll take you through a bit about the National Data Coordinating Center, who we are, and what our core assessment battery is about, and Howard will be providing commentary here and there and at the end as well, as a critical member of our team. If we can get to the next slide, that would be great. So, as if you haven't already had enough maps, here's another one. I can't help but bring maps up, and this shows where the National Data Coordinating Center is.
We're in Rockville, Maryland, and you can see the sites all lit up here, and I'll get us off the map right away and on to our next slide. After the last week, I think we've all had enough of maps. Next slide, please. So the NDCC is led by Westat, which is a research company here in Rockville, Maryland. However, we're really a team effort. You can see that I'm the principal investigator, and Susan Azrin is a scientific collaborator and very much an integral part of everything we do. Howard is our steering committee co-chair, and so that's what he is doing here. And then we have a number of folks I'd like to especially mention. Preeti George is our project manager; she keeps everything moving and running and is a tremendous writer and contributor in her own right. And then we have a team of folks with expertise in clinical work, research, and, importantly, information technology, which we'll have a little bit more to say about in a bit. Next slide, please. We have partners. Most of you met this morning, virtually, or already know Steve Adelsheim from Stanford University. Ted Lutterman is from the NASMHPD Research Institute. As Susan was saying, all of our work here is really designed to be a learning healthcare system. It's important for us to work with all stakeholders, and, of course, that includes mental health program directors and administrators in the states where these programs are implemented, and Ted has his finger on the pulse of all of that. Next is Dave Shern, our fearless moderator. And Wilson Pace is from the DARTNet Institute. The DARTNet Institute is involved in data aggregation in healthcare with very large-scale data sets, and, as many of you know or will see, we're going to have a very large data set out of all of this, so we're making use of Wilson's extensive expertise and of the DARTNet Institute as well. So this is a partnership between Westat, our external partners and collaborators, and NIMH. Next slide, please.
So what are our aims for the Data Coordinating Center? This is an interesting effort with a number of different aims and a fair number of different components. We do planning and infrastructure development: we facilitate the work across the hubs to develop planning infrastructure so that the hubs can communicate and we can communicate with them. We facilitate processes to create measures and to do data harmonization, which is the current term for ways of integrating different kinds of data sources so that they can be looked at across the different hubs and clinics. We're building a data infrastructure, both to store data and to share data, which is really important. And there is Aim 4, the EpiNet Analyst Zone, which sounds a little bit like ESPN, which is how we kind of got there. The Analyst Zone will be designed so that researchers, scientists, practitioners, and others can get maximum value out of the data that's produced in the EpiNet effort. And then there's a public website, and we'll have a wide range of dissemination efforts. The public website is launched and available, and you'll see the link to it as we get to the next set of slides. We encourage all of you to come to the website and to make use of it. Next slide, please. So a big part of our work at the Data Coordinating Center is to aggregate data across the hubs. You can see we have eight hubs, and we have the Data Coordinating Center. The Data Coordinating Center will produce an aggregate database, and we will also submit data to the NIMH National Data Archive, which is a publicly available site, so the data archive will allow a wide range of researchers to make use of the EpiNet data. And with the EpiNet aggregate database we will be building tools not only for the EpiNet sites, but also for all of the external research community, practice community, and consumer community to make use of. Next slide, please.
So let me say a bit about the EpiNet Core Assessment Battery. Next slide, please. So the Core Assessment Battery, or CAB for short, serves as the basis for common data collection across the EpiNet clinics. It was designed as a resource that can be included in data collection efforts within the coordinated specialty care clinics, and the data can be aggregated in a database that will have significant statistical power. You'll see the details as we go through the next set of slides. Next, please. It was developed through a 12-month consensus process by the EpiNet Steering Committee, which was co-chaired by Howard and myself. We had our scientific collaborator, Susan Azrin, as a part of it, along with the PIs and co-PIs of the first set of five funded hubs. In addition, we had workgroups composed of about 20 early psychosis researchers and clinical experts who were nominated by the hub principal investigators, and they provided input on specific topics. So this was a consensus process across the hubs and with external consultations. Next slide, please. So we began by looking at the hubs' proposals to NIMH to see what they were planning on measuring. And you can see, with the different hubs across the top and the PhenX Toolkit on the end, that there are a number of potential measures that could be included, and in some cases there was commonality. For example, you see the CSI across all the sites, and the CSI is short for the Colorado Symptom Index. And then there were a number of other measures, and there wasn't always agreement, as you can see, across the different measures. So a big part of the process for all of us was coming to agreement about what the measures should be and about what should be measured. And we were successful in that process, which we're very proud of. Next slide, please. We came to agreement about the domains that needed to be in the core assessment battery. You can see them listed here. I won't read through all of them.
They are the ones you would typically expect, like symptoms and demographics and diagnosis and things like that, but also ones very specific to early intervention, such as duration of untreated psychosis, number five here, or DUP, and pathways to care. And then we tried to have a wide range of different variables that we could look at, in things like education and employment, all of which are generally linked to the kinds of services provided through coordinated specialty care. So these were the domains that we came up with. Next slide, please. And the core assessment battery that we ended up with is composed of a mix of standardized measures plus specific questions that we ask that aren't found in standardized measures. You see here the list of the measures, which is, I think, really helpful, both to give you an idea about what standardized measures we're using and also to make sense of the alphabet soup that you'll see appearing throughout this presentation periodically. We have two measures of cognition, and the sites are able to choose which one they're going to use. We have several measures of functioning, both role and social ratings and occupational scales, along with medication side effects and treatment adherence, recovery, shared decision-making, stress, trauma, adverse childhood experiences, and then, of course, symptoms based on different scales. Next slide, please. I already saw in the chat, because I've been looking as you type, that you wanted to know the URL for the National EpiNet website, and here it is: nationalepinet.org. Please go to it and have a look. We really want to collaborate and work with all of you. The core assessment battery in all its glory is on the EpiNet website. You can go in and take a look. You can download the full CAB and user guide. You can download individual items. You can download measures by domain; the measures are broken down by domain.
There are PDFs for you to look at, and you can also just download certain sections if you want to take a look. It's completely available to the public. It's for all of you to use as you choose, and, as you'll see as we get to the end of this, we welcome your feedback, your comments, and your thoughts. We're also interested in collaborating with many of you who might want to enter your CAB data, or parts of your CAB data if you decide to use it, into the EpiNet National Data Coordinating Center system. This is a process that is designed not just for the EpiNet sites, but for all of you and all of the community that's working in early psychosis, so please come to the National EpiNet site. We're trying to get it elevated, so when you do Google searches, it'll show up sooner and be easier to find. We also hope it'll be very, very helpful for all of you, and it'll be very helpful for us to learn from you what you think. Next slide, please. So part of the reason we're talking with you is that EpiNet is a learning healthcare system. You can see we have a number of hubs you've already heard about. Our goal is to disseminate data-driven knowledge to improve care to external stakeholders and partners, including researchers, practitioners, clients and families, and others. And this isn't a one-way street. We really want this to be a loop where we hear from clients, families, researchers, and practitioners, identify different kinds of problems and solutions and different kinds of needs for data, research, and analyses, and then feed those back in. So this is a continuous learning process, not just for the EpiNet sites and hubs and the Data Coordinating Center and NIMH, but, again, for the broader community. Next slide, please. So, if all goes well, we've got another four years on this round, and so this will continue to mature.
We're just beginning to get data collection going for EpiNet, and as this goes on, it will contribute to a national data set of unprecedented size. We expect several thousand observations, as you heard from Susan, which will give us the power to answer key and more specific research questions than you typically can in smaller studies. Again, I want to emphasize that the data will be accessible to EpiNet researchers and to researchers outside of EpiNet, and clinics outside of EpiNet can, again, contribute data to this database. So please, when you take a look at the CAB, when you think about that, let us know if you're interested. We'd like to talk with you. Next slide, please. Okay. So, before we get to the next part of the presentation, Howard, I don't know if you'd like to add a few words before we move on. Thanks, Abram. I really didn't have any special prepared remarks. I didn't develop any additional slides, because when we planned this, Abram was going to cover all the bases, and you just did that. But I did want to say a few words about the process, the pretty elaborate group process that involved, as Abram illustrated, not only the PIs, but the PIs going back and talking to people within their hubs to develop ideas and reactions to what the steering committee was doing about items and domains for the CAB. And I just wanted to reflect on what a collaborative and generative process it was. I wanted to thank all the collaborators in the process, both those who were immediately in the steering committee and those beyond the steering committee. And I appreciated and resonate with your invitation, Abram, to hear from people outside as they have experience with the core assessment battery. It was both generative and very pragmatic in its intent, but it was a committee process. And I was reminded of the old aphorism that defined a camel as a horse designed by a committee. I kind of think it's an unfair slight on the camel and on committees.
The camel, after all, is a beast that's pretty well adapted to its environment. And while it might lack the conventional beauty and flash of the horse, it really is suited to its purpose. And we like to think that the CAB is like that. And if I can wax even more romantic, there is something graceful about the silhouette of camels going across the desert in a caravan, maybe on the Silk Road. Stop me before I get carried away here. But I do think that the CAB is like that beast. I looked a little further into where the aphorism came from, and it's further instructive as a metaphor. It turns out that the statement that a camel is a horse designed by committee came from Sir Alec Issigonis, who was an automobile designer and was responsible for the design of the Mini Cooper, which most of you know is a small, efficient, economical automobile. But while it's iconic, it isn't really very well suited for a lot of the tasks that we ask of our automobiles. Some of those tasks are better served by an SUV, or, Abram would say as an aficionado of sports cars, a sports car. And I would just conclude by saying that we hope the field comes to appreciate the CAB, designed by a committee, but sees it like the practical camel that's best suited to the rugged terrain of the behavioral health services environment. Nothing more to add, and I'll just turn the mic over to Diana Perkins, who's going to talk about the implications and use of the CAB in a real-world setting. Diana? Yep. Well, thank you. Can you all hear me? I hope. All right. Well, hello to all. My hope today is to describe North Carolina's EPI-NC, or Early Psychosis Interventions in North Carolina, program. In this presentation, I want to emphasize the lessons we've learned from implementing our quality assurance, data collection, and reporting initiatives. And I also want to include how our program hopes to utilize and benefit from EpiNet's efforts as described by the previous presenters.
So if you can have the next slide. As a way of background, about 15 years ago at the University of North Carolina, we started Outreach and Support Intervention Services, or OASIS, following the coordinated specialty care service model that was developed in Australia by Pat McGorry and his colleagues. From the very beginning, we had an interest in collecting QA data to inform us about our program's effectiveness. However, looking back, it was clear that our data collection battery was overly ambitious, probably because we were coming at QA collection from the perspective of researchers rather than clinical program administrators. So it soon became clear to us that our goal with this QA database was to be relevant to clinicians, patients, and the state, and to minimize burden. If I can have the next slide. So the North Carolina Division of Mental Health, Developmental Disabilities, and Substance Abuse Services used the first-episode psychosis carve-out from the federal Community Mental Health Services Block Grant to fund three coordinated specialty care first-episode psychosis service programs: Encompass in Raleigh, North Carolina; SURE in Wilmington, North Carolina; and, more recently, EGLE in Charlotte, North Carolina. If I can have the next slide. So in addition, the North Carolina Division of Mental Health, Developmental Disabilities, and Substance Abuse Services, and I'll just call it the North Carolina Division from here on, funded Early Psychosis Interventions in North Carolina advisors, or EPI-NC. Our scope of work was fourfold. First, and our main task, was to help develop coordinated specialty care services in these locations and monitor fidelity to the coordinated specialty care model of care. Second, we were charged with developing, maintaining, and reporting QA measures to the sites and to the state division. Third was to enhance community clinicians' skills in recognizing early psychosis via seminars and workshops.
And finally, fourth, to develop innovative programs that might address unmet needs. And I'd like to say that tomorrow, at the 4 p.m. plenary session given by Mario Alvarez-Jimenez, he's going to discuss the Horyzons program, which is actually one of our innovative program initiatives, being led by David Penn, who will also be speaking about Horyzons USA, which we're implementing in our North Carolina program. Okay, if I can have the next slide. So, for the rest of the talk, I'm going to emphasize the first two aspects of our scope of work: the development and monitoring of early psychosis program fidelity, and how we develop, maintain, and report on our QA measures. So, go to the next slide. When we were starting, these were really early days; the RAISE data was just becoming available, and programs were beginning to learn from what happened with RAISE. So we did survey existing fidelity measures for first-episode psychosis programs, as well as for other disorders. But having learned lessons regarding improving the relevance of our QA database and minimizing burden, we set a goal of using the QA database itself to monitor program fidelity. So, here's one example regarding the requirement that programs use active engagement strategies. We set standards for what was acceptable, what was approaching the standard, and what was meeting the standard, and we provided indicators that we hoped were concrete and clear. But we also specified how we were going to measure these requirements, and we tried as much as possible to use our database to do this measurement. So, here's an example of measures from the QA database that we used to measure engagement strategies. We have a discharge form that includes a reason for discharge. We look at service delivery.
One of the measures that we use, which is also part of EpiNet, is the CollaboRATE, which looks at whether or not medical management uses a collaborative model. There are similar items on our program satisfaction report form, and we review these with the team leads annually, as well as with the therapists, and discuss what the data look like, what they're actually doing, and talk with the programs about what perhaps they would like to do better if, in fact, we identify problems with program engagement, for example. So, the next slide. I'm going to have a few slides now that overview the kind of data that we report. We actually give a report semiannually, sometimes quarterly, to the state and to the programs. And this is one of the items that we provide back to the programs: data about the census. How many patients are in the programs? What is the flow of folks in and out of the programs? How many referrals are reported? Evaluations, program admissions, and program discharges, and looking for trends. For example, if a site has high levels of discharge, that might just be chance, or it might reflect something about their program fidelity and their outreach and active engagement. Go to the next slide. The state needs to report certain data back to SAMHSA, and so we collect all of that data for the state, and that includes demographics, for example, the basics: ancestry, patient age, and patient sex on active clients during the particular month. Go to the next slide. We also report actual delivery of program services. So, we set some standards about the frequency of visits that we thought should be happening over the course of the initial part of the program and the later part of the program in terms of medical management, individual therapy, family therapy, peer support, and supported employment and education services.
This is actually where the electronic medical record has been helpful for a couple of sites, because these data can be generated automatically from the billable services documented in the electronic medical record. Next slide. A major goal of first-episode psychosis coordinated specialty care services is recovery, and we recognize that recovery may have a somewhat different meaning depending on whether you're the client, the clinician, a member of the community, or family. In fact, I was gratified to see that we have incorporated several of the instruments recommended by the EpiNet Core Assessment Battery into our assessments. For example, we try to capture client and patient perspectives with the Questionnaire about the Process of Recovery, as well as self-reported competitive work, school, and social contacts. Clinicians are concerned about symptoms, so we have a clinician-rated, abbreviated Brief Psychiatric Rating Scale and the self-report Colorado Symptom Index, both of which I believe are included in the core battery. We also collect data on safety monitoring, which is especially important for the medical providers. We also look at the community perspective: disability benefits, insurance coverage, hospitalizations, and law enforcement involvement. Go to the next slide. Here's an example of what I think is one of our key outcome variables: the proportion of clients who are employed or enrolled in school at baseline, then at follow-up within the first year of program participation, and then within the second year of program participation. You'll notice that the bright colors indicate whether a person is working half-time or more, working less than half-time, in school half-time or more, in school less than half-time, or doing both. The gray is the people who are not working or in school.
You'll see that as people came into our program, over half of the clients were not in school or working, and that the longer people were in the programs, the less likely they were to be not working or in school. And I want you to note here that we include the missing data in these reports. This is something we added relatively recently; we really just added it in the last year's report. You'll notice that the missing data for this particular variable is really low. That's not true for all of our variables, and that's something I want to discuss later. Go to the next slide. A second example is the proportion of persons receiving disability, because the state is, of course, very interested in that. One of our goals is to prevent disability, or to minimize it. Here you see that people generally did not come into our program on disability. Among people within the program in the first year, some were on disability and working, and some were on disability and not working, and this proportion remained pretty stable in the second year of the program. So there were some people who felt they were disabled and applied for disability services. And you'll note that this variable has a fairly low missing data rate. Go to the next slide. This is one example of how we report data to clinicians. This is the Brief Psychiatric Rating Scale; the medical providers are particularly interested in this, and we track these clinician-rated symptoms over time. You'll see that people actually come in doing pretty well. The scale is scored from 0 to 4 (or 1 to 5), and you'll see that the average is less than 1, so people come in with fairly low levels of symptoms. We do see symptomatic improvement over time, and then these scores stabilize. If you want to go to the next slide.
Okay, one of the key challenges we've faced, the thing that's been the hardest, has been the percentage of missing data. This is something we've been monitoring and following over the third and fourth quarters of 2020, and we're showing you the data that made us aware of the severity of this. Here you see missing data rates approaching 60 percent. We had been doing better, but the COVID situation was very disruptive to the clinical programs, and we'd actually expect to see that: for example, people moved from in-person, brick-and-mortar care to telehealth, and we had to figure out ways of collecting the data via telehealth, both for the self-report and for the clinician data. We were happy that the missing data rates have gone down as we've been working with the sites to adapt to telehealth. Next slide. So we are very interested in learning from and leveraging what EpiNet is doing. We plan to align the Epi-NC quality assurance database with the EpiNet core battery, and, as I just learned, we're excited about the potential to contribute to the EpiNet database. This will allow us to compare key outcomes from North Carolina programs with those from the EpiNet programs as a whole, learning what we're doing well and where we need to focus in terms of improving outcomes. We also would like to learn more about establishing learning healthcare networks and making them really work for clinicians and for patients. A third example would be to actually use the EpiNet database to identify and possibly address service gaps in our program. We have the bibliography as the next slide. And then, if you want to go to the following slide: it's important to acknowledge that the work we've been doing with Epi-NC is a team effort. These are the current team members.
And I mentioned David Penn, who is developing the Horyzons program and handles the psychotherapeutic services, training, and monitoring fidelity. Sylvia Sade is a crucial member, working with sites around family therapy services and how to administer these programs. Then there's the staff: Chad Jones, our program administrator; Jennifer Neary, who works with the innovative programs; and Jenna Barbee, who manages the database and is really a wonderful person and a key member of our team. The North Carolina Division of Mental Health, Developmental Disabilities and Substance Abuse Services has also been key, so thanks especially to Eric Harber. Nicole Cole has been our direct liaison, and Jimmy Trez, who has a special interest in supported employment and education, has been extremely helpful in improving our SEE services. Thanks, Deanna. A terrific set of presentations and a terrific vision. I think you represent the opportunity for collaboration and extension, and the power of hooking your very powerful measurement set and approach up with EpiNet. We have several questions and about 20 minutes to address them. Susan, I'm going to start with you. There were two questions about site selection. One wondered whether Alaska wasn't included because of cultural differences that might be represented there, and a second, from a colleague in Arkansas, asked whether any more research sites are going to be added. Why don't you talk a little bit about site selection and future plans? Sure, thank you for those questions. Just a little background on how these hubs came to be: at NIMH, we put out a funding opportunity announcement, folks put together applications and submitted them, and it was a competitive process. There were actually two rounds. Initially, five of these regional hub-and-spoke networks were funded.
And then we got money a little later, sent out another funding opportunity announcement, received a whole other round of applications, and funded three more. So what we funded came out of that pool of applications and a competitive review process. We don't mean to leave anyone out. However, we do intend for EpiNet to be a resource and a partner to the entire early psychosis care practice community. We're just getting started now, but what we're building toward is that ultimately any coordinated specialty care program that wants to do measurement-based care, using the core assessment battery or just some of its measures, can participate. We hope to be able to set up, through Westat and the National Data Coordinating Center, a process where CSC programs that are outside of EpiNet can submit their data and essentially benchmark: how are they doing against other, similar coordinated specialty care programs? That's probably more than the questioner asked for, but hopefully it at least gave some sense of how these EpiNet hubs came to be. Yeah, so it's a competitive process. Very, very competitive, I should say. A very competitive process, yes, as is the case with many NIMH grant funds. Some questions about the measures specifically. One, and I think this relates to Susan's comment too, is about wanting to build out the network with other clinics becoming involved, and of course Deanna's presentation underlined that as well. But a person responded, Abram, that he's a very busy clinical psychologist working in a community mental health center, and this is overwhelming to imagine. If you're going to use the CAB, do you have to use the entire thing, or can you select measures that you think are practical for your clinic? I saw that question come in, Dave, and it made me smile, because I very much understand that question from my prior clinical life.
And the quick answer is no, you certainly don't have to use the whole thing by any stretch of the imagination. It's really a resource, and you can use items, you can use instruments, you can use any part of it that you choose. Obviously, the EpiNet research sites are funded to do research, and we know there are a lot more limitations in sites that may not have that kind of funding. But we're perfectly happy to receive what we call Swiss-cheese data, data that has lots of holes in it, because using statistical analysis we can still make good sense of those kinds of data. So we really encourage people to make use of it in whatever way makes sense for them. If I could add: one of the challenges with our providers is that it's time consuming, and we used a strategy of actually integrating the assessments into the clinical note. So the BPRS is the physician's mental status exam. Right now we have to pull the data by hand, but we are hoping to work within the Epic system to have it pulled automatically soon. So that's one way, I think, that helps with a busy clinician. Yeah, I think it's important to emphasize the extent to which state and local mental health practices figured pretty prominently in EpiNet's thinking. The members of the steering committee were very much guided by what their state was doing and what would be practical, recognizing that the research needs might call for a more comprehensive set of measures than the state would want. But wherever it was possible, we were looking for harmony between the state and local data collection, and then harmonizing that across hubs. It was a very challenging process, but it was done with constant attention to what seemed practical, doable, and likely to be supported by the states, which are really essential to the delivery of these services: state resources supplemented by SAMHSA's block grant set-aside for first-episode care.
I don't think we can emphasize enough the importance of those contextual forces that helped to shape the core assessment battery. I think that's a really important observation, Howard, and it also speaks to the partnership in the National Data Coordinating Center with Westat, with many researchers on the research team who have a lot of experience working in state mental health systems, and with NASMHPD, the organization that I represent in this project, as a partner that has direct linkages to all the states. So I think it's architected just right. Someone noted that all of the suicide measures are clinician-administered, and they were wondering if there was any consideration of having any self-report items around suicide, which we know is very important and a real risk in first-episode psychosis. Actually, the Colorado scale, not the CollaboRATE, sorry, has a suicide question, and it's partly why it's really important that clinicians are able to see the scale and to review any items that have to do with patient safety. It's actually been pretty valuable for the clinicians; getting that self-report measure helps them do their assessments. So that's terrific, Deanna. I'm familiar with the Colorado Symptom Index and had forgotten about that. As may be clear by now, the CAB is sort of a camel, and it's also like the old spaghetti sauce commercial: it's in there somewhere, it's in there. It's an old joke from the Robert Wood Johnson days, and that goes back 30 years. There's a question, maybe for Deanna; I think it's a clinical question. Sites, the people that are using this data, need to think about that issue: if you're collecting something, you need to look at it.
It needs to be, because it can reveal things that a clinician really needs to act on, and that you may not have remembered to ask about. Suicide is one example, but there are several examples where it's important to try to integrate that into what you're doing. These instruments, I think, are very valuable in the day-to-day management of patients; this information can be very helpful. Yeah, we were just talking this morning about how we would go about creating practice-level dashboards for clinicians and what that might look like, which is a fairly daunting concept when you think about it, but along the lines of what you said, Deanna, trying to make it useful for clinicians. One of the things that became obvious in our conversations within the steering committee was that some measures that we call clinician measures, maybe most of them, are derived from what we learn from the service user, herself or himself. So we talked about various ways of collecting this information, for example using a kiosk where service users can enter their observations, which would then be connected to what might be reported by the clinician in what is traditionally a clinician-reported instrument. Some of the biggest conflicts within the steering committee had to do with whether we were burdening clinicians with filling out this information: how much of it, which originates with the service user, should just be recorded by the service user at the point of service? So there's a lot of complexity underneath the surface of the listing of the items and the components of the CAB. Lots of stuff to think about. From my thinking, a little bit of an unusual question, and maybe this one's back to Susan: maybe talk a little bit about the role of the NIH vis-a-vis the rest of government, because the question related to whether a change in administration would have any effect on the future or the functioning of EpiNet.
Well, I have to admit, my first thought is, I hope not. And I'm not seeing that. With government funding, it's typically a year-by-year basis if push comes to shove, but our intention, and really strong expectation, is that these will continue to be funded for the period of the grant. They're all grants; the hub-and-spoke regional networks are all grants, and the EpiNet National Data Coordinating Center is a type of grant called a cooperative agreement. That means the government is more involved than usual, and I'm sort of the embodiment of that involvement on this cooperative agreement. The first round of five hub-and-spoke networks and the Data Coordinating Center were funded for a five-year period, and then the second round, to have roughly the same timeframe, was funded for four years because it came later. So we don't anticipate any change to that at all, and of course it's our hope that this will be phenomenally successful in achieving the goals of EpiNet and that we'll be able to continue. The larger world that's swirling around us isn't a concern for me in that regard. Yeah, and it's been my sense, too, that the NIH has been relatively insulated and has maintained its status of scientific integrity over the course of the last several years, perhaps more so than some other parts of government. But that's a good answer. The thing is, you know, it's not an entitlement, but hopefully there will be a lot of excitement about this. I know how much work and thought went into developing that core assessment battery. Someone wants to know: are there plans to add anything else to it? Will it be augmented? It already has two humps; I think two humps is the limit. Yeah, and it will evolve a bit. I mean, we're calling this the CAB 1.0.
You know, it's inevitable that over time we'll learn that some things are working, and we'll find some things where we go, well, how did we ever not end up with that in it? So it will obviously change around the margins as we go along, and it's possible that there will be some shifts, though not beyond what's necessary. That's kind of the nature of research, too: as you learn things, there are better questions, and that might involve including some different measures. Someone wondered, Abram, if any of these measures could be administered remotely. And I guess I'm thinking about the way the fidelity assessment was done in the national evaluation, which was kind of a modified approach to collect fidelity information without doing site visits, doing it remotely with staff. Is there any thought about the degree to which these might be used, for example, in more rural settings, or things like that, by other participating entities? Well, just on the technology side, we are creating an electronic version of this so that it could be completed on a tablet, a phone, or a computer, whether by clinicians remotely or by respondents or clients. So we do plan to have a technology-enabled version that should help with those kinds of administrations. And could I add to that? Some hubs could decide that a centralized assessment approach is the way to go, that that's more efficient, and I think most, if not all, of the measures could be administered that way. Yeah, or in combination with telehealth. Obviously, as you heard from Deanna, and in general, all of the EpiNet sites have been needing to learn how to do telehealth over the course of the last six to eight months in a more intensive way than they imagined.
So it's certainly possible that you could do this with a combination of technology: a web-based system, a phone, people talking the way we are over Zoom or whatever the platform is, and video methods. I noticed that we have a question from Bob Heinssen, who works with Susan and who's been a terrific champion of first-episode programming generally and of EpiNet specifically. Bob's wondering if there are examples in EpiNet of community programs that have incorporated measures into routine practice, sort of as Deanna was describing they use the measures in North Carolina. Yeah, so I'm thinking Bob might have an inkling of the answer here, but I would love folks to know about this too. I believe the majority of the 105 clinics are community-based early psychosis treatment programs; they're not based in academic settings, they're based in the community, which of course has a huge advantage in terms of generalizability to all the other coordinated specialty care programs in the community. And while they are getting additional funds through EpiNet to do that, there is still an obvious real-world element there in being able to carry out these assessments, like the burden on clinicians and patients in the real-world setting as part of routine data collection. So we feel that the large number of community-based treatment programs is a real strength of EpiNet. And then, in terms of precedents for EpiNet community-based treatment programs using measurement-based care: obviously there's Deanna Perkins, who is our living example, who told us all about doing it in North Carolina. And some of our EpiNet hubs, before they became EpiNet hubs, were already doing this, like OnTrackNY; they have an extremely strong state system and state support.
And that's highly advantageous in terms of the support for the measurement-based care, but it also poses some challenges for EpiNet, in that there are certain things they're required to measure and certain assessments they need to do a certain way. But kudos to OnTrackNY for having such a strong data collection system already in place, for already doing measurement-based care, and for having created that culture. And EPI-CAL, led by Tara Niendam in California, is much more decentralized than New York, but they also have a strong culture of doing measurement-based care, or a developing culture, I should say. Again, the decentralization there makes it super challenging, because each coordinated specialty care program does things a little differently. But we're trying to move, and I've heard everyone moving, in the direction of the core assessment battery: here's a limited number of measures we can all agree on. And of course, that will allow us to fulfill the scientific discovery piece of EpiNet. Thank you all for a great presentation. I think it's generated a lot of enthusiasm for this project and for its potential to grow.
Video Summary
The video featured a panel discussion about the EpiNet project, which focuses on early intervention and treatment for psychosis. The panel included Drs. Susan Azarud, Abram Rosenblatt, Howard Goldman, and Deanna Perkins. Dr. Azarud provided an overview of the EpiNet project and its goal of accelerating advances in early psychosis care. She emphasized the importance of measurement-based care and the use of standardized assessment tools. Dr. Rosenblatt discussed the role of the EpiNet National Data Coordinating Center, which oversees data collection and analysis. He explained that the center is responsible for aggregating data from the various EpiNet hubs and ensuring its accessibility to researchers and practitioners. Dr. Perkins shared her experience with the early psychosis intervention program in North Carolina and highlighted the importance of collecting quality assurance data to inform program effectiveness. She also discussed the potential for collaboration between EpiNet and the North Carolina program. Overall, the panelists emphasized the significance of the EpiNet project in improving early psychosis care and its potential to contribute to scientific discovery and advancements in the field.
Keywords
EpiNet project
early intervention
psychosis
panel discussion
measurement-based care
standardized assessment tools
data collection
researchers
practitioners
North Carolina program
scientific discovery
Funding for SMI Adviser was made possible by Grant No. SM080818 from SAMHSA of the U.S. Department of Health and Human Services (HHS). The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, SAMHSA/HHS or the U.S. Government.