The Benefits and Opportunities for Clinics Outside ...
Presentation and Q&A
Video Transcription
Hi, good morning, everybody, or good afternoon. I wanted to welcome you to today's webinar on the benefits and opportunities for clinics outside of the Early Psychosis Intervention Network, EPINET, to become partners. I am Judith Doberman, the Program Manager for PEPNET at Stanford University School of Medicine. With us today is Dr. Kate Hardy, who is a Clinical Psychologist and a Clinical Associate Professor in Psychiatry and Behavioral Sciences in the Stanford School of Medicine, and also joining us is Dr. Stephen Adelsheim, who is a Clinical Professor in the Stanford Department of Psychiatry and Behavioral Sciences, the Associate Chair for Community Partnerships, and the Director of the Stanford Center for Youth Mental Health and Well-Being. Both Dr. Hardy and Dr. Adelsheim will be co-facilitating today, helping our presenters field your questions. Today's webinar is brought to you as a partnership between PEPNET and SMI Advisor, which is a SAMHSA-funded initiative implemented by the American Psychiatric Association. We will be offering CEUs for physicians and psychologists for the live presentation today, and we will share with you at the end of the webinar how to claim CEU credits. A couple of logistics items. You will find, in a floating panel, a chat feature. If you go to the Zoom chat, you'll see it says "to everyone." If you would like to post comments and questions to everyone in the webinar, please select "everyone." Sometimes people select "panelists" and only we see the questions; that's fine too, your choice, but if you would like to introduce yourself as well, please select "everyone." And now I'm going to turn the webinar over to Dr. Kate Hardy, who's going to introduce today's presenters.

Thank you, Judith, and hello to everyone. Thank you for joining us today. I am going to give an abridged version of the bios that were submitted because we have such an excellent panel.
If I read out all the accomplishments, we would be here for a very, very long time, so apologies for making this slightly shorter. But on our panel today, we have Dr. Abram Rosenblatt. He is vice president at Westat, where he's a sector lead for child welfare, justice, and behavioral health within the behavioral health and health policy practice. Dr. Rosenblatt is currently principal investigator of the NIMH-funded Early Psychosis Intervention Network Data Coordinating Center. We also have Dr. Howard H. Goldman, who is professor of psychiatry at the University of Maryland School of Medicine. He is a mental health policy researcher who has been evaluating demonstration programs for nearly 40 years. Recently, he has focused on demonstrations and early intervention in mental disorders to prevent disability. We also have Dr. Monica Calkins, who is an associate professor of psychology in the Department of Psychiatry at the University of Pennsylvania. She is currently co-director of the Pennsylvania Early Intervention Center that oversees education, training, resource development, and evaluation of 14 coordinated specialty care clinics in Pennsylvania. And back to the correct slide, Dr. Tara Niendam, who is an associate professor in psychiatry at the University of California, Davis. She is the executive director of the UC Davis Early Psychosis Programs, both the EDAPT and SacEDAPT clinics, and has developed four early psychosis programs in Northern California based on the coordinated specialty care model of early psychosis. Thank you to all of our panelists, and I will hand it over now.

Thank you, Kate. Hi, everyone. This is Abram Rosenblatt, and I'm very glad to be able to join you all today for this webinar. We have some learning objectives for all of you, which is to go over how we're going to do this. And what we're going to do today is just a little bit different, hopefully.
We're going to have a more standard sort of slide presentation for the first part of the webinar, and I'll be doing that to give some background on EpiNet so you can get a sense of how we developed our core assessment battery and how we're using that for research in early psychosis. We'll also want to demonstrate how we use the CAB among the hubs and the National Data Coordinating Center, and how clinics can contribute CAB data even if they're not EpiNet participants. So the first part will be a slide presentation, and then Howard and Monica and Tara will be doing an interactive discussion later on, in about half an hour or so, and that'll hopefully be very engaging, and we're looking forward to that. So I'll be taking you through some background to get us started. If we could have the next slide, please. So many of you know this, but just to really set the stage, we're going to be talking today about early psychosis, particularly first episode psychosis. This is a condition where a person loses contact with reality and may experience a number of really troubling symptoms, including paranoid delusions or hallucinations. Typically, psychosis begins in the mid-teens to mid-20s. About 100,000 adolescents and young adults experience a first episode every year, and early treatment increases the chance of successful recovery; this whole initiative is about early treatment to enhance chances of recovery. Next, please. Coordinated Specialty Care is currently the best-documented effective team-based intervention for first episode psychosis. It combines a number of really well-established services, including assertive case management, psychotherapy at the group or individual level, support in employment and education, family education and support, and pharmacotherapy, and these are all closely coordinated with health care.
There are often other components to this, including things like occupational therapy, so it's a coordinated, effective, team-based intervention with quite a bit of background in the research area. Next, please. So with regard to the growth of Coordinated Specialty Care, something really remarkable, something really dramatic, has happened since early 2000, when there were no Coordinated Specialty Care clinics in the country, and you can see here there's nothing on this map. Next. In 2008, there were Coordinated Specialty Care programs in two states, Oregon and California, and then something happened here. Next, please. In 2014, SAMHSA funded the Mental Health Block Grant Set Aside, which set aside funds for early psychosis treatment, and you can see what happened here as far as the expansion of states with early psychosis intervention plans and the number of clinics; the states went from a fairly small number and then in 2014 grew to almost all the states having an early psychosis intervention plan. Next slide, please. The same thing happened with the actual programs. In 2020, there were 300-plus clinics. They were in all 50 states and four U.S. territories, and you can see that there's just been a tremendous explosion of Coordinated Specialty Care and early psychosis programs across the country more recently. Next slide, please. So, on to EpiNet, the Early Psychosis Intervention Network. This was established through the National Institute of Mental Health a year and a half ago, in 2019, just before all of this COVID hit. EpiNet links Coordinated Specialty Care clinics through standard measures and participant-level data collection.
We encourage you all to look up and go to the National EpiNet website, where you can find a great deal of detail about EpiNet and a lot of background, including our core assessment battery and some resources. This website is under continual development, so check back often, because we'll be adding things as we move along. Next. EpiNet includes the Data Coordinating Center, which I'm proud and pleased to represent, eight hubs, and 101 Coordinated Specialty Care clinics across 17 states. So this is a major effort. It goes coast to coast with representation across the country. You can see the list of the hubs here along with their locations on this map. Next, please. The idea behind EpiNet is that of a national healthcare system for early psychosis, and this is really meant to be a collaborative, interactive type of process. It's not a straight science-to-dissemination-type model. Rather, we have the EpiNet hubs, the clinics located throughout the EpiNet hubs, and the National Data Coordinating Center. Data comes from the EpiNet hubs to the National Data Coordinating Center, and that creates a national data set across all the hubs. The idea here is to collaborate with clients, families, researchers, practitioners, and other key partners, potentially policymakers, for example, in the process of scientific discovery, where data is shared back and input is given back from clients, families, researchers, and practitioners, and so there's a back-and-forth communication pattern to disseminate data-driven knowledge to improve care. This is really an interactive process. It can involve, as we'll see as we move along, not just the EpiNet hubs but others as well. Through this process, we try to identify problems and solutions in an interactive manner. This is a back-and-forth kind of way of developing a knowledge base and improving care. Next, please.
So, through collaboration, we're going to establish standardized measures of various clinical characteristics, interventions, and early psychosis outcomes. You'll see an example of that in the core assessment battery. We use an informatics approach to look at variations in treatment quality, clinical impact, and value. We're going to be building in a number of mechanisms for rapid data sharing and learning, and we're in the process of doing that; this webinar is one of the early steps of trying to engage the broader treatment community. And we want to cultivate a culture of collaborative research and participation in academic and community early psychosis clinics. So, again, this is meant to be a broad-based effort that involves practitioners and policymakers and clients and their families across the nation, and it's really a culture of collaboration that we're trying to build. So, again, that's part of why we are here. Next. So, I want to say a few words about the core assessment battery, which is one of the very first products out of the EpiNet process. It consists of standardized measures of clinical characteristics, interventions, and early psychosis outcomes, as well as a number of key questions. Again, you can download an EpiNet flyer or request the flyer by emailing us at ENDCC@westat.com. You can go to the website, as I mentioned, and see the core assessment battery. Next. This is really the common data collection across all the EpiNet clinics. It was designed as a resource that could be included in data collection efforts within community-based CSC clinics. We're going to consolidate it in a central database. This will give us, in the end, a relatively large database that will give us power to answer research questions and, very importantly, answer questions across various subgroups and more focused types of questions where you really need thousands of cases.
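The pooling described here is the core of the design: subgroup questions that are underpowered at any single clinic become answerable once records are consolidated centrally. As a rough illustration only (not EPINET's actual pipeline, and with invented field names), consolidation and subgroup counting might look like:

```python
from collections import Counter

# Invented example records; the real CAB captures far more fields per client.
clinic_a = [{"age_group": "16-20"}, {"age_group": "21-25"}]
clinic_b = [{"age_group": "16-20"}, {"age_group": "26-30"}]

def consolidate(*clinic_datasets):
    """Pool per-clinic record lists into one combined dataset."""
    pooled = []
    for records in clinic_datasets:
        pooled.extend(records)
    return pooled

national = consolidate(clinic_a, clinic_b)
# Subgroup sizes across the pooled data: with thousands of real cases,
# these counts are what make focused subgroup analyses feasible.
subgroup_sizes = Counter(r["age_group"] for r in national)
```

With two toy clinics the counts are trivial, but the same pooling across 101 clinics yields subgroup sample sizes no single site could reach.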
So, we expect to have thousands of participants in this data collection process where we'll be able to ask relatively sophisticated questions and learn a great deal about treatment and coordinated specialty care. Next, please. So, again, we're capitalizing on what is euphemistically and accurately called big data. Again, there'll be personalized treatment. At the hubs, there's a number of quality improvement projects going on in the hubs. There's the rapid piloting or fielding of new approaches as data develops and also the capacity to evaluate relatively rare events because we have, again, large samples. So, this will be an aggregate of a number of different studies, far more than three, across all of the hubs that are either underway or will develop over time. So, this is a really exciting opportunity to learn a great deal in community-based coordinated specialty care clinics. Next, please. So, currently, this is how we're set up. There's 101 clinics around the country, as I mentioned. We have the hubs, which are named here, and the data coordinating center. The data comes from the hubs to the data coordinating center. It will then be sent to the National Data Archive at NIMH. The National Data Archive will get the consolidated data and the hub data, and that'll be released to the public for use about one year or so after EpiNet ends, or at least this round of EpiNet funding ends, which is around 2025. And in that way, other researchers and researchers who are not part of the data coordination center part of EpiNet currently or part of the EpiNet hubs will be able to access the data set and will be able to ask and analyze their own questions, again, with the idea of sharing. We're also going to create, in partnership with the hubs through the National Data Coordinating Center, what we call the Virtual EpiNet Research Dissemination Infrastructure, or VERDI for short. Everything has to have a catchy name, so we have one. 
And this will be a series of ways of disseminating the results from EpiNet and visualizing what the results may look like, in a way that lets hubs and others out there query the database in various ways and get data results back. So this will be a way of sharing data back, not just to the EpiNet hubs and sites, but also to the broader community, to all of you, potentially, if you're interested. Next, please. So a few words about the core assessment battery. We began this work in 2020, and we had a 12-month consensus process. We used the PhenX Early Psychosis Clinical Services Toolkit to help us come up with some of the measures and some of the items. And we had five work groups that helped us around particular topics; about 20 or so early psychosis researchers, clinical researchers, and clinical experts all provided input. You can see here the participants from the National Data Coordinating Center, our scientific collaborators at NIMH, including Susan Azrin, who's our wonderful colleague at NIMH, and then, of course, our tremendous colleagues at the hubs. And then there were many people who also worked within the hubs. This group that you see here constitutes our steering committee. You'll be hearing later from some of the members, Howard Goldman, who's the co-chair of the steering committee, and, of course, Tara, and also you'll be hearing from some of our newer members, Monica, who is from one of the newly funded hubs. So the original five funded hubs participated in this process to come up with the core assessment battery. Next, please. We, of course, won't go into all the detail about what the core assessment battery looks like. The last thing you want to do in a webinar is go through 30 or 40 pages of individual items. But there are domains, listed here. I won't read through all of them, but these are the standard kinds of domains we came up with that we wanted to collect data on for EPINET.
They range from things like cognition and substance use, suicidality and symptoms, medications, legal involvement, employment, and education. Some of these domains have standard measures; some of them don't, for example, demographics and background. And for those, we drafted items to collect those data across all of the hubs. So these are the domains that the hubs will be collecting commonly across all of the clinics that are part of EPINET. Next. I mentioned that there are standardized measures in the CAB. They're listed here. Again, we have standardized measures for cognition, for functioning, for medication side effects and treatment adherence, for recovery, for shared decision making, for stress, trauma, and adverse childhood events, and for symptoms. The measures are listed here, and they, or references to them, are available at the National EPINET website. Next, please. The CAB is available to anyone, again at the National EPINET website. You can download the full CAB. There's also a user's guide, which you can download, and you can download individual items and measures by domain. So just a few words to help you work through that process. Next. You can see here, this is what it looks like when you go to NationalEPINET.org. If you're interested in specific measures or items, you can scroll down the page and pick by domain. There are baseline and follow-up versions for some measures, because they differ depending on the timeline, but there are some cases where there's no difference between baseline and follow-up, for example, someone's demographics. In that case, there's only one version, and we don't have separate baseline and follow-up forms. We currently have Spanish versions of the client self-report measures and items. We don't have other languages currently, but over the summer, we'll be working to translate this into some other languages as well.
So you can go ahead and pull this up at any point, and we'd encourage you to have a look. Next, please. Again, here's the user's guide. It provides some background around how to administer and score each of the CAB measures and items. Next. So currently, only the 101 EPINET clinics associated with the hub are contributing data to our consolidated data set. So this is a closed loop of the EPINET clinics as it was originally envisioned. However, next. You can see again, the original clinics. Next. However, over the summer, we're interested in bringing in non-EPINET clinics to contribute client data to EPINET. We think this would be a way of really broadening the scope and reach of EPINET, and we'd like to invite clinics who are not currently part of EPINET to contribute data. Now, we're primarily focused, of course, on the EPINET clinics and hubs because that is what we were funded to do, and that's the primary focus of EPINET. However, we would like to incorporate some non-EPINET clinics, and because we're trying to do this in the most efficient way, we do have some guidelines around participation just to keep this efficient both for the, hopefully, for the clinics who are interested and also for us at the Data Coordinating Center. Next. So for that, we would like, if you're outside of EPINET and you're interested in doing this, in order to do that, we'd like you to participate in an orientation meeting with us so we can help work through how to do this, be helpful for you and helpful for us. There's some background information that we'd like to have that would help us interpret and understand the data, and then for each client, we'd like to have the demographic and background items completed so that we could break information down by different types of clients and different characteristics of clients. 
And then to make it worth all of our while, we'd like at least two standardized measures that are in the core assessment battery to be used by those clinics that would like to participate who are not in EPINET. We don't mean for this to be at all prohibitive. Again, we're really encouraging participation. We just want to try to maximize the value of the data we collect and maximize the value, hopefully, for the coordinated specialty care clinics that are interested in participating with us, so that we actually have something useful to share back. Next, please. So again, over the summer, check back at our website. We're not quite ready to jump into this, but we will be over the summer. Or you can, at any time, email us at ENDCC@westat.com, and we'll put you on our list. Clinics could include newer or existing clients in the database. The measures could be administered in several ways. We're going to create a web-based core assessment battery for this process so that we can get data in a very consistent fashion, and you will have to use our web-based data collection system to be included in the database. Again, this is not meant to be prohibitive, but just to make this more feasible and reasonable for us. You can imagine, if we got data in a wide variety of formats from a lot of different clinics, this would be very difficult. So the web-based CAB would be set up to allow you to do that. You could do it in a number of different ways: you could do paper-and-pencil versions and then have staff enter them into the database, or staff could read the questions and complete the battery as you go. So we just ask that, when it's ready, you use the EpiNet core assessment battery web-based version to, again, make this feasible for us in handling the volume that we hope to have from non-EpiNet clinics. Next. So there are benefits for participating.
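The participation guidelines just described (completed demographic and background items for each client, plus at least two standardized CAB measures) amount to a simple completeness check on each submitted record. A minimal sketch, not an official EPINET tool, using hypothetical field and measure names rather than the actual CAB item names:

```python
# Hypothetical required demographic fields; the actual CAB
# demographic/background items differ.
REQUIRED_DEMOGRAPHICS = {"age", "sex", "race_ethnicity"}

def meets_guidelines(record, min_measures=2):
    """Check one client record against the stated participation guidelines:
    complete demographics plus at least two standardized CAB measures."""
    demographics_ok = REQUIRED_DEMOGRAPHICS <= set(record.get("demographics", {}))
    n_measures = len(record.get("cab_measures", {}))
    return demographics_ok and n_measures >= min_measures

ok_record = {
    "demographics": {"age": 19, "sex": "F", "race_ethnicity": "Hispanic"},
    "cab_measures": {"measure_1": 24, "measure_2": 11},  # placeholder names
}
incomplete_record = {"demographics": {"age": 19}, "cab_measures": {}}
```

The web-based CAB would presumably enforce checks like this at entry time, which is exactly why a single consistent collection system is being required.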
Obviously, while I'm sure many of you would be interested in participating just out of the value of it, we also know you would need to get something back. We will consolidate the data with the national EpiNet database of 101 clinics who are part of the EpiNet hubs. If you contribute data, you'll have access to training regarding best practices for administering the CAB, training on how to use and interpret the scores on the measures, and a secure portal where you can download your own data and use it for client monitoring and quality assurance. And over time, as we get that VERDI system I mentioned on board, we will have a dashboard where you can compare your data to regional and national data being collected by the EpiNet clinics, and you'll also have access to tools that you can use to generate infographic- and reporting-type information, so that you can look at your clinics against the national data, with various graphical and visual representations of data that will hopefully be engaging and useful. Next. So for the next part of our presentation, we're going to have an expert panel, and they are indeed experts, discussing adopting the CAB as part of clinical practice. And I'm going to kick it over to Howard, who will be asking the questions. And we will also have at the end of this a chance for question and answer, and I see some of you are already beginning to send those in, so we'll be sure to get to those as we move along. So Howard, take it away.

Thanks very much. Sorry about that. Thanks for the background, Abram, and for throwing it over to me. You've presented us with the background; we want to get below the surface. So we have Tara and Monica, our experts here today, to get below the surface and talk about their own motivations for participating in EpiNet and to reflect on what others might think about as they consider participating.
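The dashboard benefit Abram described a moment ago (comparing your clinic's data to regional and national data) reduces to simple benchmark arithmetic once the data are pooled. An illustrative sketch with made-up scores, not VERDI itself:

```python
def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

# Made-up scores on a single CAB measure, purely for illustration.
clinic_scores = [18, 22, 25]                 # one clinic's clients
national_scores = [18, 22, 25, 30, 20, 26]   # pooled across many clinics

# A dashboard would surface the gap between local and national means.
delta = mean(clinic_scores) - mean(national_scores)
```

A negative delta here would flag that this clinic's clients score below the pooled national average on that measure, which is the kind of comparison a participating clinic could not make from its own data alone.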
Before we go into that part of the conversation, I did want to ask Tara if she could reflect a little bit on the relationship of the CAB to other standardized batteries like PhenX, and also just think about what the process was like and what some of the considerations were, before we go into some of the issues related to the decision to participate.

Sure. First, I want to say thanks so much for inviting me to participate in this. It's really an exciting project overall, and I think it's a great opportunity for other sites to participate as well. Some of you who are joining us today may be familiar with the PhenX Toolkit; some may not. This is a collection of different, widely used, usually freely available assessments that can be used to cover different domains. There are sections for suicide or for alcohol use, and there's also a section for early psychosis. And many experts in the field contributed to the selection and curation of this toolkit. So when we developed our battery, it seemed like a very natural thing to look toward the PhenX Toolkit and say, hey, what can we use here? We were already familiar with many of the measures there, and that was what we used as the basis for where we would start the CAB. I think what changed from there came out of the discussion with the other sites and the PIs, as well as gathering stakeholder feedback. In California, we did a very large stakeholder feedback process across all of our sites, including leadership from the state and the county, clinic staff, as well as clients and families. And we did it in both Spanish and English. And that really helped us to see some of the areas that were missing from the CAB. And I think that was where the collaboration with the other EPINET PIs was really central to finding ways to address measures that didn't really capture what clients felt was their experience, or where we were missing whole domains.
And so I feel like the PhenX Toolkit, as Abram's slide showed, was a great foundation. But we really did try to build on that using our experience as providers, as well as stakeholder feedback.

Yeah, it was a really generative process. We'd have these iterative meetings. We'd review the PhenX measures. We'd think about other measures. But then you'd bring us back to the reality in the field and what stakeholders were saying. It was quite an impressive project. And the CAB is an ongoing project. I mean, we've set it in place for the moment, but we're already reconsidering measures and getting input from other sources. So I wonder if we could switch over to Monica and get a little bit of your own thinking as you joined with the group at the University of Maryland and thought about bringing your programs into a hub and then bidding on becoming a part of EPINET: how you thought about the CAB and all the other things that you were considering.

Yes, thank you. Hi, everyone, and I'm very glad to be here today. So I'm from Pennsylvania, the University of Pennsylvania. Early on, we had nine programs that were funded for FEP, and now we've grown to 14. And our OMHSAS, the Pennsylvania OMHSAS, was very interested in program evaluation early on. So back in 2017, we were actually funded to develop a core assessment battery and to implement it, and we started to roll that out in January 2017. So we had about three years under our belt with that process. Only two of our programs are university-based, and the rest are community-based clinics. And so we were spending a lot of time working with our community partners on implementation, trying to make the battery we were developing user-friendly, clinically useful, and not burdensome, but still valid.
So we were delighted to partner with the University of Maryland for EpiNet in order to expand our efforts and to align with what was occurring at the national level. Fortunately, some of the measures we had selected from the PhenX Toolkit were already aligned, but there were many others that we needed to harmonize. And so we did that together with the University of Maryland and just launched our version of the CAB, which is the CAB as you saw it, but also including some additional things that are of interest to our programs and to our participants. As Tara said, that was very important to make sure of.

Monica, how stressful was it for the sites? You've got them on board doing some collection already, and now they're having to look at a new set of measures. They're close, but they're not exact. What was that process like for the sites and for you as the leader in Pennsylvania?

Yeah, so it's an ongoing conversation. I mean, I think our sites are really committed to this effort, so people are chipping away. As we were adding things, we tried to also remove some things and reduce some measures we were collecting that were not necessary for the CAB. But it's definitely a process. And we're trying to have regular meetings with all of our sites to address questions and concerns, again, to make this as easy as possible for people to do and as useful as possible. I think that's one of the things our sites are really interested in: being able to use the data in a variety of ways to support their care. And so we're working with them on that.

Right. Now, Pennsylvania and Maryland both have pretty strong central mental health authorities at the state level. I'm interested in contrasting that with California, Tara, and how it was with a group of county programs that do CSC somewhat differently. They have different rules about data collection.
Can you share something about that? Because some of the people listening may be from states that don't have quite as strong a top-down organization as Pennsylvania and Maryland do. Yeah. I do think that was one of our biggest challenges. And we also, so we had community sites, but we also had university sites as well. And so just very diverse populations all up and down the state. And so I think that's, again, why we started with stakeholder engagement is we really wanted to go out to the sites, show them the options, have them tell us what their priorities were as providers, as clients, as family members. And so I saw a question from Vanessa in the chat, which I think is really a core question, why would a clinic decide to collect this data? And for us in California, I feel like we kind of came to an agreement. It was surprising how much agreement there was across sites about what was highly valued. And we really pitched it as, what do you want to know? What do you want to know when you're working with your clients? Clients, what do you want to know? Families, what do you want to know? So that we created something that matters to them. And I think that's really well reflected in the CAB. I'll give you an example. For example, our clients and family members were really concerned about side effects. And now while we've been looking at adherence, we hadn't really been looking as consistently at side effects from the client self-report perspective. And that's in the CAB. So I think to Vanessa's question, being able to use this assessment battery, this is again, something where a lot of input and feedback and thought has gone into what's there right now. And I'm sure if you looked at it, you could see components that would be of value to a variety of stakeholders in your community and to your clinic. 
So I think for us in California, it was really just trying to create that shared sense of we're doing something that everyone values and prioritizes, and that helped us bring together the clinics. But I think it also helped to motivate the staff to do the work because they're the ones doing the work to collect the data from the clients and the families and then input data themselves. Well, that was certainly the impression from the interactions we had on the steering committee as you come back and report to us. Monica, did you have stuff you wanted to reflect on there related to this topic? Yeah. So I think all the points that Tara raised in terms of informing clinical care for participants directly, tracking changes in symptoms and function and engagement in care, I think those are all things that locally our sites can use the data for from the CAB. But also I just wanted to mention that some of our programs have used data summaries in their outreach efforts, so they can kind of show community and referral sources how the program is working, looking overall at their program outcomes, which is individualized by the program. I think it can not only bring awareness to the important work, but also generate referrals more broadly. It can fight stigma, so showing the community out there that people with psychosis can and do get better. I think those are all important ways that our sites have used data summary reports in their outreach efforts. Also, our directors have used the summary data to bring to their organizational leaders in order to leverage for resources in discussions about how well the program's working. That's another way that I think they benefit. 
And for people who don't necessarily have state-level funding, I think it's an opportunity to develop potential conversations about funding at a broader level when there's some data that you can show as sort of pilot data that is important to, again, garner more resources for the important FEP work that we're all doing. My impression is that while we emphasize consolidating the data across sites, what you're reflecting is that sites have an opportunity to compare themselves with the rest of their state, and now with other states' approaches to first episode work and coordinated specialty care. You're both nodding. Is there something you want to elaborate on about that, Tara? I mean, I think that those are important pieces. I think certainly being able to show the impact of the work that you're doing, that was a big motivator for our sites to join the project for EPI-CAL. Again, we are not a state that has a top-down approach to early psychosis care, and so I think many of our sites were motivated to show the impact of the work that they were doing to advocate for that level of support. But I also don't want to lose the piece, too, of using this data with clients. Like Monica, we've been doing data collection on client-centered and family-centered measures for 10 years now. Really, the work has been, how do we use that data to inform care for this client, for this family, today? Being able to show a client's improvement or continued struggle in a particular area has been very helpful for my clinicians in making sure that they're considering all aspects of the client's life and all aspects of their needs, because sometimes we can get really focused on symptoms or one particular piece of a client's recovery goal, like substance use, and forget that they're not going to school or working, or they don't have friends, or they don't really engage with their family, or all of these other components that are really important. 
I think for us, having a standard database for each client has helped us stay grounded in the variety of outcomes we're trying to move. For many clinicians, the idea of using data in care feels really strange, but once you start to do it, and you do it with the client and the family, you can see how it really helps to inform the conversation in new ways. It can help the client refine their goals or realize they weren't thinking about something. So I don't want to frame this only at a super high level; I think it impacts all of the levels of care and outcomes that we see for our clients. Sorry about that, a call was coming in for me while it was playing out. Monica, could you reflect on the challenge of interacting with sites about the research goals? You've emphasized the role of this data collection as a clinical administrator. Tara's brought us back to see it as a part of measurement-based care, but what about research? Were there challenges in dealing with the sites about that? No, I think all of our sites have an appreciation for research and what it can tell us. It's the bigger picture of what we do. I think that sometimes there can be a reaction for some people looking at a measure that you've never used before, or even an interface that you've never used before. We use REDCap to collect data. Here, there would be a different type of interface than people may be used to. Especially if you have long-established routines within the clinic, and especially if you have a long list of measures that you've never done before, I think it can seem potentially intimidating or just uncomfortable. I think that's probably a place where we really try to listen to our sites and their reactions about that. I think, like Tara said, emphasizing all the ways it's useful takes away from any fear or potential apprehension related to the research goals of it. Do we just have to accept that the research goals are something that we have to put up with? 
I've worked with Tara in sites in California, and I feel like the sites understand that participating in research has benefits. I think of people who volunteered for the vaccine trials related to coronavirus and preventing COVID-19. What's that experience been like, Tara? I think there is a tension between community and research, where oftentimes, I think it's been more like this with research here and community down here. I think the challenge for us as researchers is to try to do this and maybe find some balance where we're working collaboratively. I think that means we really need to listen. I think we can have a hundred lofty goals about what we want to explore, but I think for me, the power has really been in listening to the questions and the concerns that are coming from the people who are doing the work and how do we help them do their work better? How do we help them get more clients back to school or back to work? How do we show the impact of peers? We have amazing peers in our program, and they just are like, we just want to be seen and valued. I think that's where for us, they love the research when it matters to them, when it speaks to things that are important to them. I think our job as researchers is to partner better. It's great that we're going to have this huge data set, as Abram pointed out, to be able to capture low incidence issues, morbidity, mortality, horrible outcomes that we're really working hard to prevent, that you don't get huge numbers in a single site or sometimes even a single state. We're going to be able to do bigger analyses of these important questions, which is amazing and so important. I think one of the beauties of this opportunity is that we can do many things. We can answer many questions. I see these new sites coming on as just increasing our community in which we can gather more questions and provide more answers and collaborate better across the U.S. in this amazing work that we're doing. Great. Thanks, Tara. 
Do you have anything you wanted to add on that score, Monica? No. I mean, I agree. It's a balance. I think it's really important, as Tara said, to keep our eye on the ball, and especially keeping clinical…it's not just about clinical relevance, but we are talking about people who are working hard, and their primary focus is the care of patients. Really keeping that in mind is important. Great. Thanks. Is there anything we haven't thought about to make it more attractive for a non-EPINET site to contribute data? I know when we prepared for this, we talked about ways to accommodate sites and make the data flow easier. Do you have any further thoughts you want to add? One thing we talked about was, you know, people could potentially just do a few measures based on what they were most interested in, what might be most informative or have the most clinical utility for them, or for whatever reason. I think sometimes it can be easier to start small, get staff used to it, you know, make it a little bit more comfortable. Our state funders have been very interested in particular areas like occupational functioning, like school functioning, and so those are areas that, you know, again, if there are conversations about further sustainability of FEP programs where there's a particular interest, maybe that would be a measure from the CAB to pick up that would be useful for that kind of purpose. Tara, what are some of the ways that a site could incorporate these CAB measures into their usual workflow? You have the experience of working with, as you've said before, a diverse set of coordinated specialty care programs. You want to help us think that through? Yeah, I mean, I think Monica has a good point, you know, to sit there and think about what it is that you want, what's important, and I would encourage sites to do that both at their provider level. 
What are some of the places where they feel like they're not getting adequate data, or that they're particularly interested in and want to do better? And then ask your clients and families what matters to them, you know, hold a little Zoom community event and have people come and tell you those things. I think it'll really help you to see the places where your consumers are enjoying the things they're getting from the program, and maybe some areas where you could be investing more time and resources. I think, you know, as a clinic director, when I approach my staff with these new ideas, it's like, how could we do this? What's the best way? As opposed to coming in and being like, I want you to do it this way. And so sometimes we've integrated things, you know, on a clipboard when the client comes in, and we try to catch them, you know, within a six-week period, and so there are two forms they fill out, like the Colorado Symptom Inventory, a self-report checklist that can be captured, you know, in the waiting room. Other things, though, can really create a very important conversation with the client. You may want to go through a side effect questionnaire with a client, and that can be done by a peer, by a case manager, by a clinician. It does not have to be done by the physician. The information should be shared with the physician, but they don't have to collect the data. So I really like to say, how could we do this? Let's be creative. Let's think outside the box, and then that helps you to inform your implementation process, as opposed to being prescriptive. That would be my suggestion, and start small. Again, as Monica was saying, start with the place where you want to do better, and collect that data. Yeah, I'm sorry. Yeah, I was just going to invite you to go on, but I stumbled into you. Sorry. Darn Zoom. 
Okay, so, yes, I think, as far as incorporation into the workflow, that's a super important place where we have had lots and lots of conversations with our sites. You know, all of our sites are set up a little bit differently in terms of who does what and who's on staff. I mean, there are some core components, but there are a lot of different structures, so I think that making a plan has been one of those things, a proactive plan, in terms of distributing the burden and minimizing the burden. So, you know, we have a form that we do, the admission form, that includes questions from the CAB, and we kind of split it up among a couple different people depending on who it makes the most sense to complete. For the self-report measures, you make a plan for distributing them across time a little bit. You want them within a window, but not all at once necessarily, so there are ways to distribute the workload associated with any one of the measures in a sensible way, and for each site, figuring out in advance how that's going to go, I think, makes it easier for people to think about and to work through. Are there sites in Pennsylvania that are seeking to become a part of your hub, and what are the constraints on adding sites that you're experiencing? Yeah, so actually every site that's funded by the SAMHSA grant is part of our hub. That's how our Pennsylvania OMHSAS is structured, so when they apply for the funding, they are automatically enrolled in our hub and participating in program evaluation. They do have a call out for funding of up to maybe five more sites, and we don't play any role in the selection of that, but we would be onboarding any new sites who came on in that. 
That includes not just the program evaluation, but fidelity evaluation, training, and all of that. Great. What about in California, Tara? Oh, yeah. Well, we've got counties who are really excited about the opportunity and are joining our network. And so we're very open to having sites join our network. We believe in strength in numbers and really value the diversity. In California, we've got a super diverse group of folks who are participating, and we want that to be emphasized, and we hope that we can capture the diverse needs of different communities in our hub and spoke. So we're looking forward to anyone in California, and we might even consider outside California if people are interested, who is willing to join the hub. Thanks. It's worth saying that not all of the hubs are entirely state-based. Some that are state-based are not comprehensive, some like Pennsylvania are state-based and comprehensive, but others are multi-state, which poses a different kind of challenge for standardization. You know, in Abram's presentation and much of the public conversation about the core assessment battery, the emphasis has been on standardization. And I think that's natural. It's natural for the research purposes for the items to be standardized, to have as much reliability and validity as possible. But I think it's worth saying that it isn't so rigid that the data collection approaches and procedures have to be identical, and not everyone is collecting every item in exactly the same way. I don't know, Tara, if you want to share a little bit more about that rough and tumble. So yeah, in my network, you know, the stakeholders really wanted to focus on client and family voice. So the majority of our implementation is self-report from client and family. 
We have a smaller clinician-rated CAB that is kind of more basic stuff that the clinical team would have the most information on, although the way we've designed it, it can be entered by anyone on the clinical team from the chart. So, as Monica said, we're really just trying to be respectful of the diversity of our programs and how they have broken up duties in a way that works best for them. You know, we've tried to find ways that we can integrate with that. So yeah, our approach is to create something that clients and families can come in and go through in a way that feels comfortable to them. We've designed a tablet application that's supporting that approach with client and family feedback. So I think California is unique in that way. Some other hubs are really doing a much more clinician-centered data collection approach because that's what has worked well in their system historically. And so I think we've done a nice job of trying to design the question to allow for flexibility in data collection while also ensuring that you're getting quality data. You've done nothing if not insist on some flexibility in this whole part. So that's great. Monica, you want to add something, and then we're going to expand things to get questions from somebody other than from me. Yeah, just following up on the point that Tara made, especially for the clinical measures that are in the CAB, and this was something even we were thinking about, again, to decrease burden on our clinicians for any clinician-rated measures. Most of the measures are those that, as Tara said, can be completed based on information that's already gathered in a typical intake process. And they are relatively standardized. So if you have a clinician who is used to asking about symptoms, there may be a few additional questions they would need to ask in order to make a particular rating on the COMPASS-10, for example, or on the role and social function scales. 
But for the most part, people are having these conversations anyway. So it's really just gathering the information and then putting it on the scale, as opposed to asking a lot of questions out of nowhere that you wouldn't ordinarily be asking anyway. Great, thanks a lot. Thanks to both of you for engaging with me in this back and forth, but it's not over yet. I think Steve and Kate have been gathering questions. It's been kind of amusing to sit here and try to focus entirely on questions back and forth with you, watching the chat function go. I know that lots of people have been engaged in this. So let's hear more from them, and then we'll get your responses and add Abram to the mix, because some of the questions may really come to him. I'm even willing to take the question, although I suspect that the others will have much better answers than I. Steve, Kate, you want to take over from here, at least for time? Yeah, absolutely. And it is indeed a task in attention shifting, being able to present and look at the chat at the same time. So great job there. So I've got a question here from Kristen Woodbury about what protections are in place for sites with low numbers, in which a string of demographic information could be identifiable. I suspect that's mine to answer, huh? I didn't see anyone jumping up to answer that one. It's a great question. We actually have to worry about this even with some of the EPINET sites, because some of the clinics are small. So there are a couple of things here, and I don't want to go too deeply into it. One is you can provide data in a way that meets safe harbor for HIPAA purposes, and then we don't have to worry too much about that issue. And so we can give some guidance on that. And the other is we can give some guidance from expert determination around whether the cell sizes get too small, so that we could identify someone. 
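As a concrete illustration of the cell-size concern being discussed here, the sketch below shows a minimal Python check that flags combinations of demographic fields occurring fewer than a threshold number of times. The field names and the threshold are purely hypothetical; this is not EPINET's actual expert-determination procedure, just the general idea behind reviewing cell sizes before sharing data.

```python
from collections import Counter

def flag_small_cells(records, quasi_identifiers, k=11):
    """Flag combinations of quasi-identifying fields (e.g. age band,
    gender, site) that occur fewer than k times in the dataset, since a
    string of demographics in a small cell could identify an individual.
    `records` is a list of dicts; `quasi_identifiers` names the fields."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return {cell: n for cell, n in counts.items() if n < k}

# Hypothetical mini-dataset to show the check in action.
records = [
    {"age_band": "18-24", "gender": "F", "site": "A"},
    {"age_band": "18-24", "gender": "F", "site": "A"},
    {"age_band": "25-34", "gender": "M", "site": "A"},
]
small = flag_small_cells(records, ["age_band", "gender"], k=3)
# Cells below k would be suppressed or coarsened before sharing.
```

In practice the flagged cells would be collapsed into broader categories (or withheld) until every remaining cell clears the threshold chosen during expert determination.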
So we'll be able to provide, depending on your clinic, some direct guidance here through expert determination, which we can conduct and give you a sense of what the sample size would need to be to avoid problems like you asked about. Or if that doesn't work, we can also go through and give you a way of giving us data that meets the safe harbor requirements under HIPAA. Thank you. Sorry, go ahead, Tara. No, we've thought a lot about this because we have some very small clinics in California. And so I do appreciate Abram's perspective. And so we're making sure that we review the data and the cell size to ensure that we can't identify somebody. But I think this brings up a question, because some of you may be starting clinics, or your clinics may be small. And so you might be like, ooh, can we participate even though we're small? And the answer is yes. Again, we have small clinics in the network already. So just because you're small, you're small and mighty, and that's important too. And we want you in the data. So don't worry about that piece. And if you have questions about what's self-report or any of those logistics, all of this information is on the ENDCC website, which maybe we could put in the chat. Again, it was on the slides, but Judith, maybe we could throw that in the chat. So if folks have questions about whether something can be self-report or something needs to be clinician administered, that's also available in the CAB manual. Thank you. And Kristen goes on from there to also ask: I'm also interested in whether non-EPINET programs, if they are contributing data, would have any early opportunity to propose a research question for the full dataset on which local research could take the lead. Any consideration of using an open science approach to questions being asked with the data by existing EPINET sites? I could start and see what you guys think. I think yes, I think yes. 
You know, we do want this to be an interactive approach, as I showed in that one slide where we get things from the community and go back and forth. Sorry, I've got ambulances going outside my window. Sorry about that. So the answer is yeah, we really do. We really are interested in interaction and in addressing questions. So there are a couple of ways this could happen, just pragmatically. We do have a website. We do have an email address for the ENDCC. Many of you know people in the hubs. You can also, I'm sure, check with the hubs and ask if that's something that may work in terms of asking a question of the research. Many of you may have ideas, I'm sure many of you do, that aren't obvious to the EPINET hubs or part of EPINET research, that may be able to happen that way. And finally, also in terms of the database itself, like I mentioned, there will be a public use database at the National Data Archive where researchers and others will be able to get access to the data and actually conduct analyses on their own. So I think in the spirit of EPINET, we do want to do that. Now, of course, there's no way we can answer a thousand questions all at once, so there'll have to be some deciding about what's possible. But in terms of idea generation, I don't see why that wouldn't be a great idea. I don't know whether Howard, Tara, Monica, you have other thoughts about that from the hub or the steering committee perspective. And this question here is all about whether anyone has explored recording a conversation and having AI place the answers automatically in the forms. Is that something that's being discussed or is available? I can take this one. So voice is considered PHI. And so I think to have that sort of technology set up within a clinical setting, you would have to work very closely with your technology security center. I think that if you wanted to have a clinician dictate something, you could try to do it that way. 
But I think you would want to be really careful about PHI in putting data in that way, if it were to be recorded. I think there are a lot of ways, again, in which you can do this with self-report and REDCap; there are a number of different methods that can do that. Perhaps this is an issue that the ENDCC needs to think about and talk about, because, again, having staff available to take a paper form and input that data, that's a lot of time and effort, and it can be quite burdensome. Yeah, it strikes me that that's something that might be a study that develops out of the deliberations of the steering committee, that one or more hubs might want to explore getting funding to do work in informatics and AI as it might be applied to assessment. But it's not something that we've considered as a steering committee yet; that seems like a new frontier. Thank you. And can someone speak to the different platforms that are now available for collecting EPINET data? This individual has been asked to use one platform, but it would be helpful to know if there's a vision for how different local and national platforms will interface. This one's beyond me. Are you able to get into that, Abram? Kind of, but then I'll ask the hubs to maybe chime in. Kind of, only in the sense that there are just so many different data collection methods and systems out there. I can say from our perspective at the ENDCC with the hubs, we're sort of like, if you've seen one data platform, you've seen one data platform. Even REDCap, which you've heard about, which is a common way of doing these kinds of things and is being used across a couple of the EPINET hubs, has to be customized for our purposes and data sharing. So even there, you generalize some learning, but not everything. So there are just a number of different ways of doing this, and each hub is actually doing it differently. So we accommodate that at the data coordinating center, but the hubs do have different platforms. 
I'm not terribly familiar with that. I don't know, Tara, Monica, if you want to talk about the way you guys are doing this. Monica, you want to go first? Sure, thanks. Yeah, so we do REDCap. We have a Pennsylvania-based REDCap and a Maryland-based REDCap, and our Pennsylvania-based REDCap will feed into the Maryland REDCap. And our sites have set it up so that there's flexibility. If people do want to do it on paper and then enter it later, that's okay. But as Tara mentioned, that can be burdensome as well. So some sites prefer to do direct data entry. For REDCap, one of the nice things is that you can send self-report links remotely, so people can do the self-report measures remotely. You know, there are some benefits to that. But I know that other systems might work. Tara, you guys do not use REDCap. You use something else? Yeah, thank you. So we developed an iPad and web application to allow in-clinic data collection from clients and families on an iPad. And then because of COVID, we had to switch course a little bit, and we developed this website so that you can push individual survey links to specific clients and family members. And then the data is pulled into a clinician-facing dashboard that was co-designed with our providers, clients, and families to visualize data immediately. And then, knowing that we were going to have to share data with the ENDCC, we've worked on the database on the backend so that we're able to pull all the data out quickly and submit it to the ENDCC portal before uploading to the National Data Archive. So yeah, we just have so many different sites on so many different systems that there was no one application that was going to really pull the data from the medical record or be fully integrated with the various medical records. 
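To make the REDCap-based workflow described here a little more concrete, the sketch below builds the payload for REDCap's standard record-export API call, restricted to named instruments. The endpoint URL, token, and form names are placeholders, and this is not the configuration any EPINET hub actually uses; it only illustrates the general shape of pulling CAB data out of a REDCap project programmatically.

```python
import json
import urllib.parse
import urllib.request

REDCAP_URL = "https://redcap.example.org/api/"  # placeholder endpoint
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"        # never hard-code in real use

def build_record_export(token, forms, fmt="json"):
    """Payload for REDCap's 'Export Records' API call, restricted to the
    named instruments (e.g. hypothetical CAB self-report forms)."""
    payload = {"token": token, "content": "record",
               "format": fmt, "type": "flat"}
    for i, form in enumerate(forms):
        payload[f"forms[{i}]"] = form  # REDCap's array-style form parameter
    return payload

def export_records(forms):
    """POST the export request and parse the JSON response."""
    data = urllib.parse.urlencode(build_record_export(API_TOKEN, forms)).encode()
    req = urllib.request.Request(REDCAP_URL, data=data)  # POST because data is set
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A nightly script along these lines could pull the self-report instruments, run the site's own de-identification checks, and stage the result for submission to a coordinating center.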
So we went ahead and kind of struck out on our own, but I think what we've created is really awesome. All right, there's a question here from Steve that speaks to what you're saying, Tara, on this sort of tension times between the community and academic piece. And he would like the established expertise on the panel to answer, how do we help our universities understand the importance of the community voice and move to more support for community-based participatory research models? Discuss. Monica, you want me to take this one? Yes, I do. Yes, I do. I mean, okay, so I'm a reformed neuroimaging researcher, and no, actually, I still do all sorts of, you know, translational science work. I think I'm gonna be very frank. I have peers and a family advocate in my clinic and they have the answers. They see the things that we miss as clinicians. And so I think my clinical experience, supervising and partnering with them has been transformative in how I see science. And so I think, you know, NAMI's kind of statement of nothing about us without us is really true. And I think this is a call to scientists everywhere, you know, that we need to partner with the folks who are living, you know, with these challenges and their loved ones and their community to understand the issues and how best to address them. I feel like, again, our top-down model, this is our view, you know, this isn't our view. And so we really need to balance that out. And I think, you know, Steve, I think your question's a good one. I don't know if it is stigma. I don't know if it is hierarchy or a paternalistic view that we know better than, or we know the right way. And I think, you know, many of us are being challenged to change our views and our approaches in many areas of our life. And I think as scientists, this is our calling, is that we really need to find ways to integrate the voice of those we serve, because that's what we do. We serve that community. 
And without their voice, we're only seeing half the picture, if that, and answering maybe a quarter of the questions. So I'm with you on this soapbox, Steve. I think their voice is really important. This has been the challenge of services and policy research from the very beginning. And I just would identify some of the people that have been involved in this very field of first episode psychosis who've begun to help bridge that gap. I mean, Tara rose to leadership in a project whose nominal PI was Cameron Carter, a neuroimaging clinical researcher. And he's been providing care in the community since, you know, the nineties, so he's in the clinic; he believes in bringing those things together. People like John Kane, Nina Schooler, and Delbert Robinson started out as clinical researchers with a very biological science orientation, but have moved their work into the services research enterprise and are confronting these tensions on a daily basis. I mean, after all, this EPINET project is sponsored by the National Institute of Mental Health. It's one of their premier projects. So there is hope for understanding and rising to the occasion that is embedded in Steve's question. So, you know, as the old guy on this, I'm very hopeful, having watched this change over decades, but it's a tension, because the justification for pooling these data is to analyze them as though they all came from similar programs with the same intervention and the same criteria for assessment, with a high degree of reliability that is the underpinning of validity. Well, there may have to be some compromising in order to also collect data from the field. So it's a great, great, interesting question. And it's a challenge that we've been facing, but on balance, I'm very hopeful, particularly as it's represented in this particular project. Yeah, I would just add that I think structures to make sure that voices are heard are important. 
So, like we have with Pennsylvania and now with Maryland, you know, we've embedded within our steering committee multiple stakeholder voices: families, patients, providers. So I would just say there really has to be an intentionality about it, and I would encourage that. Yeah, I was just thinking, you know, at Maryland, the head of our Maryland Early Intervention Program is Bob Buchanan, who's the director of the Maryland Psychiatric Research Center, a biological researcher fundamentally, but he's first and foremost a clinician who cares about the outcomes for people with schizophrenia spectrum disorders and early psychosis. All right, we have three minutes left, and I just want to make sure we get to two more questions, and Judith also needs to inform people about CEUs. So there was a question here from Eduardo about suicidality and four questions. Have you considered the Columbia scale? And I think, Tara, you're going to take this one. Yeah, we did. And actually New York's hub is focusing on suicide assessment and risk prevention. As Abram mentioned, each of our hubs has a small side project that we're doing, and the New York one is focusing on suicide. So I think this is a great question. And the way we've chosen to structure the CAB is that there are some screener items in the CAB that you can follow up with more detailed assessment. So in the Colorado Symptom Inventory, there's one question about suicide, and there are depression questions as well that are really helpful. And so I would encourage folks who want to make sure they're adequately addressing suicide to use those questions as flags and then to follow up with a standardized measure like the C-SSRS. I think that would be a really excellent approach. That's what we're doing in my clinics. I think similarly, if you look back at the CAB, you can see that we included the ACEs, and the ACEs isn't a trauma measure, right? 
It's a measure of adverse childhood experiences, but you can do that questionnaire quickly, get really important information about your client's experiences, and then follow up with the CATS or the PCL to get those trauma symptoms as well. So we've sort of designed things so that you can have a few screener items and then follow up in more detail with a larger measure to make sure you're fully assessing that domain. Final question, I think we've just got time to squeeze it in. How does a state become a partner with EPINET? This is from Ruth Condrey. Send us a note, please. Or, you know, we're very involved in work with NASMHPD, so if your commissioner wants to work that way through it, that might be great too.
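The screener-then-follow-up design described in the discussion (a brief flag item in the battery triggering a fuller standardized measure) can be sketched as a simple gating rule. The item names, cutoffs, and follow-up measures below are purely illustrative and are not the CAB's actual items or scoring.

```python
# Hypothetical screener gating: a positive flag on a brief screener item
# triggers a fuller standardized follow-up measure for that domain.
FOLLOW_UPS = {
    "suicide_item": "CSSRS",  # e.g. follow up with a full suicide risk scale
    "aces_total": "PCL-5",    # e.g. follow up on trauma symptoms
}
THRESHOLDS = {"suicide_item": 1, "aces_total": 4}  # illustrative cutoffs

def measures_to_administer(screener_scores):
    """Return the follow-up measures indicated by the screener responses,
    in the order the screener items were answered."""
    return [FOLLOW_UPS[item]
            for item, score in screener_scores.items()
            if item in THRESHOLDS and score >= THRESHOLDS[item]]

indicated = measures_to_administer({"suicide_item": 1, "aces_total": 2})
# -> ["CSSRS"]: the suicide flag is positive, the ACEs score is below cutoff
```

The point of the design is exactly this shape: a few cheap items administered to everyone, with the longer instruments reserved for the clients whose answers flag a domain.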
Video Summary
The video content discusses the benefits and opportunities for clinics to join the Early Psychosis Intervention Network (EPINET). It features panelists Dr. Kate Hardy and Dr. Stephen Adelsheim, who explain that EPINET aims to improve early psychosis treatment in the US through data collection and collaboration. EPINET consists of eight hubs and 101 Coordinated Specialty Care clinics across 17 states. They introduce the core assessment battery (CAB), a set of standardized measures to assess clinical characteristics, interventions, and outcomes in early psychosis that can be downloaded from the EPINET website. The benefits of participating in EPINET include access to training, data analysis, and the ability to compare data to regional and national levels.

The discussion emphasizes the importance of stakeholder feedback in developing the CAB and ensuring its relevance. They also discuss the process of data collection, including self-report and clinician-rated measures, and the challenges and benefits of implementing these methods in clinical settings.

The speakers highlight the need for collaboration between researchers, clinicians, and the community to prioritize client-centered care and advocate for resources. They mention different data collection platforms, the importance of data privacy, and the potential for community involvement in analyzing the data.

Overall, the video emphasizes the importance of data collection, collaboration, and community involvement in advancing research and care for individuals with first-episode psychosis. It highlights the benefits of participating in EPINET, including access to training and data tools to improve clinical care and program evaluation efforts.
Keywords
Early Psychosis Intervention Network
EPINET
clinics
benefits
opportunities
data collection
collaboration
core assessment battery
CAB
training
data analysis
Funding for SMI Adviser was made possible by Grant No. SM080818 from SAMHSA of the U.S. Department of Health and Human Services (HHS). The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, SAMHSA/HHS or the U.S. Government.