Advances in Smartphone and Digital Health for Schizophrenia and Bipolar Disorder Research
Presentation And Q&A
Video Transcription
Hello and welcome. I'm Dr. John Torous, the Director of Digital Psychiatry at Beth Israel Deaconess Medical Center and technology expert for SMI Advisor. I'm pleased that you're joining us for today's SMI Advisor webinar, Advances in Smartphone and Digital Health for Schizophrenia and Bipolar Disorder Research. Next slide, please. SMI Advisor, also known as the Clinical Support System for Serious Mental Illness, is an APA and SAMHSA initiative devoted to helping clinicians implement evidence-based care for those living with serious mental illness. Working with experts from across the SMI clinician community, our interdisciplinary effort has been designed to help you get the answers you need to care for your patients. Next slide, please. Today's webinar has been designated for one AMA PRA Category 1 credit for physicians, one continuing education credit for psychologists, and one continuing education credit for social workers. Credit for participating today will be available until September 13th of this year, 2021. Next slide, please. Slides from the presentation are available in the handouts area found in the lower portion of your control panel. You can select the link to download the PDF right away. Next slide, please. For question and answer, please feel free to submit your questions throughout the presentation by typing them directly into the question area, also found in the lower portion of your control panel. We'll reserve about 10 to 15 minutes at the end of the presentation for your questions and, of course, answers. And next slide. So it's my great pleasure to introduce today's speaker, Dr. Raeanne Moore. Dr. Moore is an Associate Professor of Psychiatry at UCSD and co-director of the Cognitive Dynamics Lab, also at UCSD. Her research focuses on innovative mobile technologies to improve assessments of daily cognitive and emotional functioning among older adults with chronic medical problems like HIV and also serious mental illness. Her current work utilizes ecological momentary assessment, which is EMA or smartphone surveys, passive sensing, and wearable technology as low-cost, efficient, objective measures of mental and cognitive health. Dr. Moore is involved in interdisciplinary collaborations with investigators from global public health, engineering, wireless technologies, computer science, and medicine to further develop innovative technology-based real-time assessment techniques that we can use for research and clinical care. So Dr. Moore, thank you so much for leading today's webinar, and I'm going to hand it over to you. All right. Thank you so much, Dr. Torous, and thank you for inviting me to present today. I'm excited to share some data that we have been collecting over the last several years. Just a couple of disclosures: I am the co-founder of KeyWise AI, and I am a consultant for NeuroUX. The learning objectives that we're hoping to accomplish today are these: I'll be describing strategies for bypassing self-report bias among people living with serious mental illness when they are reporting their symptoms and their daily functioning. I'm also going to be identifying ecological momentary assessment, also known as EMA, methods for at-home self-assessment of symptoms and functioning for people living with serious mental illness, and then wrap up by summarizing advantages of mobile cognitive testing, as well as advantages of getting performance judgments from participants living with serious mental illness.
So when we think about the biggest problem from a public health perspective in people with schizophrenia and bipolar disorder, it's really not the positive symptoms of the illness, nor is it suicide, but it is this functional disability. So across the board in people with schizophrenia and bipolar disorder, we see social disabilities, vocational disabilities with high rates of unemployment, and many people aren't able to reside independently. And there's been work over the years trying to determine the predictors of everyday functioning outcomes in people with serious mental illness, and a lot of work has been done looking at the impact of cognition, including social and neurocognition, on functional outcomes, as well as functional capacity, which is the capacity to perform everyday or daily activities. Functional capacity differs from what people actually do in their everyday life; it's their ability to actually engage in these behaviors and do them correctly. People have also looked at the role of negative symptoms, as well as depression, in predicting functional outcomes, but there's still quite a bit of unexplained variance, and so what our lab has been doing is starting to explore other predictors that may be more proximal predictors of impaired functional outcomes, including defeatist beliefs and impaired self-assessment, which is what I'm going to be spending time talking about today. So when we look at rates of real-world functional milestones in people with schizophrenia and bipolar disorder, here you can see the blue bar is people with schizophrenia and the orange bar is people with bipolar disorder, and people with bipolar disorder are hitting their functional milestones at a rate of about 40 percent, so that means 60 percent of people with bipolar disorder aren't employed or residing independently or engaging in marriage or marriage-like activities. Those rates are significantly lower in people with schizophrenia, with functional milestones only being met in about 20 to 30 percent of people. And when we look at the cognitive performance of first-episode patients compared to normative standards, you'll see the zero here. These are standardized z-scores on our y-axis, and on our x-axis we have different domains of cognitive ability, so a z-score of zero is a normative standard of average cognitive abilities. Even in first-episode patients, we're seeing, across the board in people with schizophrenia, schizoaffective disorder, psychotic depression, and psychotic bipolar disorder, impairments in all domains of cognitive abilities, and those are especially prominent in the domains of memory and everyday functioning, as well as attention and processing speed. Here, we're looking at using a measure of functional capacity, the UCSD Performance-Based Skills Assessment-Brief, which is a standard measure that's used in clinical trials as an outcome measure for functional capacity, to see if it can serve as a predictor of residential status, and there is good data showing that this test, the UPSA-B, can predict whether someone is head of household or a community resident with 70 to 84 percent predictive ability. It does a fairly good job of predicting if someone's in supported living as well, but a little less well. And then, thinking about the predictors of daily functioning in people with serious mental illness, there have been several predictive models, and this is a paper that was published by Chris Bowie and colleagues in 2008.
I was a graduate student at the time, and this paper actually is a paper that motivated my dissertation work and my continuation of work along these lines, because I just found it fascinating. You can see he looked at cognitive abilities, as well as symptoms, positive and negative symptoms, and functional and social capacity in the prediction of daily activities, and found that in both predicting community activities and work skills, these capacity measures mediated the relationships between cognition and symptoms. Other work has found the same thing in different samples, and that functional capacity is mediating the relationship between cognition and functional outcomes, and over here on the right, we have a paper by Michael Green and colleagues. It was published in 2012, where they looked at the relationships between visual perception, social cognition, some defeatist beliefs and motivation, predicting functional outcomes, and really found a role for defeatist beliefs and negative symptoms mediating the other relationships with functional outcomes. The work to date has shown us that cognitive effects on outcomes are consistently mediated in all studies. There's not a direct relationship between cognitive impairment and impaired functional outcomes, and we see this disconnect with our patients all the time. We know negative symptoms have some impact on outcomes, and beliefs may be important as well. The work that we've been doing, we've been using ecological momentary cognitive testing via smartphones as a way to try to improve our assessment of functioning and reporting of symptoms in people with serious mental illness. Historically, and mostly still today, although this pandemic has been causing a little bit shift in how research is conducted, but for the most part, clinical and psychological research takes place in hospitals and university settings, where someone comes in, and in a sterile environment, is performing tests of neurocognition or functional capacity under supposedly optimal conditions. There's several limitations of traditional cognitive assessments. One is they're in an artificial environment. It's very removed from reality, just being one-on-one with an examiner and testing your cognitive capacity, versus how our cognition functions in everyday life. When your kid is crying in the background, or the TV is on, or you're on a bus, or you're at the grocery store trying to make decisions, it is very removed from cognitive ability and performance in daily life. There's also other things that can impact performance of traditional cognitive tests, such as daily stressors. Is the person tired? A lot of patients of mine come in for a neuropsych exam, and they start out really strong, but they're long batteries. By the end, they're like, we're just so tired and cognitively fatigued. We're not trying our hardest anymore, giving best effort. Mood could be impacting it. Maybe they didn't have their morning cup of coffee. There's all of these kinds of barriers to traditional cognitive assessments. I really like this figure here on the right, showing that, for example, this little black dot could be someone came in for a cognitive assessment in the lab or the clinic, and they were having a bit of an off day. Their cognitive ability was a bit low. Then they went through a clinical trial or some kind of treatment, and then a year later, come back for a follow-up visit. They're having a good day, and they perform much better. 
It looks like there was this change and improvement in their cognitive abilities. If we were to do mobile cognitive testing using burst methodology, where people complete tests frequently and repeatedly on their smartphones or another mobile device in their daily environment, you can see we get lots of variability in the cognitive test scores. If we average those scores, maybe their actual cognitive function is up here at this red dot. Then a year later, when they do another burst, it's actually here. This may be a better indicator of true change. Really, there are a lot of advantages to mobile cognitive testing. I just think this is a really exciting place for those of us who are neuropsychologists to move into. In terms of increasing assessment frequency, I really advocate for giving just short tests that can be repeated over time. We typically keep our testing sessions to about five minutes, so they're doing two or three brief tests on the phone, and we make them gamified and fun. We can test throughout the day, which lets us understand the effects of context, which really matters. What time of the day are they taking the test? Are there diurnal effects in their cognitive abilities? What's going on in their natural environments? There are also great advantages in terms of reducing cost and reducing barriers to participation in clinical trials or other research studies. Clinical trials are inherently biased in terms of sampling, in that people who participate often have to live near a clinical trial site, have to have the means and the transportation to get to the clinical trial, the flexibility in their work schedule to actually go and participate, and the knowledge that the trial is even happening. Mobile testing can really decrease some of those barriers to trial enrollment, especially if you consider that it can be done fully remotely on devices that people have in their pockets. I always come back to the stat that more people in the world have smartphones than have running water, and even for those people who don't have smartphones, they're inexpensive to buy, and we provide them to our research participants, and it can really be a great tool for gathering this data. Here's a sample protocol of how we set up some of our studies with the Ecological Momentary Cognitive Testing platform. We send text notifications three times a day, and they get an EMA survey first, which assesses mood, daily functioning, and symptoms. We ask some different questions based on what time of the day it is as well, and we employ advanced branching schemes, so that if someone answers, you know, I'm not at home right now, for example, then the follow-up questions ask about when people are out of the house, and that can reduce the burden on how many questions people have to answer. Following that, we provide just one or a couple of mobile cognitive tests, and as I already mentioned, we keep the testing sessions as brief as possible. We let people use their own phone or a study phone that we provide to them, and, you know, this question always comes up: are people going to return the phones if you loan them? We have probably over a 95% return rate of study phones, and the times we haven't gotten a phone back, people have lost it or something weird has happened, but people will return these devices, and that's not a problem.
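As a rough illustration of the advanced branching scheme and the three-times-a-day schedule described above, here is a minimal sketch in Python. The question IDs, wording, response options, and scheduling are hypothetical, not the actual EMA platform's implementation.

```python
# Hypothetical sketch of a branched EMA survey with three daily notification times.
# None of these question IDs or options come from the actual study platform.
from datetime import time

NOTIFICATION_TIMES = [time(9, 0), time(14, 0), time(19, 0)]  # three pings per day

SURVEY = {
    "q_location": {
        "text": "Are you at home right now?",
        "type": "choice",
        "options": ["Yes", "No"],
        # Branching: out-of-home answers route to out-of-home follow-ups,
        # which keeps the total number of questions per survey down.
        "branch": {"Yes": "q_home_activity", "No": "q_away_activity"},
    },
    "q_home_activity": {
        "text": "What have you been doing since the last survey?",
        "type": "checkbox",
        "options": ["Watching TV", "Resting", "Eating", "Working", "Other"],
    },
    "q_away_activity": {
        "text": "Where are you, and who are you with?",
        "type": "checkbox",
        "options": ["Errands, alone", "Errands, with someone", "Appointment", "Other"],
    },
}

def next_question(current_id: str, answer: str) -> str | None:
    """Return the next question ID given the current answer, or None if the survey ends."""
    branch = SURVEY[current_id].get("branch")
    return branch.get(answer) if branch else None

print(next_question("q_location", "No"))  # -> q_away_activity
```

The design choice illustrated here is the one described in the talk: route respondents down a shorter, context-appropriate path so that each check-in stays brief.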
We compensate people for each survey they complete, and I'll show a little bit more on the next slide about what that looks like, and the platform that we've developed, we don't save any data on the phones to enhance privacy, and instead, the data goes in real time to a HIPAA-compliant server. That way, this research team can see the data in real time, and if for some reason the device is lost or broken or, you know, falls in a lake, we still have that data, and no one else has access to that data. So here's a sample of what one of our surveys looks like, you know, and it may really help gain ecological validity by surveying people just as they go about their daily routines and their daily lives. We keep our questions to slider scales, checkbox responses, really quick responding formats. We've actually also been playing around a bit and just completed a study where we used a micro EMA method where we pinged people five times a day with just one question, and this was in people, medical students, so very, very busy sample, and found adherence was over 80% across a full year with five EMA bursts, so people will respond to these surveys, you know, and the briefer they are, the more likely we're going to get better adherence. So thinking about other ways that we maximize adherence, our study staff provides check-in calls to participants the first day just to make sure that they're not having any questions about answering the surveys or using the phone. If people miss three consecutive surveys, our staff calls and checks in, does some motivational interviewing or problem shooting, and then in some of our longer EMA protocols, so we have a couple that are 30 days, the staff just call and check in once a week. As I mentioned, we provide financial incentives to complete as many surveys as possible, and then we've gamified our mobile cognitive tests just to make them fun and engaging for people to complete. We also have this investigator or staff dashboard, so our staff can quickly log in and see when people have completed the surveys, if they're completing, if they're not attempting, if they haven't even opened the link to complete them, and that gets uploaded in real time, and then we have our participants come into the lab for baseline visits, and we go through an in-lab tutorial for completing the surveys and making sure there's no questions. And we send them home with a user manual as well, just including things, reminding them to charge the phone, how to charge the phone, how to answer the surveys, how to turn the phone on and off, et cetera. So why have we focused our energies on using EMA for people with serious mental illness? You know, we've really been spending a lot of effort and have been fortunate to have a lot of NIH funding to pursue this work because we're seeing that people with schizophrenia and bipolar disorder, you know, they just appear to have this lack of awareness of their symptoms. But is it really a lack of awareness or is it just a limitation to the way we are assessing their symptoms? So, you know, we ask people to tell us how depressed they've been, for example, over the last two weeks. You know, symptoms can fluctuate. You know, it can be hard to try to average and remember how you were feeling. You know, there's a lot of context dependence. So if people are feeling bad or good right now, they are more likely to have a response bias based on their current state. 
And some things are just really hard to remember, especially thinking about people with serious mental illness have this cognitive impairment. So asking questions in real time can help reduce some of that bias. Here's an example, and this has been repeated in multiple studies and with all of our data across different samples and data from other groups, just showing there's this disconnect between participants' self-report of their functioning. This is an example on the x-axis. We have self-report of work skills from participants. This disconnect between self-report and informant report. So report by a close informant that knows the person's daily activities. So on the y-axis, we have the informant reports of work skills. And, you know, there's just lack of a correlation, and this is seen consistently. So we're curious if this lack is really, is it a lack of experience? You know, is it a reliance on momentary states for global judgments, or is it something more fundamental? And there's this, tends to be, you know, people without serious mental illness in a kind of cognitively normal sample, a normal optimistic bias. You know, I just remember in one of my first psych classes as an undergrad, the professor had, this was one of those big lecture halls, you know, there's like 400 or 500 people, had everyone shut their eyes. And he said, raise your hand if you think you are above average in your academic performance. And then he's like, open your eyes. And everyone in the room had raised their hand, right? Of course, we can't all be above average, but we all like to think we are. But there's this normal optimistic bias that we see. But when people are given feedback that deflates their self-opinion, you know, in healthy samples, that leads to an increase in convergence between observer ratings and self-report. But we don't see this convergence as much in people with serious mental illness. So what are some strategies we can engage in to bypass the bias? So things we've been doing, or we've been doing EMA messaging strategies. And we ask people, again, as they're going through their everyday life, where are you? Who are you with? What are you doing? And how are you feeling? And then we also are gathering through smartphones some passive measurements of activity. We've been using GPS as an indicator of life space. And GPS allows us to examine how much time people are spending at home, objectively. How frequently they're leaving the home, how frequently they return to the home, where they're going, how far away from the home they're traveling. And it can really be a nice indicator of life space. We also have employed measures of actigraphy. So having people wear smartwatches or Fitbits, which we're able to provide to research participants to measure sleep and physical activity and actual activities. As well as passive measurement of smartphone social activity. So how many times someone's texting during the day, how many phone calls they're making, how active they are on social media. There's all these different passive measurement tools that can also help us understand someone's daily and social activity. So we know that negative symptoms play a role in predicting functional outcomes. And these digital health methods can help us provide more objective, we hope, scores for negative symptoms. Thinking about looking at motivation and activity level, as well as engagement in unproductive activities. So, you know, this comes up with every grant we submit or every paper we submit. 
Will people with serious mental illness actually respond to surveys? Will they use a smartphone? Yes, they will. They will respond with greater adherence than, you know, a normal healthy sample. They will be able to use the phone; they'll figure it out. And we will get accurate data. They will admit to psychotic symptoms. They will give true responses as to their location. And we are seeing that we get a lot of variability in responses in terms of symptoms and functioning. And we think that this method can help reduce some social desirability in responding. So now I'm gonna share with you data from two serious mental illness EMA studies that we've done. The first study was completed a few years ago. This was 100 outpatients with schizophrenia in San Diego. They were sampled seven times a day for seven days, and they repeated this protocol in four weeks. In total, we got 4,200 surveys. Study two is a study in process, and I only include data that were collected pre-COVID. The study is still ongoing, but for the purposes of this talk, you know, I wanted to just leave COVID and people staying at home all the time out of the equation. So the data I'll present is from 104 outpatients with schizophrenia and 71 participants with bipolar disorder who were sampled three times a day for 30 days. They did a mood and psychosis EMA protocol as well as some mobile cognitive testing on the device three times per week. This is a multi-site study between UC San Diego, the University of Miami, and UT Dallas, and these analyses included data from 12,540 surveys. So here you can see on the left the data from study one, and on the right the data from study two. The red bars are adherence, and for both studies we had adherence rates greater than 80%, which is fantastic. And we see consistently in these two different samples that people are spending more than 60% of their time home and about 75 to 80% of their time alone. And we looked at what the most common activities are that people with schizophrenia are doing in their everyday life. For the activity question, it's: what are you doing, or what have you done since the last survey? And they can select as many as apply, but we really get people responding a lot with just one activity. And for the most part, we see that people are watching TV. Next is resting, followed by work, but at a very low percentage, about 7%. Then we have sitting alone, nothing, eating. So just a lot of unproductive and passive leisure activities. And we looked at the differences in engagement in unproductive and passive activities depending on whether someone is alone or with someone else. Interestingly, there's not much difference. So on the left, this is if a person reports that they are currently alone. They're still spending most of their time resting or watching TV, sitting or doing nothing. If they're with someone, the pattern looks very similar. So they might just be resting or sitting on the couch next to someone, but still watching TV and not engaging in social interactions. When we look at momentary reports of symptoms, location, and social context, really looking at symptoms as a function of who and where, this data I find really fascinating. So we see the blue bar is when people are home alone. Purple is when they're home with someone. Orange is when they're away alone. And red is when they're away with someone.
So we see that when people are home and alone, they're the least anxious, but they have the most positive symptoms. And when they're away with someone, they're the most anxious and report higher sadness, but they have the lowest positive symptoms. So now I'm gonna share some data on using EMA to detect impaired self-assessment. And I'm showing some data looking at the relationship between EMA measured social context and activities, as well as self-reported functioning, and the relationship between momentary mood reports and self-reported functioning. So when we look at social context and self-reported functioning, we really just don't find any differences between if someone is answering surveys when they're at home versus away, or alone with someone and their self-reported social functioning. Their social functioning just seems to be kind of consistently the same regardless of context. And showing some nice convergent validity with a self-report, we are finding now using the EMA method when comparing to informant reports of social functioning, that now we have high correlation between these outcomes at a P less than 0.001. So here is just a visual depiction of that. So on the left, we have social functioning as a function of who and where. So the left is the participant self-report. The blue bar, again, is home alone, purple, home with someone, orange, away alone, and red, away with someone. And then on the right here, this is the informant's report of the participant's social functioning. The y-axis represents percent of time they're spending home alone, home with someone, away alone, or away with someone. And you can see the patterns are fairly consistent and that we're not seeing very much difference, less than a 5% point difference in all of these self-reported functioning using EMA compared to an informant. So when we've been examining the data, and this is data from study two, which is the ongoing study, we found an interesting pattern in some of the participants, and that about 20% of the people with schizophrenia across 90 survey points reported they were never sad. And that was with a severity of two or more on a seven-point scale. So we were interested in looking at the social and daily functioning of these people who said they were never sad, because you would expect, you know, over a 30-day period, that there would be more variability in sadness. So we compared these never-sad participants to other participants with schizophrenia and bipolar disorder, and found that people who were never sad were significantly more likely to be spending most of their time home alone and only engaging in one activity in the past hour. And this was compared to the other two groups. So here you can see we have the blue is never sad, the purple is sometimes sad, and the red bar is people with bipolar disorder. And looking at activities and daily functioning, you know, you can just see that the self-report and informant ratings, we have self-report is the SR abbreviation here, and informant is INF abbreviation here. We're still seeing they're pretty aligned for the most part. You know, the people though who report they're never sad are reporting higher social functioning and higher work functioning compared to the informants, as well as kind of just that they're functioning better overall. And when we look at these groups compared to other mood states, the people who were never sad report that they, on average, are more happy, less anxious, more relaxed, more energized. 
And when compared to the other two groups, they're significantly reporting that they're more happy. But again, they're spending most of their time at home alone doing nothing. So there's a bit of a disconnect there that we're seeing, which is interesting. Then, looking at emotional determinants of life space using GPS and ecological momentary assessment in people with schizophrenia, you know, we wanted to see if people who are always home, using an objective measure of life space via GPS, tend to overestimate their functioning, as we were just seeing with the never-sad sample. And then what happens when people leave their home? Are they underestimating their functioning? Is it that psychosis occurs when people are alone, or is psychosis reported when they're alone, with a suppressive effect based on social context? So, looking at some of these questions, we examined the movement patterns of people with schizophrenia. This paper was led by one of our graduate students, Emma Parrish, and published last year. The blue bars represent control participants and the orange bars represent people with schizophrenia. We found that people with schizophrenia are spending, and this is just GPS data, significantly more time at home. They're leaving their home more, but they're leaving for shorter periods of time, so staying out less and returning home more frequently. So probably just popping down the street to the 7-Eleven and coming back. They're more likely to just be home, leave for a short amount of time, and come back. And we looked at whether leaving home or returning has an impact on changing their mood. So here, the green and the blue lines represent the healthy control participants, and the orange and the yellow lines represent the participants with schizophrenia. Overall, you can see the controls have more positive mood in terms of happiness and feeling relaxed. People with schizophrenia are happiest and most relaxed when they have stayed home, and they're least relaxed when they have stayed out. And here, examining the trends with negative emotion, the converse is true in that, in our people with schizophrenia, we're seeing significantly more negative emotion than in our control sample. For the controls, their negative moods, so anxiety and sadness, didn't seem to vary as a function of whether they were home or away, stayed home or went home. But in the people with schizophrenia, we saw that when they were home their anxiety and sadness were significantly lower, and when they had left their home, as well as when they went home, we're seeing they're higher. And how are these GPS findings relevant to what people are doing? So we looked at how their mood state fluctuated as a function of where they were going, and interestingly, going to a clinical appointment really seemed to have a significant effect on making people more sad and more anxious; this is in the people with schizophrenia sample, and it did not have an impact on their happiness. But we are seeing that just going to a clinical appointment does increase sadness and anxiety compared to other activities outside of the home. So the overall conclusion from this line of research is that what people feel when they're at home does seem different from when they're away, symptoms and moods do differ systematically, and the people who are always home tend to misestimate their functioning systematically.
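As a purely illustrative sketch of how life-space summaries like these, for example percent of time spent at home and the number of departures from home, can be derived from GPS data, here is a minimal Python example. It assumes each GPS fix has already been labeled as at home or away (say, by proximity to a geocoded home address); the field names and example values are hypothetical, not the study's actual pipeline.

```python
# Hypothetical sketch: summarizing life space from GPS fixes that are already
# labeled as "at home" or "away". A real pipeline would also handle missing data,
# clustering of visited places, and distance traveled from home.
from dataclasses import dataclass

@dataclass
class Fix:
    minutes_since_midnight: int  # time of the GPS sample
    at_home: bool                # True if the fix falls within the home radius

def life_space_summary(fixes: list[Fix]) -> dict:
    """Return percent of sampled time at home and the number of departures from home."""
    if not fixes:
        return {"pct_time_home": None, "n_departures": 0}
    n_home = sum(f.at_home for f in fixes)
    # A departure is a transition from a "home" fix to an "away" fix.
    n_departures = sum(
        1 for prev, cur in zip(fixes, fixes[1:]) if prev.at_home and not cur.at_home
    )
    return {"pct_time_home": 100 * n_home / len(fixes), "n_departures": n_departures}

# Example: one short trip out of the house during an otherwise home-bound day.
day = [Fix(t, h) for t, h in [(540, True), (600, True), (660, False), (690, True), (780, True)]]
print(life_space_summary(day))  # {'pct_time_home': 80.0, 'n_departures': 1}
```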
So that's based on the assumption that if you're always home, always alone, always watching TV, you're not functioning as well as you're reporting you are functioning. And going to a clinical appointment may have a reactive effect on moods and possibly symptoms as well. So in this final segment of the webinar today, I'm going to talk about an ecological momentary cognitive testing protocol and self-assessment procedure that we've been using to examine performance judgments and their relationships to functioning. I'm going to give an example with our mobile cognitive test, which we call the Mobile Variable Difficulty List Memory Test, looking at the domain of memory, which, as I highlighted in the beginning, is one of the most impaired domains in people with serious mental illness. I'm also going to give an example with a task of executive functioning, which is one of the other most impaired domains. So this is an example of what our variable list test, I always forget the acronym, the Mobile Variable Difficulty List Memory Test, or VLMT, looks like. This is an example of what they would get on their phone. They get a notification that they have a survey, they complete the survey, then they have this memory list task to complete, and then they get some follow-up performance-based questions. Here's an example of what the memory list looks like. We have three different list lengths that we've developed, six words, 12 words, and 18 words, and they get 30 seconds to just view the words. I'm just using sample words here for the purpose of this presentation. Then the words go away, and then it's a recognition memory paradigm: was this word on the list, yes or no. The words are not semantically related, so the task with the 12-word and 18-word lists is difficult. So in our study, participants had five lists each over a 30-day period. They did a list every other day, and they either got the 6-item list, the 12-item list, or the 18-item list. They did three trials of each list, so three learning trials consecutively, as well as the immediate recognition memory. The difficulty was counterbalanced, and lists were never repeated. The six-word list is a very easy task; it's actually more of a measure of effort, we're finding, than a measure of memory. The 18-word list is really hard, and the 12-word list is very comparable to the Hopkins Verbal Learning Test, and we actually had nice convergent validity between the HVLT and our 12-word list. Participants did receive some compensation after completing the survey and the list, which helped with adherence, and some of the dependent variables from the VLMT we were looking at were target percent correct, distractor percent correct, and overall percent of words correct. For the self-assessment procedure, completed immediately after each trial, they were asked, how many words did you get right? They weren't given a range, so that was an interesting design choice, and I'll show you some interesting findings from not giving the range. So for the 6-word list, they weren't given a range from 1 to 6; it was just open-ended: type in the number of words you got correct. After the second trial, they were asked, did you get better? And then after the third trial, they were asked, did you improve over the three trials? Here are some demographics of the participants in this study. We had 98 participants with schizophrenia spectrum disorders and 70 with bipolar disorder.
The sample did differ in that there were more female participants in the bipolar disorder sample. We had significantly more education in the bipolar disorder sample compared to the schizophrenia spectrum disorders sample. More of our bipolar disorder patients were Caucasian compared to the schizophrenia group and less likely to be from other racial and ethnic backgrounds, and significantly more of our bipolar disorder patients were employed compared to the schizophrenia sample. What we found in terms of adherence was that adherence for the whole sample was 75.3% over 30 days and 90 surveys. Adherence was not correlated with any cognitive variables or symptom variables. When we looked at the correlation between adherence and the UPSA-B, which again is a measure of functional capacity, we found adherence was correlated with functional capacity and was also related to age, but it wasn't correlated with any cognitive variables or symptoms. All word lists were positively correlated with the Hopkins Verbal Learning Test, and the correlation was strongest for the 12-word list. We didn't find practice effects for the six-item list, but we did find practice effects for the 12-item and the 18-item lists. This is an example of the intra-individual variability we're seeing with the task. So for example, in participant one, we see very limited variability. They performed consistently about the same and well, getting a high percent correct over the 30 days. But for participant two, we're seeing lots of variability; their test performance vastly differed every day that they took it over the 30-day period. So when we look at objective and self-reported performance by list length here, we can see objectively people are, you know, getting almost all of the words right over three trials of six words, but they're reporting they're getting significantly more right, even more than are offered. There are 18 words possible correct across the three trials, but they're reporting they're getting 26 words correct on average. We see this discrepancy also with the 12-word list, where they reportedly are getting significantly more correct than they actually are. But that discrepancy is attenuated a bit when you get to the harder list, and their self-report is actually more consistent with their actual performance. Here we see learning curves in terms of objective performance. So this is for the six-word list, and there's very little of a learning curve. People, again, are doing well. There's a ceiling effect on that task. For subjective reports of their performance, however, they report they're getting much better over the course of the three trials. For the 12-word list, you know, objective performance again is staying about the same, but they also think they're getting significantly better over the three trials. And then for the 18-word list, we don't see that pattern anymore. So people actually are getting better over the 18-word list, but they don't really feel they're getting better on this more challenging task. So just quickly, since, apologies, I lost some time, I'm going to talk about a metacognitive Wisconsin card sorting task that we gave. Wisconsin card sorting is a well-known task of executive function. In the metacognitive version, which was done in the lab, participants did the sorts on the task and were asked by the examiner: were you correct, yes or no? How sure are you that you were correct? They were actually given feedback on whether they got it right or not. And then they made a global judgment at the end of the task.
How well did you do overall? And this paper was recently published in the Journal of Psychiatric Research. So for people with schizophrenia and bipolar disorder on the right here, the dark blue column on the left is the correct sorts; this is how many they actually got right. And for both groups, we're seeing a bit of this optimistic bias: they felt they got a lot more right than they did. Their immediate judgment was, yeah, I'm very confident that I got it right. And their global judgments at the end, even after they received feedback on whether they were correct or incorrect, were still showing a disconnect from actual performance. So what are the implications of this? People with schizophrenia as a group tend to overestimate their performance, and this doesn't appear to be random responding or global momentary memory issues. They tend to remember their impressions and aggregate them into an incorrect but congruent global impression. We're seeing this in both the domains of memory and executive functioning. And people with schizophrenia fail to integrate the feedback and their momentary behavior into their global judgments, and we're seeing this more in people with schizophrenia than in people with bipolar disorder. So we're seeing a global misestimation. This appears to be based on momentary misjudgment, not due to random responding or forgetting your immediate estimate. Overall, we're seeing a positive response bias. And there have been prior models of psychosis that have focused on kind of this jumping to conclusions and the considerable strength of the generation effect in people who have psychosis. So for the overall conclusions, I just wanted to highlight that I really do strongly feel EMA strategies have the possibility of increasing the validity of assessment of everyday functioning in people with serious mental illness and really pinpointing these response biases. And this data is really pointing to how momentary, just-in-time interventions focused on metacognitive abilities could really have the potential to improve some of these disconnects we're seeing. So I will stop there. I'd like to acknowledge my lab and team, as well as NIH and Playpower. And pass it over to a more stable internet connection. Thank you so much for such an interesting presentation, Dr. Moore. It's like looking into the future. I want to take a moment to let folks know it's very appropriate that SMI Advisor also has a mobile app for smartphones. You can use the SMI Advisor app to access resources and education, learn about upcoming events, complete mental health rating scales like the PHQ-9 and GAD-7, and even submit questions directly to our SMI team of experts. You can download the app now for Apple and Android at smiadvisor.org app, which brings us into the Q&A. So the first question is, can mobile cognitive testing be utilized on patients' existing smartphones through downloading an app, or is it better, or do you need to give people phones? That's an excellent question. I prefer the bring-your-own-device, or BYOD, model, and have people do it on their own phones. The platform that we've developed is a web-based platform, so they do not have to download an app. It gets pushed out through a text or WhatsApp, and then it's a web link. The reason we did it that way is that the development isn't contingent on Apple updating their OS, and it can be platform independent. It can be used on Android or iOS devices.
There are advantages to giving participants devices and having them complete the mobile cognitive testing on those, in that you're not going to have device variability. So if you give people devices, you know the screen size is going to be the same, you know that it's all going to be on the same operating system, and you're not going to have to control for some of those device effects in your analysis. That makes sense. So pros and cons of each. This next question says, how do you reconcile the idea of treatment interventions designed to minimize social isolation and increase social activities and socialness for patients with some of the interesting findings that you had around mood and being alone? Yeah, that's an interesting question, and we've been exploring that idea a lot. We've done some work with a CBT-to-go app, really just trying to increase social engagement in people and remind them kind of in real time, because we have some data showing that, well look, when you're actually with other people, you do enjoy the activity. The feeling that they might just be more comfortable, and the belief that their mood is better when they're at home and alone, may not be an accurate self-appraisal of how they actually feel when they're with others. So I think there are possibilities for some mobile intervention work there. That makes sense. This question says, we were wondering what specifically does went home mean in the GPS-based data, and how do you interpret the similarity of anxiety ratings across settings? Also, were there more fine-grained GPS measurements of the outside, outdoor locations people frequented, or is it possible to only pick up home versus not home? Okay, so yeah, so went home is when people were going home. I showed data for when they were at home, leaving their home, away from home, and then going back home. So I think that we're seeing similarities in terms of anxiety between away from home and going back home, because that's still a kind of in-transit period of time. In terms of other variables, there are definitely a lot more variables that we can measure using the GPS data. For the context of that study, those were the variables that we looked at, but in other work we have started mapping GIS, Geographical Information System, layers onto the GPS data to really get a nice sense of whether people are going to, say, a park or, you know, a transit stop or a doctor's office. What's the lived environment like? What are their neighborhood characteristics? So that's definitely a possibility, and you can map those GIS layers on just by collecting the GPS data. We just didn't do it for that study. There's a whole other world of possibilities there. We're getting many questions praising and saying thank you, which are not questions, but those are well-deserved. We'll probably do one more question for the sake of time. It says, is mobile cognitive testing available yet for implementing in non-research clinical settings? For example, could it be added to an EPI clinic in addition to yearly neuropsychological assessment? I think by EPI they may mean early psychosis intervention, but can we kind of be using these things outside of research? Yeah, so I think that, you know, they're pretty close to ready for use outside of research. Some things that still need to be improved are just the way the data gets fed back to clinicians. It needs to be streamlined a bit more.
Right now we're getting all the raw data back and having to apply algorithms to score the data and things like that, you know, but our platform and some other platforms do have these dashboards, and we actually are using ours. I'm testing it out in some clinical work down in Brazil and then starting it up at a clinical neuropsych practice at Brown University here pretty soon. So I'm really excited to actually see this moving out of the lab and into the real world. And, you know, I think with a few tech updates, we're very, very close. That's exciting. So I'm going to sadly move us forward, but the good news is that if you have follow-up questions about this topic, smartphones, or really any evidence-based topic for SMI, our clinical experts are now available for online consultations. Any mental health clinician can submit a question and receive a response from one of our SMI experts. We can certainly forward any additional questions about this exciting work to Dr. Moore and respond to those. Consultations are always free and always confidential. Before we shift, I want to also say that SMI Advisor is proud to partner with the APA on the Mental Health Services Conference, which will take place on October 14th and 15th. The keynote address at this conference is going to feature Dr. Miriam Delphin-Rittmon, the newly appointed Assistant Secretary leading SAMHSA under HHS. The conference will feature topics such as climate change and mental health, sociopolitical determinants of mental health, structural racism and mental health, rural and indigenous populations, and much more. I encourage you to learn more and register right now at psychiatry.org. To claim credit for participating in today's webinar, you'll need to have met the requisite attendance threshold for your profession. Verification of attendance can take up to five minutes. You'll then be able to select next to advance to complete the evaluation before claiming your credit. And last but not least, please feel free to join us next week on August 19th as Patrick Hendry with Mental Health America presents Using Peer Support to Empower Self-Management and Participation in Treatment for Individuals Who Are Difficult to Engage. This free webinar will be next week, August 19th, from 3 to 4 p.m. Eastern time, so noon to one for those of you in Pacific time. Thank you for joining us. Until next time, take care.
Video Summary
The video is a webinar titled "Advances in Smartphone and Digital Health for Schizophrenia and Bipolar Disorder Research" presented by Dr. Raeanne Moore. The webinar discusses the use of smartphone technology and digital health tools in researching and improving care for individuals with schizophrenia and bipolar disorder. Dr. Moore explains that using ecological momentary assessment (EMA) through smartphones allows for more accurate and frequent assessment of cognitive functioning, symptoms, and daily activities. The data collected through EMA can provide insights into the daily experiences and functioning of individuals with serious mental illness. The webinar also explores the potential benefits of mobile cognitive testing in assessing memory and executive functioning, and how self-assessment procedures can reveal discrepancies between objective performance and individuals' perceptions of their own functioning. Dr. Moore emphasizes the importance of understanding response biases and self-awareness in individuals with serious mental illness. The webinar is part of the SMI Advisor series, a resource for clinicians implementing evidence-based care for individuals with serious mental illness. One AMA PRA Category 1 credit for physicians and continuing education credits for psychologists and social workers are available for participants who attended the webinar.
Keywords
Advances in Smartphone
Digital Health
Schizophrenia
Bipolar Disorder
Research
Ecological Momentary Assessment
Cognitive Functioning
Mobile Cognitive Testing
Serious Mental Illness
Funding for SMI Adviser was made possible by Grant No. SM080818 from SAMHSA of the U.S. Department of Health and Human Services (HHS). The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, SAMHSA/HHS or the U.S. Government.