Workshop: Substance Use Disorder Measurement Based Care in Behavioral Health
Video Transcription
Thanks for coming to the workshop today. We've had a great morning learning a lot about cannabis and the AAAP community, and thanks for coming to our talks about patient-reported outcome measures. We'll be introducing the concept, talking about how to bring this about in clinics, and how we might use it in clinical care both to inform individual patient care and to think a little more broadly, so that we understand how to make this available for clinical decision-making at a broader scope. I'm excited to include a collection of our friends, mentors, and advisors. I'm going to start with Dr. Timothy Willens from Massachusetts General Hospital, who's going to start off our conversation.

Thank you, Vinod, and thank you so much for organizing it. Hello, everybody; it's great seeing everybody. I'm going to start with my disclosures, but also disclose that a lot of what you're going to be seeing today is a result of an NIH HEAL study that allowed us to examine over time the effects of psychiatric disorders on substance use treatment. How well does treatment abrogate the ultimate risk for substance use, the progression of substance use, et cetera? We had to come up with patient-reported outcome measures and include those at the study level, and at the same time, the hospitals were interested in including patient-reported outcome measures in measurement-based care. So it was a sort of aggregate of things that occurred simultaneously that allowed us to do this. We're going to be talking a little bit about patient-reported outcome measures, why we use them in psychiatry, and how clinical practice will benefit from that. I just want to remind people that measurement-based care isn't new; it's been around, and it really does help in a number of things.
The idea is to use a routinely administered tool. We then review those tools and their outcomes. That's actually really important because, as you'll hear from Dr. Ewell, that review sometimes isn't happening, and that's one of the barriers. Then you go through the results with the patient, and there's a collaborative evaluation of what occurs. I can tell you that since we've integrated measurement-based care, that collaboration really does work if the provider is involved and really understands that this is ongoing. If you know there are PROMs and you're talking to people about the PROM outcomes, I actually love it as a provider. I'm a very busy provider, I'm also a psychopharmacologist, I'm a child psychiatrist, I see a lot of ADHD, a lot of different disorders and substance use, and so I'm using these tools every day. Not just doing research on them, but using them. I want to remind you that the last time you went to your primary care doctor's office, they probably got your blood pressure, height, weight, and everything. Unless you're like me, when it comes to weight, I just run away so I never get that, and I give them false reports. But then they use that information. So the question is, what do we have in psychiatry that really mirrors that? There's good data that measurement-based care is associated with better outcomes. If you compare measurement-based care, where you're doing systematic observations, with just treatment as usual, the effect size is anywhere from 0.2 to 0.7. So what does that effect size mean? A 0.2 is kind of a mild effect size; a lot of times if you use nutraceuticals for depression or anxiety, that's kind of what you get. An effect size of 0.7, that's a non-stimulant for ADHD, that's an antipsychotic for bipolar disorder. That's pretty good. And an effect size of about 0.4 is what you get when you give Prozac for depression, just so that you've got the ruler there.
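To make that effect-size ruler concrete: Cohen's d is just the difference in group means divided by the pooled standard deviation. A minimal sketch, where the two groups of improvement scores are invented purely for illustration (they are not data from any study mentioned in the talk):

```python
from statistics import mean, stdev

def cohens_d(treated, control):
    """Cohen's d: difference in means over the pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    s1, s2 = stdev(treated), stdev(control)
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd

# Hypothetical symptom-improvement scores (higher = more improvement).
measurement_based = [12, 15, 11, 14, 13, 16, 12, 15]
treatment_as_usual = [11, 14, 10, 13, 12, 15, 11, 14]

d = cohens_d(measurement_based, treatment_as_usual)
print(round(d, 2))  # about 0.56, i.e. in the 0.2-0.7 range he describes
```

With these made-up numbers, d lands at about 0.56, a moderate effect comparable to the Prozac-for-depression benchmark he cites.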
That's darn good to get that much better an outcome from just doing systematic assessments. And one thing I always want you to remember, and you all know this anyway: omission is what we fight in medicine, missing things. Measurement-based care helps ensure that you're not missing as much. And here you can see there are more treatment adjustments: people weren't doing as well with their depression, with their blood pressures, with whatever was being monitored, and it got caught. Studies found overall greater response rates and more people in remission. And I would add, for psychiatry, wellness. We've moved from symptom reduction, as you know in psychiatry, to wellness. We want people well, and this helps with that. And people get better faster. So what are patient-reported outcome measures? Measurement-based care is the general practice; patient-reported outcome measures are basically the systematic questions that we're using. When a patient comes in and fills out something, either paper and pencil, and you're gonna hear about that, or electronically, that's what's referred to as PROMs: patient-reported outcome measures. You're checking substance use. You're checking anxiety, depression, functioning, things like that. What are the implications for the providers, the people actually delivering the care? Identified treatment targets: depression, substance use. It alerts providers to progress and declines. So I follow a number of people who have past histories of addiction, and all of a sudden, bing, I get a best practice advisory that there's a concern: this person's substance use screen just popped positive. Now I'm probably gonna ask about it anyway, but if I'm five years into treatment and I'm worried about a current depressive episode, that may not be first on my agenda; now it alerts me to it. It helps me with my treatment decisions. What am I gonna do now?
Because I'm gonna have to think about that. And I cannot stress enough the collaboration that occurs. When patients fill out patient-reported outcome measures, they're engaged in their care. Yesterday, for example: I'm treating a 71-year-old retired nurse who has ADHD and, in the past, drank too much. She's been doing well for five years in terms of her drinking. Bing, I log on, I get a best practice advisory, this thing that pops up and says, be careful. And when I talked to her about it, I said, so let's go through your drinking. And she says, you know, when I filled out that form, it really woke me up to how much I was drinking. I hadn't really thought about it until I filled out the online thing asking me about my drinking. And I knew you were gonna talk to me about it, that it would be positive. And I hear that all the time, because I'm asking patients around that: they fill it out and I follow up on it. The same with the PHQ, the same with the GAD. Oh, and the last thing I wanna say is, it's also great for quality assessment. Looking at some of my colleagues around, Chris is here, but I was commenting that if you have this running systematically, you can look to see how people are doing on depression, anxiety, their functionality, and whatever other problematic outcomes you're studying; you can track that over time in a big health system and know. Or you can take a look and say, I'm very interested in drug X. Does topiramate really help across the population in terms of substance use? Well, you can look at their TAPS scores and see, or whatever your favorite screener is. So what's in it for patients? It's like this retired nurse: all of a sudden she's now talking to me about something that she probably didn't really wanna talk about. She wanted to talk about her ADHD, but she's now talking about her substance use. It also helps with the symptoms. And we're gonna go through that.
People have a lot of problems, as you know, separating depression and anxiety. But when you ask about anxiety and you ask about depression, it really helps. In some of our studies, we've looked at ADHD using PROMs. People initially are like, I don't know what the symptoms of ADHD are. And then all of a sudden they start noticing them, and they can differentiate anxiety from ADHD. By doing PROMs, patients start to learn the symptom clusters, and they can then say, the anxiety is bothering me more, or it's more the depression. And it quantifies things and helps us discuss them, because now we're using a common nomenclature. These are just some of our clinics, but we use them in child, we use them in adult. We're sort of beta testing whether some of the adult scales work in child, so we don't have all these different scales. We're using them in patients who already have substance use, both children and adults. They vary a little bit; you're not gonna use all of the same ones. Your substance use screeners are not gonna be what you probably use in your substance use clinics, whereas you probably will use your anxiety and depression screeners. So let's talk about it. For those of you who are interested in which screeners: the TAPS tool is one that is available through NIDA. And by the way, all of the screeners I'm telling you about, we have on Epic. I'm just curious, how many people here have Epic as their medical record? So a lot of you do. If you do, you can grab these from Epic; any one of them that's developed at any Epic site is available to you through the repository. Now, they may not fit exactly, so your programmers may have to adapt them for the specific Epic system you use. Here's the TAPS tool. This is the one I was indicating NIDA has supported. They're testing it in adolescents; I think actually it'll work well in adolescents, but it is for adults.
And you can get a sense here: a score of two plus will mean that you get a flag. And here you can see why. Past-year five or more drinks. By the way, everybody done their self-assessment question? Oh, that's right, that's an ethics breach. I won't say anything about how this question may show up on your self-assessment questions. But that one, plus failed to control or cut down, et cetera. And you can see why; those are very legitimate questions. And two plus says, hey, there could be an issue. Bing, you're alerted. This is the GAD-2, the GAD-7. People have heard about the GAD-2-7. What does that mean, GAD-2-7? It means two questions pop up, and if you answer positive, then it automatically reflexes to the following questions. So, feeling nervous or anxious, or not being able to stop or control worrying. If you answered yes to those, then automatically you'll get the worrying, trouble relaxing, et cetera, and you'll get a total score based on that. So that's called a GAD-2-7: the GAD-2 reflexes. Reflex means a positive automatically opens up the rest of the questions of the GAD-7. And this is the famous PHQ-2-8. Again, two entry questions: limited interest or pleasure, feeling down or depressed. If you say yes to either or both, it automatically clicks and opens up the rest of those questions. Okay, so that's the reflex. So it's quick. If the answer is no, it goes right to the next screener. They're fast. The PHQ comes as a PHQ-8, which is everything except the bottom question that's highlighted, and the PHQ-9, which includes suicidality. If you include suicidality, you have to have workflows and workarounds. What are you gonna do if the patient says they're suicidal? It's great if they answer it right before your meeting, because you wanna know if your patient's suicidal. That's kind of a nice thing to know, right?
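The reflex behavior he's describing is simple branching: score the two entry items, and only open the remaining items when an entry item is positive. A minimal sketch of that flow, with item wording abbreviated; note that the "any nonzero answer counts as positive" rule used here is an illustrative assumption, not the official GAD-2 scoring cutoff:

```python
def administer_reflex(entry_items, followup_items, ask):
    """GAD-2 -> GAD-7 style reflex: ask the two entry items first,
    and only open the follow-up items if any entry answer is positive.
    `ask` maps an item name to a 0-3 response (0 = 'not at all')."""
    answers = {item: ask(item) for item in entry_items}
    # Illustrative rule: any nonzero entry answer opens the rest.
    if any(score > 0 for score in answers.values()):
        answers.update({item: ask(item) for item in followup_items})
    return answers, sum(answers.values())

GAD2 = ["nervous/anxious", "can't stop worrying"]
GAD7_REST = ["worrying too much", "trouble relaxing", "restless",
             "irritable", "afraid something awful"]

# Simulated patient who endorses both entry items:
responses = {"nervous/anxious": 2, "can't stop worrying": 1,
             "worrying too much": 2, "trouble relaxing": 1,
             "restless": 0, "irritable": 1, "afraid something awful": 0}
answers, total = administer_reflex(GAD2, GAD7_REST, responses.get)
print(len(answers), total)  # all 7 items asked, total score 7
```

A patient who answers "not at all" to both entry items is only ever shown two questions, which is why the speaker calls these screeners fast.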
It's not so nice if you've got dutiful, obsessive patients like I have who answer it a day or two before, and then all of a sudden I get an alert that says your patient's suicidal. So there are workflows, and you'll hear more about them; it's not as ominous as you think. Patients automatically get something back: if you endorse this, bing, bing, bing, this is what you should do. We do get notification. We tell our providers not to panic, to realize that patients automatically get a message that if they're feeling this way, they should contact a crisis center, et cetera. Some people say we can't deal with it, so they just use the PHQ-8. So the PHQ-8 and the PHQ-9 are the same except for the last question, which covers suicidality. This is a functionality measure. This is the one our system uses; I'm not sure it would be my first choice out of the gates, but it's the one we use. It has a lot of health stuff in it, but it does have mental health, social activities, and sort of mixed social-physical activities. So it does give you an independent functionality outcome, which many insurers or providers of healthcare are now requiring. So this panel sort of checks all those boxes. And again, these are all available. And I'm gonna end with this thing I keep going back to, which is called a best practice advisory. So when we log on: I'm gonna see this retired nurse, we were doing virtual care, so I could click on the screen, and I clicked on the medical record, and it pops up this big thing: substance use risk, problematic risk. And when I click, I can see exactly what was being endorsed. Takes one second of my time. I see it there, I click acknowledge, it goes away. For this case, you'll see it as a best practice advisory on suicidality: thoughts that you would be better off dead or hurting yourself in some way. And again, this helps, because I know this is a major flag. I can deal with it right at the time.
And the patient, you know, when we've had these, they say, yes, I endorsed it. And a lot of times they say, I did, but I really am not as suicidal as it may seem, or this was just the best way I could answer it, et cetera. But I say this because a lot of people ask, well, how do people know about abnormal scores or abnormal findings on these PROMs? This is one way. Do you look at PROMs if you don't get a best practice advisory? I have to say, I'd be interested; Vinod can check that out for us as an informaticist. A lot of our practitioners I talk to say no; if the patients do them, they know that things are fine if they're not elevated. I think it's helpful because you can track over time how they're doing. It's really easy to do, it's an integrated system, et cetera. So with that, I'm going to end. If there are any questions you have about measurement-based care or PROMs, feel free to shoot me an email. I think I know a lot of you, but grab me at the meeting; I'm here the whole meeting. And with that, I will yield the rest of my time to Dr. Ewell, who will be coming up next.

Well, thank you all for being here with us today. So I'm excited. Dr. Willens talked a lot about the benefits of measurement-based care, and I'm going to talk a little bit about how you make it happen, or how you try to make it happen. I'm going to talk about barriers and facilitators to implementing measurement-based care in behavioral health. Before I get started, I wanted to note that I don't have any ACCME-defined commercial conflicts of interest, and this lists some of the funding supporting my research and clinical activities. So I'm going to focus on and get into some of the details about opportunities and challenges associated with digital measurement feedback systems, really thinking about integrating measurement-based care into the electronic health record.
And again, some of these details around opportunities and challenges. And then I'll discuss some of the barriers and facilitators that we've been able to identify as we think about successful implementation of measurement-based care within a behavioral health clinic. So, you know, Dr. Willens showed this slide, and it looks pretty easy, right? You can just move from step to step; it seems pretty simple, pretty seamless. But it does not happen magically. There are a lot of details that go into thinking about how we systematically and routinely administer tools to assess symptoms for patients in behavioral health, and then how we get this information efficiently and easily to clinicians to review and then work with the patient together to inform their treatment plan. And I've become very, very interested in these details as part of our Helping to End Addiction Long-term (HEAL) Initiative prevention project. At both Mass General Hospital and Boston Medical Center, we've been working to support monitoring of mental health and substance use outcomes through implementing measurement-based care for patients in behavioral health clinics: four clinics at MGH and two clinics at Boston Medical Center. For people that are less familiar with Boston, these two hospitals are geographically quite close, only two and a half miles apart, but they see very different patient populations and have very different levels of infrastructure and support. Mass General Hospital is part of the MGB system, a large network of hospitals and clinics. It's not quite the VA system or Kaiser Permanente, but it does have centralized infrastructure and support for implementing patient-reported outcome measures within different clinics, and there's a lot of expertise and a group that works on patient-reported outcome measures at the hospital-system level.
MGH also largely sees patients who have commercial insurance. As you transition two and a half miles away, Boston Medical Center is one hospital, so it's not part of this larger infrastructure. We largely see patients who are insured through Medicaid and Medicare, and we are really the health safety-net hospital for the city of Boston. Additionally, we see a larger percentage of historically marginalized groups: 30% of the patients seen at BMC identify as Black and 20% identify as Hispanic/Latino. And this will become relevant as we're talking through some of these details. So we know a lot about measurement-based care from primary care, where there's been more successful implementation across practices. But as we think about behavioral health, it's really important to think about how our setting is different than primary care, especially as it relates to trying to systematically administer patient-reported outcome measures. In behavioral health, we generally have less ancillary support. We don't have medical assistants who can check to see, did someone complete their measures? Did they have questions about the measures? If patients are doing this on paper and pencil, within primary care at BMC at least, the medical assistants help to translate it from paper and pencil into the electronic health record, so that information is there for clinicians. If you don't have that ancillary support, it can create some challenges in terms of trying to implement measurement-based care. Furthermore, as we think about behavioral health, our work is generally multidisciplinary, with different types of providers and different types of appointments. A visit that's largely a medication visit is going to be pretty short.
The clinician's going to be spending probably quite a bit of time in the electronic health record reconciling medications, looking at labs, sending prescriptions. So there are more opportunities for them: if information about patient-reported outcome measure data is automatically pushed through the electronic health record, they're going to see it more; it's going to be there flashing for them. Whereas a therapist, maybe they check to see how their patients are doing a couple of days before clinic, but they may not go into the medical record, per se, at the beginning of a therapy visit, and so they may not see information that's in the EHR until after the visit or later on. As we think about behavioral health versus primary care, we also often see patients a little more frequently in behavioral health, and that influences how often we administer these scales, the cadence, and the overall burden of collecting this information. Because of some of these barriers, there's been a lot of interest in digital measurement feedback systems: trying to integrate this into the electronic health record and overcome some of these barriers, such as an absence of ancillary staff. There are a lot of benefits to doing patient-reported outcome measures through the electronic health record and having this digital. If you're doing this through the EHR, you can track who and when to administer the patient-reported outcome measure using a health maintenance alert. You can send these measures before the visit, if patients are connected with a patient portal, so they can complete them at home and you have time to review the information. Or they can complete them in the waiting room, if tablets or kiosks are available to enter this information.
The electronic health record automatically scores these measures, so you don't have to be quickly checking your addition skills as you're adding up the PHQ-9; it's scored for you. And you can sometimes build in guidance on how to interpret the measure. So we spend lots of time thinking about the TAPS scale, which screens for substance use. Many clinicians in behavioral health might not spend as much time thinking about that measure and may be less familiar with it, so you can create these best practice advisories that help with interpretation of the data that's collected. Those are some of the benefits of doing this through the EHR. However, one of the challenges is that you really need significant support from information technology analysts. Dr. Willens said, oh, you know, the TAPS is available, it's out there in the Epic repository. But just because it's in this repository doesn't mean it's as simple as pulling it in and integrating it into your EHR. It takes a lot of work, and all the steps I just talked about, all these ways you can implement digital screening through the EHR, every single one requires information technology analysts' time. The other piece is that with these EHR vendors, there's a lot of variability between systems. Both MGH and BMC have Epic, but we have very different versions of Epic. The BMC version is like the car with the roll-down windows that you manually crank, and the MGH version, being in a larger system with more infrastructure and resources, is a very nice Cadillac version of Epic. Their system already has a lot of these pieces built in, so you don't have to spend as much time building them, whereas at BMC, we've had to spend a lot of time.
Even if you have the PROMs measure within a flowsheet, which is the place within Epic where it's important to have these measures, you then need to build out the questionnaire to be sent through the patient portal. If you want to do it via iPads in the waiting room, you need to rebuild the measure to be administered on an iPad in the waiting room, and you need a particular Epic module to be able to do that. So again, I could talk for hours about these details, because it's just been really interesting to try to do this in two different systems with different resources, and there are challenges. As we think about the details involved in the first two steps of the measurement-based care system, collecting the information from the patient and getting the information to the clinician, we have seen differences in access and engagement with patient completion of outcome measures by demographic characteristics within our system. I wanted to talk about those because, as we think about measurement-based care, we heard a lot about the benefits, and so it's really important to identify if there are inequities in terms of who's engaged in and accessing measurement-based care. We want to make sure we're identifying and addressing that, to make sure there's equitable access to these important measures for clinical care. We saw a really big change in measurement-based care within the MGB system by race and ethnicity when we transitioned from doing measures in clinic to doing them through a patient portal. The top graph you're seeing shows patient-reported outcome measures by race and ethnicity, the completion rate when they're done in clinic. So people come in person to clinic, they're given an iPad, they complete the measure. And you can see that there aren't huge differences by race and ethnicity when people are doing this in person.
There's this huge drop, and that's the COVID pandemic. With the COVID pandemic, within psychiatry, things shifted to being really virtual. There was still in-person care in other departments within the MGB system, but they took away the tablets because they were concerned about infection control, and they also redeployed the tablets to other areas as part of COVID. With the absence of these tablets for in-person, in-clinic administration of PROMs, they switched to pushing these out through the patient portal. That's what you're looking at at the bottom: how many patient-reported outcome measures are being completed in the patient portal. And you can see there's a really clear separation by race and ethnicity. The two top dark lines are individuals who identify as Asian and white. The yellow line in the middle is individuals who identify as Black or African-American. And the orange line at the bottom is individuals who identify as Hispanic or Latino. If MGB hadn't done this analysis, they would think, okay, well, maybe we're getting fewer patient-reported outcomes when we're doing it through the patient portal, but, you know, life's okay. So this was really important for identifying inequities within the system. We don't know exactly why we have these inequities, but I will say that in trying to do this project across systems, at BMC, which again sees a much higher percentage of historically marginalized populations, we have much lower patient engagement with patient portals. At BMC, only 65% of patients in behavioral health have an active patient portal, whereas at MGB that's a much higher percentage. So it may be related to digital access issues, but we don't totally know, and I think it's an area we need to figure out. But really quickly, there's a quick question.
Was the process available in Spanish, or was the interface available in Spanish?

So that's another issue with these patient portals: they aren't all translated. In part as a result of these findings, MGB did a huge push to translate all of their measures into different languages, and they have actually done a really nice job in terms of having them available in a lot of different languages. But many patient portals are just set up for English. Even at BMC, it's only been recently that we're able to have it in English and Spanish, despite having many patients who need interpreters and many patients who don't speak English. The other area where we've seen inequities is with age. I'm a child psychiatrist, and we've been trying to do this project within child psychiatry and adult psychiatry, and there are all these issues as we think about adolescents. Within patient portals, they're set up so that when you're under 13, your parents have carte blanche access to your medical record. When you turn 13, in order to try to protect adolescent confidentiality, the system completely shuts off. If any of you talk to a 13-year-old, they have zero interest in their medical record, right? So parents go from being able to help reschedule appointments and request medications when their child is 12 to, at age 13, not being able to do anything. And it's tricky, because to then have a patient portal account, the adolescent needs to reestablish that account. Then the parent or a caregiver can have a proxy account where they don't see as much of the adolescent's record, and some information is sequestered and confidential; so there are different views. But this is such an onerous process that many parents, and I think, you know, we'll see what I do when my child turns 13, but I probably will do this as well, just sign up themselves for their child's account.
And because this is such a challenging process, when they've looked at this, about 64% of caregivers continue to directly access their adolescent's account. So as we think about sending out patient-reported outcome measures through a patient portal, where we're screening for substance use and may want to protect the adolescent's confidentiality, the adolescent is thinking, how honest am I in completing this? If they know their parent can access it, that creates challenges. And then, as you know, confidentiality laws really vary by state, and because of differences in confidentiality laws and interpretations of them, some EHRs restrict patient portal functionality so that clinicians can't send patient-reported outcome measures to adolescents. So again, our two hospitals separated by two and a half miles: at BMC, we are able to send patient-reported outcome measures to adolescents through our patient portal. At MGH, even if you know for sure that the parent is not the primary person with the patient portal account, and you can communicate with the adolescent in a confidential way, you're electronically blocked from sending them a patient-reported outcome measure. It wasn't until they started to return to more in-person care at MGH that we were really able to incorporate measurement-based care into adolescent care. So again, these are some of the things to be thinking about. Another issue to be thinking about is measure development. There's been so much interest in doing all this digitally, because you can really decrease burden by having branching logic: if someone answers no, they don't have to see or answer a bunch of questions.
You can really simplify or decrease the burden on patients in terms of how many questions they have to answer. Because of this, a lot of the newer screeners have really been developed for electronic administration. But there are some places where you still need to do it with paper and pencil, and it can be really difficult to convert a questionnaire with branching logic to paper for patient self-administration when you don't have ancillary staff to do the branching logic for you. We saw this at BMC. We wanted to get started with measurement-based care while I continued to spend a lot of time with our Epic analysts building all these different things within our system, and so we converted the Tobacco, Alcohol, Prescription medication, and other Substance use (TAPS) tool to paper. We're really excited about this screener because it's broadband, and as we think about who is at higher risk to develop a substance use disorder, it's patients within behavioral health, so we want to be administering a tool that's broadband. But we had lots of challenges in converting it to paper. Through the HEAL Initiative, we have access to consultation, so we consulted with people from RTI about survey development and the best way to do this, and followed their recommendations. You can see an example on the right here. We tried to use these arrows and make it clear: if you answer never, go to the next page. So how did it work out? And these are, oops, I didn't realize I had the numbers here. It didn't work out very well. 75% of the patients who completed this on paper did not fill out the questionnaire according to the instructions. 70% of those patients answered more questions than needed. So they were really diligent.
So even though they answered no, they continued to answer no for every single question, which was very nice of them, but a big burden. And then we had about 20% who answered extra questions that contradicted their initial response, which made the results difficult to interpret. So based on that data we have switched from version one to a version two, to try to make it a little easier. But in trying to do this, the number of pages has expanded, and patients look at it and say, you want me to complete this seven-page questionnaire? So we're looking at the numbers now to see whether this is better than the version on the left, and we're really eager to implement this electronically. So as we think about measurement-based care, we've been talking about patient access and engagement with these questionnaires. Measurement-based care really involves multiple stakeholders to make this process happen: the patient, as we've been talking about, administrators, clinicians, and leadership. When we looked at the literature on measurement-based care in behavioral health, we found that work on barriers and facilitators had really only looked at one or two stakeholder groups. So as we started to implement this project, we were eager to ask all stakeholders about their perspective: what are some of the barriers and facilitators to implementing measurement-based care? As we think about the key domains to consider when implementing something new, we used the Consolidated Framework for Implementation Research (CFIR), which lays out the different areas you need to think about when you're doing something new.
So: the inner setting, which is the clinic you're doing this in; the intervention characteristics, which here are measurement-based care and the patient-reported outcome measures we were using; and the characteristics of individuals, meaning the clinician actually implementing the measurement-based care, the administrator, and so on. It's a helpful framework to think with. We did a lot of focus groups, all at MGH pre-COVID, and I'm going to share some of our findings. We talked to about 30 clinicians, 10 leaders, and six administrative assistants, and really queried them: what do you think about this? What should we be considering as we implement it? This is a busy slide, but it's a summary of barriers and facilitators by stakeholder group. On the far left, A is administrative assistants, the people checking patients into the clinic and handing them the tablets with the patient-reported outcome measure on them. C is clinicians, and L is leadership. You can see that people were talking about different things, which reflects their role in the system. One additional interesting finding was that even in areas of agreement, different stakeholders were considering different aspects of the implementation. One example: administrators, clinicians, and leadership all identified the complexity of the measurement-based care system as a potential barrier to implementation, but they identified it as a barrier for different reasons. Just to read you a few quotes, an administrator said: there's some questions that are every 30 days, there's some questions that are every 90 days.
So the patient does one today, but the one from 90 days ago is already due again in 10 days. When they come back, they say, I already did this. We're used to telling patients it's every 30 days, but technically some are 30 and some are 90. And then in terms of clinicians (Mike, if you can hold till the end), one clinician quote was: the other thing is, I had to find it on my own, go online, and figure out what the numbers meant. That reflected the lack of guidance on how to interpret the scale, and it's why we built in these best practice advisories to help guide clinicians who may be less familiar with some of the scales. And then leadership reported: they can't find this information, they tell us this over and over again, that clinicians have a really hard time, within this supposedly easy-to-navigate Epic infrastructure, finding this information to then incorporate into the clinical visit. So as we think about this, it's really important to get feedback from the different stakeholders involved in measurement-based care. Each stakeholder group identified and prioritized different factors within the implementation constructs, which generally reflected their role in the measurement-based care process in the clinic. In summary, as we think about measurement-based care in behavioral health, many opportunities and challenges exist with digital feedback systems as a strategy to implement it. It's fun to get into the details, and I'm happy to talk about them after this workshop or at the end. But it's really important that we think about implementing these systems in an equitable way, and that we gather feedback on implementation from stakeholders with different roles in the system. And with that, I will end.
I think, Mike, I can probably take that. I'll go back to the slide with the table of the different groups. Mm-hmm. Yeah, so going back again: what resources do you think would have made it easier for those three different groups? Well, this slide is us eliciting their feedback before we implemented the measurement-based care system. I saw that feedback; what did you think would make it easier? So there was a lot; we gathered quite a bit of data, and we'll definitely forward you the paper when we have it finished. But this was an example of some of their feedback. The clinicians said, I needed help with interpreting the measure, so we built in the best practice advisories to make interpretation easier. Leadership said they can't find this information, so it flags in color (Vinod may have pictures of this): it flags for you if someone's done it or if someone's overdue for it. And based on administrator feedback, we really streamlined the questionnaires to a single cadence, I think every 60 or 90 days within MGH. So even though some measures, like the TAPS, are meant to be done annually, we're doing them every 90 days, because it was much easier to have everything on the same cadence. It was also interesting that clinicians wanted it more often; they were the ones who said, why are you doing it just yearly, we want it every 90 days. We said, really? Okay, great. So yes, there was a lot, and I'm happy to talk offline about more granular details, but those are some examples of how we modified the system based on this feedback, before we implemented it, to make the implementation more successful. Yeah. Thank you, folks.
And just as Amy got to nerd out a little bit on these implementation details, I'll get to do the same on data extraction. My name is Vinod Rao. I'm a psychiatrist at Massachusetts General Hospital, and I have no relevant financial disclosures, although I am part of a couple of NIH grants. During this part of the conversation, we want to talk about how we actually provide patients with feedback: what can it look like, and what does it look like in clinical care? And then, getting back to what Amy was alluding to, where does the data live, not just for individual patients but also for clinics? How might we access it, and how might we structure questions so that we can answer questions of relevance? So I'm going to walk through a few different views. Again, we're operating within the Epic system; other systems, like Cerner or the VA's, can look a little different, but a lot of them have similar features. Within an encounter, when a patient completes a measure, electronically at least, it can often produce a table like this. One of the advantages of engaging with PROMs is that it allows efficient interviewing. In a table like this, the first column indicates the most recent completion of these questions, and in bold (I'm not sure how well that shows up for you) are some summary scores. Just from this sort of table, you can quickly get a sense of the trend over the last few sessions, and then drill down into components. For instance, for this particular patient, if you look at the summary scores here, you might not be able to see this very well, but I'll just tell you that the use sub-score on the Brief Addiction Monitor is zero across all these sessions. They haven't resumed using.
The risk factors are fairly steady, but the protective factors seem to have gone down in the most recent one. Why might that be? When you look at the individual questions one by one, their self-help participation, for instance, has been pretty steady. But then you quickly see: in the last 30 days, they spent less time in touch with family members or friends who are supportive of their recovery. With some practice scanning these tables quickly, you can hone in efficiently on that part of the conversation. Rather than starting really broadly, more of your precious clinical time can be spent focusing on these sorts of questions. I like to think of it as turning a 25-minute appointment into a 45-minute appointment, because the data has been collected in advance for you. One thing I want to emphasize: this format essentially works fine if you're implementing it on paper. You can look at the individual responses and get a sense of how people are doing fairly quickly. However, if you want to look across sessions, it's really only practical to have that in an electronic format. You need some system set up to quickly tabulate the responses and then visualize them in a way that's helpful. For this particular patient, what I found really helpful clinically is showing the patient what these look like. When I was seeing patients consistently in person, I would just turn the screen so they could see it. Now, in the age of Zoom everything, I share my screen and show them part of the EMR so they can see their trends over time.
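As a sketch of what that tabulation step can look like, here is a minimal example that pivots long-format PROM records into per-subscale trends across visits. The BAM-style subscale names and scores are illustrative, not real patient data or any real EMR export format:

```python
# A sketch of tabulating PROM summary scores across sessions, assuming
# hypothetical long-format records of (visit date, subscale, score).

from datetime import date

visits = [
    (date(2023, 1, 10), "use", 0),
    (date(2023, 1, 10), "risk", 14),
    (date(2023, 1, 10), "protective", 12),
    (date(2023, 4, 12), "use", 0),
    (date(2023, 4, 12), "risk", 9),
    (date(2023, 4, 12), "protective", 17),
]

def trend_table(records):
    """Pivot long-format records into {subscale: [scores ordered by date]}."""
    dates = sorted({d for d, _, _ in records})
    table = {}
    for d, scale, score in records:
        table.setdefault(scale, [None] * len(dates))[dates.index(d)] = score
    return dates, table

dates, table = trend_table(visits)
for scale, scores in table.items():
    print(f"{scale:>10}: {scores}  change: {scores[-1] - scores[0]:+d}")
```

A table like this is what you would hand to a plotting library, or simply turn toward the patient: use flat at zero, risk down, protective up.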
So for this particular patient, when he started in his care he wasn't using (that's what the reddish line, the BAM use score, represents), but his risk factors and protective factors according to the BAM were pretty similar. Over the course of our work together over many months, his risk factors went down and his protective factors went up, and we were able to capture this and show it to him. If you just looked at face value, you'd say: well, he wasn't using then, he's not using now, no big deal. But when I showed this to him and demonstrated, look, your risk factors really are going down over time and you're in a much better place in your recovery, it actually meant a lot to him. He asked me to print it out for him, and he put it on his fridge, and he still refers to it as a marker of how he's doing in recovery. That's not something that would otherwise have been available to him. I don't remember how I was doing last week, and it's unreasonable to expect our patients to remember in detail how they were doing months or years ago. But what about beyond the level of the individual patient? Many of us think in terms of what's going on in the whole clinic, and there are questions you might want to ask. But before people say, all right, great, there's data, let's look at it, I want to encourage you to take a moment and think: what exactly are you trying to estimate? There are different kinds of questions you might care about. Some of them, as Amy alluded to, relate to process questions, for instance PROM completion rates. PROMs are an indirect goal: what we actually care about is our patients doing well. We don't care inherently about them filling out the questionnaire.
However, if we trust the literature that measurement-based care tends to improve outcomes when implemented, then it becomes valuable to be curious about what these PROM completion rates are. Dr. Yule also mentioned some really important examples of how systematic biases in availability and equity can impact PROM completion rates. So there are process-related questions that can be important to pay attention to. In addition, you may be interested in certain aspects of the content of these questionnaires. For instance, if you're implementing screening measures, you might wonder about the distribution of substances within the clinic. You might be interested in the presence of certain comorbidities, whether psychiatric or non-psychiatric, either diagnoses or symptoms, suicidality, for instance. There may be things changing over time. This morning there was an extensive conversation about the legalization or decriminalization of cannabis and how that's changing things at a broader level; for instance, the hypothesis was discussed about whether it is impacting opioid use. That's a question you can ask in the context of your own clinic: are my own patients seeing a change in their opioid use as a result of changing trends in cannabis, and how might that relate to legislation? That data is sitting in your clinics, and this could potentially help you access it. But how do we go about doing this? Where does the data actually live? With most electronic medical records, we're looking at our monitor, and behind that exists a server containing a lot of data that's being accessed in real time. Some electronic medical records make analysis tools available to directly extract the data; for instance, Epic provides reporting tools such as Reporting Workbench or SlicerDicer. Different systems have other elements that can help extract the data.
However, some of these tools may not quite meet your needs. There may be variables you don't have access to, or the tools may not organize them in a way that's useful for you. So you might need to take an extra step. Many organizations, though not all, transfer the data from the server used in real time for clinical care onto a data warehouse. A data warehouse is a relational database: a large network of interconnected tables containing much of the clinical information relevant to care. From that, one can extract the data in a more flexible way. It might require more sophisticated tools, for instance Structured Query Language (SQL), but they are more flexible. Now, engaging in this extraction will definitely take partnership. Many of us trained as clinicians, not as people who are really into the ins and outs of Epic or the data servers behind it, but many places have infrastructure that supports this. Remember, a lot of the people who work in these roles could perhaps have gotten higher-paying jobs in other industries, but are in healthcare because they're connected to the healthcare mission. I've found that when I've had conversations with the people helping me extract data and explained, this is what I'm trying to do, this is the big picture, there's an opioid epidemic out there, and by helping me extract this data we'll be able to improve the care we're delivering, they're susceptible to that kind of sweet talk and help me access the data. Like so many endeavors, it's really about building partnerships and relationships. It may not be obvious how to do it, but I really encourage you to ask around and get assistance. So say you're able to access the data. Then what? What constitutes a tractable question?
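For a flavor of what such a query looks like, here is a sketch of a process question (PROM completion rate by month) answered in SQL. I'm using SQLite in memory as a stand-in, and the table and column names are hypothetical, not any real Epic or warehouse schema:

```python
# A sketch of a data-warehouse query: PROM completion rate per month,
# computed as appointments with a PROM response / all appointments.
# Schema and data here are invented for illustration.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE appointments (appt_id INTEGER, patient_id INTEGER, appt_month TEXT);
CREATE TABLE prom_responses (appt_id INTEGER, measure TEXT);
INSERT INTO appointments VALUES (1,101,'2023-01'),(2,102,'2023-01'),
                                (3,101,'2023-02'),(4,103,'2023-02');
INSERT INTO prom_responses VALUES (1,'BAM'),(3,'BAM'),(4,'BAM');
""")

rows = con.execute("""
    SELECT a.appt_month,
           COUNT(DISTINCT p.appt_id) * 1.0 / COUNT(DISTINCT a.appt_id)
               AS completion_rate
    FROM appointments a
    LEFT JOIN prom_responses p ON p.appt_id = a.appt_id
    GROUP BY a.appt_month
    ORDER BY a.appt_month
""").fetchall()
print(rows)  # [('2023-01', 0.5), ('2023-02', 1.0)]
```

The LEFT JOIN is the important design choice: appointments with no response still count in the denominator, which is exactly what a completion rate needs.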
So we think about the variables that exist in the record, and you want to think very carefully, because there are lots of questions we want answered, but some might not translate well into useful scientific hypotheses. First: does the variable live in a coded field? Is there a reliable place within the electronic medical record it can be accessed from? The note itself is not very specific; there's all kinds of information in there. You might hear about natural language processing, which has a lot of potential, but I think a lot of perhaps inappropriate hope is attached to it. There are situations where it works really well and situations where it's not going to be very helpful. If the data you're looking for lives in a reliable place, that's important. Second: is it discrete or numerical? If the information in that field comes from a constrained list of options, that goes a long way. For instance, as part of entering the clinic, we have to do a required assessment for our state licensure regarding various aspects of the patient's use disorder, and people have to identify their most problematic substance. I forget the exact language, but it's a free-text field. When I first looked at it, I thought, this is going to be a great opportunity to see what's impacting the clinic. Instead I see cocaine, heroin, fentanyl, pills, dope, all kinds of spellings, all kinds of capitalization. Translating that into something relevant across hundreds or thousands of patients is really challenging. When variables exist in a discrete formulation, that goes a long way toward making these analyses tractable. Third: is the field actually updated, by clinicians as opposed to front desk staff who might be looking a little less carefully? Things evolve over time. The date of birth is accurate in the record, and that's not changing over time, but what's their employment status? What's their marital status?
Will that discrete field be updated reliably over time? Commonly it isn't. And fourth: is it carefully described by all clinicians? For instance, visit diagnoses are something we love to use, and we do use them. But one of the things we're curious about is whether people are in remission or not. How do we know? Well, you can try to look at the text of the note, but that's not going to be easy to parse at a broad scale. You could ask people to record alcohol use disorder in early remission, or in sustained remission, but how often do busy clinicians actually do that reliably? If it's not going to be carefully described by clinicians, it might not be a variable you can use very effectively. So not all variables are created equal. This is an oversimplification, but I'm going to go through some examples of things that are easy to harvest but can still be challenging to interpret. Age: I think that's really reliable. Sex: there's more nuance there. There's gender identity, there's sex at birth; where does that information live? It may be easy to harvest, but it might not capture what you intended. Race: another one that isn't reliably updated. In our system, you look at race and there's a ton of unsure, declined to answer, other. And race as a concept in itself, there's been growing discussion that it may not have the validity people used to think it did. Appointments themselves: fairly reliable. Billing codes: you can see what people have billed with. Visit diagnoses: they're there, but they might not be exhaustive; people might only be incentivized to include two diagnoses, because then you can bill a 99214 and move on. ED visits and hospitalizations: again, great if they're available within that particular medical record.
Insurance information can be pretty useful, because the EMR captures what's being billed. The PROMs themselves are less straightforward than they ought to be: you need to identify all the questions and make sure you know which responses reflect which questions and which summary scores, so it takes some data organization to capture that effectively. Toxicology screens: I must have sorted through 200 or 250 components of toxicology screens that exist in the record, many of which maybe no one ever uses; if we want to find out whether people are using, we need to know which measures we're looking at. Medications: we've spent months on this for an analysis of depression treatment, asking, are people treated for depression? Medications are organized as orders. We have to parse out the name and the date of the order to find out whether it's active or not, and then the dose. And is it an effective dose or not? That's different for each medication; you need to think about that. So it's doable, but it takes some work. Overdose may exist in a coded field, especially in the ED, but sometimes it doesn't. Comorbidities can live in all kinds of places. And some ideas are just not reliable: I don't trust the information in the electronic medical record regarding education, marital status, or employment. Homelessness is in there somewhere, but often in the note, not necessarily in a coded field. I already mentioned remission status. Type of psychotherapy is another one we'd love to analyze in more detail, but what does therapy mean? We can distinguish group from individual fairly reliably using billing codes, but CBT versus psychodynamic versus some other modality, or family therapy, is not always reliably coded, and so it's challenging.
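The free-text "most problematic substance" field mentioned earlier is a good example of the cleanup work these variables can require. Here is a minimal sketch; the alias table is illustrative and far from exhaustive, and in practice unmapped values would need clinician review:

```python
# A minimal sketch of normalizing a free-text "most problematic substance"
# field into analyzable categories. The alias map is an assumption for
# illustration, not a validated mapping.

ALIASES = {
    "heroin": "opioids", "fentanyl": "opioids", "dope": "opioids",
    "pills": "opioids",  # often ambiguous in practice; assumed here
    "cocaine": "cocaine", "crack": "cocaine",
    "alcohol": "alcohol", "etoh": "alcohol",
}

def normalize(raw):
    """Map one free-text entry to a category, or 'unmapped' for review."""
    key = raw.strip().lower()
    return ALIASES.get(key, "unmapped")

entries = ["Fentanyl", "  heroin ", "Crack", "EtOH", "kratom"]
print([normalize(e) for e in entries])
# ['opioids', 'opioids', 'cocaine', 'alcohol', 'unmapped']
```

Lower-casing and trimming handles the capitalization and spacing variation; the hard, ongoing work is curating the alias table itself across hundreds of patients' entries.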
One quick example of how we used PROMs to characterize the impact of an intervention. My colleague in the back, Dr. Ward, and I implemented a walk-in clinic in our clinic to engage patients in care in a hopefully lower-barrier way than an orientation group. We wanted to find out whether it was capturing a different set of patients, and we happened to be collecting PROMs data from these patients. We found that the patients, in orange, who came in through the walk-in group tended to have higher risk-factor scores and lower protective-factor scores on the BAM. We also collected the PHQ and GAD-7 and found they had worse depression and worse anxiety. So we were able to validate that, if we're taking the energy to implement this kind of model, we're at least capturing some sicker patients, and that was useful for validating what we were doing. To wrap up, the take-home points: as a clinician, accessing and discussing PROMs within an encounter can really increase the depth of the conversation and can be used to motivate patients in working on their own recovery. If you collect the data systematically, there are a lot of questions that can be answered, from both a QI perspective and a research perspective, but you need to take the energy and effort, and form the relationships, to figure out how best to access the data. And when you're formulating these questions, be really thoughtful about what kinds of concepts you want to engage with, in order to find something tractable. With that, I'm going to pause here and invite our next speaker, Sarah Steverman from the National Institute on Drug Abuse, to come and share some perspectives on patient-reported outcome measures. Thank you. All right, I'm just going to make a little bit of space. It's okay. I don't have any slides, so I apologize. Just a few thoughts.
So I am Sarah Steverman, from the National Institute on Drug Abuse at NIH. I don't have any financial disclosures, since your tax dollars pay my salary. I am also not an MD, and on the off chance that someone calls me Dr. Steverman, my four-year-old says, mommy is not a doctor. But I am the project scientist on this project with Tim and Amy and Vinod, and I get the opportunity to check in, listen to what they have going on with the study, and learn from them. They graciously asked if I'd come and say a little about some of the outcomes and lessons they've been learning that I've been able to peer in on. I'll talk a little about that, and briefly about the NIDA focus, where we're trying to go and what we're trying to do with our prevention portfolio, and then we can get to questions before the end, because I'd love to hear from you all as well. A few themes that came up across these presentations, and that we keep talking about in our regular check-ins, mostly relate to these implementation questions that I think are so interesting and important, and that are coming up across the various projects we have within this HEAL initiative. And as Dr. Rao mentioned, we haven't talked today about whether this has improved any outcomes for these youth or young adults. At this point, we're just trying to figure out how to do this better, and what we need to be thinking about in order to do it. And importantly, to do it in a collaborative way, where we incorporate expertise from various stakeholders: as Dr. Yule talked about, the administrators, the clinicians, the leadership.
Clinicians, it seems, are often the bridge there, trying to figure out how to make this make sense for them within the clinic visit, but also make sense for their colleagues. And then also, as Dr. Rao mentioned, the IT folks. We need those experts who understand what's happening inside the EHR, inside that system, to make it work for us; we need those friends involved in this process as well. And we can't forget about the patient, and the patient-provider collaboration that PROMs offer. Using PROMs is potentially going to validate the patient experience, and especially for youths, the sense that they're being heard: that they're able to see their progress, that they're able to have conversations that feel meaningful, and that when they've been asked a question on a survey, their clinician then asks them about it and brings it back into the visit. I think it can make them feel like they are part of this process, as they should be, right? And then this other issue of feasibility keeps coming up. I'm going to talk a little bit about the challenges, but in one way or another this has come up throughout this project: implementing measurement-based care needs to be feasible. It's something the team has been working on; it has to be easy-ish, easier, for the provider to consume and do something about. The best practice advisories are one way Dr. Yule identified: if we're telling someone that this is a problem, we need to help them figure out what to do about it. And then it has to be easy for the patient to fill out and understand; that is one huge benefit of the electronic method. And then, as Dr.
Rao mentioned, there needs to be a way for the data to be displayed so that the patient understands it, can consume it, and can see their progress over time. So then, some of the challenges, again going back to these implementation factors and lessons learned. I'm really thankful that this team has been so thoughtful about these implementation factors, writing them down and writing these papers to get them out, because I think they're important specifically for PROMs and for implementing measurement-based care within a behavioral health practice, but also for healthcare more broadly and substance use prevention more broadly. This is all learning we need to do on how to better implement clinical care, especially for populations that are at increased risk for substance use. So: the issues of startup in implementing measurement-based care; ensuring the patient-facing PROM electronic tools are user-friendly; having some sort of training for clinicians. We didn't talk about that much, but clinicians are going to need to be told how to do this, and as the EHRs are updated they'll probably need to be continually reoriented. The inconsistencies between EHRs, as I think all three speakers talked about; the difficulty of updating EHRs; the difficulty of mining the data. The differences between variables that Dr. Rao was talking about, so that clinicians know, when they're looking at outcomes coming out of their Epic system or whatever their system is, that some may be more reliable than others. And that's not to mention the researchers who also need to understand this, obviously, and the administrators who are tracking progress at a population or clinic level. And then these issues of equity, I think, are really, really important. I'm glad that Dr.
Yule talked about them: that there are such differences in using the patient portal to report PROMs, and that we need to identify those equity issues and then figure out what to do about them, right? Not just to say, well, this is what we're seeing, and control for it, but really to try to figure out how to address them, overcome them, and change them. And then there's the equity related to the infrastructure of the settings. Dr. Yule talked about that: in this one particular study we've got the differences between BMC and MGH, but those sorts of disparities are rampant within our healthcare system, whether it's rural versus urban, big systems versus small systems, payer, FQHC, whatever it is. We have these inequities that we need to grapple with. As clinicians and administrators within healthcare settings, you have to grapple with them there, but I also think we need to grapple with them at a policy level. And then I love the recognition of the challenges of working with youth this age: the patient record issues, and the potential, I don't want to say honesty, but transparency, of a youth in what they might tell a clinician, especially if it's something they believe their parent or caregiver might overhear. We've heard that in a lot of different settings as we've been talking about EHRs and trying to engage youth in both care and research. Okay, so just real quickly: we started by talking about measurement-based care, and I want to bring you up many thousands of feet to talk for a second about NIDA and our NIH Helping to End Addiction Long-term (HEAL) initiative.
This project that we've been talking about today is one of 10 that was funded through the HEAL Prevention Collaborative, and we have 10 different sites across various settings, not all healthcare. Actually, most are not healthcare. We have folks in emergency departments; behavioral health treatment; school-based health services; community settings, including some tribal communities; legal settings, trying to help kids who are involved with the justice system or reentering from it; and child welfare. These 10 projects have been gathering and sharing information, trying to learn from one another, and trying to take lessons learned and push them out as we think about a broader initiative and about moving our work in substance use prevention forward at NIDA and within the NIH HEAL project. Some of the cross-cutting foci of the HEAL Prevention Collaborative are increased access to prevention services for underserved populations, community- and systems-engaged research, and intervening during periods of vulnerability for opioid misuse. So the goals of this project are really aligned with those foci: trying to ensure that the research is grounded in the folks who are in the clinic, the patients, the clinicians, the administrators, the IT folks, because they're really the experts who bring knowledge to the project and bring change to the clinic, all in the service of trying to prevent substance use for this group of folks who are at increased risk for substance misuse, and intervening at that time, hopefully before escalation of use and before SUD.
So, yeah, I work on HEAL, but I sit within NIDA's Prevention Research Branch in our Division of Epidemiology, Services, and Prevention Research. I hold a whole bunch of grants in healthcare settings, which includes behavioral health treatment. And I am always interested in talking to you all about NIDA, about HEAL, and about any research ideas that you may have. We're really trying to promote prevention within healthcare settings and also to move upstream, that issue we've been talking about: thinking about social determinants of health and policy, the things that make our work and our ability to prevent substance use disorder within these settings possible. So I'm interested in your questions and discussion, and I look forward to talking to anybody who's interested offline as well. Thank you. I'm thinking of this from several levels, but in terms of some of the logistical challenges of doing what you're describing, two things come to mind, and I'd like to hear the panel's thoughts. One is the cross between patient safety, quality improvement, implementation science, and larger scale. It's a matter of scale and sophistication, but there's a lot that the interprofessional fellowships and similar programs have access to, and data that goes back years at VA, some of it related to implementation of, say, the PHQ-9 and some of these other measures. I'm thinking of sustainability when I ask this question, because there's a lot of data out there that already exists and could be extracted. It's just a matter of talking to the right people in quality management, getting that data, and streamlining the process, because you're showing all of the different barriers that Amy was alluding to, but there are solutions. It's just a matter of knowing where the data actually exists.
It's good to have this in a small clinic, but think about systems of care like VA Boston: we just published a paper on 2.4 million veterans, looking at access to medications for opioid use disorder, MOUD. Within that data set, it's a PCORI grant, there's scale, and some of the questions are logistical. Those can be answered, and you can translate the answers to systems outside of VA. There are all these databases, I'm realizing as I really dig into this, that are underutilized and that would speed up the platform and the solutions a lot, because you can look at the VA national level, the VISN level, the individual VA level, or even the individual clinic. We're actually looking at clinic-to-VISN questions about access right now. The question is how do we get there to promote that, and how does that inform what is done outside of VA? There are solutions that would really move this forward in a major way, because we talk about evidence-based care and we talk about the problems, but the solutions seem like, well, I don't know. I think it's a matter of collaboration, knowing where the resources really are, talking to the right people, and moving it forward. If we actually did this in a systematic way, I would say in five years we would be much further along, as opposed to piecemealing individual grants. Think in terms of systems of care: where does the data actually exist that can give you the answers to the problems you raised today? I'm just interested in the panel's thoughts on that. I'll start with a couple of... Thank you for that point. A couple of thoughts related to that. Although the focus of this conversation was... Nope. That's scary. He's our informaticist. I don't have to use a mic to talk to the computer. Can we take this out?
So a couple of points that I really appreciated from your comment. In this conversation, we're talking about measurement-based care with a focus on patient-reported outcome measures, and the patient-reported aspect was a theme throughout a lot of the implementation work we were describing. But that exists, as you point out, within a broader landscape of data in the medical record. We have a particular interest in this as part of improving boots-on-the-ground clinical care for individuals. You're right: there are a ton of hypotheses that can be generated from these really robust systems like you have in the VA, or at Kaiser, certain places like that. On the generalizability piece, as a scientist I want to think of these as hypotheses that hopefully will be testable in other circumstances. If we can make observations, maybe in an associative way, from these broad datasets and then test them in a few places, then we might be able to take that on as a best practice more broadly. But exactly, and maybe Sarah or others can comment at a really broad level as to what's happening at a larger scale. Yes, I think you're involved with the workshop. Yeah. Yeah. So one question that keeps getting asked, specifically by Dr. Volkow, our NIDA director, and this is specific to substance use prevention, which is often provided in community or school settings: aren't these evidence-based practices being implemented, and where are they being implemented, and to what degree? And we can't really tell her.
And we did have a meeting in the spring on the use of EHR data in research, and we kept coming back to this issue that EHR data was not providing reliable information on what evidence-based practices were actually being implemented. So outside of MOUD, and you're finding this in this study too, people are getting an individual therapy session or a group therapy session, and we don't really know whether what they're getting in that session is evidence-based, or what it is. So I'd love to talk with you, and I'd love for you to come with some ideas of how we can do this better, because I do think that's a huge gap in our understanding of what people are getting and whether evidence-based practices are being implemented. If a practice doesn't have an individual billing code, then we don't really know. Peter, I think. Thanks for such good talks, guys. Sarah, when you mentioned periods of vulnerability, that piqued my interest, and I'm wondering, through the HEAL Initiative and these grants, is that something defined specifically by NIDA from prior research, or is it something for the grantees to define? And for everyone else thinking about measurement-based care related to periods of vulnerability, how would you define a period of vulnerability, and what are we learning there? For the purposes of the HEAL Prevention Collaborative, that age group is 15 to 30, I believe. And then individual research studies have defined it more narrowly, in that realm. But what we're talking about is that adolescent and early adulthood period, and trying to prevent or delay initiation and prevent escalation from misuse to SUD.
And so it's been defined differently based on the different populations. Some of them are narrower: folks who are leaving the child welfare system, and transition-age youth, which is 18 to 21 in most states or 21 to 24 in some states, things like that, that define the sample population differently. And the only other thing I would add to that, Peter, and I'm looking around, I think we have half the world's census of child and addiction psychiatrists here, yeah, how many do we have here? Look at that, yeah, exactly, it's impressive. The specific reason we're in the prevention group is that we know psychopathology doubles or triples the likelihood of having a substance use disorder. That's been established; that's out there. The question is, if you treat it, does that mitigate that risk? And what happens if you have depression and anxiety, or depression and ADHD? And how much do you have to treat? How far do you have to drive the symptoms down, and that's where the PROMs come in, to actually mitigate that risk? You can't just go to your cardiologist and have them say, oh, you have high blood pressure, you've got to reduce it, otherwise you're going to have a stroke. That's not satisfying, right? It's like, well, how much do I have to reduce it? And if you're like me, you bring it one point below what they say and then you go out happy. But that's what we're trying to do with it. That's where the prevention comes in: taking a vulnerable group, psychopathology, driving that down with treatment, and asking what treatments really move the dial when it comes to cigarette smoking, substance use, and ultimately opioid use disorder. So that was sort of that. And then what's the age group? Well, that 15 to 30, that's when it's going to occur. I already asked one, go ahead. Oh, okay. Hey. Jeff.
Jeff DeVito from California, formerly of Boston. I'm not sure if I have a question so much as kind of comparing notes to some extent. One of the jobs that I have now is a behavioral health director at a health plan for Medicaid, which kind of covers from San Francisco up to the Oregon border. And we did a measurement-based care quality improvement program with some behavioral health integration grant funds that we had from the state. And we recognized very early on, we targeted maybe about 14 different rural behavioral health clinics, and we realized that we couldn't use EHRs because some were using paper charts, some were using stone tablets, others were using things as fancy as Epic. So we couldn't- You haven't changed, Jeff. But we partnered with a telehealth company that was doing their own measurement-based care programming. And we're kind of in phase one of our program, was to kind of just, number one, get the measurement tools into the clinics. The way that we did that is that we met with each clinic, and we decided, what are your normal workflows for kind of managing patients, and how can we fit this into your normal workflow? So in some clinics, it was a tablet that was in the waiting room. In other clinics that relied a lot on email pushes, we were able to kind of piggyback on the systems that they already had in place, rather than trying to integrate it into their EHR, which was going to be a non-starter, like kind of across the board. And then on the back end of this, the cool thing, and sort of, for me at least, the cooler part on sort of phase two, which we haven't really gotten to, is the ability to kind of integrate back and translate the results back to the providers directly, as opposed to with a lot of programs that the health plan does or has done, the goal is to get information back to the clinic administrators, and then it dies at the clinic administrators. They get their little bonus, or whatever it is, but then the clinicians never see it. 
So our goal is to get it back to the clinicians, again piggybacking on some of the quality improvement infrastructure the telehealth company already had in place internally. To give a quick example: someone's working with a patient getting a PHQ-9, and they aren't seeing improvements in the PHQ-9. We have the ability to match on age, demographics, and diagnosis against other patients with similar presentations and ask, what's the timeline on which we would actually expect to see an improvement in the PHQ-9? So the clinician's not saying, geez, my patient's PHQ-9 isn't improving and it's been six months, when in actuality for most of these patients it would be about a year before we'd see an improvement in the PHQ-9. So anyway. I just want to add to that conversation. One of the things that VA National and VISN 1 are looking at are these different measures called SAIL. One example is SUD-16, where the denominator is all the patients with current opioid use disorder and the numerator is everybody that's on active meds. So we're able to cull that, and we know what the population is; I checked last month, it's 1,024 at Boston VA. But then you can look at those that are actually current, and you can look at the comps measures within that. So you can hone in really quickly on your relevant populations. It's taking what we're talking about doing in California, but that could be done outside of VA too. They're way ahead in terms of the big metrics; it's how do you hone in on those populations quickly and tell supervisors and clinicians what needs to be done. I think it's the last question, I'm sure it's going to be very profound, so no pressure on you. No.
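As a rough illustration of the numerator/denominator logic just described, here is a minimal sketch of a SUD-16-style coverage metric. The field names (`current_oud`, `active_moud`) and inclusion rules are hypothetical, for illustration only; the actual SAIL SUD-16 specification has its own detailed criteria.

```python
# Hypothetical sketch of a SUD-16-style quality metric:
#   denominator = patients with a current opioid use disorder diagnosis
#   numerator   = those patients with an active medication for OUD (MOUD)
# Field names and inclusion rules are illustrative, not the SAIL spec.

def moud_coverage(patients):
    """Return (numerator, denominator, rate) for an OUD medication-coverage metric."""
    denominator = [p for p in patients if p.get("current_oud", False)]
    numerator = [p for p in denominator if p.get("active_moud", False)]
    rate = len(numerator) / len(denominator) if denominator else 0.0
    return len(numerator), len(denominator), rate

# Toy cohort: two patients with current OUD, one of whom is on active MOUD.
cohort = [
    {"id": 1, "current_oud": True, "active_moud": True},
    {"id": 2, "current_oud": True, "active_moud": False},
    {"id": 3, "current_oud": False, "active_moud": False},
]
num, den, rate = moud_coverage(cohort)  # num=1, den=2, rate=0.5
```

The same two-set structure (an eligible population and the subset meeting the care standard) underlies most of the population-level dashboards the speakers describe, whether the rollup is at the clinic, facility, or VISN level.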
Actually, my question is a very personal one, because we're just starting our Epic journey at Emory, and I would love to be able to use the BAM. But when I looked in my training, I didn't see it in what was listed. So how do I get it? It's obviously there somewhere in one of the Cadillac models, perhaps? Yeah, well, there's a repository of things that have been built within Epic, and you can go to the repository and ask your Epic analyst to pull it into your system. It doesn't work perfectly, but that's one way to decrease IT analyst time: it's already been built for Kaiser, built for a program in Colorado, and so you pull it from the repository and then do some modifications. It's within an Epic warehouse. Epic repository is what I mean. Although it depends on your hospital system. Our hospital system was like, oh, we can't actually translate that, or the features of the S2BI, the Screening to Brief Intervention, that were built for a hospital in Colorado don't quite match how we want to implement it. So it can be a starting point to decrease the number of hours it's going to require an Epic IT analyst to build it within your system. But that's one resource. And I'm happy to talk further offline. Thank you. I just want to thank you all for your attention and for coming to this conversation. If there's more you wanted to discuss, please find us. I will be around for the conference. Enjoy the case conference and the rest of the afternoon. Thank you.
Video Summary
The video content focuses on the implementation of patient-reported outcome measures (PROMs) and measurement-based care in behavioral health clinics. It highlights the benefits and challenges of using digital measurement feedback systems through electronic health records (EHRs). The speakers discuss disparities in patient access and engagement with PROMs, and the need for collaboration and feedback from stakeholders to ensure equitable implementation. The video transcript specifically discusses the implementation of measurement-based care using PROMs in the Epic system and emphasizes the importance of accessing, understanding, and visualizing the data produced. It also addresses challenges such as integrating PROMs into workflows and promoting equity in data availability. The transcript emphasizes the need for collaboration between clinicians, administrators, IT professionals, and patients to effectively implement measurement-based care and improve patient outcomes. Overall, the video and transcript provide insights on the benefits and practical considerations of implementing PROMs and measurement-based care in behavioral health settings.
Keywords
patient-reported outcome measures
PROMs
measurement-based care
behavioral health clinics
digital measurement feedback systems
electronic health records
disparities in patient access
collaboration
equitable implementation
data visualization
workflow integration
patient outcomes