Episode 109:
Stay Compliant, Stay Connected: Navigating Healthcare Outreach
How can AI revolutionize patient engagement while staying compliant? Find out in our latest Compliance Conversations episode.
In this episode of Compliance Conversations, CJ Wolf and Dan Fox explore the intersection of AI and healthcare compliance. From proactive patient outreach to regulatory risks, they break down how organizations can leverage AI while staying compliant.
What You'll Learn:
- How AI-driven communication is enhancing patient engagement
- The biggest compliance challenges organizations face
- Practical examples of AI implementation in healthcare
Don’t miss this deep dive into AI, compliance, and the future of patient outreach.
Looking for more? We have another Compliance Conversations episode focused on AI and healthcare -- check it out!
Interested in being a guest on the show? Email CJ directly here.
Episode Transcript
Welcome, everybody, to another episode of Compliance Conversations. I am CJ Wolf with Healthicity.
And today, we're gonna talk about some interesting things about health communication and outreach to patients and those sorts of things. And we're excited to have our guest, Dan Fox, on the episode today. Welcome to the podcast, Dan.
Yeah. Thanks for having me.
So today, we have a fox and wolf.
So I think we're gonna be good on the predator front here.
That's funny.
I once was in a church congregation where we had a family with the last name of Fox, and people would always bring it up. They'd be like, this is the Fox family. This is the Wolf family. No.
No. No. Totally. Yep.
So, Dan, we're glad to have you, and we'd like to have our guests take a moment at the beginning of the podcast just to tell us a little bit about themselves. Feel free to tell us what you've done professionally, where you are, what you're doing now. Anything you wanna share?
Yeah. Sure. So, Dan Fox, I head up our health care team at Drips. I've spent around the last fifteen years in AI technologies, specifically conversational AI technologies, with a background in speech recognition as well as natural language processing.
And in the last five years, I've helped Drips build out our health care vertical, which is now around half of the business that we support today.
Wow.
That's awesome. And, you know, when we got connected, I thought this was gonna be a really great topic because, you know, I'm not an AI expert, and a lot of our audience, right, we're compliance folks in health care.
We might not be AI experts ourselves, but we know our organizations, our companies, and everyone really is starting to rely more and more on AI, looking for ways to improve efficiencies and those sorts of things. And where compliance gets involved is, okay, somebody in our business line wants to use AI to do x, y, or z.
As a compliance officer, what do I need to look out for, you know, in health care and those sorts of things? So I thought this was a great topic. I'm really glad you brought it to us. And so, if you don't mind, why don't we just kinda jump in with me asking, you know, how is AI-driven communication transforming patient engagement? And what are some of those biggest compliance challenges, from a high level?
Yeah. I would say if you look at how we communicate with both patients and members in a health plan, proactive communication is ripe for artificial intelligence. If you look at the limitations of how we use humans in that capacity, we're very constrained operationally, and we're very constrained in terms of keeping those folks on message, on target, and all of the HIPAA concerns around, like, leveraging people to do a lot of that proactive outreach. So I think there's a double-edged sword to it.
Right? Like, if you're introducing AI, there's so many compliance factors that come into play, and that's super important. Right? Because not all AI systems are built the same, and not all AI systems are ready for the market in the same way.
I would never recommend somebody just, like, builds a ChatGPT script and starts calling patients. And, you know, there's inherent bias built into AI systems as well that people need to factor in. But I think on the plus side, having a system that's always on message, that can be relevant, timely, conversational, and maintains compliance at scale while being infinitely scalable, represents a huge opportunity for you to offer an extension of your brand through an AI-generated capacity.
I think with that, there's a lot of constraints and factors that you need to take into account there. Like, what are you saying to folks? What are you comfortable saying? What are your guardrails?
How do you make sure that you stay on script? What happens if the AI doesn't recognize something? There's all of these considerations there. And I think we're kind of at the phase where, you know, we're seeing some early adopters take this on. But in terms of what consumers are ready for, I think we're seeing a change in behavior, but I don't necessarily know whether it matters to the consumer that much whether they're interacting with AI or a person, as long as they get an answer.
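To make the guardrail idea concrete, here is a minimal Python sketch of the pattern described above: outbound messages come only from compliance-approved templates, and any reply the system can't confidently map to a known intent is escalated to a person rather than answered automatically. The template, intents, and confidence threshold are illustrative assumptions, not Drips' actual implementation.

```python
# Hypothetical guardrail sketch: approved templates only, and escalate
# anything the system doesn't confidently recognize.

APPROVED_TEMPLATES = {
    "screening_invite": "Hi {first_name}, this is {org}. When would be a good "
                        "time to talk about a preventive screening?",
}

KNOWN_INTENTS = {"schedule", "defer", "opt_out", "question"}
CONFIDENCE_FLOOR = 0.80  # below this, the AI does not answer on its own


def render_outbound(template_id: str, **fields) -> str:
    """Only compliance-approved templates can ever be sent."""
    if template_id not in APPROVED_TEMPLATES:
        raise ValueError(f"Template {template_id!r} is not approved for outreach")
    return APPROVED_TEMPLATES[template_id].format(**fields)


def route_reply(intent: str, confidence: float) -> str:
    """Decide whether the AI stays on script or hands off to a human."""
    if intent not in KNOWN_INTENTS or confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"   # the "AI doesn't recognize it" case
    if intent == "opt_out":
        return "suppress_future_outreach"
    return f"handle_{intent}"


if __name__ == "__main__":
    print(render_outbound("screening_invite", first_name="Pat", org="Example Health"))
    print(route_reply("question", 0.55))  # low confidence -> escalate_to_human
```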
Yeah. I think you're spot on there. Tell me a little bit, when we talk about patient engagement, what are we talking about? Are we talking about AI systems reaching out to existing patients to help them follow up with certain medical issues? Are we talking about reaching out to patients or potential people who could become our patients? Or just kind of lay the landscape a little bit for me about what patient engagement activities we're talking about.
Yeah. Absolutely. So I think, you know, in our space traditionally, which is one facet of proactive communication, we focus on engaging both members and patients to drive them to certain care outcomes. So that could be medication adherence. That could be a social needs screener.
That could be driving them into specific care pathways. Could be following up with somebody after an ER visit. Any of those touch points when you proactively wanna connect with a person to drive a specific care outcome. That could have to do with, like, you know, patient acquisition, member acquisition efforts. It's definitely a use case for this. Could be the same thing with, you know, keeping a member in a plan that they're in. Like, think about Medicaid redeterminations.
There's a lot of confusion around that outreach, and it's not something that there's a lot of, you know, money and funding behind, because at the end of the day, people need to redetermine Medicaid through the state, and the plans can only contribute so much. So we see all of these entry points, I think, within a lot of the health care systems that we're working with. We've identified more than fifty use cases where that proactive outreach would then become something that you could turn into an outcome. And that outcome for the hospital system or for the plan could be digital self-service for those who are ready and able to embrace it, which comes with other factors. Right? If you look at the health care system, like, you know, how many of our patients actually have MyChart logins?
And it's great, like, you know, for the folks that do have MyChart to drive self-service, but I think we often forget about, you know, the ninety percent of folks that don't have MyChart access and what their solutions are there. And I think, you know, within the health care ecosystem, payers have always been comfortable using live agents to drive resources, to drive outcomes, because there's, like, you know, government funding that's associated with those programs.
But, you know, I think that exists in the provider space as well with value-based care arrangements. But the value hasn't always been there, because a lot of hospital systems are constrained with, you know, the amount of inbound calls that they're getting, so they haven't necessarily focused on an outbound capacity. But if you have AI to cover a lot of those outbound self-service touch points, then you could change the way that you think about outbound. And maybe you use AI to bite off a chunk of that, but you still leverage humans in a capacity when you do need to talk to them.
Like, think about revenue cycle management for the provider system. You know, the typical experience for me: I have, like, bills sitting on my desk, but it's like, my insurance should've covered that. I don't have the time to call the insurance. I don't have time to call the provider.
I'm just gonna let it sit there. Now what if AI reached out to me and a system said, hey, you just got this bill, and I could write back and say, I don't think that this was billed correctly. How would we handle that differently?
So now we don't have that. Like, now we just get these bill letters, you know, and then maybe I'll get a call, like, six months from now. And how many letters, how much cost was incurred, how much revenue was missed from not starting that conversation earlier?
Yeah. That's fascinating. You know, you mentioned MyChart. I actually was just using it yesterday because I wanted an answer. I'm planning some international travel to a place that, you know, I think I need immunizations for, and maybe some prophylactic medications to be prepared. I wanted answers right then and there, because I'm one of those people that might do just fine, for that topic at least, interacting with some sort of intelligence to get my answers right away.
Tell me a little bit about the channels. Is this voice, like somebody calls me on the phone? Is it texts? Is it email? Is it all of the above?
For Drips, our predominant channel is starting conversations over text. The reason being, you know, it's a place to mobilize people. For us, you know, we're recording this podcast right now, but we'll both go look at our phones after this. We're gonna get a litany of notifications, and that could be emails from Outlook, you know, maybe Facebook notifications, whatever else.
A lot of times, we pay attention to texts, and not mass texts. Like, I think we ignore a lot of those, like, you know, texts from Crate and Barrel. Like, hey, twenty-five percent off.
No offense Crate and Barrel.
Right. But if, like, you know, a friend or family member texts me, it's probably the first notification that I'm looking at. So what we found is if you can leverage text in the same capacity that you do with friends and family members, then you can really pull that person in and help them respond. There's a huge opportunity in voice as well.
I think we just deal with, you know, such an influx of calls that if we don't know a number, we're not picking up. Like, no one does. It doesn't matter, you know, who you are, what type of patient you are. I think we find specifically, like, Medicare folks may be more inclined to pick up the phone, but we're talking about, like, one percent, two percent.
On average, ninety percent of an audience will not pick up the phone for an outbound call. So we find if you can pull together channels, great. And, you know, I come from a speech background, and I see the industry of speech recognition advancing so much. So I think there's a huge opportunity for us to adapt to these voice systems, because the technology out there is just so incredibly fast moving.
The challenge is getting that person engaged on that voice channel, and that's why we use text. And that text could often just be a conduit to maybe getting somebody to engage on that voice channel. We just find it's, like, the most effective launching pad.
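As a rough sketch of that "text as a launching pad" approach, the snippet below starts on SMS and only moves to a voice call once the person has engaged or asked for one. The step names and the cap on nudges are assumptions for illustration, not vendor logic.

```python
# Hypothetical channel orchestration: text first, voice only after engagement.
from dataclasses import dataclass


@dataclass
class OutreachState:
    sms_sent: int = 0
    replied: bool = False
    asked_for_call: bool = False


def next_step(state: OutreachState, max_sms: int = 3) -> str:
    if state.asked_for_call:
        return "place_voice_call"        # person opted into the voice channel
    if state.replied:
        return "continue_sms_conversation"
    if state.sms_sent < max_sms:
        return "send_sms_nudge"          # keep trying the channel people actually read
    return "stop_and_log"                # don't keep dialing a number nobody answers


print(next_step(OutreachState(sms_sent=1, replied=True)))  # continue_sms_conversation
```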
That's great.
You know, we're gonna take a quick break. But, Dan, when we come back, I wanna have you answer this question about, you know, being in an industry where we have these strict regulations that govern patient outreach. You know, how can health care organizations balance compliance with this need for proactive engagement? So I'll let you think about that for a second, and we're gonna take a quick break, everybody, and we'll be right back.
Welcome back from the break, everybody. I'm here with Dan Fox, and we're talking about patient engagement and using AI to facilitate that. And just before the break, I was asking Dan to think about this question of, you know, being in an industry, health care, with strict regulations governing patient outreach, HIPAA privacy, all sorts of things. You have federal regs. You have state regs. You know, how can health care organizations balance compliance with the need to proactively engage patients? So what are your thoughts on that, Dan?
I'll give you my thoughts. You know? But, ultimately, you know, I'm just a guy that works for a vendor in the AI space. I'd love your opinion, too, being in the compliance space.
Right? Like, for us, what we find is when you look at these health care organizations, having a compliance road map is super important, but your road map isn't going to encompass all of the aspects of what these technologies are capable of, because they're ever changing. And we do find that there's a lot of gray area, because a lot of the compliance that we deal with has to do with case law. If you look at the TCPA, which is the primary law that governs outbound calling, it's a lot of case law.
I mean, there was a ruling, you know, in the Facebook case, prior to me even joining Drips, around whether texts and calls are even considered an automated dialing system. And that ruling hasn't really been challenged in a way where we have clear answers, because it takes so long for a lot of this case law to turn into actual, you know, what we're considering laws. So we find the same thing with HIPAA.
You know, if you're looking at a text message, for example, what's considered PHI to send in a text message? And I think that varies, and I don't think that there's one clear answer for every organization. So for some organizations, if you have, you know, really large exposure, if you are a company like UnitedHealthcare, where everyone's looking at your every move, you're one of the largest health care companies in the world, are people gonna scrutinize your messages more than, say, a state Medicaid plan's, for example? So I think there's different answers for different people.
But, yeah, on that HIPAA question: if you send a text message that says, hey, CJ, we wanna talk to you about a screening that you should be getting. Right? What are we implying? Like, if we say colonoscopy in that message, or colon cancer screening, we're implying that you're likely male. We're probably implying that you're on a Medicaid program or a Medicare program. Like, what is that boundary of PHI? So I think what's important for organizations is to figure out what's right for them and have a strategy mapped around that.
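As a toy illustration of the PHI question raised here, the sketch below checks an outbound draft against terms a compliance team might consider too revealing for unsecured SMS and swaps in more generic wording. The term list and policy are hypothetical; they are not guidance on what HIPAA permits.

```python
import re

# Terms a compliance team might deem too revealing for unsecured SMS,
# mapped to generic substitutes. Hypothetical list for illustration only.
SENSITIVE_TERMS = {
    "colonoscopy": "recommended screening",
    "colon cancer screening": "recommended screening",
    "cervical cancer screening": "preventive screening",
}


def redact_for_sms(draft: str) -> tuple[str, bool]:
    """Return (possibly rewritten draft, whether anything was redacted)."""
    text, redacted = draft, False
    for term, generic in SENSITIVE_TERMS.items():
        if term in text.lower():
            text = re.sub(term, generic, text, flags=re.IGNORECASE)
            redacted = True
    return text, redacted


msg, changed = redact_for_sms("Hi CJ, it's time to schedule your colonoscopy.")
print(msg)      # Hi CJ, it's time to schedule your recommended screening.
print(changed)  # True
```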
Yeah.
So I don't know if this has been AI or not, but I have a health system here that I use regularly. When I have appointments, I get an automatic text, you know, a few days before, saying, you know, fill out the intake form, confirm that you're coming, and it all works great. Well, this last week, I was getting texts for Maria, and I won't say the last name, but it was a Hispanic name. And they're saying, you know, you're scheduled for your mammography this week. And I'm like, well, first of all, I'm not getting a mammography, and I'm not Maria blank.
Yep. And so I kept getting this message. I even, you know, clicked on the link to try to reply and say this is not the person you think it is, and the form was in Spanish. So it seems like they had somehow gotten their communication swapped. Again, I don't know if this was AI or not. But as a compliance officer, those are the kinds of things I worry about: you know, do I have something like that going on?
So as compliance officers, our nature is to worry about, like, the worst-case scenario. We're not really thinking, you know, this is the best thing since sliced bread, we're gonna do all these great things. We're always thinking about the one in a million.
Well, what if this happens? Right?
Yeah.
And so that's the thing that crosses my mind sometimes.
And knowing, and having kinda confidence, that those things will or will not happen is something that's on my mind. I don't really know how you address that, other than if you have, like, some sort of testing or vetting that shows results. And even when you get things perfect, or you try to get things perfect, there's always gonna be missteps. That may have been a one in a million, or there may have been ten thousand patients getting mixed up this week.
I don't know. I was just kind of on the recipient end.
And just, you know
Yeah.
Nothing that you're going to build with AI, and nothing without AI, is going to be infallible. It's the same way with live people. Right? Like, people don't always stick to the script either.
So I think when we go into an organization who's looking for a bulletproof AI strategy, what we present is, like, hey, we built a ton of guardrails around this. A recent example: we recognize a lot of different ways that people can opt out of our system. So if somebody says I'm not interested, we can recognize that.
And that's one of the strengths that we bring to the table. That's not built into every platform. And there's an ongoing case around somebody who wrote "duck off" in a text message. Okay.
And the plan didn't recognize that. Yeah. How do you accommodate that? Right?
So at the end of the day, if you're looking at these programs, the value that they bring to the business has to supersede any of the risk that you are exposing, because at the end of the day, no system is going to be one hundred percent infallible.
And that situation that you described, that could have been, you know, a reassigned number that somebody switched. That could be, you know, SMS systems getting their wires crossed. It could be bad data. There are so many points of failure within these systems that the best that you can do is maintain a high compliance audit of the systems that you have going on, especially the systems that are touching individual members or patients, and make sure that you're doing everything that you can to limit the threat that you're exposing the organization to.
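A small sketch of the opt-out problem behind the "duck off" anecdote: carrier keywords like STOP are easy to catch, but a system also has to treat profanity, "leave me alone," and similar replies as revocations of consent. The phrase list and matching below are simplified assumptions.

```python
# Hypothetical opt-out recognition: standard keywords plus informal phrasings.

STANDARD_KEYWORDS = {"stop", "stopall", "unsubscribe", "cancel", "end", "quit"}
INFORMAL_OPT_OUTS = (
    "leave me alone",
    "don't text me",
    "do not text me",
    "remove me",
    "wrong number",
    "duck off",          # autocorrected profanity still means "stop texting me"
)


def is_opt_out(reply: str) -> bool:
    text = reply.strip().lower()
    if text in STANDARD_KEYWORDS:
        return True
    return any(phrase in text for phrase in INFORMAL_OPT_OUTS)


for reply in ("STOP", "please duck off", "yes, 2pm works"):
    print(reply, "->", is_opt_out(reply))
# STOP -> True, "please duck off" -> True, "yes, 2pm works" -> False
```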
Yeah. And I like what you said at the beginning: nothing's infallible. I've been in health care long enough to remember when we used to fax people things. Okay.
And people would, you know, you get your fat fingers and you type the wrong number, and it would get faxed. You got all of the letters or numbers except for one digit right, and so it got faxed to the wrong person. So that happened. So like what you said, nothing's one hundred percent infallible.
And I think as compliance officers, we need to see, okay, this is the value to the business. But I like what you said: here are the guardrails. And so I think in a compliance conversation, you know, a vendor or somebody in our operations coming to the table where they know compliance is there, just being able to, you know, clarify what you just said, which is: these are the guardrails we have in place.
This is what we're doing, because you can't eliminate all risk. There's one sure way to eliminate risk: don't be in the health care business.
Right.
Well, then you're not doing what you wanna do. So, yeah, we can't have it that way. The other thing I would say on compliance, Dan, is, you know, the Department of Justice enforces a lot of health care compliance cases.
They recently updated their compliance guidance with a paragraph about AI. They didn't get specific about systems, and they're talking to industries other than health care as well. So it's kind of a high-level guidance document for compliance officers.
And they just said, look. AI is around.
You just need to be thinking, as your organization uses AI, what new risks might arise that you didn't think about before. So I think we are being prompted just to be thoughtful, and these kinds of conversations are helpful, because I am not an AI expert. I don't know what, you know, it brings to the table as far as risks that might be different from the way we used to do things.
So that's about it.
And AI is such a buzzword, and it kinda
Right.
It encompasses a lot. I think a lot of people see artificial intelligence as something mirroring human intelligence in a way that we're comfortable with, like the way that we speak and the way that we engage with these systems. I had a great mentor years ago who worked for Bell Labs. And, you know, an interesting concept that he brought up is, like, look at Google Maps. Right?
We go into Google Maps, and we type in a location, and then it goes into a system and calculates: here's your current location, here's the destination that I'm going to, and here's the route that's gonna take the least amount of time to get there. Did I say, like, hey, Siri, find me the fastest route to get to my location?
No. I didn't prompt it in a human-like way, but I did prompt an app in a way that delivered a level of intelligence that would mirror, like, you know, what would be a manual process in the past. Like, there's nothing to say that, you know, Google Maps, which we've been using for the past, you know, two decades, isn't a form of AI. We're just seeing these promptings and advancements in natural language processing to emulate human behavior, and that's what we're calling artificial intelligence.
But I think for health care organizations, we've been using AI in a lot of different capacities for a long time. We may have just not called it AI.
Yeah. That's such a great point. And, you know, you may not be aware of this, but I know some of our listeners might be, and some aren't. For those that aren't, we did a podcast episode on AI in the revenue cycle space. You referred to that a little earlier. And so I'd refer some of our listeners to that episode as well, talking about medical coding and kind of revenue cycle, those sorts of things. So we are using it in lots of places.
Dan, I did wanna ask you, since you are in the vendor space, I was curious if you could share an example of how Drips has helped a health care organization improve patient engagement and compliance through kind of AI-driven communication. We'd love to hear maybe a case study, if you could share something like that.
Sure. Yeah. I think, you know, we often speak about Margaret. So, you know, Margaret is a Medicaid member in a small state, call it Arkansas, for example.
And Margaret is on that Medicaid plan, and she has never had a cervical cancer screening.
And because of that, she's exposed to a lot of risk, but, you know, Margaret's very busy. She has two jobs, and she has kids, and she's trying to manage her time. And what we've been able to show is if you can reach out to Margaret in a way that makes her care about her health in a way that she may have not cared about previously, because you do have a lot of things going on when you're managing kids and two jobs, and not everything is very important to you at that time, especially if you're on Medicaid. You may be, you know, on SNAP programs as well, so you're having to re-upload applications all the time.
And, you know, getting a cervical cancer screening, getting a proactive preventative care screening, is not the most important thing to you. So if we can ask Margaret a question, and we did have this one example where we said, hey, Margaret, you know, when is a good time to talk about a cervical cancer screening test or a preventative screening? And what we got her to do is kinda bite in, and she didn't say, I'm ready now.
Call me now. But she said, I'm really busy. Do you mind reaching out to me in two weeks?
So think about if that was a human reaching out. Like, chances are, you know, Margaret would have been on the list. She would have got a call from a number she didn't recognize.
And maybe if she picked up and talked to somebody, she would have been like, yeah, I'm busy. Can you call me back? Would that person have actually called her back in two weeks?
It's just a constant game of phone tag, and chances are Margaret's not gonna engage. But when we said, hey. We're gonna reach out to you in two weeks, we did. And we had this flow where we reached out to her two weeks later.
We said, hey, Margaret, is now a better time to talk about this? And she presented an objection, and she said something like, I think I've had a screening of this type before. I don't think I need this.
And we kinda had this conversation with her all using AI. And, eventually, we got her on the phone with somebody on that Medicaid plan's scheduling team. And what they were able to do is identify that she hadn't had a screening before, and they got her scheduled for a screening.
And because of that, you know, we were able to identify some flags that would call for some further action from Margaret. And this is one example of hundreds of thousands of opportunities to drive preventative care across the health care ecosystem that we're not acting on. So because of that, like, yes, you're gonna have an impact on quality. You're gonna have an impact on the amount of outcomes that you're delivering. But I think it's important to remember what's behind all of these numbers and metrics, because we can recount case studies around, like, hey,
we improved quality scores by x, y, z. I think what's most important is looking at the individuals that are impacted, looking at people like Margaret, and being able to say, hey, did we turn that person's day around? Did we actually improve that person's health outcomes?
And that's what's most important.
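As a rough sketch of the deferred follow-up flow in the Margaret example: when someone says "reach out in two weeks," record a callback date, actually send the follow-up on that date, and hand off to a human scheduler once there's an objection the AI shouldn't argue with. The field names and matching rules are hypothetical.

```python
# Hypothetical deferred-outreach flow: keep the two-week promise, escalate objections.
from datetime import date, timedelta
from dataclasses import dataclass, field


@dataclass
class OutreachCase:
    member_id: str
    status: str = "new"                    # new -> deferred -> human_handoff
    follow_up_on: date | None = None
    notes: list[str] = field(default_factory=list)


def handle_reply(case: OutreachCase, reply: str, today: date) -> OutreachCase:
    text = reply.lower()
    if "two weeks" in text or "later" in text:
        case.status = "deferred"
        case.follow_up_on = today + timedelta(weeks=2)   # keep the promise a human often can't
    elif "already had" in text or "don't need" in text:
        case.status = "human_handoff"                    # objection: route to the plan's scheduling team
        case.notes.append(reply)
    return case


case = OutreachCase(member_id="margaret-001")
case = handle_reply(case, "I'm really busy, can you reach out in two weeks?", date(2025, 3, 1))
print(case.status, case.follow_up_on)   # deferred 2025-03-15
```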
Dan, I think that's such a great example. As you were sharing that, I was thinking how that would have worked in my own life. So, you know, I'm in my early fifties, and I'm very busy during the week. And if a human being called me during the week and said, you know, when's the last prostate exam you got, or when's the last colonoscopy you got?
I'd be like, I am so busy. I probably wouldn't have even picked up the phone like you had referred to earlier. But if I'd gotten a message and then I could reply and say, you know what? During the week is terrible.
But I am interested in my health and proactively thinking about this, so how about Sunday morning? Right? So send me a script Sunday morning. That human being's not working Sunday morning, but AI could be, potentially, and it could send me a script Sunday morning when I'm in a little bit more of a relaxed zone, thinking about my health.
And, okay, now I'll spend thirty minutes kinda considering this.
Yeah.
Looking at my schedule and stuff. So I really like that kinda case study, and I could see that working in so many different ways.
And that's where we're at today in twenty twenty-five. Now imagine that we reached out to you about that, you know, colon cancer screening, and we asked you a question like, hey, would you prefer to do this via an at-home test? And without you even having to go anywhere, we confirmed your address.
We shipped you a kit. Your provider got involved, and you got a kit delivered to your door. And then we popped in with a video that said, here's how you use this kit. And then we persisted if you didn't return it.
Hey. Do you have trouble, you know, returning this kit? And then we got you to actually return that test kit, without ever having to leave the house and without ever having to get on the phone with anyone. Yeah.
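A loose sketch of the at-home test-kit flow just described: confirm the address over text, ship the kit, follow with a how-to video, then nudge until the kit comes back. The step names are illustrative assumptions.

```python
# Hypothetical step sequence for the at-home kit workflow.
KIT_FLOW = [
    "confirm_mailing_address",
    "notify_provider_and_ship_kit",
    "send_how_to_video",
    "remind_until_returned",
]


def next_kit_step(completed: set[str]) -> str | None:
    """Return the first step not yet completed, or None when the flow is done."""
    for step in KIT_FLOW:
        if step not in completed:
            return step
    return None


print(next_kit_step({"confirm_mailing_address"}))  # notify_provider_and_ship_kit
print(next_kit_step(set(KIT_FLOW)))                # None
```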
Yeah. I mean, the possibilities are almost endless here, and I could see all the permutations and ways that you could really cater to somebody's life or busy schedule or preferences. Right?
Like you're describing with Margaret. Just great examples. And I come from a clinical background, in training originally, before I got into compliance. So I can see how all of these things I mean, who doesn't want to have good health?
I mean, everyone wants to have good health. Right? It's just the things you describe. You're busy.
You're taking care of two kids and two jobs, and now is not the right time. I'm not saying I don't ever wanna hear about this. It's just, can you cater to my schedule, cater to my preferences a little bit more, and eventually get what we need from a health perspective? So great example.
Dan, you know, I could talk to you all day about this, but, unfortunately, we're getting close to the end of our time. And I'd love it if you have any last-minute thoughts. I'll kinda give you the last word on this topic, maybe something we didn't share, or just reiterate something that we did. Love to hear kind of your last thoughts here.
Yeah. I think it's important for, you know, anyone who's in the compliance space to not just recognize, like, hey, artificial intelligence is coming, and it's big and scary, but to recognize that AI is already here, and it's pervasive in a lot of the systems that we're already using. And it's important to get beyond the buzzword and figure out what this system is.
What is the system doing? What are the systems that it engages with? What are the potential risks? Consult compliance teams.
Consult legal departments. The answers are out there for this, and it is a gray area in a lot of cases, but there is a strategy to develop here. So rather than saying, hey, we're not gonna use AI, or, hey,
we're not gonna, you know, expose ourselves to certain types of risk, it's important to look at the business value that you could drive from some of these systems. And as long as it fits into your AI road map or compliance road map strategy, then figure out what's safe enough for you to pursue and develop a strategy around.
Yeah. Great advice. Well, Dan, it's been a pleasure meeting with you and talking to you about this. Thank you so much for your time.
Yeah. Thank you, CJ.
And thank you to all of our listeners. We'll include contact information for Dan and his organization, so those who are interested in learning more from him can reach out to him directly. And we thank all of our listeners for listening.
And as we always do, we welcome your input. If you know of a guest or a subject that you'd like to have highlighted on the podcast, please reach out to us. We'd love to include those types of things. So until next time, everybody.
Take care.
This transcript has been auto-generated. Please forgive any errors.