Compliance Conversations Podcast: Artificial Intelligence in Healthcare Compliance

In this episode of Compliance Conversations, our host CJ Wolf, MD, sits down with Amy Brown, CEO and founder of Authenticx, to talk about artificial intelligence in healthcare compliance. From social work to compliance AI technology, Amy opens up about her journey from health policy, specifically Medicaid, to creating a conversational data platform.

After an organization stores conversational data (i.e., the chat thread where you’re asking a nurse at the clinic if you can mix your allergy medicine with your heart medication), Authenticx can organize all of that messy conversational data. Once it’s collected and flagged for compliance risks, it’s passed on to the humans for final review.

Why would anyone want to bother with organizing old conversations? And why are companies recording us in the first place?

Healthcare organizations often listen to these conversations to help solve compliance issues such as HIPAA violations and missed informed consent. Unstructured conversational data is critical to helping healthcare organizations improve, but it’s also very disorganized. AI helps get it all sorted. But then the last step, the human step, is necessary because the context of a conversation is essential for leaders to listen to and understand. Still, there are millions of these conversations in a healthcare organization, and around 20% of them contain evidence of compliance risk.

Tune into this episode of "Compliance Conversations: Artificial Intelligence in Healthcare Compliance" anywhere you get your podcasts. In this episode, you’ll learn how AI is used to reduce compliance risks and inform the FDA about adverse events. You’ll also learn how:

  • Healthcare Organizations Store Chat Histories
  • Unstructured (Messy) Conversational Data is Organized
  • Your Privacy is Protected, and Personal Information is Redacted

Listen Now >>

Compliance Conversations Podcast


Episode Transcript

CJ: Welcome everybody to another episode of Compliance Conversations. I am CJ Wolf with Healthicity and today’s guest is Amy Brown. Hello Amy.

Amy: Hi, good to be here.

CJ: Glad to have you. Amy is the CEO and founder of Authenticx, and we’re going to talk a little bit more about that service, that offering, and how it might help in a compliance-type setting. But Amy, before we get into that, we just love to talk to people and find out:

Question: What brings you into the compliance space?

We all come from different backgrounds. Who grows up thinking they’re going to be a compliance officer or work in compliance? My kids are like, “Dad, what do you really do?” So it’s kind of fun to hear a little bit about your background, maybe how you ended up where you are right now, and then we’ll get into more specifics.

Amy: That sounds great. I bet I have one of the most unique backgrounds as to how I got into it, and it will be very strange to your listeners how I ended up in compliance. My background, from an education perspective: I actually got my Master’s in social work.

CJ: Okay.

Amy: But I knew coming out of that program that my gifts were more at the macro, systemic level than at the interpersonal social work level, so I went into state government. I worked for two governors on health policy matters, particularly in the area of Medicaid, and then left state government for the private sector. My experience was predominantly in healthcare, and within healthcare, predominantly the health insurance managed care space. Probably because of my social work background and my experience in state government, I really cared a lot about vulnerable populations, so I got involved with Medicare and Medicaid programs in the private sector in addition to commercial insurance. For a large part of my career, I was learning how to manage heavily regulated programs, and I became very familiar with quality management programs. I also spent about four years on a detour working in life sciences, in the pharmaceutical space, and entered the compliance and regulatory landscape of the FDA and big pharma. So I have always played a role in compliance because I’ve been an operator in these highly regulated organizations; it just came with the territory.

CJ: Absolutely, and your story, though unique, is probably not too shocking because a lot of us come from different backgrounds. I’m an MD by schooling, so a little similar to you, it sounds like. I wasn’t passionate about direct one-on-one patient care; I loved administration, so I got into that. I also worked for an international medical device company, so I have a little experience there. We have some commonalities, and our listeners come from all different backgrounds, so thank you for sharing yours.

We want to talk a little bit today about your founding and leading Authenticx. Tell us a little bit about what Authenticx is.

Question: What problems does it solve, and why should compliance folks be listening?

Amy: Sure. You can think of Authenticx as a conversational data platform. Large healthcare organizations, and most organizations outside of healthcare, have some way of communicating with their customers, usually a combination of call centers, chats, and emails. All of that gets created and stored as unstructured conversational data, and as a healthcare operator, I became really close to that data source because my teams were the ones amassing it. After listening to these interactions, I realized there were all kinds of insights within those conversations that can help organizations improve and do better, so Authenticx was founded to help healthcare organizations listen to those conversations at scale to solve certain problems. One of the problems we help solve is compliance. Most compliance organizations that don’t leverage AI technology like what Authenticx brings to the table have some sort of quality program, and it usually requires humans listening to these conversations for things like HIPAA compliance, informed consent, GDPR compliance, and sticking to a script, all of which are important to healthcare organizations. What we do is take that conversational data into our platform and, using AI models rather than humans, identify compliance risk; the conversations that contain compliance risk are then funneled for human evaluation. It essentially allows auditors and QA teams to leverage AI to much more efficiently find the conversational data most likely to contain high-risk situations for the company, so they can understand the context by actually listening to the interaction, or reading it if it’s an email or chat, and then make changes because of it. We also have a workflow that notifies leaders immediately when there is a non-compliant interaction so they can train and coach very quickly and responsively.
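The triage flow Amy describes, where a model scores every conversation and only flagged items reach human reviewers, can be sketched roughly as follows. This is an illustrative toy, not Authenticx’s actual system: the phrase-based `risk_score` is a hypothetical stand-in for a trained AI model.

```python
# Sketch of AI-assisted compliance triage: score each conversation for risk,
# then funnel only high-risk items into the human review queue.
# RISK_PHRASES and the scoring logic are illustrative assumptions only.

RISK_PHRASES = ["social security", "side effect", "didn't consent", "headache"]

def risk_score(transcript: str) -> float:
    """Toy scorer: fraction of risk phrases present (a real system would use a trained model)."""
    hits = sum(phrase in transcript.lower() for phrase in RISK_PHRASES)
    return hits / len(RISK_PHRASES)

def triage(conversations: list[str], threshold: float = 0.25) -> list[str]:
    """Return only the conversations risky enough to warrant human evaluation."""
    return [c for c in conversations if risk_score(c) >= threshold]

calls = [
    "I'd like to reschedule my appointment for Tuesday.",
    "After starting the new dose I had a headache and felt a side effect.",
]
review_queue = triage(calls)  # only the second call is flagged for review
```

The point of the design, as Amy explains it, is that the model narrows millions of interactions down to the small fraction worth a human’s limited time; the human still makes the final judgment with full context.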

CJ: That is fascinating. I just returned from the Health Care Compliance Association’s major annual conference, the Compliance Institute, in Phoenix, and AI and these newer technologies were a pretty hot topic. You know, compliance 1.0 is paper and pencil and spreadsheets and creating a paper policy manual. We left that years ago; we’re into compliance 2.0 with analytics, and maybe compliance 3.0 is this artificial intelligence you’re talking about. We need to mature as compliance programs to use these tools, as you’re saying. And as an MD, I’m like, data? There’s a whole body of data there that we could mine and use in a smart way to try to find risks. Rather than humans always thinking about what the risks are, what does the data tell us about the risks? Is that right? Do I understand that right?

Amy: Absolutely. Unstructured conversational data is so insightful, but it’s such a messy data source. It’s unwieldy because it exists in free-flowing form, right? So what we’re trying to say is: hey, the context of a conversation is very important for leaders to listen to and understand, but there are millions and millions of these conversations happening in the average-sized healthcare organization. How do we more effectively turn unstructured data into structured insights using AI? So: hey, there are a hundred thousand conversations on our platform from yesterday, but 20% of them contain some evidence of compliance risk, and that evidence is around HIPAA or adverse events that patients might be reporting, and now we need humans to apply their judgment to understand more deeply what’s going on in those interactions. Use technology to make sense of it, to identify whether it’s a true risk or not, so teams can make a concerted effort with the limited resources they have.

CJ: That’s fascinating and without sharing anything that’s confidential;

Question: Could you share an example, HIPAA, quality, or whatever you think is appropriate, of how this works: it goes from unstructured data, through your AI platform, and then a human, a compliance officer or whoever, takes some kind of action?

Amy: Sure, I’m going to use the example of adverse events; everybody can relate to this. When you’re prescribed a medication, the manufacturer of that medication is required to report any knowledge of a patient experiencing a side effect, even if that side effect is expected. If you watch commercials on TV between your programs, you’ll see drug commercials, and they typically have a 1-800 number at the bottom. Well, those companies are required to record those conversations and identify whether or not a patient is reporting an adverse event, and if one is reported, they are required to recognize that and report it to the FDA. In many cases, they use outsourced vendors as their call centers, and there are hundreds, if not thousands, of agents taking these calls every day. And, human nature being what it is, not every person absorbs the training and the protocols in the same way, or their interpretations might differ. So what our AI does is identify the potential presence of an adverse event in a conversation, while also identifying whether that adverse event appears to have been recognized and reported by the agent on the other side of the phone, or whether it was missed, and that’s what matters to the FDA. Side effects are expected, but when an agent misses one, that’s when it becomes a non-compliance issue. Our platform is designed to find and funnel those interactions where there is evidence of non-compliance to the humans at those organizations whose job it is to track them down.

CJ: Yeah, so a patient calls in; they never say, “I am now reporting an adverse event.” That keyword or phrase is not there. They’re using some other sort of language, and I’m assuming the AI somehow picks up on keywords, or it has some sort of algorithm or methodology to identify that.

Question: Can you share a little bit about that?

Amy: Yes, and actually the way we’ve approached training our AI models is very different from the typical approach. The typical way to train an AI algorithm is to use open-source data, in this case, a lot of the medical terminology that exists around adverse events, and the reality is that human beings, healthcare providers, and healthcare consumers don’t talk like that, right?

CJ: Right, exactly.

Amy: They use human words to describe their symptoms, and you’re right, CJ, no one calls and says, “I’m calling to report an adverse event.” That’s not what happens in the real world. What happens in the real world is a patient calls to talk to a nurse about their medication, and in the natural flow of the conversation they mention, “Oh, I’m feeling really tired, and I had a headache yesterday.” So the way we have trained our AI models, and all AI is trained by human beings, by the way, is that we’ve hired social workers, nurses, and customer experience professionals who have worked in healthcare and who are highly trained in how ordinary, non-medically-trained consumers talk about symptoms. That’s what has gone into training our models, so they’re more accurate and more usable.

CJ: So fascinating. My mind is running a million miles an hour right now. I’m also an educator, and I’m thinking that would even be cool AI for training medical students and residents on how patients report symptoms. My mind is going down all these other paths, but I don’t want to take you there. Really fascinating, and that was a good example. I’m thinking that example suggests a potential client, maybe the pharma or medical device industry.

Question: Are those appropriate clients and are others like hospitals and doctors’ offices?

Who do you think are the right clients for this type of technology?

Amy: Yes, so we work with health and hospital systems, we work with health insurance companies, and we work with pharmaceutical manufacturers. Really, all of them have regulatory compliance issues. All of them have HIPAA requirements; all of them, if they are recording their interactions, have to disclose that they’re recording, so there’s informed consent. Many CMS, Medicare, and Medicaid-related regulations are compliance issues too: if an organization receives any Medicare or Medicaid funding, it is required to meet the compliance terms of those programs. So there are a lot of use cases in which those other client types, payers and providers, use our platform to help them stay compliant.

CJ: Yeah. So, what do you think? Maybe we’ve talked about this already, and if so, you can just say so.

Question: What do you think are some of the biggest challenges that compliance leaders face? A lot of our listeners are probably coming from the health system and hospital space, and our chief compliance officers may interact with patient safety and quality programs.

Amy: Yes, at the very meta level, I think one of the challenges compliance leaders face is that there’s often not a lot of investment made in staying compliant until you’re already in trouble.

CJ: Exactly [laughs].

Amy: And so trying to gain resources and support, and convincing the decision-makers where to invest those resources, is a real challenge for compliance leaders. Secondly, what we’ve learned from studying conversations, millions of them, is that human beings are the ones carrying out those compliance expectations, and as you know from being an educator, people learn and understand what they’re taught differently. One of the biggest things we’ve uncovered is that human beings aren’t trying to be non-compliant; they just don’t understand what the rules are. We have uncovered a vast array of inconsistencies among folks who have received training in how they deploy and execute what compliance really looks like and means. So one of the challenges for compliance officers is making sure there is really good common alignment on what those expectations are, so that when an audit comes, you can be confident there’s a high degree of consistency in compliant behavior.

CJ: Yeah interesting. I’m going to take you back a little bit to the beginning of our conversation where we were talking about where that data comes from. Because I’m kind of curious, you mentioned chats, emails, and recorded conversations.

Question: Like a patient portal? If a patient is communicating via the chat function in a patient portal, is that also included? Could you tell me a little bit more about where the data comes from and some of the concerns?

A compliance person might say, “Whoa, am I going to be listening in on that conversation? Does that person know?” You talked about consent and that sort of thing.

Question: Could you talk about the privacy issues compliance officers might have with analyzing this data, so that patients and individuals don’t feel like Big Brother is watching?

Amy: Sure, that’s a really good question. Let me start with real-life scenarios: what are these conversations? In today’s world, most organizations have multiple ways you can communicate with them. It might be a portal chat, it might be email; in many cases it’s still a call center. And the types of conversations? Well, maybe I’m calling to schedule my appointment, or I’m getting ready for a surgery and a nurse is calling me to pre-register. Or, on the insurance side, I received a claim or a bill and I’m calling my insurer to understand why it didn’t pay the way I thought, or I found out there’s a prior authorization on a service my doctor just prescribed and I need to call my insurance company to get it approved. There are just endless use cases, nurse triage lines, all these types of things, and in all of them patients are identifying themselves, so there’s PII. In many of them, personal health information is being shared, and in all of them there’s at least one, or a handful, of compliance requirements on the part of the organization having that conversation, right? Protecting the information, making sure they have consent, all of those things.

So how do compliance officers become comfortable with analyzing this data source? First, by realizing the value it can bring to actually improving compliance rates: studying, at a macro and then a granular level, why non-compliant conversations are happening. You are only going to know the truth if you go to the source of the truth, so there’s first the use case. Then, in terms of protecting the data, our technology, when we take in customer conversations, can redact all personally identifying information in an automated way before it ever gets to human ears. It can redact the patient’s name, any credit card number, any Social Security number, address, date of birth, all of that. We also have the ability to do voice obfuscation, which means modulating the voice so that you keep the tone and can still hear it clearly, but it’s no longer identifiable to the person. All of those are ways compliance officers get comfortable with studying where the compliance issues are, understanding their quantity and type, and then going deeper and listening to a few directly to really understand that these conversations are not a black-and-white matter. There’s a lot left to interpretation, and if you don’t realize that as a compliance leader, you’re missing an important part of how to move forward.
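The automated redaction step Amy describes can be pictured with a minimal sketch. This is not Authenticx’s implementation; the patterns below are illustrative assumptions, and a production system would cover far more identifiers (names, addresses, member IDs) and would likely pair patterns with a trained entity-recognition model.

```python
import re

# Minimal sketch of automated PII redaction: scrub identifiers from a
# transcript before any human reviewer sees it. Patterns are illustrative only.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(transcript: str) -> str:
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label} REDACTED]", transcript)
    return transcript

line = "My SSN is 123-45-6789 and my birthday is 4/12/1970."
clean = redact(line)  # identifiers replaced with [SSN REDACTED] / [DOB REDACTED]
```

Redacting before human review, rather than after, is the design point Amy emphasizes: reviewers get the context of the conversation without ever being exposed to the identifiers.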

CJ: So, help me understand. Let’s say I love the product and I want to buy, rent, or pay for it.

Question: Is it a cloud-based program? Are you coming in? Are we funneling in phone conversation recordings and emails? Tell me the mechanics of setting up an organization’s data for your system and how that might work.

Amy: So we are a software-as-a-service company. We are cloud-based, on Azure, and because we work in healthcare, we have had to pass all the tests from a security perspective. Typically, we work with our client to identify what data sources need or want to be analyzed. If they’re phone conversations, we receive the recordings from the client’s telephony platform, usually through a batched, secure FTP process; for more sophisticated phone systems, we can connect via an API, and that data just flows into our platform. With chat platforms, there is typically an ability to do batched extraction of files, and we provide a secure method to get those files into our speech analytics platform.
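The batched intake Amy mentions can be sketched in a few lines: collect the recordings a telephony platform has exported and group them into fixed-size batches for a secure bulk transfer (e.g., over SFTP or an API). This is a hypothetical illustration, not Authenticx’s pipeline; the directory layout, `.wav` extension, and batch size are assumptions.

```python
import tempfile
from pathlib import Path

def batch_recordings(export_dir: Path, batch_size: int = 100) -> list[list[Path]]:
    """Group call recordings into fixed-size batches for a secure bulk upload."""
    files = sorted(export_dir.glob("*.wav"))
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]

# Demo with a temporary directory standing in for the telephony export folder.
with tempfile.TemporaryDirectory() as d:
    export = Path(d)
    for i in range(250):
        (export / f"call_{i:04d}.wav").touch()  # create empty stand-in recordings
    batches = batch_recordings(export, batch_size=100)
    sizes = [len(b) for b in batches]  # 250 files -> batches of 100, 100, 50
```

Batching like this is what makes a scheduled, audited transfer practical: each batch can be checksummed, encrypted, and logged as a unit before it leaves the client’s environment.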

CJ: Okay, really fascinating. We’re getting close to the end of our time and I want to make sure that I’m allowing you to share some things that you want to share. Maybe you can tell us, and we can put these in the show notes as well, how people can reach out. Website, contact information, and then I’m going to ask you if you have any last thoughts. Maybe something I didn’t ask you that you want to share about the service or those types of things. As far as contact information, website?

Amy: Sure, our website is and you can find me, Amy Brown on LinkedIn. I’m happy to connect with your listeners.

CJ: Great, this always happens to me. I get so talkative and I’m interested in what I want to hear and say.

Question: What are some things that you wanted to share that maybe I didn’t ask?

Amy: Hmm, such a great question. First, I’ll say that it is possible to use this technology without feeling like you have to boil the ocean as a compliance leader. Many compliance leaders, many buyers of our technology, sometimes feel that it’s so sophisticated that maybe they’re not ready for it, and I would just say that what we help our clients do is pick a narrow use case, experiment, and make sure you’re getting value from it. I would encourage compliance leaders to really think about how they can leverage AI to make their jobs easier. Everybody is always trying to figure out how to do more with fewer resources, and this is a really great way to do that.

The other thing I would say, and maybe this is just an emphasis on an earlier point: conversations are such a rich source of understanding as to why maintaining compliance is so challenging. When you have two humans interacting, there’s emotion, there’s empathy, there are scenarios that weren’t imagined in the training program you took. One of the most important things compliance leaders can do is understand the reality of the scenarios their workforce is put in, so they can create and improve training programs built on real-life scenarios, so that the folks responsible for carrying out that compliance training have a really good understanding before they go out and perform. There’s so much goodness in leaning into this data set, and you can really improve your compliance program very quickly if you use it.

CJ: Yeah, I really like that comment you made about how conversations can be interpreted in many different ways. I’ve read transcripts of conversations, and you don’t have the emotion, you don’t necessarily have the context; just reading words doesn’t always give you the response you’d expect. One application I was wondering about:

Question: Maybe we can end with this question. A lot of compliance programs have software or a hotline where people can call in and anonymously report issues, that sort of thing, and a compliance officer types up a summary of the case or the issue. I’m assuming that type of data could also be analyzed?

Amy: Absolutely, any type of text-based, free-text, unstructured data can be analyzed. Conversational data is the most informative because you have a back-and-forth and you really understand the source. A note, if you’re taking a note, is still one person’s perspective of what happened; it’s reflective of what happened, whereas in a conversation you’re hearing it as it unfolds, and you’re hearing both sides. We just find that’s a more helpful data source to go to for understanding.

CJ: So then, if somebody is convinced by what you just said, a compliance program may decide they need to start recording these phone calls as opposed to typing up a summary.

Amy: Absolutely, and many, many healthcare organizations record all of their calls, and there are consumer-accepted ways of informing the consumer that their conversation is being recorded.

CJ: Yeah, well this has been fascinating Amy. I’m so happy that I got to meet you and learn a little bit about this. I think it’s very timely, I think the concept of using, as I mentioned, in the beginning, it’s one that’s being discussed in compliance leadership circles, doing things smarter, doing things more efficiently, and using all the data at our fingertips. Thank you so much for joining us today.

Amy: Thanks CJ, I really appreciate the conversation.

CJ: Thank you, and to all of our listeners, thanks for listening, and until next time, be healthy, be safe, and be compliant. Take care, everyone.

Questions or Comments?