How Healthcare Organizations Can Use AI to Find Compliance Risks
In this episode of Compliance Conversations, our host CJ Wolf, MD, sits down with Amy Brown, CEO and Founder of Authenticx, to talk about artificial intelligence in healthcare compliance. From social work to compliance AI technology, Amy opens up about her journey from health policy, specifically Medicaid, to creating a conversational data platform.
After an organization stores conversational data (e.g., the chat thread where you ask a nurse at the clinic whether you can mix your allergy medicine with your heart medication), Authenticx can organize all of that messy conversational data. Once the data is collected and flagged for compliance risks, it is passed on to humans for final review.
Why would anyone want to bother organizing old conversations? And the bigger question: why are companies recording us in the first place?
Healthcare organizations often listen to these conversations to help solve compliance issues such as HIPAA violations and gaps in informed consent. Unstructured conversational data is critical to helping healthcare organizations improve, but it is also very disorganized, and AI helps get it sorted. The last step, the human step, is still necessary because leaders need to listen to and understand the context of each conversation. Yet a healthcare organization can hold millions of these conversations, and around 20% of them contain evidence of compliance risk.
Tune in to this episode, "Compliance Conversations: Artificial Intelligence in Healthcare Compliance," anywhere you get your podcasts. In this episode, you'll learn how AI is used to reduce compliance risks and inform the FDA about adverse events. You'll also learn how:
- Healthcare organizations store chat histories
- Unstructured (messy) conversational data is organized
- Your privacy is protected and personal information is redacted
Interested in being a guest on the show? Email CJ directly here: email@example.com.