AI and Compliance: Future-Proofing Your Program
AI is no longer optional in healthcare—it’s everywhere, from documentation tools to patient-facing chatbots. But as innovation accelerates, so does regulatory scrutiny. Artificial intelligence is no longer a future concept; it’s here, embedded in healthcare systems, vendor platforms, and even your day-to-day operations. But with innovation comes risk.
In this episode of Compliance Conversations, CJ Wolf, MD, is joined by Santosh Kaveti, CEO of ProArch, to discuss how compliance leaders can guide their organizations through the AI era.
Key takeaways from the discussion include:
- AI magnifies both good and bad processes — making security and compliance more critical than ever
- Compliance officers must develop AI literacy and join cross-functional governance teams
- Explainability is essential — if you can’t trace how a model reached a decision, you can’t validate its safety
- Organizations should prepare today for rapid regulatory changes that are sure to come
Listen to the full episode here to future-proof your compliance program.
Episode Transcript
CJ Wolf: 00:00
Welcome everybody to another episode of Compliance Conversations. I am CJ Wolf with Healthicity and today's guest is Santosh Kaveti. Welcome to the show, Santosh. Hi, CJ. Thanks for having me. Good to be here. Absolutely. We appreciate you making some time to share a little bit of your expertise and experience. We're going to talk a little bit about AI and compliance, but before we jump into those topics, Santosh, we'd love to hear a little bit about yourself what you do and anything you want to share.
Santosh Kaveti: 00:34
Yeah, I would love to. Again, my name is Santosh Kaveti. I'm the CEO of a technology services firm called ProArch. We're based out of Atlanta, Georgia. And we help our customers with their data AI implementations. But what differentiates us is doing it securely and making sure that you know your risks.
CJ Wolf: 00:55
Absolutely. And Santosh, I think that rings true to a lot of our audience. Most of our audience, as you know, are compliance professionals in the healthcare space and so That's our job is to be concerned about potential risks. We know that AI has a lot of advantages and it's gonna be the future of a lot of things, but we wanna make sure it's done in a secure way, in a way that protects our organizations. So that's what we're gonna talk about a little bit today. And maybe we can just start there, Santosh. How can healthcare organizations leverage AI, but at the same time maintain a strict compliance and security standards?
Santosh Kaveti: 01:35
That's a really good question, CJ. Look, I think it's fair to say that at this point, most healthcare organizations are leveraging AI in some shape or fashion. They probably are leveraging AI and they don't even know about it. Right. So-called third-party risks and shadow AI and so on. One good thing about this AI revolution, and I honestly see which I'm actually happy about, it really brought the compliance and security to businesses and it made this conversation more prominent. This conversation was a siloed conversation, afterthought, reactive conversation. Now with the implementation or usage of AI and everybody recognizing that AI is going to magnify exasperate whatever the good and the bad, both sides. The implications of AI really magnifying the bad is worse. I don't think that organizations will be able to survive that period. Therefore, data security, compliance, security in general has become really more important. I'm so happy about that because this is an opportunity to bring security and therefore compliance into the center of day-to-day, center of the workflows, center of the operations. And that, I think, is the fundamental shift that I am noticing, and I'm actually very, very pleased about that. And of course, there is AI for compliance, and there is compliance for AI. It goes both ways. But it really all starts with knowing your risks. If you don't know what your risks are, Then needless to say, you're not prepared. Understanding that AI itself inherits lots of risks from data. The data risks you already had and were managing somehow, they all are going to AI risks. Now, in addition to the data risks that AI inherits, AI is beginning to add on several new risks and some emerging risks that we don't even know about. Until recently, when ChatGPT-5 was released within 24 hours, somebody broke it, hacked it with a very new technique called echo chamber. Again, I'm just giving you an example, but there are so many new ways that we don't know of. And therefore, it's going to be an evolution of regulation and compliance, period. But it goes back to basics. Know your risks. Have a framework in place to manage those risks. And compliance is a result of really good security hygiene. That's the way I look at it. That's a good point. And it's a great enabler. It really differentiates you, has the ability to differentiate you in so many ways. But it all starts with knowing your risks and knowing your emerging, evolving risks that AI is introducing
CJ Wolf: 05:01
into your day-to-day workflows. Yeah, I think that's a really important point. A lot of us in the compliance industry, we rely on guidance from enforcement agencies in healthcare. The HHS OIG has some compliance guidance. And then we also look at the DOJ's guidance. And when the DOJ updated their guidance document, evaluation of compliance programs back in September of 2024, they updated some language about emerging technology. So they didn't specifically mention AI, but obviously that's an emerging technology. And to their point, it's what you've just said. It's, you know, if we just stick with the risks we had 15 years ago, we're not very current, right? And so to your point, an important aspect of any compliance program is having a evolving risk assessment process and recognizing new risks as they come up, not just the old risks that we had five years ago. And AI is gonna be one of those. I'd love to hear your thoughts about what you experience as some of the biggest misconceptions that you see around AI governance in regulated industries like healthcare. Do you have a couple of those misconceptions that you can share? Oh, yeah. Yeah, absolutely.
Santosh Kaveti: 06:23
Many. One of the main misconceptions is what I've seen, especially in healthcare, as well as in critical infrastructure, is not understanding the risk exposure level or attack surface is one way to put it. And just because you have a good networking setup or a good segmentation, assuming that you are protected, you'll be surprised. We work in critical infrastructure as well as in healthcare. In critical infrastructure, when you are talking about power generation where NERC and a different network compliance comes into play, everybody thinks that, hey, my IT network is really segmented. It's not connected to the internet and therefore I'm secure. I see. You'll be surprised how poor security practices are still, you know, make the entire network, OT network, you know, vulnerable. Same is true with healthcare systems, right? These are all critical systems. The consequences are severe when something goes wrong. So the The misconception is just relying on traditional security and thinking that I'm protected, especially in the AI world. Thinking that because I've deployed an AI team, they're going to solve my problems. And the reason I'm mentioning this one, there are lots of them, but these are, I mean, particularly mentioning this one is upskilling your employees at all levels on AI and the risks of AI is super important. That's your first human shield. An effective AI team can only do so much. They might be able to help you, but in AI world, you are the hero. You actually have the responsibility And I'm beginning to see many organizations, even policy-wise, making decisions saying, look, you cannot say that AI, because you used the AI tool, you made the wrong decision. You have to take the accountability of the decision. It's on you to actually go validate, verify, and have the confidence that this is how the decision was made. So another big area is educating people. all of your employees and making them AI ready. And it's very different. It's not an ongoing process. It's not one and done. It's not like they can take a simple class and go, okay, this way I can write a few prompts. Am I AI ready? No, you're not. There's so much more to being AI ready. Just as a normal user, as a business user, we need to be AI ready. We have a framework that we deploy, but that's another huge myth is, hey, I have a security team, I have AI team, and therefore they'll take care of it. They'll take care of all the risks now.
CJ Wolf: 09:28
Yeah. So, you know, a lot of our audience, they're listening to you. They're compliance leaders. What would you say to them on how compliance leaders should evaluate AI vendors to make sure that they're aligning with either regulatory or ethical frameworks? So what should compliance leaders be doing when they evaluate these AI vendors? Oh,
Santosh Kaveti: 09:53
amazing question, CJ. There are two parts to this. In my two parts to this topic. One is compliance leaders will have to recognize new risks, as I said early on, that AI now poses. That's where you start. Without having your risk register, risk assessment, it's very difficult for you to say, what are you, I mean, it's easy to say, pull up a checkbox and say that, okay, this particular, you comply with this particular regulation or this particular rule, but when the situation is evolving, and regulation is only catching up. Regulation is not catching up as fast as the technology is evolving, right? That's right. So the compliance leaders will have to wear a different hat and it's not enough to just to say, okay, I comply with HIPAA or whatever, CMS, FDA, it's not enough. You have to go beyond that and really understand AI, build an AI risk register of your own from your own research, your own understanding, your own usage, right? Okay. Now, another big issue here is, given in healthcare, let's just say care provider, hospital, they probably have maybe 10 to 15 applications or systems that they're using to make everything work, right? Okay. In addition to their own collaboration platforms that they use, how many AI model do you think on an average are deployed into a hospital these days? Yeah, I don't know. Five to ten? Easily or more. But do the hospitals know that by virtue of using all these third-party apps, they are using different AI models that they are subscribed to? I see. Do they understand the risks that those models are now bringing into the hospital? Whether it's EM system, EHR, it doesn't matter, even, you know, diagnosis systems, what models are they using and, you know, what risk do they bring in? Now, the one thing compliance leaders will have to push very hard on is explainability. And I know regulation will eventually get there. I'm sure regulation will start pushing for explainability, but that's what matters. Now, FDI says, now, in one of their recent rulings, they're beginning to say that, look, whether it's a hard device or not, if you use AI somewhere during the patient care, you may be subject to calling that process a medical device. And imagine when that happens, explainability becomes super important. What does explainability mean? Knowing what was the representation of your data? What was the possibility of your data poisoning? So it starts with data. Going on to hallucinations. This is not the place where you can allow any kind of hallucination. Bias, which again is induced by data. And the context and the accuracy is super important when it comes to critical systems. My ability to say, okay, when you prompt something and you get a response, how did you make the decision? Ability to trace back, the traceability and observability is super important for compliance leaders in the future. I think any system, when it comes to critical functions, cannot be deployed if you cannot explain the decision. Yeah, that's such a great way to put it, explainability. I like that. And unfortunately, we're not quite there yet. Very few models today, most of them are closed models. I mean, there are some open source models, but even the closed models, we're not quite there where they can explain how a decision was made. And that's going to be the key. I think, especially in healthcare and critical infrastructure, we will see more and more implementations of those AI models that only can explain themselves and are able to audit as the data changes. Another thing is you can't afford in healthcare is this thing called model drift. What happens over a period of time if you don't train the model continuously and deploy as it learns from the real world Your answers are drifting, and therefore your model accuracy is at fault. And you can't afford that when it comes to, again, healthcare, right? No, you can't. Gotcha. But these things is what I would say. Keep an eye on it, explainability. It all goes back to explainability, in my opinion.
CJ Wolf: 14:44
Well, very good. This is a fascinating discussion so far. We're going to take a quick break, everybody, and we'll be right back to talk some more about AI and compliance. Welcome back from the break, everybody. We've been having a really fascinating conversation about AI and compliance. And I know it's on a lot of our minds. And it's a very popular topic. There's conferences that are coming up just about AI and compliance. And so some of us are probably a little further along in our understanding. Others are probably just getting started. And Santosh, I have another question about organizations. You keep kind of talking about the future. and how things evolve, how can organizations kind of take to a future proof kind of thinking about their data architecture to support AI compliant innovation? Because we know it's going to become more and more innovative. How do we future proof our organizations? Again, really good
Santosh Kaveti: 15:46
question, CJ. And look, I mean, in AI transformation times, what happened six months ago is now being referred as ancient times. That's how fast the technology is moving, right? Yeah. So unlike before, the future is like two or three months away and that's how fast things are changing. So we have to get there. Good news is that it's not as hard as everyone thinks it is to just have a good data security hygiene and good compliance hygiene. It's really not. You have the tools these days. You actually have a really good setup and it's not even cost prohibitive. You know, it's not even like a huge cost together. It's about building that culture though. It's about making sure that everybody understands what their responsibility is towards the data and towards the AI. What do I mean by that? It's easy these days to establish a really good data governance program that establishes the owners of the data. See, data quality and data security are super important. If you don't have the good foundation for those two things, I know there are a lot of fancy words there, but data quality and data security. These two are super important. Get that right. Get proper data governance, basic data governance in place. Make sure that you at least identify all your data risks. Make sure that your security controls are able to mitigate that risk. And that's a good start. And I would say do the same thing for AI. In AI, though, that AI readiness takes a bit more education and online learning. What does it really mean when I'm consuming an application and it uses AI? What should I do? be mindful of right because ultimately if i am held accountable for the decision but ai is recommending it well how do i verify you know is super important so again i would say understand your ai risks and good news again is now we have ai security controls in place there are ai red teaming you can you can monitor prompt injections you can monitor abnormal behavior you can do much to do so much both reactively and proactively even on the ai side ai side Again, this is a bit futuristic. When I say futuristic, hopefully not more than six months. Wouldn't it be great to have an LLM trained on healthcare compliance for all of the healthcare compliance professionals such as yourselves, compliance leaders, for them to use it, where it only cares about healthcare compliance, highly customized, super accurate, can explain all of its decisions, but it remembers every compliance that there is of. Now, once you have that, imagine if you could build an agentic AI, which is there's an agent that's running behind the scenes, monitoring every process across the hospital and flagging things real time and saying, hey, this one doesn't seem like, you know, this is where you're probably violating some compliance. A quick action to get that. So agenting framework, you know, in fact, is an AI for compliance. There are huge benefits. Now you have to, AI comes with its own risks, but AI can be leveraged today in compliance to automate so many things. I'm to actually build this custom LLMs that you can work with overall.
CJ Wolf: 19:18
Yeah. So let me ask you this. You're working with clients and are most clients, Where does kind of responsibility for AI sit, right? A lot of us as compliance officers aren't data experts. We're not information security experts. We rely on partners like a chief information security officer. But AI seems to be more than just information security. Maybe that's where a lot of it is residing. Or are organizations having committees and teams for AI where you have clinical folks, operation folks, finance What are you seeing as probably a good way for organizations just to give ownership to AI, if that's a fair question?
Santosh Kaveti: 20:06
Oh, it's an excellent question. And most organizations struggle with this. AI truly is a cross-functional liability and cross-functional accountability. You can simply say, hey, I'm going to hire a traditional CISO and say, you're also now responsible for all things AI. I think it's fair. I'm beginning to see some new roles like CISO equivalent for AI emerge as well. But I think AI governance should sit on top of data governance. It needs to be truly a cross-functional team. And again, if you use education and learning wisely and you have enough representation from different groups, from administration, from doctors from labs in every which way. Then it becomes asking the right questions. The governing team should just know what questions to ask. And that's where partners like ProArch can help. And I know security teams are now beginning to train. We're training our own security. We have 24 by 7 SOC, a great security program. We're training our own security teams as we create new AI solutions for AI to monitor AI. Security teams will have to, those skills will have to be upgraded. Your IT skills will have to be upgraded. But truly, it's a cross-functional team, cross-functional governance.
CJ Wolf: 21:36
Yeah, and that's kind of what I thought the answer would be. And so as compliance officers, we're kind of struggling with, well, then what's our role? Because a lot of us respond, like you said earlier, we respond to regulations. And you mentioned the regulations are kind of lagging behind the technology, right? And it usually regulation doesn't usually come until there's been some sort of catastrophe or some sort of bad event. And then, you know, legislative bodies think, oh, we have to prevent this from happening again in the future. So then the regulations come. And so I think a lot of us as compliance officers who usually rely on regulations to help guide us in what we're trying to help our organizations do. We don't have that right now. And so it's just what would you say to a compliance officer? officer, where should they start? Should they take a class on AI? So what should a compliance officer be thinking about?
Santosh Kaveti: 22:37
And I would say you, I mean, the compliance community is in a great spot right now. Early on, I said, for the first time, I'm beginning to see the business owners, regular users actually be concerned about compliance as they are using AI. I think you're in a really good spot, as you rightly said. It starts with your own AI education. If you don't understand the emerging risks, first of all, understand that regulation will always be catching up. So you cannot simply be, The job isn't simply, okay, am I just passing the regulation? No. I think you need to start advising companies beyond regulation to future-proof them. Because when the regulation hits, they won't have time. They will not have time, as much time as they had before. Because AI is moving so fast. Their time is going to be, okay, regulation today, tomorrow, they all need to be ready to go. I can't think of a data-governing team or an AI-governing team without somebody a compliance specialist in the team. That's going to be a disaster. Right. So you need to have a seat at the table on these core committees. And that's where you bring your voice because you have a wealth of experience. You know what went wrong. You know what will go wrong. You know what will happen. Right. So that's where I think, in fact, I would say that the compliance community is really, really good, very well positioned right now.
CJ Wolf: 24:02
Are there resources that you could steer a compliance officer to if they want to? to learn more about AI and how compliance fits into it? In other words, are there certain conferences? Are there certain white papers? Are there certain courses? Are you aware of any of those kind of resources? Oh,
Santosh Kaveti: 24:20
I'm pretty sure. We use several different platforms other than, of course, attending industry conferences. We're a Microsoft partner. So we get a lot of our Microsoft has a platform called Microsoft Learn for many of their platforms We get a lot of compliance information from Learn. We use Pluralsight and they have excellent, you know, and believe me, I can have my team put together a list and send it over to you. Happy to do that. But, you know, there is actually really good information right now when it comes to how, where to go just to start your education. In fact, I did this a week ago, you know, as a result of me talking to another compliance, you know, leader. You know, I put together, I went to ChartGPT, course five is a pro version, and asked it to put together a really good course for compliance leaders. And it did a fantastic job in just starting with basics of AI, the risks, emerging risks, pointing it to regulations and saying, these are the courses, these are the courses. In two weeks time, I think, you know, one could, if you just spent one, two hours a day, you will actually be in very, very good shape. So there's a lot of really good content out there.
CJ Wolf: 25:40
Yeah, good idea. Well, Satyaj, we're kind of coming towards the end. I'd love to hear what your company offers clients in the healthcare space. Like what do you specifically do? Do you work primarily with compliance officers or are you working with, you know, the chief business officer or whatever? So I'd like to hear a little bit more about what your company does and then any last minute thoughts before we leave as well. Absolutely. So We
Santosh Kaveti: 26:07
used to work with IT teams because IT was responsible for deploying apps, monitoring, and also security teams. That was our primary work. Now, we're beginning to work with physicians. We're actually beginning to work with people who are operationally very active and involved because they need AI and they need partners like us to make sure they're using AI the right way. We're beginning to see a huge shift. And that's what I really like about it. And they're beginning to ask good questions too. Like, hey, data risk, what should I be aware of? What should I get trained on? What should my app get trained on? Can you all curate some modules so that we can make sure we can take some self-assessment, see how are we faring? How are we faring just from literacy perspective? Where are we when compared to other hospitals? Are we good? Are we not good? And so on. It's been a fascinating journey. And for anybody who wants to get into compliance and security, I would to get your fundamentals right. I think you need to be prepared for the day when AI or any other system won't work. It'll stop. See, I remember the CrowdStrike bug that got everything. Somebody I knew was actually in a hospital. And I remember the hospital struggling when they had to go back to the manual mode. They were struggling to administer basic care because they had no idea what to do. They were so reliant. That complacency that one builds when you're reliant on the systems and all of a sudden one day there is a bug and systems all stop. I think everybody should mentally be prepared and be aware of that, okay, when that happens, I need to be ready. I need to be as good as I am with those systems. And that
CJ Wolf: 28:00
differentiates all of us. That's such a good point. I got a message today in my own personal life saying our water was going to be turned off for the day from nine to four. And I thought, OK, how am I going to flush a toilet? How am I going to brush teeth? How am I going to? So like you said, we become so reliant on something that when it's not there, hopefully those days don't happen very often. But if they do, can you do something manually in the health care space? I was aware of a health system that their electronic kind of ordering system went out and the laboratories could not Mm-hmm. And now more
Santosh Kaveti: 29:08
than ever, because of AI, I would emphasize on that. Have a manual process in place.
CJ Wolf: 29:13
So fascinating. Santosh, thank you so much. For our listeners, we'll make sure we include some links to Santosh's contact information and or his company's information so that if people are needing kind of that partnership or that guidance, they can reach out to you. Thank you so much for your time today. Thank you, CJ, for having me. Yeah, and thank you to all our listeners. for listening to another episode. As usual, if you know of a topic that you'd like to hear more about, or if you're aware of a guest like Santosh, somebody with expertise in a certain area, please let us know and we'll see if we can get them on the show as a guest. And until next time, everyone, take care
Questions or Comments?