ISO 42001 for Healthcare: What CISOs and Compliance Leaders Must Know
compliance, healthcare compliance, podcast, leadership, healthcare, Artificial Intelligence, SOC Audit, AI, ISO 42001, CISO
AI is no longer optional in healthcare—it’s everywhere, from documentation tools to patient-facing chatbots. But as innovation accelerates, so does regulatory scrutiny.
In our latest Compliance Conversations podcast, CJ Wolf speaks with Walter Haydock, founder of StackAware and former national security policy advisor. Walter helps healthcare organizations adopt AI responsibly, guiding them through ISO 42001—the emerging global standard for AI governance, often referred to as the “SOC 2 for AI.”
Key takeaways from the discussion include:
- ISO 42001 Readiness: Why it matters for healthcare organizations and how it ties into HIPAA and HITRUST.
- Shadow AI Risks: How unmonitored AI tools put patient safety and compliance at risk.
- Balancing Innovation and Safety: Steps to secure AI without slowing down clinical workflows.
- Emerging Regulations: From Colorado SB-205 to the EU AI Act, compliance officers must stay ahead of rapidly evolving laws.
As Walter notes, the key is to offload work to AI without offloading thinking. For compliance and security leaders, that means developing governance frameworks that keep patients safe while enabling innovation.
Resources:
You can find additional resources from Walter Haydock below:
- Free in-person risk assessment for healthcare CxOs
- Free email course on ISO 42001
- Blog Article: How to deploy high-risk artificial intelligence systems and still comply with Colorado SB-205
Episode Transcript
Welcome everyone to another episode of Compliance Conversations. My name is CJ Wolf with Healthicity. And today we're going to be talking about AI and data security and all sorts of good things. And our guest is Walter Haydock. Welcome, Walter.
CJ, thanks a lot for having me on.
Yeah, we're glad you made some time to do this. And before we jump into our topic, Walter, we'd love to hear a little bit about you: where you come from, what you do professionally, those sorts of things, anything you want to share.
Absolutely. So I'm the founder of StackAware, and we help AI-powered companies measure and manage their cybersecurity compliance and privacy risk. And I've been working with a range of healthcare companies over the past few years as they roll out artificial intelligence to help support their patients. And the companies that I work with are, I'd say, the top 10% of sophistication. They're the ones who see the opportunity, but they also see the risks. And they're working hard to mitigate them and ensure that they're deploying these technologies safely, but at the same time, understanding the benefit for patients, for providers in terms of efficiency, quality of care, all of those things.
Yeah, absolutely. You know, I attend a lot of these compliance conferences, and it seems like almost every session has somebody talking about AI. And so, like you said, there are a lot of good opportunities, but we also have to be aware of the risks. A lot of our listeners are compliance professionals, so we're often brought into the room or to the table when leaders are discussing different initiatives and those sorts of things. So since you're working in this space with a lot of healthcare folks, tell us a little bit, maybe at a high level, about balancing innovation with governance so that AI tools enhance rather than risk patient safety, or other areas you're aware of.
Absolutely. So key to making a smart trade-off between risk and reward is understanding what the use case is. There are circumstances where an artificial intelligence system is processing data that's not really sensitive at all. For example, take your insurance PDF. Those things can be dozens or hundreds of pages, and they're impossible to understand. If you're just putting that into a large language model and asking questions about it, there are potential issues in terms of accuracy if it's giving you information that's wrong, but from a sensitivity perspective that data is generally public, so you don't need to worry too much about it. Now, on the other side of the spectrum, if you're processing protected health information, there are going to be a lot of safeguards that you need to have in place. For example, if you're using a third-party artificial intelligence provider, you'll want zero data retention for anything processed by that system to minimize the cybersecurity risk. You'll need to make sure they're not training on your data in a way you didn't expect. And if you are training a model on healthcare data, which might be appropriate in certain circumstances, you'll want to do that to the minimum degree necessary to achieve the outcome. So de-identifying the PHI, or pseudonymizing the PHI, are ways that can help you achieve the goal while at the same time managing the risk.
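To make the data-minimization approach Walter describes concrete, here is a minimal sketch of keyed pseudonymization applied before a record ever reaches a third-party model. This is an illustrative assumption, not StackAware's methodology or any specific vendor's API; the field names, the `pseudonymize` helper, and the separate re-identification map are hypothetical choices for the example.

```python
# Minimal sketch: pseudonymize direct identifiers before sending a record to
# an external LLM. Field names and workflow are hypothetical, not a specific
# vendor's API. The secret key and re-identification map stay inside your
# own trust boundary and are never sent to the AI provider.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-securely"  # kept internally

def pseudonym(value: str) -> str:
    """Deterministic keyed token so the same patient always maps to the same ID."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"PT-{digest[:8]}"

def pseudonymize(record: dict, identifier_fields: list[str]) -> tuple[dict, dict]:
    """Replace direct identifiers with tokens; return scrubbed record + re-ID map."""
    scrubbed, reid_map = dict(record), {}
    for field in identifier_fields:
        if field in scrubbed:
            token = pseudonym(str(scrubbed[field]))
            reid_map[token] = scrubbed[field]
            scrubbed[field] = token
    return scrubbed, reid_map

record = {"name": "Jane Doe", "mrn": "123456", "note": "Patient reports foot pain."}
scrubbed, reid_map = pseudonymize(record, ["name", "mrn"])
print(scrubbed)  # the safer payload for a zero-data-retention endpoint
# reid_map is retained internally to re-identify results after processing
```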
Gotcha. Yeah. I've seen, I don't know if you've heard of these kinds of things, but I've seen AI being used in reviewing medical records and recommending a certain CPT code, those sorts of things. Have you heard of that kind of thing? And is that something you guys are exposed to, or is that something completely different?
So that, I would say, is at the leading edge of what I've heard being done. I can talk about some of the customers I work with because their products are already live. Embold Health, one of my customers, has a product called EVA, the Embold Virtual Assistant, which helps the customers of their customers find providers that address specific conditions for those patients. So for example, I might know that my foot hurts, but I don't necessarily know that a podiatrist is the type of provider I need to help my foot feel better. I also need to make sure that podiatrist is in network, that I have the correct coverage to go see that podiatrist, and that the podiatrist is going to follow industry-standard practices and procedures, not just jumping to surgery but going through the proper diagnostic steps. That is what Embold Health does. They have wrapped all those capabilities into a chatbot that helps patients, who are used to natural language conversations with the doctor, perform those searches in a very intuitive way. So that's one example of how I've seen things get done.
Yeah, that's a great example. Tell me, what is shadow AI in health tech? And are there some hidden risks and practices?
Yeah, that's a great question. Shadow AI is merely an extension of shadow IT, whereby there are assets in use that the organization doesn't formally authorize. And there are different types of shadow IT and shadow AI. There may be wink-wink, nudge-nudge shadow AI, where everyone knows that other folks are using certain systems to perform certain tasks, because it's essentially impossible to do the work without them. And that's almost a worst-case scenario, because it breeds a lot of contempt or disregard for security policies. If you have that sort of situation going on, I would say make sure you determine: do we need to be prohibiting these systems to begin with? Is there a good reason we're prohibiting them? And if there is, then clamp down on that activity. Then there are individuals who are more toward the ignorance, contempt, or malice side of things, who know they're not supposed to be using a certain system, and that there are good reasons for it, and use it anyway because it's easier or because they don't care. Those types of behaviors are more isolated, but they're also things organizations should focus on, making sure that people understand why they shouldn't use certain systems and are held accountable for that.
Yeah, got it. So when you're interacting with clients, are you predominantly interacting with the chief information security officer? Are you working with the operations folks, compliance folks, or all of the above?
Generally, my point of contact is the chief information security officer at the healthcare companies that I work with, because that person often is in charge of the AI governance program. Sometimes by default, they kind of wake up one day and have been put in charge of it because no one else feels equipped or has the bandwidth to handle it. But I do work a lot with the data science teams, with the business teams, operations teams, to understand how these tools are being used in a real world way, because that's important. And, you know, a good CISO will understand that he or she doesn't know how every team is using every tool and will want to know the business justification for a piece of technology before applying controls to it.
Yeah, that makes a lot of sense. I work with a lot of doctors and medical practices too, and some of them are talking about using ambient AI. I don't know if you're familiar with that, but they're telling me this is when they might be in a room seeing a patient, and they can speak verbally what they're finding on exam and what their rationale and cognitive work is, and somehow that gets into the electronic medical record, or into a note they can edit, those sorts of things. Are you familiar with ambient AI? Did I describe it right?
Yeah, yeah, definitely. So one of my other public clients that I can talk about is Elios Health, and they work with behavioral health providers to essentially speed up the documentation process because there's a big documentation workload for those types of healthcare providers. So they help to transcribe the information that's being processed during the interaction with the patient. And so that would fit into that broader category that you're describing. And ultimately, that would be awesome for both patients and for providers to be able to just think aloud and have the artificial intelligence system encapsulate and structure all the information that you need. I mean, that's the type of thing that I'm doing on my own right now for non-medical purposes. You know, when I'm just brainstorming on something, I'll just start talking out loud and throw on the transcription to capture everything I'm thinking.
Yeah, that's awesome. Really, really interesting. And I've heard patients say they like it, because when electronic medical records were first brought on, a lot of clinicians were stuck behind the screen typing and not really interacting with the patient. One of the pros they cite is that it allows for more engagement between the clinician and the patient. So these are interesting ways that things are being used. This is great stuff. We're going to take a quick break, though, everybody, and talk some more with Walter when we come back.
Welcome back from the break. We're talking about AI and some of the experiences Walter and his company have had. You know, it always seems like technology advances quicker than laws and regulations, and I'm curious, and I think a lot of compliance officers are curious, whether there are specific AI regulations. My guess is that over the next many years, as bad things happen, there are going to be laws and regulations that try to govern AI and healthcare companies' use of AI, right? Are you aware of some of those?
Yeah, we're already there, CJ. There are a bunch of AI-specific laws on the books. And recently, there was a proposed amendment to the big reconciliation bill that passed, which would have stopped all state-level AI regulation for 10 years, and that was stripped out of the bill at the last moment before it passed. So, to quote a CIO I was talking to, it's going to be a free-for-all now, because the states are going to move to regulate artificial intelligence, and specifically in the healthcare arena. I'll give you an example. Colorado has in some ways led the nation in artificial intelligence governance, and they passed a law, SB-205, which is very similar to the European Union Artificial Intelligence Act. It lays out a whole set of requirements for what it describes as high-risk artificial intelligence systems. And specifically in the case of healthcare, it applies to systems that affect the cost, terms, approval, or denial of healthcare being provided to a consumer, and it puts some pretty stringent requirements on systems that are performing those types of tasks.
Wow. Are those kinds of bills affecting everyone in healthcare, or is it mostly hospitals or doctors? Do they affect pharmaceutical companies, medical device makers, home health, nursing facilities? Are these bills kind of broad in how they talk about healthcare, or do they get kind of specific, if you know?
It depends. That Colorado law is mainly focused on equal-opportunity situations, so it's more focused on discriminatory pricing or rejection of claims. Insurers would probably be most impacted by it, along with anyone who's setting pricing on drugs or on care. Now in Utah, in your home state, there was another recently passed bill, which is specific to mental health chatbots. So it's very specific in terms of its focus. It requires certain disclosures, it prevents the sale or transfer of certain types of information, and it makes sure the user is aware that he or she is interacting with an artificial intelligence system rather than a human.
Wow. So it sounds like legal departments within health systems, and compliance departments, really just need to stay as up to date as possible on state regs and state laws, because some can be broad and some can be pretty narrow. And my guess is there are probably going to be more coming out. It's not just going to be a once-and-done type of thing, right?
Yep. And we're even seeing AI-specific regulations at the city level. This one is not for healthcare, but New York City has its own AI regulations on the books that have been enforced for about two years at this point. So we're seeing every level of jurisdiction implement AI regulation.
Wow. So you mentioned the national bill where some of these things were taken out. Are you aware of any national laws or regs for AI right now? Or is it mostly states, cities, and those types of things?
In the U.S., our regulatory framework for artificial intelligence is heavily state-based. I would love to see a federal AI governance bill, but I would also love to see a federal data privacy bill, and a federal cybersecurity bill. And I'm still waiting to see all of these things.
Yeah, exactly right. You know, in healthcare we're always saying it depends on the state, like with state licensure for clinicians and nurses. It would be nice to have a one-stop shop instead of having to learn all these different state regs. So, you talked about data security. What are some specific measures that healthcare companies using AI can take to secure their data?
Step one would be getting a policy in place so that everyone understands what is acceptable behavior and what is unacceptable behavior. And a lot of organizations aren't even there. Like I said, some companies create a gray area where it's understood that people are using certain systems even though they shouldn't be. So the first step is creating a policy, because that is how you memorialize your risk appetite and ensure everybody's on the same page. Step two would be doing a thorough asset inventory: understanding all the systems that are in use. You might even have a kind of get-out-of-jail-free card, or an amnesty period, for people to declare all the tools they're using, so you can build an effective inventory and understand everything that's in use and where your data has been going, now that you have this policy in place. Then, once that inventory is completed and you're reasonably confident it's accurate, because no inventory is ever perfectly accurate, you can move into doing a risk assessment: looking at every system that's in use and at the security, privacy, and compliance implications of each tool. And that can be a very detailed process if you've got dozens or hundreds of tools across your network. Then, if you're looking at going for a compliance certification or complying with certain state-level regulations, you're going to need to do even further work on the impact assessment side of things, to understand how these tools are affecting folks outside of your organization, looking at individual and societal impacts.
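As a rough illustration of steps two and three, here is a minimal sketch of an AI asset inventory with simple risk flags. The fields, flag rules, and asset names are illustrative assumptions for the example, not a formal ISO 42001 assessment methodology.

```python
# Minimal sketch of an AI asset inventory with basic risk triage. The fields
# and flag rules are illustrative assumptions, not a formal methodology.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    vendor: str
    use_case: str
    processes_phi: bool          # does it touch protected health information?
    zero_data_retention: bool    # contractual zero-data-retention with the provider?
    policy_approved: bool        # declared and covered by the written AI policy?

def risk_flags(asset: AIAsset) -> list[str]:
    """Return findings worth a closer look during the risk assessment."""
    flags = []
    if not asset.policy_approved:
        flags.append("shadow AI: not covered by the written policy")
    if asset.processes_phi and not asset.zero_data_retention:
        flags.append("PHI processed without zero-data-retention terms")
    return flags

inventory = [
    AIAsset("ambient-scribe", "ExampleVendor", "clinical documentation",
            processes_phi=True, zero_data_retention=False, policy_approved=False),
    AIAsset("benefits-chatbot", "ExampleVendor", "insurance plan Q&A",
            processes_phi=False, zero_data_retention=True, policy_approved=True),
]
for asset in inventory:
    for flag in risk_flags(asset):
        print(f"{asset.name}: {flag}")
```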
Gotcha. That sounds like a good strategic vision for how to attack that. As you were talking, something came to mind. I also work in academia, in the medical school space. I was aware of a journal article published in a peer-reviewed journal, which you'd think would be reviewed by multiple people and edited. In the conclusion section of this article, it was obvious AI had written the summary statement, because some of the prompts were still in there, along with some output from the AI that said something like, I don't have all the intelligence, you will need to also consider blah, blah, blah. That phrase was in the published article. And I thought, oh my goodness, we're going to be using this for everything.
Yeah, I mean, you highlight a key risk there, because improper use of artificial intelligence without the appropriate level of human oversight can be a real risk, especially in a high-impact area like healthcare. So understanding where it's appropriate to have human oversight, and then inserting it, is a key consideration for your AI governance program. Now, conversely, there are some studies showing that when you insert a human in the loop, the results get worse. So you need to do a risk assessment based on the situation and the use case to determine whether it makes sense to have a human reviewing the outputs. In the circumstance you just described, it definitely would have been appropriate to have a human reviewing those outputs before publication.
Exactly. I was just shocked. I'm like, didn't anyone ever really read this? And it got published. It blew my mind. But I'm sure we're going to be seeing novel uses of AI, because it really can help in certain ways, right? If you can write some really good prompts, it can get a lot done, and maybe the mental work, instead of coming after the fact, comes before the fact: it's writing very specific, detailed, helpful prompts. That's where the human thinking can go in. And then, of course, you review it afterwards as well. But I'm finding that it saves me a lot of time, as long as I'm thoughtful in my prompts and then review the output very carefully to make sure I agree with what's been generated.
Yeah. I think a key consideration is to offload the work to artificial intelligence, but not the thinking.
Yeah, that's a great way to put it. You know, we go through these technology changes historically, right? Like when the calculator was created: oh, no one will know how to do math anymore. And when the computer was created: no one will know how to do this or that. And I think what you just said is spot on. If a technology can take away the repetitive work and the tasks we don't need to be thinking so much about, then we can do the thinking and practice at the top of our intellectual license, if you will.
Exactly. Yeah, good way to put that.
Well, Walter, we're coming toward the end. I'd love for you to have the last word. Maybe share a little bit about your company, what you guys do, your sweet spot, any of your offerings, those sorts of things.
Yeah, absolutely. So StackAware helps primarily healthcare companies with their AI risk management and governance programs. And the way we do that is primarily, although not exclusively, through ISO 42001 readiness. ISO 42001 is a global standard for AI risk management, and we have helped a range of healthcare companies pursue it. It's an excellent way to develop a system for staying on top of your AI risk, and also for building trust with customers who may have concerns about how you're using AI. What we're doing right now for CxOs in healthcare is offering a free risk and gap analysis using the ISO 42001 framework, done in person at your office or home office. We will come to you to do it. So if you're interested, please go to cxo.stackaware.com; all the details are located there.
Thank you for listening to another episode of Compliance Conversations. As always, we welcome your input on guests you think would be good for the show or topics you want to hear about, so please let us know. And until next time, take care, everyone.