AI in Healthcare: A Clinician's Honest Take on What Works and What Doesn't

Artificial intelligence is everywhere in healthcare right now: in the pitch decks, in the vendor conversations, in the board presentations. But what does it actually look like when you've built it, deployed it at scale, and watched it succeed and fail in real clinical settings?

That's the conversation we had in a recent episode of Compliance Conversations, when CJ Wolf sat down with Dr. Ashok Gupta, founder of TheraNow, doctor of physical therapy, and one of the more grounded voices we've had on the show when it comes to AI in healthcare.

Ashok didn't start as a tech entrepreneur. He spent years as a physical therapist working in VA hospitals, home health settings, and ICUs across the country. The problem he kept encountering wasn't clinical; it was access. Patients couldn't get to care, and care couldn't always get to them.

That experience shaped everything about how he eventually built TheraNow, and it shapes how he thinks about AI today.

Why This Conversation Matters for Compliance Professionals

AI adoption in healthcare is no longer hypothetical. Organizations are actively implementing AI-powered documentation tools, clinical decision support systems, and diagnostic aids. For compliance officers, that means new questions are arriving fast — about oversight, accuracy, vendor assessment, and what happens when things go wrong.

What makes Ashok's perspective valuable is that he has seen it from every angle: as a clinician using the tools, as a builder designing them, and as an operator scaling them across major health systems. That combination is rare, and it produces a different kind of insight than you get from a purely technical or policy-focused perspective.

What the Episode Covers

Understanding AI Hallucinations in a Clinical Context

One of the most useful parts of the conversation is Ashok's plain-language explanation of how and why AI generates inaccurate outputs and what the clinical stakes are.

The core issue is context. AI tools are designed to produce answers. When they have complete, accurate information, they can do that well. When they don't, they fill in the gaps, and in a clinical setting, that can mean fabricated medical history, missed drug interactions, or documentation that doesn't reflect what actually happened in the encounter.

For compliance officers evaluating AI tools, this surfaces a practical question worth asking any vendor: what happens when the system doesn't have all the information it needs? How is that handled, and how is it documented?

The CHAI Framework for AI Assessment

For organizations trying to evaluate whether an AI product is ready for clinical use, Ashok points to the CHAI model from the Coalition for Health AI, a structured framework designed specifically for assessing AI in healthcare settings. It gives compliance teams a step-by-step approach to evaluating security, safety, and clinical impact before a product touches patients.

If your organization is in the process of reviewing AI vendors, this is worth looking up. It takes the question out of the abstract and gives you something concrete to work from.

Knowing the Limits of Any Tool

One of the most practical points in the episode is about clinician training — specifically, that knowing where an AI tool fails is just as important as knowing where it works.

AI documentation tools, for example, are capturing words. They're listening and transcribing. They're not observing, interpreting, or noticing the things a clinician picks up on in the room. Clinicians need to understand that boundary, and they need training that reflects it, not just a mandate to start using the tool.

For compliance programs, this has direct implications for how you structure training, what you document in your policies, and how you set expectations with clinical staff.

Measuring Whether AI Is Actually Working

When asked how organizations should measure AI's impact, Ashok's answer was refreshingly simple: pick one North Star metric and watch it move.

His example from TheraNow is patient drop-off after a first visit — a single number that reflects the quality of the patient-clinician interaction. When they introduced an AI tool designed to improve documentation and engagement, they watched that number. If it moved, the tool was working. If it didn't, something needed to change.

This approach — one clear metric tied directly to patient outcome, not a sprawling KPI dashboard — is worth borrowing for any compliance team trying to build a case for or against an AI investment.

Looking Ahead

Ashok closed the conversation with a point about where healthcare technology is headed, and what organizations should be building toward now. His parting advice was direct: be the shepherd of the open, integrated environment.

The organizations that will get the most from AI — and manage the most risk responsibly — are the ones that treat integration as a foundation, not an afterthought. Pulling data into a tool after the fact is not the same as building a system where everything is connected from the start.

What This Means for Your Program

Whether your organization is actively implementing AI or still in the early stages of evaluation, the themes from this episode offer a useful framework.

Ask vendors hard questions about what happens when context is incomplete. Use structured assessment tools like CHAI before deployment. Train clinicians not just on how to use tools, but on where those tools fall short. And identify one outcome-based metric you can actually watch over time.

AI in healthcare does not have to be overwhelming to navigate. But it does require the same rigor and intentionality that compliance professionals bring to everything else.

Episode Chapters & Transcript

CJ Wolf: 00:00

Welcome everybody to another episode of Compliance Conversations. I'm CJ Wolf with Healthicity, and today we're going to be talking about AI a little bit. I want to introduce our esteemed guest, Dr. Ashok Gupta. Dr. Gupta, welcome to the show.

 

Dr. Ashok Gupta: 00:19

Thank you, CJ. Thanks for having me.

 

CJ Wolf: 00:20

Yeah, thank you so much for taking some time and being willing to share some of your expertise. You know, on the show, before we kind of jump into our topic, we'd love to have our guests tell us a little bit about themselves and would love to hear a little bit about you.

 

Dr. Ashok Gupta: 00:33

Definitely. I'm not going to go back too far into where I got educated and what I did; I want to talk about the mission we started out on. As clinicians, we always think about how best we can care for our patients. But the more fundamental problem is: how do you get the patients to you? Access to care. We think, oh, we're a developed country here in America, we don't have that problem. But it's a deep-rooted problem. I worked everywhere from a VA hospital in Manhattan to the smallest home health company out here in Texas, and I realized the problem is everywhere. Access to care is a very integral problem, and the only way to fix it is to create more ways to access care. That's when we started thinking about telehealth as the way, and we took one small, specific problem, rehab or physical therapy, and started building our own company around it. My own background: I'm a doctor of physical therapy. My wife, my co-founder, is also a doctor of physical therapy. So we looked at each other and asked: is this something we can do online? The answer was yes, absolutely yes. In the beginning, nobody followed us, so we had to say: trust me, I have years of experience, I know what I'm doing. But now, with data from thousands and thousands of patients, it speaks for itself. If I get that question now, I point to thousands of patients treated in the clinic versus online. That started our technical journey, and we started building a tech company, because without the platform it's just a video call, and a video call is not telehealth. You need all these tools. And that's where we got introduced to AI. Back in 2020, before this AI wave, we knew that if we wanted to make a difference in millions of patients' lives, we had to stay at the front of the technology and build something that would make an impact for really strong reasons. Nowadays, AI mostly means generative AI that spits out words and predicts what the next best word is. Well, we took it into the physical AI domain, and one of those examples is computer vision. Have you been talking to some of the experts about computer vision and its examples?

 

CJ Wolf: 03:01

Yeah, no, tell me about it. I'm not as familiar with it.

 

Dr. Ashok Gupta: 03:04

Oh, here you are. Believe it or not, computer vision is all around you. You go grocery shopping and use the self-checkout; there's a camera on top of it, actually looking at your actions. It's not reading words on the internet and telling you what the best LinkedIn line or the best image would be; it's looking at movement and tracking what you're doing. The amazing example I'll give you outside of healthcare, before I bring it back into healthcare: we're going to a trade show, Becker's Healthcare, one of the best places to be if you're in healthcare and want to meet executives. I'll be speaking about telehealth on a panel there, and we also have a booth for my company, TheraNow. One of the things on offer is cameras you can put on your booth that record, in real time, the sentiment of the audience walking past. How is that happening? Computer vision again. It's not reading words; it's reading your expression, the number of people, the crowd, things like that. Now let me bring it into our use case. We started to see a gap: when I can't touch you, I cannot see how much your shoulder is moving, how much your elbow is moving, how much your knee is moving, so I can't provide the most objective care I should be providing. If you were in the clinic, I'd put a goniometer on and see: yesterday you were at 110 degrees, today you're at 120 degrees, we're doing great, let's double down on what we're doing. So we built computer vision AI that recognizes where your shoulder is, where your elbow is, where your wrist is, and measures range of motion to a tenth of a degree. Not only am I now more precise than I would be physically with the old-school goniometer, but I can also replay it to you next time: this is how we started, this is where we are, and this is where we're going.
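To make the geometry concrete: once a pose-estimation model returns keypoint coordinates, a joint angle falls out of simple vector math. Here's a minimal sketch; the function and the pixel coordinates are hypothetical illustrations, not TheraNow's actual pipeline:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by segments b->a and b->c
    (e.g., shoulder-elbow-wrist gives the elbow angle)."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = (ba @ bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    # Clip to guard against floating-point drift outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Hypothetical pixel coordinates from a pose-estimation model:
shoulder, elbow, wrist = (320, 180), (330, 260), (400, 300)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")
```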

 

CJ Wolf: 05:12

Wow, amazing stuff. This is fascinating. You know, a lot of us in the compliance space hear all about AI, and there's so much promise there, but there are probably some risks too, right? Compliance-minded people, compliance officers, who are predominantly our audience, are wondering: okay, what could go wrong? They're always looking at those sorts of things. So maybe tell us a little bit about AI hallucinations in healthcare specifically. Tell us first what those are, and then why they potentially pose a risk.

 

Dr. Ashok Gupta: 05:48

No, I'm going to give you this from a very different perspective. Lately I've been talking a lot about physical AI, which is where computer vision and other sensor data come in, but let's start with generative AI itself. The AI models are trained to give you an answer. They are the best cheerleaders you can find out there. Even if you say the sky is black, it will say yes, the sky is black. Press it two or three times and it will just say yes, it is a fact. If you say, I am the president of the country, it will say yes at the end of the day. If you want validation, go to the LLM and ask for anything; it will give it to you. How does it do that? Because it's trained to give you an answer. Obviously it's trying to give you the best possible right answer, but if it doesn't have the right answer, it will make one up. That is what's called hallucination. You might think it's not a very big problem, that it only got one thing wrong; humans get things wrong too. But think about how many AI applications are embedded into workflows these days. I'll give you the smallest example, one you might have already read in the news: an attorney filed a motion with all these citations. Where did those citations come from? He was pressing hard on, I want a citation that shows this particular objective of mine and proves it. So the model came up with something. You can see that can get you into trouble. Nobody's life was harmed there, but he or she probably lost their license, or is at least in trouble for it. Now bring this to healthcare. In healthcare, if I don't have your chart fully embedded into the AI, it will just make up the medical history. A simple, seemingly harmless example: you have an AI that checks drug interactions. You're about to give somebody Tylenol, and you're checking whether this person has any drug interaction problem. Simple use case, not too much going on. But what if you don't provide 100% of the context? You don't have a problem list, you don't have comorbidities, you don't have an H&P, you don't have the other drugs this patient is taking. Now it will start hallucinating and give you the answer you're looking for. You can risk somebody's life with it.
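One way teams guard against exactly this failure mode is to make missing context an explicit, checkable state instead of letting the model fill the gap. Here's a minimal sketch of that pattern; the chart field names are hypothetical, and `ask_model` stands in for whatever LLM client is in use:

```python
REQUIRED_CONTEXT = ["problem_list", "comorbidities",
                    "history_and_physical", "active_medications"]

def check_interactions(new_drug, chart, ask_model):
    """Refuse to query the model when the chart is incomplete,
    rather than letting it invent the missing history."""
    missing = [field for field in REQUIRED_CONTEXT if not chart.get(field)]
    if missing:
        # Surface the gap for clinician review instead of answering anyway
        return {"status": "insufficient_context", "missing": missing}
    prompt = (
        "Using ONLY the chart below, list documented interactions between "
        f"{new_drug} and the active medications. If none are documented, "
        "answer 'none documented'. Do not infer missing history.\n"
        f"Chart: {chart}"
    )
    return {"status": "ok", "answer": ask_model(prompt)}
```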

 

CJ Wolf: 08:27

Yeah, such a good example. You were talking about the attorney, and that was in the news. I was at a conference in academia about a year ago, about AI, and they showed a medical journal article that had been published in a peer-reviewed, well-respected journal. And obviously the peers didn't review it very well, because at the end, in the conclusion, it had all these AI prompts and all this language that told you the author had used AI. Now, we all might use it, but clean it up, make sure you're comfortable with it, because that can ruin your reputation, right? So I thought that was an interesting thing you brought up.

 

Dr. Ashok Gupta: 09:09

No, absolutely. There are endless cases. Don't get me wrong, I'm one of the biggest proponents of AI. But solve the right problem with AI, in the right way, in a secure environment. That is the big motto at TheraNow. I'll give you an example of how we approach AI. We don't take a problem and say, from tomorrow we're going to use AI for this. Here's the smallest example. When telehealth started out, we didn't go back to school and relearn everything. In school, we were all taught bedside manners. Everybody had a professor talk about it: you don't stand like this, you don't walk like this. Patient satisfaction goes up 25% just from the clinician sitting down in the room, even briefly. That was taught to us. But when we went to what I call not bedside but "webside" manner, nobody ever trained us on it. We all developed our own ways of showing up: you came in a suit, I came in a shirt. But it's a medical environment. You're my patient, I'm here. If my background isn't clean, if it doesn't look professional, if my microphone is choppy, those are the technicalities. But also: how do I introduce myself? How do I gain your trust in the first five minutes and get the same level of respect and compliance from you? That doesn't happen on its own. So we built a tool around it. First, how do we teach everybody? We have 350 physical therapists, licensed across all 50 states, who work thousands of sessions every day. Going to everybody's home and teaching them, or sending email notifications and hoping they learn from them, is not possible. So we started human-based; we always start human-based before we develop something. We took these patient-clinician interactions, created a rubric, and evaluated each point by hand. Did you introduce yourself well? Did you explain how the telehealth session will work? Did you explain what exercises you're performing and how they're going to help? What is the plan of care going to look like? Did you talk about the follow-up visit? Did you explain how the app works? Those kinds of basic questions, and they're very critical. So we had humans sit down, listen to all these conversations, and check, check, check. We went back to the clinicians with the feedback, their performance really shot up, and we started to see better patient outcomes. Now we had a new problem: it takes a lot of humans to listen to all of this. That is the problem to solve with AI. It took us two weeks to build a fine-tuned AI model that listens in and solves this one specific problem: assessing every single session and how good our clinicians' webside manners were. We also added a fun element on top: a leaderboard showing how everyone is scoring, and everyone wants to be at the top.
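The shape of that evaluator is straightforward to sketch: a fixed rubric, one yes/no judgment per item, one score per session. This is an illustration of the pattern rather than TheraNow's actual model; the rubric items are paraphrased from the conversation, and `judge` stands in for a call to a fine-tuned model:

```python
RUBRIC = [  # hypothetical items, modeled on the checklist described above
    "introduce themselves",
    "explain how the telehealth session works",
    "explain the exercises being performed and how they help",
    "outline the plan of care",
    "discuss the follow-up visit",
    "explain how the app works",
]

def score_session(transcript, judge):
    """Ask a judge model one yes/no question per rubric item and return
    per-item results plus a 0-100 'webside manner' score."""
    results = {}
    for item in RUBRIC:
        reply = judge(f"Transcript:\n{transcript}\n\n"
                      f"Did the clinician {item}? Answer YES or NO.")
        results[item] = reply.strip().upper().startswith("YES")
    score = round(100 * sum(results.values()) / len(RUBRIC))
    return {"score": score, "items": results}
```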

 

CJ Wolf: 12:20

Competitive nature.

 

Dr. Ashok Gupta: 12:21

Exactly. At the end of the day, AI brought good results. That's the right way to do it.

 

CJ Wolf: 12:28

This is so fascinating. We're going to take a quick break, everybody. We'll be right back to talk some more about this. Welcome back from the break, everybody. We've been talking about AI in clinical scenarios, and Dr. Gupta is an expert here in physical therapy and physical medicine. I wanted to ask you a little about your experience working inside live clinical environments. Can you tell us how patient safety or quality of care might be affected, or even improved?

 

Dr. Ashok Gupta: 13:00

Right. So there are different ways we're approaching artificial intelligence in patient care today. One is patient-driven. We recently talked about ChatGPT and health: one in four queries today are about health. That means there's a need. Even though we know ChatGPT is not a doctor, that it can give you wrong advice, and that it hallucinates, people are still using it for healthcare questions. You may have done it; I definitely have. The way the security and safety structure works is this: if we can give these LLMs, in a secure environment, access to my background data, I can reduce the chances of hallucination or wrong information coming out. The reason physicians and clinicians can make better suggestions is that we have access to a lot of information you may have provided to other clinicians before. b.well, which is an amazing connector that a lot of companies are partnering with, can now pull information from your chart at the hospital you were seen at before this particular chat. So it's not just one hospital anymore; you can go into ChatGPT or Claude and pull it in. Say you're in our area with the University of Texas healthcare system: you can pull in the charts from there. You go to another healthcare system, you can pull in, say, Kaiser Permanente. And I'm just naming names; not all of them are necessarily connected to it. But this gives an LLM enough context to provide the best possible response, one that will actually be productive and helpful to you. Makes sense?

 

CJ Wolf: 14:49

Makes a lot of sense. So a lot of your work focuses on preventing these AI-driven mistakes before they affect the patient, right? Any strategies or safeguards within your product or company, which is TheraNow, right? Tell us a little bit about that.

 

Dr. Ashok Gupta: 15:08

Right. As I was saying in the previous example, the value of the context and the data is very, very important. You cannot just ask an AI to do something without giving it enough information. We've taken the same approach in our own products. We integrate very deeply with the large EMRs and EHRs, like Epic, for example. We have very deep integrations so we're able to pull everything back. If you went to a doctor yesterday and vital signs were taken there, I can pull that information into the software where I'm providing your care. When I document something here, a physician sitting in California can log into their Epic, pull it up, and see how well you're performing in your virtual physical therapy care. That is what's called integrated care, and it's very, very important. I'm pushing my data and I'm pulling data in. Now, we have an AI scribe built into our platform. What that does is let me, as the clinician, spend more time with you rather than typing up the chart so we can get paid. The AI scribe does an amazing job because of the context it has, not only from this session we're having together, but from all the previous sessions we've had, and from all the information that has been put into the entire health system. And generally, in any area you live in, you usually have one or two major health systems, and you usually pick a side. If you're working with one hospital, pretty much your entire healthcare history is in that software. So we are very, very specific about this: we are not going to write something about you unless we have all the context.

 

CJ Wolf: 16:54

Got it, makes a lot of sense. So how do you see AI supporting clinicians without replacing their judgment? And what innovations in point-of-care documentation give you the most confidence for safer, more accurate patient care? A lot of our compliance officers listening are probably thinking: okay, is this going to hurt somebody? How do we make sure it's accurate and good?

 

Dr. Ashok Gupta: 17:21

No, absolutely, that's an amazing question. It all starts with the approach to how the product is built, and the approach to how the product is consumed. Let me focus here on assessment. When we work with healthcare systems, there's a model called CHAI. I always forget the full form, but it's the Coalition for Health AI; I think that's the right one. It is a very good, detailed analysis of any product, its security, and its key impacts. So for any health system or compliance officer out there who's thinking, how do I assess this? How do I know it's safe? These days people will just say whatever you want to hear. But now there are models you can use, step by step, to analyze whether a product is going to be safe for your patients or put them at risk. That's specifically for AI assessment. Beyond that, hospital systems and health systems are already notoriously risk-averse when it comes to safety; I don't think they could get any stricter. I've been on the other side of it: it takes months to get through the compliance and security review, and I've now been through many health systems where we had to go through the same thing over and over again. But I would never say don't do it, because that is what keeps everyone safe.

 

CJ Wolf: 18:52

Gotcha. You know, earlier you were talking about how we were trained years ago on appropriate bedside manner, and now you're talking about webside manner. So what role do you think clinician training plays in preventing AI hallucinations? And how do you make sure that training is practical, not just theoretical?

 

Dr. Ashok Gupta: 19:15

Right. I'll give you an exact example. When I'm treating you and I see you moving your hand, but compared to last time you're not moving enough, my brain triggers: let me investigate this a little more. I'm going to ask more questions, including questions that aren't part of the actual protocol right now, because I saw something. And I'm not going to spook you as the patient before I know more about it, right? Most of these AI products in the clinical workflow are capturing words. They're not seeing what's happening; they're just listening and transcribing, and those are text words. I'm keeping my observation and my interpretation in my head, not saying them out loud. So that's where we, as clinicians, need to know the breakpoint of any technology. If I want to use this technology, I need to understand: this is where it fails. I can't keep talking out loud about what I'm thinking or what my interpretation is in front of the patient. So I need to reserve some time toward the end and tell the AI: hey, by the way, this all happened in the session, here's what my thought process was, and here are the things I didn't say out loud. Knowing what the barriers are, knowing what the breakpoints are, is very important for clinicians using any technology out there. And for the product builders too: we need to be truthful about what our capabilities are when we build the product. We need to be very, very transparent: these are the things we can't do, these are the things we can probably do half the time, these are the things we're really good at, and this is how you can make it better.

 

CJ Wolf: 21:03

So how do you measure it? Are there ideas or thoughts on how you can measure whether the AI is actually improving safety or quality?

 

Dr. Ashok Gupta: 21:12

Absolutely. Although I don't think this is an AI-specific question; it's really an operational question. You can always collect a bunch of KPIs, key performance indicators, but in my personal practice I always like to have one North Star metric. I don't like chasing 20 different KPIs in a big Excel sheet. I'll give you the smallest example, from the earlier discussion of patient-clinician interaction. Physical therapy generally happens over multiple visits within a single episode of care. So we analyze our clinicians on one-visit drop-off: a patient saw you, didn't find enough value to come back for the next visit, and never showed up after that. We track that number and we have a baseline for it. That is our North Star metric. Anything we do for webside manner, like this AI tool we created, should directly make your interaction with the patient better. That means patients show up more. That means my North Star metric of one-visit drop-off should immediately move. And that's how you see whether it's working.
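Part of the appeal of this metric is how simple it is to compute: one baseline number before the rollout, the same number after, one comparison. A minimal sketch with a hypothetical visit log:

```python
from collections import Counter

def one_visit_dropoff(visits):
    """visits: iterable of (patient_id, visit_date) pairs for one period.
    Returns the share of patients who never returned after their first visit."""
    visit_counts = Counter(patient_id for patient_id, _ in visits)
    if not visit_counts:
        return 0.0
    dropped = sum(1 for count in visit_counts.values() if count == 1)
    return dropped / len(visit_counts)

# Hypothetical visit log: p2 never returned, so drop-off is 1/2 = 0.5
visits = [("p1", "2024-01-02"), ("p1", "2024-01-09"), ("p2", "2024-01-03")]
print(one_visit_dropoff(visits))
```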

 

CJ Wolf: 22:26

That's right, it makes a lot of sense. Now, I haven't had my crystal ball delivered to me yet, but maybe you have one. Looking forward, what kinds of innovations in AI do you think are going to have the biggest impact on things like patient safety? And what might be some of the pitfalls the healthcare industry should be aware of as it all progresses and moves forward?

 

Dr. Ashok Gupta: 22:52

Right. This is the most exciting time, because we're building the foundation of the future right now in terms of AI and tech innovation. Since the early 2000s, I'd say this is the most critical phase of that innovation. So fast forward: AGI is on the horizon, and quantum computing is on the horizon. If you've been following the news, last week Google also pulled its Q-Day estimate back from 2031 to 2029. For your listeners who haven't followed this, the backstory: all the data we put out today is encrypted by an algorithm. You have a public key and you have a private key; put them together and it opens up, and you can read what's inside. But if that private key becomes exposed, everyone can read what you have written. All the PHI, all the protected data being transmitted over the internet, would be exposed for anyone to read. Right now that's not possible. It's so well encrypted that normal computers cannot do the math; it rests on factoring products of large prime numbers. But quantum computers, with their qubits, are becoming so capable and so fast that they will be able to break these encryptions, and that's projected to happen around 2029. That means the data we're encrypting today will no longer be protected. Now, it's not doomsday, don't worry. Whenever one problem is solved, a new one is created, and we've already come up with post-quantum encryption. But knowing this is very important, and so is building products around it. As a healthcare system or a compliance officer, you may think everything is protected today because there's a big lock on it. But a day will come when somebody gets the key, and they may already be holding the information they collected, right? So you have to start thinking from that perspective: what kind of encryption are you looking at? So the majority of innovation happening today is, first, a step toward that future. And second is consolidation. Look at the example of AI scribes. Every other company has an AI scribe, and then the EMRs and EHRs came out with their own native AI scribes, and everything is consolidating. You're going to see that story play out more and more, and in the end you'll find winners and losers. But the winners will have one common trait: they had a vision toward AGI, they had a vision toward quantum computing, and they protected their products at the foundation level to ensure they can survive in that world.
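To ground the encryption point: classic public-key schemes such as RSA rest on the difficulty of factoring the product of two large primes, and that is exactly the math Shor's algorithm on a sufficiently large quantum computer would make easy. A toy sketch with deliberately tiny, insecure numbers, just to show the dependence:

```python
# Toy RSA with tiny primes, to show why factoring is the whole ballgame.
# Real keys use primes hundreds of digits long; never use sizes like this.
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# An attacker who can factor n recovers p and q, recomputes phi, and
# derives d. Shor's algorithm factors n efficiently on a large quantum
# computer, which is the "Q-Day" concern and why post-quantum schemes
# replace this math entirely.
```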

 

CJ Wolf: 25:53

Wow, great stuff here. We're coming toward the end, and I want to make sure I give you the last word if you have any final thoughts or comments. What I've really enjoyed, Dr. Gupta, is that in a lot of other episodes we've talked about AI in general, but this conversation has been fascinating because you have a very specific example in physical therapy and physical medicine, and I love how that example surfaces issues that translate to other areas as well. So I really appreciate you bringing this very specific setting to light, and thank you so much. But I want to give you the last word: anything I didn't ask you, or anything you think our listeners need to hear before we wrap up?

 

Dr. Ashok Gupta: 26:41

Right. That's a very loaded question, but this is how I'll leave you. There's a lot of fluff out there today. The way to cut through the noise and see the real value of anything: integrated workflows are the key to any successful patient care. For any vendor, any product you come across, be the shepherd of the open, integrated environment. If you're a health system, or if you're a software company or a product builder, take this example: before you build the house, you need to pull the utilities to it. Build the foundation, pull in the integrations and the utilities, before you even put up the first wall. That's my two cents. And for any good conversation related to AI, telehealth, or physical therapy, I'm always happy to connect on LinkedIn; look up my name along with the word TheraNow, because my name is pretty common on there. Type it in and I'd be happy to talk. My website is theranow.com, t-h-e-r-a-n-o-w dot com, and we're pretty open to conversations, even outside the domain of what we do as a virtual physical therapy company.

 

CJ Wolf: 28:05

Awesome. Wonderful stuff here. And thank you so much, Dr. Gupta, for your time. Really appreciate it.

 

Dr. Ashok Gupta: 28:10

Thank you. Appreciate it.

 

CJ Wolf: 28:11

And as Dr. Gupta gave you some contact information, we'll include any links he sends us as well, so you can reach out and continue the conversation if you'd like. Thank you to all of our listeners. We love having you listen to these episodes, and we always welcome your input and feedback. If there are certain topics you want to hear more about, or guests you know, please let us know and we'll see if we can get them on the show. Until next time, everyone, take care.
