Using AI In Your Practice: How To Innovate Without Violating Compliance With Daniel Hirsch

Most therapists are already experimenting with AI. But very few understand the compliance risks that come with it.
In this episode of the Private Practice Owners Podcast, host Adam Robin sits down with compliance expert Daniel Hirsch of Risk and Compliance Analytics to talk about one of the biggest shifts happening in healthcare right now: AI. From documentation automation to billing analysis and scheduling optimization, AI tools are rapidly entering clinical practices. But while these tools can dramatically improve efficiency, they also introduce serious compliance risks if used incorrectly. Daniel breaks down what private practice owners and clinicians need to understand before integrating AI into their workflows.
This conversation is not about hype — it’s about using AI responsibly, ethically, and within regulatory boundaries. If you're curious how AI will impact documentation, compliance, audits, and patient care in the coming years, this episode will give you a practical framework for thinking about it.
In this episode, you’ll learn:
- Why AI is becoming one of the biggest operational shifts in healthcare
- The difference between using AI as a tool vs. letting AI replace clinical judgment
- The compliance risks many therapists overlook when using AI documentation tools
- Why HIPAA, security standards, and vendor agreements still apply to AI platforms
- What a BAA (Business Associate Agreement) is — and why every AI vendor must have one
- Why Medicare doesn’t care if AI wrote your note — they care if it’s medically necessary
- The dangers of repetitive AI-generated documentation and why auditors flag it immediately
- How AI can actually improve compliance by identifying patterns, billing mistakes, and underbilling
- Why AI analytics can help clinic owners detect operational risk earlier
- How scheduling automation and communication tools improve patient adherence
- Why therapists must still review and verify everything AI generates
- The leadership responsibility owners have when implementing AI tools in their clinic
- Why governance, training, and internal auditing are essential when adopting new technology
AI is not replacing therapists. But therapists who understand how to use AI responsibly will outperform those who ignore it. This episode explains how to adopt AI without putting your practice, license, or compliance at risk.
🎙️ AI is not the risk. Using it incorrectly is.
Show Notes:
Join the upcoming PPO Club workshop:
https://ppoclubevents.com/04-17-26-workshop
Want help building a stronger practice model? Book a call with Nathan: https://calendly.com/ptoclub/discoverycall
💡 Love the show? Subscribe, rate, review, and share! https://ptoclub.com/
99.5% of successful owners interviewed on this podcast have leveraged a business coach at some point in their journey. Private Practice Owners Club is the coach you need — ppoclub.com
Explore upcoming workshops, free resources, and tools to help you scale revenue without burning out your team: https://linktr.ee/ppoclub
---
Listen to the Podcast here
Using AI In Your Practice: How To Innovate Without Violating Compliance With Daniel Hirsch
AI Is Taking Over Healthcare Conversations
I’ve got my guy here, Daniel Hirsch, with Risk and Compliance Analytics and we're going to talk about the one topic everyone wants to hear more about, AI, because it's taken over, Daniel. It's taken over. It's coming. I feel like I have more and more conversations with people about AI and how people are leveraging it, integrating it. Some people are using it in a good way. Some people are not.
I heard that somebody was using ChatGPT to document their plan of care. I just thought about you. I was like, “Daniel would flip upside down if he heard this.” I know other people are doing that. If we're going to leverage AI in our practice, in our business, we want to do it the right way, do it the safe way, and we want to make sure that we leverage it in a way that doesn't put us at a compliance risk. Daniel, I'd love to hear your take on AI and what people need to know about it.
AI As A Powerful Tool—But With Risks
First of all, thank you, Adam. I know this is our last episode in the Compliance Masterclass series, so very exciting. You mentioned AI, and we're going to talk about probably one of the biggest shifts happening in, forget the world, in healthcare right now. With AI, everything you need to know is that it's a tool. PTs have a tool at their disposal. It can improve documentation, efficiency, and patient care, but there are obvious risks.
I had the privilege of attending the APTA's CSM conference in Anaheim, and I've got to tell you, probably 75% of the sessions spoke about AI in some way. The problem is that research is always lagging behind, so a lot of the content was already outdated, meaning the technology they were talking about was already available. I want to answer how to use it responsibly, ethically, and in a truly compliant manner.
I think a lot of people, and maybe you're hearing this too, are asking, “Is AI going to replace therapists?” Maybe bad therapists. Just kidding. AI tools can help PTs be better PTs. Let's say you forgot a component in your note. Forget the ChatGPT thing you mentioned a second ago, but it reminds you. It provides insight into your coding, how you're billing, the accuracy of it, and obviously a lot of administrative tasks that should not be so manual anymore.
AI tools could help PTs become better practitioners.
When Medicare itself goes on record about how long and tedious documentation can be, the obvious reflex for anyone should be, “Fine, I’ll solve for that. I don't want to deal with this.” Our profession is so heavily regulated that you have to make sure you're not violating HIPAA privacy and security rules or billing rules and things like that. I don't know if you're a car guy, but think of it like the emissions test you take every year or so.
You go and you don't cheat, you follow the rules. I don't know if you remember this, but it stuck with me when Volkswagen got caught cheating on emissions. They got penalized, huge penalties. The point I'm making is that when you're using AI, you have to make sure the vendor is reliable, they have to have a really good reputation, and this is not about figuring out a way to use technology to cheat the system. You don't want that.
Compliance Foundations: Privacy, Security, And BAA Requirements
Let me explain that a little bit better. There are really three concepts you have to understand from a regulatory standpoint. First, patient privacy and security, that's what we're dealing with. The AI tool you're using has to interact with patient data, but it still has to meet your privacy and security standards. That's under federal law from the Department of Health and Human Services. You have to be able to secure the data in transmission as well as at rest.
Authorized users have to be maintained. You can't just let anybody have access to this. Also, you have to have a BAA in place. If you're thinking about how you're processing notes, intake forms, and schedule data, that's all great. That's fantastic. BAAs, I know a lot of people complain, “Do I really have to track it down?” This is probably one of the easiest audits you can do. Having a BAA in place is something that everyone you do business with, Adam, has to have.
The reason is that it safeguards. It protects you and the business. From a vendor standpoint, you're transferring that responsibility back to them to maintain basic safeguards. Also, for breaches and reportability, you have to have accountability, and BAAs specifically outline who is responsible for what. That's very easy to do. If anyone does not know how to do it, again, this is basic 101 of being in business.
Finally, the third component is really billing and what's medically necessary. Medicare, in my opinion, doesn't care whether AI was used. They really care about outcomes. As professionals, I think we're being held to the same standards we've always been held to, and that's whether you're providing skilled, medically necessary services. Is that taking place? Is the documentation accurate?
Is it individualized? Does it really reflect the actual care that's being provided? Are you objectively improving your patients throughout the episode of care? If you can't answer that, then A, you probably shouldn't be a therapist, but B, you're just not ready. You're just not ready to be responsible in outpatient private practice without someone watching over you 24/7.
The Biggest Risk: Repetitive, Unjustified AI Documentation
AI-generated documentation, in my opinion, is not the problem. The issue is when therapists let the AI generate templates that become repetitive and are no longer clinically justified. As an auditor, I’ll tell you, it's the indifference problem. You know it right away when therapists just don't care. AI is also going to reflect that, because whatever you put in, you're going to get the same thing back. If you're putting quality in, you're just going to enhance what you're already putting in.
I think that's one of the biggest problems we see in this realm. It's a human problem, but it's going to be amplified by AI. The safest, highest-value use, and I think people reading this won't be surprised because it's been around for years, is really from a burnout standpoint: documentation. I made the joke about Medicare. They don't care if you're using this. They just want to know whether it is still objective.
Is it still meeting their definitions? The manuals didn't change. They're all the same. Yet we get a fantastic technology tool to use now. If you're drafting your notes and your language and it suggests clinical phrasing, it's going to flag certain things that are missing. That's fantastic. It validates accuracy, it confirms medically necessary language, and that's really good. The notes are not taking away what your clinical judgment should be.
Even if it's AI-assisted, your license, and I think therapists forget this when they're documenting, is still on the line every time you document. Not just when you're touching the patient, but when you're recording what took place. This is just the next chapter of what we've done for many years. Now you have a great technology tool to make it faster, quicker, smarter. That's great, but your license is still on the line every time you sign that note off.
From an audit standpoint, I can tell you I love that this is available. It replaces all those days of manually calculating KPIs, and you know how important that is to owners. You need the data. Now, instead, you can just say, “I know where it is, I just need to interpret it and execute on the data.” You've got units and coding and utilization outliers and all these things, and it's all to your advantage now.
I've also said many times before, peer-to-peer, forget that. Those days are done. Audits can also be accomplished without hiring people like me, which is nice. Obviously, there's a cost to going this route. Technology is going to cost something, but the options are there now. If you can identify risks early on before they become major problems, I think that's the beauty of risk management.
AI Adoption Is The Future—And A Competitive Advantage
If people don't want to take advantage of it, that's fine, but they have to realize this is the reality of where everything is going, and you should take advantage. Something I don't really live in a lot, but where these tools offer a lot of leverage, is marketing and patient engagement. All these appointment reminders, I love that. That's great. Scheduling optimization. It helps the whole communication, not just getting people to comply with their program, but keeping people on track, because we're so distracted nowadays. It does a really nice job.
Finally, obviously, there's the revenue cycle. When it comes to billing, a lot of us underbill. We always hear the bad stories of people getting caught and making front-page news, but a lot of us underbill because we're so kind, so generous. AI can detect denial patterns. A lot of times, we get stuck in patterns.
I told you, even myself, every three months you'd be in the same rut of billing the same type of thing, creating the same sentences. Looking at behavior, this takes the human element out of it and says, “By the way, this is what's happening objectively. It's not my opinion, this is just what you're objectively doing.” By the way, it's a great way to measure performance. For those performance evaluations, there's no awkward, “Let's come in and discuss it.” It's just simply, “Here are the numbers.”
Let's talk about those numbers. Here's where practices get into trouble: don't expect perfection. People make mistakes and so do AI models. When you're reviewing your notes, you've got to see if they're accurate. You're still going to save a tremendous amount of time. AI is very good at generating information, but identical notes, I can tell you, Adam, are a huge red flag. Massive.
Just like we're able to make all this fantastic content, if it's repetitive in nature, it's going to get flagged so quickly. Many AI platforms are not automatically compliant, so you have to check. You do have to ask about their security standards, and maybe there are contractual protections you have to put in place. Even at CSM, everyone's shouting about how AI is going to impact everything, and honestly, maybe that's true.
However, if you want to be successful, in my opinion, you have to apply the same standard to AI that you would to any other program or operation, hands down. It starts with defining the purpose of what you're trying to do. You have to establish governance, you have to train and educate your staff, you obviously have to monitor it, and then you have to audit it on a routine basis. As long as you're doing the same things you would do for any other operation, you're going to be just fine.
Maybe one day when you're reading this, and maybe it's just my perspective now, but I don't know any therapist out there who thinks, “What I know now is all that's needed to help my patients improve. There's nothing better I could possibly offer. I know the most, and nothing will ever be better.” The message I took home on the plane from CSM was that the profession is doing well, even with all the negativity.
I usually stay away from these types of gatherings of tons of students. They're very enthusiastic. Outpatient therapists are really at the tip of the profession, and they take on all the necessary risks. When you see it come into play, and when you combine the compliance, the operations, and all the clinical decision-making that's necessary, you'll see that growth is really possible when you have the right pieces in place.
Growth is possible when you have the right pieces in place.
It was so evident to me after coming out of a couple of days of listening to all the smart people out there on all these hot-button topics staff are talking about, burnout and audits and administrative chaos and reimbursements and on and on, and how directly correlated they all are. I think if you're using AI, you're going to outperform those who are not. It's just that simple.
There are really great ways to do it, and there are really great tools out there with AI embedded in them. It's rapidly advancing, and I think it's a shame that people are so hesitant. The upside, in my opinion, is that there are so many great boundaries you can stay within. There's no reason not to be taking this on right now.
As I listened to you speak, I thought about my own experience with AI, my relationship with it. Like most people, I initially had this overdependence on AI and gave all the reasoning away. I made the same mistake we all did: “It wasn't me, it was the AI that did that. I'm not responsible for that.” I will admit I had that initial reaction, and now I understand. We've rolled out AI in our business, and we've had to actually train our therapists: “This isn't just a tool. This isn't the wild, wild West anymore. You're still responsible for whatever you put in your note.” I think if you approach AI like you would anything else, with a level of professionalism and responsibility, then you can use it as a tool.
If you're going to be careless and reckless, then maybe grow up a little bit more first. Don't be like me. Daniel, thank you for your time. I really appreciate it. Thank you for doing these five episodes. For those that are reading, check them out on their YouTube channel. They're going to be posting in our Facebook group. If you need compliance work in your business, check out Daniel Hirsch with Risk and Compliance Analytics. Daniel, see you next time.
Thanks, Adam. Peace.
Important Links
- Risk and Compliance Analytics
- Private Practice Owners Club on YouTube
- Private Practice Owners Club on Facebook
- Join the upcoming PPO Club Workshop
- Book A Call with Nathan
- PPO Club Linktree
