Implementation realities
5 questions to ask before buying any AI tool
By Nirav Desai · Apr 2, 2026

We watch practices buy AI tools with genuine optimism, only to see them collect digital dust six months later. The tool itself is rarely the culprit. The problem is what didn't happen before the purchase: basic questions about fit, workflow, and cost that would have either justified the investment or flagged it as wrong for your practice right now.
The best AI purchases in healthcare don't start with the vendor demo. They start with clarity about what you're actually trying to solve.
What specific workflow is this supposed to fix?
AI vendors will speak in grand abstractions. They'll mention "clinical efficiency" or "administrative burden reduction" as if those are specific problems. They're not. A specific problem looks like: "Our intake staff spends four hours a day on phone intake, and we have a waiting list."
Before any evaluation, map where the tool sits in your actual workflow. Ask your team how they work today: not how they're supposed to work, but how they actually work. Where are the bottlenecks? Where do tasks get handed off? Where does information get re-entered? This is where AI can do real work, or where it can become a pointless addition to an already-crowded workday.
If you can't name the specific workflow in two sentences, you're not ready to buy. The tool might be good, but you haven't defined what "good" means for your practice yet.
Who will actually use this every single day?
This is where many AI implementations break. The champion, usually the practice owner or an enthusiastic manager, gets trained, loves the tool, and then it lands in front of the people who actually have to use it, and they hate it.
Talk to the people who will use the tool daily. Not in a group setting. Sit with them individually and ask: Does this fit how you work? Will this slow you down initially? How much training will you actually have time for? Be prepared for honesty that's uncomfortable. They're the ones whose resistance will determine whether this tool becomes part of your practice or an expensive footnote.
A 2024 McKinsey survey found that 70% of healthcare organizations that implemented AI reported adoption rates below 50% among intended users. The tool wasn't the problem—buy-in was. Ask yourself now whether you have the bandwidth to handle resistance and iteration, because you will need both.
What happens to patient data?
This is non-negotiable. You need to know: Where does the data go? Who can access it? Is it used for training the vendor's model? What are the security standards? What's the data retention policy?
Talk to your compliance and IT people. Sit down with the vendor contract. Some AI tools are genuinely secure and transparent. Others are intentionally vague. If a vendor gets defensive or evasive about data handling, that's your answer. You don't need that tool badly enough to accept vagueness around patient information.
Also ask about integration. How does data move between your EHR and the AI tool? Are you manually copying and pasting information into a separate system? That's not efficiency; that's a second job someone has to do, and it's a security gap. Real integration should be seamless; if it isn't, the manual workaround becomes invisible overhead that never appears in the vendor's pricing.
What's the actual cost in year two and beyond?
The demo pricing is almost never the real cost. Vendors front-load low prices to get you in the door. After the honeymoon, they add seats, increase per-user fees, introduce minimum user commitments, or announce price increases for version upgrades.
Ask for:
- Setup costs broken out separately
- Per-user, per-transaction, or per-month pricing (which applies to you?)
- Whether licenses require annual commitment
- Support and training costs beyond the base price
- Historical price increases for their product line
- Whether the tool requires infrastructure investment on your end
Model the total cost for year two with at least a 20% buffer for surprises. Then ask: What's the actual return on this? If you're saving one FTE (around $75,000 a year in loaded cost), and the tool costs $30,000 annually, that's a real business case. If you're paying $30,000 to save three hours a week, you need to be honest about whether that math works.
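The year-two math above can be sketched as a quick back-of-envelope calculation. The dollar figures are the article's examples; the cost breakdown (license, support, training) is a hypothetical illustration, and the 20% buffer is applied to all recurring costs:

```python
def year_two_cost(license_fee, support, training, infra=0.0, buffer=0.20):
    """Estimate year-two total cost with a contingency buffer for surprises."""
    recurring = license_fee + support + training + infra
    return recurring * (1 + buffer)

def simple_roi(annual_savings, annual_cost):
    """Return the net annual benefit and whether the business case holds."""
    net = annual_savings - annual_cost
    return net, net > 0

# Article's example: one FTE saved (~$75,000 loaded) vs. a ~$30,000 tool.
# The $25k/$3k/$2k split below is a hypothetical breakdown, not vendor data.
cost = year_two_cost(license_fee=25_000, support=3_000, training=2_000)
net, positive = simple_roi(annual_savings=75_000, annual_cost=cost)
print(f"Year-two cost: ${cost:,.0f}, net benefit: ${net:,.0f}")
# Year-two cost: $36,000, net benefit: $39,000
```

The point of running the numbers this way is that the buffer turns a $30,000 sticker price into a $36,000 planning figure, and the business case should survive that inflated number, not just the demo pricing.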
What does success look like after 90 days?
Before you sign anything, define what "working" means. Not in vendor language. In your practice language. Does it mean intake time drops from four hours to two? Does it mean your clinicians spend less time on documentation? Does it mean fewer patient callbacks?
Pick one or two metrics. Make them measurable. Make them achievable. Then commit to checking them at 90 days. If the tool hasn't moved the needle on at least one of them, it's time for a real conversation about whether it stays.
This also gives you an off-ramp. If 90 days in, you realize the tool isn't for you, you can exit with dignity and minimal sunk cost. Most healthcare AI tools have early termination options for exactly this reason. Use them.
The practices that get AI right don't fall in love with the tool. They stay skeptical and data-driven about whether the tool is actually solving a real problem in their world. Ask these five questions now. Your future self, six months from now, will either be grateful or confused about why you didn't.
READY TO ACT
Ready to see where AI fits in your operation?
Most clients start with the AI Readiness Assessment. It takes 60–90 days, costs $15,000–$35,000, and gives you a scored roadmap you can act on immediately.
Download the Practitioner's AI Power Pack
9 hours of Google's prompt engineering course, distilled into 10 templates built for healthcare and wellness. Revenue Protection, Marketing and Growth, Operational Efficiency, Strategy and Training.