2026-04-13 · Ryan Bolden · Part of: Your AI Agent Has a Security Hole You Have Not Tested For

"We signed a BAA with OpenAI" is not a security architecture

I hear this sentence at least once a week. A healthcare company tells me about their new AI product, I ask about their security architecture, and they say: "We signed a BAA with OpenAI."

That is not a security architecture. That is one document in a stack of requirements, and it covers approximately 5% of the actual security surface area of a healthcare AI deployment.

Let me explain what a BAA actually does and does not do.

A Business Associate Agreement is a legal document required by HIPAA when a covered entity (your practice) shares protected health information with a third party (in this case, an AI provider). The BAA establishes that the third party will safeguard PHI according to HIPAA requirements. It allocates liability. It defines breach notification procedures.

What a BAA does NOT do: it does not prevent prompt injection attacks. It does not stop a user from extracting your system prompt. It does not validate that the AI's outputs are clinically appropriate. It does not ensure that patient data is not being used to train models. It does not prevent the AI from hallucinating medical information. It does not create audit trails for AI-patient interactions. It does not establish access controls between different patients' data. It does not protect against adversarial inputs designed to bypass safety guardrails.

A BAA is a legal agreement. Security is an engineering problem. Signing a document does not make your system secure any more than buying car insurance makes you a good driver.

I have spent over a year building AI systems for healthcare. Over a million lines of code. Real patients. Real PHI. And I can tell you that the security architecture required for a healthcare AI system is orders of magnitude more complex than most companies realize.

Here is a partial list of what actual security architecture looks like for a healthcare AI system:

- Input validation and sanitization for every piece of text that enters the system.
- Output filtering to prevent PHI from leaking into responses where it does not belong.
- Prompt injection defenses — multiple layers, because no single defense is sufficient.
- Data isolation between tenants, so that Practice A's patient data can never appear in Practice B's interactions.
- Audit logging for every AI interaction, stored in a tamper-resistant format.
- Access controls that limit the AI's data retrieval to only what the current interaction requires.
- Clinical guardrails that prevent the AI from providing medical advice outside its authorized scope.
- Fallback mechanisms that route to human oversight when the AI encounters uncertainty.

That is not an exhaustive list. That is the starting point.
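Two of those controls, tenant isolation and tamper-resistant audit logging, can be made concrete with a short sketch. This is an illustration only, not any vendor's actual implementation; every name and structure here is an assumption I am making for the example:

```python
import hashlib
import json
import time

# ILLUSTRATIVE SKETCH ONLY. Shows two controls from the list above:
# (1) tenant isolation: records are keyed by (tenant_id, patient_id),
#     so a query scoped to one tenant can never return another's data;
# (2) tamper-evident audit logging: each log entry hashes the previous
#     one, so deleting or editing history breaks the chain.

class TenantScopedStore:
    def __init__(self):
        self._records = {}    # (tenant_id, patient_id) -> record dict
        self._audit_log = []  # append-only, hash-chained entries
        self._prev_hash = "0" * 64

    def put(self, tenant_id, patient_id, record):
        self._records[(tenant_id, patient_id)] = record

    def get(self, tenant_id, patient_id, actor):
        # The tenant_id must come from the authenticated session,
        # never from model output or user-supplied text.
        record = self._records.get((tenant_id, patient_id))
        self._log({"actor": actor, "tenant": tenant_id,
                   "patient": patient_id, "found": record is not None})
        return record

    def _log(self, event):
        entry = {"ts": time.time(), "event": event,
                 "prev": self._prev_hash}
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._audit_log.append(entry)

    def verify_chain(self):
        # Recompute the hash chain; any edited or dropped entry fails.
        prev = "0" * 64
        for entry in self._audit_log:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return prev == self._prev_hash
```

The design choice worth noting: the tenant ID is bound at the storage layer rather than trusted from the request payload, and every retrieval is logged whether or not it succeeds.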

The reason I take this seriously is not abstract security consciousness. It is because I have researched what happens when AI systems in healthcare are not properly secured. I have documented ten categories of prompt injection attacks and built six layers of defense. I have studied production incidents where AI systems hallucinated medical information. I have seen what happens when patient data crosses tenant boundaries.

None of those protections exist inside a BAA. The BAA is the legal wrapper. The engineering is the actual protection.
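As an illustration of what one such engineering layer can look like, here is a minimal input-screening sketch that flags instruction-override patterns before text reaches the model. The patterns are examples invented for this illustration, not a complete or sufficient set, and pattern matching is only ever one layer among several:

```python
import re

# ILLUSTRATIVE SKETCH ONLY. A single screening layer that flags
# user-supplied or retrieved text containing instruction-override
# phrasing before it is passed to the model. The pattern list is an
# example, not a real product's rule set, and matching alone is never
# a sufficient defense against prompt injection.

SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
    r"reveal (your )?(system )?prompt",
]

def flag_injection(text: str) -> list[str]:
    """Return the patterns the text matches (empty if none matched)."""
    lowered = text.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]
```

A flagged input would then be rejected, rewritten, or routed to human review, depending on context; the other layers sit behind this one precisely because filters like this can be evaded.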

When I talk to practice owners about AI, I ask them a specific question: "Beyond the BAA, what is the vendor's security architecture?" If the answer is blank stares or marketing language about "enterprise-grade security" without specifics, that tells you everything you need to know.

Here is what I tell those practice owners. A BAA is necessary but not sufficient. It is the cost of entry, not the finish line. Any vendor who leads with "we have a BAA" and cannot articulate their actual security architecture — input validation, prompt injection defense, tenant isolation, audit logging, clinical guardrails — is selling a liability, not a product.

I do not expect practice owners to become security engineers. I expect the companies building healthcare AI to take security as seriously as they take features. Right now, most of them do not. The BAA gives them legal cover. The marketing gives them customer cover. The engineering has not caught up.

When it does catch up — probably after the first major healthcare AI breach — the companies that invested in real security architecture from day one will be the ones still standing. The ones that led with "we have a BAA" will be the ones explaining to regulators why they confused a legal document with a security system.

This is one piece of a larger framework we built and operate in production. The full picture — and how it applies to your business — is in the playbook.

We specialize in healthcare because it is the hardest vertical — strict HIPAA regulation, PHI handling, BAA chains, and zero tolerance for failure. If we can build it for healthcare, we can build it for any industry. We work across verticals.

Written by Ryan Bolden · Founder, Riscent · ryan@riscent.com