2026-04-13 · Ryan Bolden · Part of: The 11 Things That Will Break Your AI in Production

When your security is a checkbox, not an architecture

"We are HIPAA compliant. We signed a BAA with OpenAI."

I have heard this sentence from three different companies in the past six months. Each time, I asked the same follow-up questions. Does your AI agent enforce row-level security so patients can only access their own records? Is there an audit log of every interaction between the AI and a patient? Does the system filter PHI patterns from outbound responses before they reach the user? Is the system prompt, which contains tenant-specific configuration, protected against extraction by prompt injection?

Each time, the answer to all four questions was no. Each time, the company believed they were compliant because they had signed a legal document.

A Business Associate Agreement is a contract. It says that your vendor — OpenAI, Anthropic, ElevenLabs, whoever processes your patient data — agrees to protect that data according to HIPAA requirements. It is necessary. It is also the easiest part. Signing a BAA takes an afternoon. Building the engineering that HIPAA actually requires takes months.

Here is what HIPAA actually requires when an AI agent handles patient data.

Row-level security. When a patient logs into your portal and interacts with your AI, that AI should only have access to that patient's records. Not the next patient's. Not all patients'. The database queries that power the AI must enforce access controls at the row level, based on the authenticated user. If your AI can see Patient B's records while Patient A is logged in, you have a HIPAA violation regardless of what your BAA says.
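
A minimal sketch of what that scoping looks like in application code. The table and column names here are hypothetical, and sqlite3 stands in for whatever database you actually use:

```python
import sqlite3  # illustrative; the pattern is identical for Postgres, MySQL, etc.

def fetch_patient_records(conn: sqlite3.Connection, session_patient_id: str):
    """Return only rows belonging to the authenticated patient.

    session_patient_id must come from the verified auth session, never
    from the model's output or the user's free text. Otherwise the model
    can be talked into reading someone else's rows.
    """
    cur = conn.execute(
        "SELECT record_id, note, created_at "
        "FROM clinical_notes WHERE patient_id = ?",
        (session_patient_id,),
    )
    return cur.fetchall()
```

Stronger still is enforcing the same rule inside the database itself, for example with Postgres row-level security policies, so that a bug in application code cannot widen the AI's scope.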

Audit logging. Every interaction between the AI and a patient must be logged. What the patient said. What the AI responded. What tools were called. What data was accessed. This is not optional. When a breach investigation happens — and in healthcare AI, it is when, not if — the investigators will ask for a complete record of what the AI said and did. If you do not have that record, the investigation becomes adversarial instead of collaborative.
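
One minimal shape for that record, sketched as an append-only JSON-lines writer. The field names are illustrative:

```python
import json
from datetime import datetime, timezone

def log_interaction(log_path: str, *, session_id: str, patient_id: str,
                    user_message: str, ai_response: str,
                    tool_calls: list[dict], records_accessed: list[str]) -> None:
    """Append one audit entry per AI turn: who said what, which tools
    ran, which records were read. In production this belongs in
    append-only, access-controlled storage, not a local file."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "patient_id": patient_id,
        "user_message": user_message,
        "ai_response": ai_response,
        "tool_calls": tool_calls,              # name and arguments of each call
        "records_accessed": records_accessed,  # IDs of rows the AI read
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Note that the audit log itself contains PHI, so it needs the same encryption and access controls as the clinical data it describes.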

Output filtering. The AI's response must be scanned for PHI patterns before it reaches the patient. Social security numbers, dates of birth, phone numbers, medical record numbers, diagnostic codes — any of these appearing in a response where they should not be is a potential breach. This filter must operate at the code level, not the prompt level. A prompt that says "never include social security numbers in your response" can be bypassed. A function that scans the output string and redacts matching patterns cannot.
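
A bare-bones version of that code-level filter might look like this. The patterns are illustrative; a production system layers regexes with NER-based PHI detection, since names and free-text dates will not match any fixed pattern:

```python
import re

# Illustrative PHI patterns; real coverage needs far more than these four.
PHI_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "mrn":   re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact_phi(text: str) -> str:
    """Scan the model's output and redact anything matching a PHI pattern.
    Runs in code, after generation, so no prompt injection can skip it."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# The filter sits between the model and the user:
# safe_response = redact_phi(model_response)
```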

Minimum-necessary data flows. The AI should only receive the data it needs for the current interaction. If a patient is asking about their next appointment, the AI does not need access to their full medical history, insurance details, or billing records. The queries that feed data to the AI must be scoped to the minimum information necessary for the task.
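
One way to make that scoping explicit is a policy table mapping each task type to the smallest field set that can answer it. Everything below, intents and field names alike, is hypothetical:

```python
# Each intent gets only the fields needed to answer it; nothing else
# ever reaches the model's context.
MINIMUM_FIELDS = {
    "appointment_lookup": ("next_appointment", "provider_name"),
    "refill_status":      ("active_prescriptions",),
    "billing_question":   ("outstanding_balance", "last_statement_date"),
}

def build_context(intent: str, patient_row: dict) -> dict:
    """Project the patient record down to the minimum-necessary fields
    for the current intent before anything is handed to the model."""
    fields = MINIMUM_FIELDS.get(intent)
    if fields is None:
        # Fail closed: an unknown intent gets an error, not the full record.
        raise ValueError(f"No minimum-necessary policy defined for {intent!r}")
    return {k: patient_row[k] for k in fields}
```

The important design choice is failing closed: an intent with no defined policy raises an error rather than defaulting to the whole record.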

Encryption at rest and in transit. Patient data stored in your database must be encrypted. Data moving between your systems must be encrypted. This is table stakes, but I have seen production healthcare AI systems storing conversation logs in plaintext in an unencrypted database hosted on a shared server.
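
For the at-rest half, here is a sketch using the cryptography package's Fernet construction (AES-CBC plus HMAC). Key management, the genuinely hard part, is waved away here:

```python
from cryptography.fernet import Fernet

# In production: load the key from a KMS with rotation,
# never generate or hardcode it in application code.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_turn(text: str) -> bytes:
    """Encrypt a conversation turn before it is written to the database."""
    return fernet.encrypt(text.encode("utf-8"))

def load_turn(blob: bytes) -> str:
    """Decrypt a stored turn for an authorized read."""
    return fernet.decrypt(blob).decode("utf-8")
```

In transit, the equivalent discipline is refusing unencrypted connections outright, requiring TLS on every database and API connection rather than merely preferring it.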

We built all five of these layers because we had to. Not because a compliance officer told us to — because our system handles real patient data for a real psychiatric practice, and the consequences of a breach in behavioral health are not just fines. They are patients whose most sensitive medical information — mental health diagnoses, medication histories, crisis records — becomes exposed.

The engineering took months. The BAA took an afternoon. If the only security your healthcare AI has is the BAA, you do not have security. You have a checkbox that will not protect you when it matters.

This is one piece of a larger framework we built and operate in production. The full picture — and how it applies to your business — is in the playbook.

We specialize in healthcare because it is the hardest vertical — strict HIPAA regulation, PHI handling, BAA chains, and zero tolerance for failure. If we can build it for healthcare, we can build it for any industry. We work across verticals.

Written by Ryan Bolden · Founder, Riscent · ryan@riscent.com