What is Zero Trust AI?
Our Approach
At Costa, our background is in network security. We take an opinionated, no-holds-barred approach to securing AI. We believe AI represents a danger greater than anything we have seen in the history of computing — and we are here to help. We call our approach Zero Trust AI, and to us, that means:

- Do not trust the model, no matter the “good intent” of its creator,
- Do not trust the model provider, no matter the “definitely next level” security they promise,
- Do not trust the tools, no matter how “absolutely safe” they claim to be, and
- Do not trust the human or agent operating the model, no matter how much they protest that they will never make a mistake.
AI is a Dynamic Landscape
Zero Trust AI is a moving target - there is no definitive checklist you can set up today and be done with. At Costa, we believe it’s our job to sit at the edge of cybersecurity (what we call the cybersecurity ‘coast’, hence ‘Costa’) and make sure that we always apply Current Best Practices to AI infrastructure.

Costa’s top five for security
The Costa platform includes quite a few security features built in. Here are the five most important things we give you:

1. Sensitive information filtering
Every request is filtered for personal information (see OWASP LLM02:2025, Sensitive Information Disclosure). We extract sensitive information and replace it with “dummy” placeholder values before the request is sent to the model, then swap the real values back in before the response reaches the user.

2. Dynamic agency control

We use a combination of the current and prior tool requests and conversation outputs to give each individual request a Risk Score. This score is based on factors such as whether the tool has read or write access to internal information, whether it talks to the outside world, how powerful the model is, and the nature of any information provided by the user. See OWASP LLM06:2025 for a description of excessive agency and why it is critical to prevent it.
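The redact-then-restore flow described under point 1 can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not Costa’s actual implementation; the regex patterns, placeholder format, and category names are ours, and a production filter would cover far more categories than these two.

```python
import re

# Illustrative pattern set only; a real filter would detect many more
# categories (names, addresses, credentials, API keys, ...).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace sensitive values with dummy placeholders; return the
    redacted text plus a mapping used later to restore the originals."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def restore(text, mapping):
    """Swap the real values back in before the response reaches the user."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

prompt, mapping = redact("Email alice@example.com about SSN 123-45-6789")
# prompt is now "Email <EMAIL_0> about SSN <SSN_0>" -- safe to send upstream.
# ... model call happens here ...
answer = restore(prompt, mapping)  # real values return only to the user
```

Note that the model never sees the real values: only the placeholders cross the trust boundary, which is the point of the zero-trust stance above.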
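A risk score of the kind described under point 2 might be sketched like this. The factor names, weights, and history decay below are illustrative assumptions of ours, not Costa’s scoring model; the idea is simply that each request is scored on its own capabilities plus the risk already accumulated in the conversation.

```python
from dataclasses import dataclass

# Illustrative weights only -- not Costa's real scoring model.
WEIGHTS = {
    "reads_internal": 1.0,
    "writes_internal": 2.0,
    "talks_to_outside_world": 2.5,
    "powerful_model": 1.5,
    "user_supplied_secrets": 1.0,
}

@dataclass
class ToolRequest:
    reads_internal: bool
    writes_internal: bool
    talks_to_outside_world: bool
    powerful_model: bool
    user_supplied_secrets: bool

def risk_score(request, prior_scores=()):
    """Score one tool request from its own risk factors, plus a decayed
    contribution from prior requests in the same conversation: risk
    accumulates as an agent chains tools together."""
    base = sum(w for name, w in WEIGHTS.items() if getattr(request, name))
    history = sum(prior_scores) * 0.25  # earlier risky calls raise the floor
    return base + history

req = ToolRequest(reads_internal=True, writes_internal=False,
                  talks_to_outside_world=True, powerful_model=True,
                  user_supplied_secrets=False)
score = risk_score(req, prior_scores=[2.0])
# 1.0 + 2.5 + 1.5 from the request itself, plus 0.5 from history = 5.5
```

A platform would then compare the score against a policy threshold and block, downgrade, or flag the request — which is how excessive agency (OWASP LLM06:2025) gets contained before a tool call executes.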