Our LLM Policy
Sovereign LLM Infrastructure
All LLMs used by CybeDefend are deployed on sovereign cloud infrastructure within your chosen region (EU or US). Your code and vulnerability data never leave your selected geographical boundary.
No Training or Fine-Tuning
CybeDefend enforces a strict zero-training policy: your code, vulnerabilities, and interactions with our AI agents are never used to train, fine-tune, or improve our models.
- Complete confidentiality: Your proprietary code remains private
- No data leakage: Your security findings never contribute to model training
- Compliance: Meets the strictest regulatory requirements (GDPR, SOC 2, etc.)
Data Processing
When you use CybeDefend AI features (Cybe Analysis, Cybe AutoFix, Cybe Security Champion):
- Your code is parsed into our proprietary knowledge graph
- Queries are sent to sovereign LLMs within your chosen region
- Responses are generated using your specific codebase context
- All data remains within your regional boundary
LLM inference happens in real-time and is not persisted beyond the immediate request/response cycle.
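The request/response cycle above can be sketched as follows. This is a minimal illustrative sketch, not CybeDefend's actual API: the endpoint map, function names, and data types are all hypothetical, and the network call is stubbed out so the example is self-contained.

```python
from dataclasses import dataclass

# Hypothetical regional endpoints -- illustrative names only,
# not real CybeDefend infrastructure.
REGIONAL_LLM_ENDPOINTS = {
    "eu": "https://llm.eu.example.internal",  # EU sovereign cloud
    "us": "https://llm.us.example.internal",  # US cloud infrastructure
}

@dataclass
class InferenceResult:
    answer: str
    region: str

def run_inference(region: str, query: str, codebase_context: str) -> InferenceResult:
    """Route a query to the LLM pinned to the tenant's region.

    Nothing is written to disk or a database: the data exists only
    for the duration of this call, matching the no-persistence
    guarantee described above.
    """
    if region not in REGIONAL_LLM_ENDPOINTS:
        raise ValueError(f"unknown region: {region}")
    # A real service would make an HTTPS call to the regional endpoint;
    # here we simply build a response locally to keep the sketch runnable.
    answer = f"[{region}] response to: {query} (context: {len(codebase_context)} chars)"
    return InferenceResult(answer=answer, region=region)

result = run_inference("eu", "explain this finding", "knowledge-graph context...")
```

The key design point is that the region is chosen once, at routing time, and every downstream call inherits it, so data cannot silently cross the regional boundary.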
Regional LLM Deployment
| Region | LLM Location | Compliance |
|---|---|---|
| Europe | EU sovereign cloud | GDPR, ANSSI SecNumCloud, BSI C5 |
| United States | US cloud infrastructure | SOC 2, HIPAA, PCI-DSS |
Your Control
You can enable or disable AI features at the project level:
- Cybe Analysis: Can be toggled in project settings
- Cybe AutoFix: Requires explicit activation and Git integration
- Cybe Security Champion: Requires Cybe Analysis to be enabled
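The dependencies between these toggles can be expressed as a small validation rule. The sketch below is hypothetical: the settings field names are illustrative and do not reflect CybeDefend's actual configuration schema, only the two constraints stated above.

```python
from dataclasses import dataclass

# Hypothetical project-settings shape -- field names are
# illustrative, not CybeDefend's real schema.
@dataclass
class ProjectAISettings:
    cybe_analysis: bool = False
    cybe_autofix: bool = False
    cybe_security_champion: bool = False
    git_integration: bool = False

def validate(settings: ProjectAISettings) -> list[str]:
    """Return violations of the constraints described above."""
    errors = []
    if settings.cybe_autofix and not settings.git_integration:
        errors.append("Cybe AutoFix requires Git integration")
    if settings.cybe_security_champion and not settings.cybe_analysis:
        errors.append("Cybe Security Champion requires Cybe Analysis")
    return errors

# Enabling Security Champion without Analysis fails validation:
violations = validate(ProjectAISettings(cybe_security_champion=True))
```

A valid configuration (for example, Analysis and Security Champion both enabled) produces an empty violation list.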
Related: Cybe Analysis Configuration · Data Storage & Privacy · Cloud Region Selection