Intelligence Layer¶
SafeAI's intelligence layer provides 5 AI advisory agents that help you configure and understand SafeAI. The agents generate configuration files, explain incidents, recommend policy improvements, produce compliance policy sets, and generate framework integration code.
AI outside the enforcement loop
The intelligence layer is purely advisory. AI generates configs and explanations -- SafeAI enforces deterministically. AI never makes runtime enforcement decisions.
Core Constraints¶
| Constraint | What it means |
|---|---|
| Metadata-only default | AI agents never see raw protected data (secrets, PII values). They work on audit aggregates, code structure, and tool definitions. |
| BYOM (Bring Your Own Model) | You configure your own AI backend (Ollama, OpenAI, Anthropic, etc.). SafeAI doesn't bundle or mandate any provider. |
| AI outside enforcement | AI generates configs, SafeAI enforces deterministically. AI advises on audit events after the fact. |
| Human approval | All AI-generated configs are written to a staging directory for human review before taking effect. |
Setup¶
The easiest way to configure the intelligence layer is through the interactive CLI (`safeai init`). It prompts you to choose an AI backend from 12 supported providers (Ollama, OpenAI, Anthropic, Google Gemini, Mistral, Groq, Azure OpenAI, Cohere, Together AI, Fireworks AI, DeepSeek, or any OpenAI-compatible endpoint) and writes the configuration to `safeai.yaml` automatically:
```text
Intelligence Layer Setup

Enable the intelligence layer? [Y/n]: Y

Choose your AI backend:
  1. Ollama (local, free — no API key needed)
  2. OpenAI
  3. Anthropic
  4. Google Gemini
  5. Mistral
  6. Groq
  ...

Select provider [1]: 1

Intelligence layer configured!
  provider: ollama
  model: llama3.2
```
Already ran `safeai init`? Re-run it — it will skip existing files and just prompt for intelligence setup.
Manual configuration¶
If you prefer to edit safeai.yaml directly, here are the backend options:
Backend Options¶
Tip
The api_key_env field is the name of an environment variable, not the key itself. SafeAI reads the key from your environment at runtime.
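For example, a backend section might look like the sketch below. The `provider` and `model` keys match the CLI output above and `api_key_env` comes from the tip; the exact nesting is an assumption and may differ in your SafeAI version, so treat this as illustrative:

```yaml
intelligence:
  enabled: true
  backend:                        # nesting is illustrative -- check the Configuration reference
    provider: openai              # any of the 12 supported providers
    model: gpt-4o
    api_key_env: OPENAI_API_KEY   # name of an environment variable, not the key itself
```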
The 5 Agents¶
Auto-Config¶
Analyzes your project's codebase structure (file names, imports, class/function names, dependencies) and generates a complete SafeAI configuration.
What it reads: file paths, function signatures (via ast), imports, pyproject.toml deps. What it produces: safeai.yaml, policies, contracts, identities.
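The kind of structure-only signal auto-config works from can be illustrated with Python's `ast` module. This is a sketch of the idea, not SafeAI's implementation: names, imports, and signatures are extracted, never raw file contents forwarded verbatim.

```python
import ast

# Hypothetical project file, shown inline so the sketch is self-contained.
source = '''
import requests

def fetch_report(url: str, timeout: int = 30) -> str:
    ...
'''

tree = ast.parse(source)
# Collect imported module names and top-level function names only.
imports = [alias.name for node in ast.walk(tree)
           if isinstance(node, ast.Import) for alias in node.names]
functions = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]

print(imports, functions)   # ['requests'] ['fetch_report']
```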
Recommender¶
Reads audit event aggregates (counts by action, boundary, policy, agent, tool, tag) and suggests policy improvements.
What it reads: audit aggregates (counts only, no individual events). What it produces: suggested policy YAML, gap report.
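To make "counts only" concrete, here is an illustrative sketch (plain Python, not SafeAI code, with hypothetical event fields) of turning audit events into the per-dimension aggregates the recommender sees:

```python
from collections import Counter

# Hypothetical audit events -- the recommender never sees these directly,
# only the per-dimension counts computed below.
events = [
    {"action": "deny",  "boundary": "network",    "policy": "no-external-http"},
    {"action": "deny",  "boundary": "network",    "policy": "no-external-http"},
    {"action": "allow", "boundary": "filesystem", "policy": "read-only-src"},
]

aggregates = {
    dim: dict(Counter(e[dim] for e in events))
    for dim in ("action", "boundary", "policy")
}

print(aggregates["action"])   # {'deny': 2, 'allow': 1}
```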
Incident Response¶
Classifies and explains a security event, with optional remediation suggestions.
What it reads: single sanitized event + up to 5 surrounding events (metadata only). What it produces: classification, explanation, optional policy patch.
Compliance¶
Maps regulatory frameworks (HIPAA, PCI-DSS, SOC2, GDPR) to SafeAI policy rules.
What it reads: built-in compliance framework requirements, current config structure. What it produces: compliance policy set, gap analysis report.
Integration¶
Generates framework-specific integration code for connecting SafeAI to your target framework.
What it reads: target framework name, project structure (file names, deps). What it produces: integration code (hooks, adapters, config).
What the Agents Never See¶
None of the agents see:
- Secret values
- PII content
- Raw input/output text
- Matched regex values
- Capability token IDs
The `MetadataSanitizer` strips all banned metadata keys before any data enters an AI prompt. Banned keys include: `secret_key`, `capability_token_id`, `matched_value`, `raw_content`, `raw_input`, `raw_output`.
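The idea behind this sanitization step can be sketched in a few lines. This is an illustrative sketch, not SafeAI's actual implementation; the event fields are hypothetical, and only the banned key names come from the list above.

```python
# Keys documented as banned from AI prompts.
BANNED_KEYS = {
    "secret_key", "capability_token_id", "matched_value",
    "raw_content", "raw_input", "raw_output",
}

def sanitize(obj):
    """Recursively drop banned keys from nested dicts/lists."""
    if isinstance(obj, dict):
        return {k: sanitize(v) for k, v in obj.items() if k not in BANNED_KEYS}
    if isinstance(obj, list):
        return [sanitize(v) for v in obj]
    return obj

event = {   # hypothetical audit event
    "event_id": "evt_abc123",
    "action": "deny",
    "matched_value": "sk-live-xxxx",
    "context": {"raw_input": "full prompt text", "tool": "http_get"},
}

print(sanitize(event))
# {'event_id': 'evt_abc123', 'action': 'deny', 'context': {'tool': 'http_get'}}
```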
SDK API¶
All intelligence methods are on the SafeAI class and use lazy imports (the intelligence package is never loaded unless called):
```python
from safeai import SafeAI

sai = SafeAI.from_config("safeai.yaml")

# Backend management
sai.register_ai_backend("ollama", backend, default=True)
sai.list_ai_backends()

# Advisory methods (all return AdvisorResult)
result = sai.intelligence_auto_config(project_path=".", framework_hint="langchain")
result = sai.intelligence_recommend(since="7d")
result = sai.intelligence_explain(event_id="evt_abc123")
result = sai.intelligence_compliance(framework="hipaa")
result = sai.intelligence_integrate(target="langchain", project_path=".")
```
AdvisorResult¶
Every intelligence method returns an AdvisorResult:
```python
@dataclass(frozen=True)
class AdvisorResult:
    advisor_name: str          # "auto-config", "recommender", etc.
    status: str                # "success", "error", "no_backend"
    summary: str               # Human-readable summary
    artifacts: dict[str, str]  # {"safeai.yaml": "...", "policies/rec.yaml": "..."}
    raw_response: str          # Full LLM response
    model_used: str            # Model that generated the response
    metadata: dict[str, Any]   # Agent-specific structured data
```
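Because `artifacts` maps relative paths to file contents, a caller can write them out to the staging directory for human review. The `stage_artifacts` helper below is a hypothetical sketch, not part of the SafeAI API; the dataclass is repeated so the example runs standalone.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Any

@dataclass(frozen=True)
class AdvisorResult:  # shape documented above, repeated for a standalone example
    advisor_name: str
    status: str
    summary: str
    artifacts: dict[str, str]
    raw_response: str
    model_used: str
    metadata: dict[str, Any]

def stage_artifacts(result: AdvisorResult,
                    staging_dir: str = ".safeai-generated") -> list[Path]:
    """Write each artifact to the staging directory; skip non-success results."""
    written: list[Path] = []
    if result.status != "success":
        return written
    for rel_path, content in result.artifacts.items():
        dest = Path(staging_dir) / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(dest)
    return written
```

As with the CLI workflow, nothing here touches the live config: files only land in the staging directory until a human applies them.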
Proxy Endpoints¶
The intelligence layer adds these proxy endpoints:
| Endpoint | Method | Description |
|---|---|---|
| `/v1/intelligence/status` | GET | Returns enabled/disabled status, backend, and model |
| `/v1/intelligence/explain` | POST | Classify and explain an incident |
| `/v1/intelligence/recommend` | POST | Suggest policy improvements |
| `/v1/intelligence/compliance` | POST | Generate compliance policies |
All endpoints return HTTP 503 with a clear message when not configured.
Dashboard Integration¶
The dashboard adds an "Explain this incident" button on incident detail views:
- RBAC permission: `intelligence:explain` (available to `viewer` and above)
- Admin users get `intelligence:*` for all intelligence operations
- Endpoint: `POST /v1/dashboard/intelligence/explain`
Staging and Human Review¶
All generated artifacts are written to a staging directory (default: .safeai-generated/) for human review:
```bash
# Generate configs
safeai intelligence auto-config --output-dir .safeai-generated

# Review the generated files
cat .safeai-generated/safeai.yaml
cat .safeai-generated/policies/generated.yaml

# Apply when satisfied
safeai intelligence auto-config --output-dir .safeai-generated --apply
```
The --apply flag copies files from the staging directory to the project root. Without it, nothing takes effect.
Error Handling¶
| Level | Behavior |
|---|---|
| Config | intelligence.enabled: false (default). CLI commands fail with: "Intelligence layer is disabled." |
| Runtime | intelligence_*() methods raise AIBackendNotConfiguredError with instructions. |
| Proxy | Returns HTTP 503 {"error": "Intelligence layer not configured"}. Dashboard hides intelligence buttons. |
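On the client side, the documented `AdvisorResult.status` values can be dispatched on directly. The `handle_result` helper below is a hypothetical sketch, assuming only the statuses listed earlier ("success", "error", "no_backend"):

```python
def handle_result(status: str, summary: str) -> str:
    """Turn a documented AdvisorResult status into a user-facing message."""
    if status == "no_backend":
        return "Intelligence layer not configured -- run `safeai init` to set a backend."
    if status == "error":
        return f"Advisor failed: {summary}"
    return summary  # "success": pass the summary through

print(handle_result("success", "Generated 3 policy files."))
```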
Next Steps¶
- Configuration -- full `safeai.yaml` reference
- Audit Logging -- understand the audit events that feed the recommender
- Policy Engine -- how the generated policies are enforced