AI Usage Policy
Last updated: March 8, 2026
Detectory is built by TacticalEdge AI to help security teams detect and respond to identity-based threats in cloud environments. This policy explains how we use artificial intelligence, how we handle data, and the safeguards we have in place to ensure responsible, transparent AI use.
1. How Detectory Uses AI
Detectory uses AI in two primary areas:
- Investigation Reports — When an anomaly is detected, Detectory uses large language models (powered by Anthropic's Claude via Amazon Bedrock) to generate detailed, contextual investigation reports. These reports summarise the anomalous activity, correlate related events, assess risk, and recommend next steps.
- Anomaly Correlation — AI is used to correlate seemingly unrelated identity events across your AWS environment, identifying patterns that may indicate credential compromise, privilege escalation, or lateral movement.
2. Data Handling
We designed Detectory with a "your data stays yours" architecture:
- All processing happens in your AWS account. Detectory deploys into your environment. CloudTrail logs, identity events, and generated reports never leave your cloud boundary.
- Amazon Bedrock invocations run in your account. When Detectory calls Claude for investigation reports, the API call is made from your infrastructure using your Bedrock configuration. TacticalEdge AI does not receive, store, or have access to the prompts or responses.
- No exfiltration of customer data. The Detectory control plane communicates only configuration metadata (e.g., deployment status, feature flags, billing metrics) — never your security telemetry.
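The boundary described above can be illustrated with a short sketch. The field names and allowlist below are hypothetical examples, not Detectory's actual schema; the point is that outbound control-plane traffic is restricted to configuration metadata, and telemetry keys never pass the filter.

```python
# Illustrative egress filter: only allowlisted configuration metadata may
# leave the customer account. Field names are hypothetical, not Detectory's
# actual schema.
ALLOWED_METADATA_KEYS = {"deployment_status", "feature_flags", "billing_metrics"}

def control_plane_payload(record: dict) -> dict:
    """Return only the configuration metadata permitted to leave the account."""
    return {k: v for k, v in record.items() if k in ALLOWED_METADATA_KEYS}

record = {
    "deployment_status": "healthy",
    "feature_flags": {"auto_response": False},
    "cloudtrail_events": ["AssumeRole", "CreateAccessKey"],  # telemetry: never sent
}
outbound = control_plane_payload(record)
# "cloudtrail_events" is absent from outbound; only metadata remains
```

In this model, security telemetry is simply not representable in the outbound payload, rather than being redacted after the fact.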
3. No Training on Customer Data
TacticalEdge AI does not use customer data to train, fine-tune, or improve any AI or machine-learning models. This applies to all data types: CloudTrail events, investigation reports, anomaly findings, and any other information processed by Detectory within your environment.
Amazon Bedrock's data usage policies further ensure that prompts and completions are not used by Anthropic or AWS for model training when invoked through Bedrock.
4. Human Oversight — Progressive Trust Model
Detectory follows a progressive trust model for AI-driven actions. Automated responses are introduced gradually, with human review at every stage:
- Level 1 — Observe — AI detects anomalies and generates reports. All actions require human approval.
- Level 2 — Advise — AI recommends specific response actions (e.g., revoke a session, restrict a role). A human must explicitly approve each recommendation before execution.
- Level 3 — Act with Guardrails — Pre-approved response playbooks may execute automatically for well-understood threat patterns (e.g., disabling a compromised access key). Every automated action is logged, auditable, and reversible. Humans can override or pause automation at any time.
Customers control which trust level they operate at. No automated action is ever taken without explicit customer opt-in.
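The three levels above amount to a simple approval gate: an action executes without a human only at Level 3, and only when it belongs to a pre-approved playbook. A minimal sketch, with hypothetical names (actual levels are configured per customer):

```python
# Sketch of the progressive trust model as an approval gate.
# Names are illustrative, not Detectory's implementation.
from enum import IntEnum

class TrustLevel(IntEnum):
    OBSERVE = 1  # detect and report; every action needs human approval
    ADVISE = 2   # recommend actions; each one needs explicit approval
    ACT = 3      # pre-approved playbooks may execute automatically

def requires_human_approval(level: TrustLevel, playbook_preapproved: bool) -> bool:
    """Only Level 3 plus a pre-approved playbook runs without a human;
    everything else waits for explicit approval."""
    return not (level == TrustLevel.ACT and playbook_preapproved)
```

Note that even a pre-approved playbook at Levels 1 and 2 still requires approval; pre-approval only takes effect once the customer has opted into Level 3.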
5. Transparency Commitments
We commit to the following transparency principles:
- Explainability — Every AI-generated finding includes a clear explanation of why it was flagged, what data contributed to the assessment, and the confidence level.
- Auditability — All AI invocations and resulting actions are logged in your environment and available for audit; prompt contents remain in your account and are never stored in our systems.
- Model disclosure — We disclose which AI models we use (currently Anthropic Claude via Amazon Bedrock) and will communicate any changes before they take effect.
- Bias and fairness — Detectory operates on technical telemetry (API calls, authentication events) rather than personal attributes. We regularly review our detection logic to ensure it does not produce systematically unfair outcomes.
- Incident reporting — If we discover an AI-related issue that could affect the accuracy or safety of findings, we will notify affected customers promptly and provide remediation guidance.
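The explainability and auditability commitments above imply that each finding carries a rationale, evidence pointers, and a confidence value. A minimal sketch of such an audit record, with hypothetical field names and a placeholder model identifier (this is not Detectory's actual log format):

```python
# Illustrative audit record for one AI-generated finding.
# Field names and the model identifier are placeholders.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIFindingAudit:
    finding_id: str
    model_id: str          # disclosed model provider/version
    flagged_because: str   # human-readable rationale for the finding
    evidence_refs: tuple   # pointers to the telemetry that contributed
    confidence: float      # confidence attached to the finding, 0.0-1.0
    timestamp: str         # UTC time of the invocation

entry = AIFindingAudit(
    finding_id="finding-001",
    model_id="anthropic.claude-example",  # placeholder identifier
    flagged_because="access key used from a region never seen for this role",
    evidence_refs=("cloudtrail:event/abc123",),
    confidence=0.87,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
audit_row = asdict(entry)  # serializable form for the audit log
```

Because the record is immutable (`frozen=True`) and carries its own evidence pointers, an auditor can reconstruct why any finding was raised without access to our systems.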
6. Third-Party AI Services
Detectory currently uses Anthropic Claude via Amazon Bedrock as its primary AI model provider. Bedrock provides enterprise-grade security, including VPC isolation, encryption in transit and at rest, SOC 2 and ISO 27001 compliance, and HIPAA eligibility. We evaluate any third-party AI service against strict data privacy, security, and reliability criteria before adoption.
7. Changes to This Policy
We will update this AI Usage Policy as our technology evolves. Material changes will be communicated through our website and, for active customers, via email. The "Last updated" date at the top indicates the most recent revision.
8. Contact Us
If you have questions about how Detectory uses AI, please reach out: