Transparent proxy for multi-provider LLM cost tracking, budgeting, and optimization. Supports OpenAI, Anthropic, Azure, Bedrock, and Vertex AI with real-time cost headers.
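The core mechanic can be sketched as a per-request cost estimate attached as response headers. This is a minimal illustration, not the proxy's actual code: the price table, header names, and model identifiers are assumptions.

```python
# Illustrative (input, output) USD prices per 1K tokens -- assumed values,
# not current provider rates.
PRICE_PER_1K = {
    "gpt-4o": (0.0025, 0.01),
    "claude-sonnet": (0.003, 0.015),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    p_in, p_out = PRICE_PER_1K[model]
    return round(input_tokens / 1000 * p_in + output_tokens / 1000 * p_out, 6)

def cost_headers(model: str, input_tokens: int, output_tokens: int) -> dict:
    """Headers a proxy like this could merge into the upstream response."""
    cost = estimate_cost(model, input_tokens, output_tokens)
    return {"X-LLM-Cost-USD": f"{cost:.6f}", "X-LLM-Model": model}
```

A budgeting layer would then sum these per-request costs against per-team or per-key limits before forwarding the next request.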
Cryptographic ledger for AI output verification using Merkle trees and digital signatures. Tracks, timestamps, and proves the authenticity of AI-generated content with tamper-evident audit trails.
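The tamper-evidence primitive here is a Merkle tree: changing any recorded output changes the root hash. A minimal sketch of root construction, assuming SHA-256 leaves with odd nodes promoted (the tool's exact tree layout may differ):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root over hashed leaves; an odd node is carried up."""
    if not leaves:
        raise ValueError("empty leaf set")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(_h(level[i] + level[i + 1]))
        if len(level) % 2:  # odd node promoted unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Signing only the root (rather than every entry) lets a verifier prove membership of one output with a logarithmic-size inclusion path.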
Supply chain security for LLM artifacts using Sigstore, in-toto, and SLSA frameworks. Generates signed attestations for model weights, training data, and inference outputs.
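An attestation for a model artifact follows the in-toto Statement shape with a SLSA provenance predicate. A sketch of the unsigned payload (in practice this is wrapped in a DSSE envelope and signed via Sigstore; the builder ID and artifact name below are hypothetical):

```python
import hashlib

def attestation(artifact: bytes, name: str, builder_id: str) -> dict:
    """Unsigned in-toto Statement skeleton binding a digest to provenance.

    The _type and predicateType URIs are the standard in-toto/SLSA ones;
    everything else here is illustrative.
    """
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{
            "name": name,
            "digest": {"sha256": hashlib.sha256(artifact).hexdigest()},
        }],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {"builder": {"id": builder_id}},
    }
```

Verification then checks the signature over the envelope and compares the recorded digest against the artifact actually being loaded.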
Runtime firewall that detects and blocks prompt injection attacks against LLM applications. Covers OWASP LLM Top 10 attack vectors with configurable detection rules.
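A configurable rule in such a firewall can be as simple as a named pattern evaluated against the incoming prompt. The rules below are invented examples for illustration; a real ruleset covering the OWASP LLM Top 10 would be far broader and combine patterns with model-based classifiers:

```python
import re

# Hypothetical detection rules -- names and patterns are illustrative only.
INJECTION_RULES = {
    "instruction_override": re.compile(
        r"ignore (all |any )?(previous|prior) instructions", re.I),
    "role_hijack": re.compile(
        r"you are now (DAN|an? unrestricted)", re.I),
    "prompt_exfiltration": re.compile(
        r"(reveal|print|repeat) (your )?(system prompt|instructions)", re.I),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of triggered rules; an empty list means pass."""
    return [name for name, rx in INJECTION_RULES.items() if rx.search(prompt)]
```

Blocking, logging, or rewriting the request can then be decided per rule rather than globally.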
Policy-first retrieval-augmented generation gateway with PHI/PII redaction, access control, and audit logging. Self-hostable with no mandatory cloud dependencies.
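The redaction step can be sketched as a pattern pass that replaces detected spans with typed placeholders before documents reach the retriever. Production PHI/PII detection typically layers NER models on top; the patterns below are simplified illustrations:

```python
import re

# Simplified PII patterns -- illustrative, not production-grade detection.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected span with a typed placeholder like [EMAIL]."""
    for label, rx in PII_PATTERNS.items():
        text = rx.sub(f"[{label}]", text)
    return text
```

Typed placeholders (rather than blanket masking) keep redacted passages useful for retrieval while the audit log records what was removed and why.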
Multi-modal AI content detection and C2PA provenance verification across text, images, audio, and video. Explainable confidence scoring with signal-by-signal breakdown.
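Signal-by-signal explainability amounts to reporting each detector's contribution alongside the aggregate score. A minimal weighted-mean sketch, assuming per-signal scores in [0, 1]; the signal names and weighting scheme are assumptions, not the tool's actual detectors:

```python
def aggregate(signals: dict[str, float], weights: dict[str, float]) -> dict:
    """Weighted mean over per-signal scores plus a per-signal breakdown."""
    total_w = sum(weights[s] for s in signals)
    score = sum(signals[s] * weights[s] for s in signals) / total_w
    breakdown = {
        s: {
            "score": signals[s],
            "weight": weights[s],
            "contribution": signals[s] * weights[s] / total_w,
        }
        for s in signals
    }
    return {"confidence": round(score, 4), "breakdown": breakdown}
```

Exposing the breakdown lets a reviewer see, for example, that a C2PA manifest check drove the verdict while a statistical text detector was inconclusive.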
Kubernetes-native observability tool using eBPF to capture kernel signals and correlate them with OpenTelemetry traces for LLM SLO attribution.
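The correlation step can be pictured as joining kernel events to spans by process ID and time overlap. This is a toy in-memory sketch; the field names are illustrative and not an actual eBPF or OpenTelemetry schema:

```python
def correlate(kernel_events: list[dict], spans: list[dict]) -> dict:
    """Attach each kernel event to the span whose [start, end) window
    contains its timestamp on the same PID; returns span_id -> events."""
    out: dict[str, list[dict]] = {s["span_id"]: [] for s in spans}
    for ev in kernel_events:
        for s in spans:
            if s["pid"] == ev["pid"] and s["start"] <= ev["ts"] < s["end"]:
                out[s["span_id"]].append(ev)
                break
    return out
```

With this join in place, an SLO breach on an LLM request span can be attributed to the kernel-level cause (scheduling delay, I/O stalls, throttling) observed inside its window.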
CLI pipeline that pulls model metadata from HuggingFace, W&B, and MLflow, runs fairness analysis, checks EU AI Act compliance, and exports model cards.
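The final export stage can be sketched as rendering collected metadata into a markdown model card. The section names below follow the common model-card structure but are assumptions, not the pipeline's exact schema:

```python
def render_model_card(meta: dict) -> str:
    """Render pipeline metadata as a markdown model card.

    Sections are illustrative; missing fields are marked rather than omitted
    so gaps stay visible to reviewers.
    """
    lines = [f"# Model Card: {meta['name']}", ""]
    for section in ("intended_use", "training_data", "fairness_findings"):
        lines.append(f"## {section.replace('_', ' ').title()}")
        lines.append(str(meta.get(section, "Not provided")))
        lines.append("")
    return "\n".join(lines)
```

Keeping the renderer separate from the metadata pull means the same card template works whether the source was HuggingFace, W&B, or MLflow.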
Multi-modal safety evaluation benchmark for AI systems. Covers 20 hazard categories and 9 attack strategies aligned with MLCommons AI Safety standards.
Rule-driven scanner that checks AI systems against EU AI Act, UK AI regulations, and NIST AI RMF requirements. Deterministic pass/warn/fail scoring.
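Deterministic pass/warn/fail scoring can be sketched as evaluating each rule's predicate against collected evidence and rolling up the worst verdict. The rule IDs and evidence keys in the usage below are invented for illustration:

```python
def evaluate(rules: list[dict], evidence: dict) -> dict:
    """Map each rule to pass/warn/fail; overall is the worst verdict seen."""
    order = {"pass": 0, "warn": 1, "fail": 2}
    results, worst = {}, "pass"
    for rule in rules:
        value = evidence.get(rule["key"])
        if value is None:
            verdict = "warn"  # missing evidence never silently passes
        elif rule["predicate"](value):
            verdict = "pass"
        else:
            verdict = "fail"
        results[rule["id"]] = verdict
        if order[verdict] > order[worst]:
            worst = verdict
    return {"overall": worst, "results": results}
```

Because predicates are pure functions of the evidence, two runs over the same inputs always produce identical scores, which is what makes the results auditable.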