Truthlocks

Incident History

Past incidents and scheduled maintenance

AI CMO Engine — Bedrock & Search API Quota Exhaustion

Resolved
4/8/2026
Affected: ai-cmo-service
02:00 PM
resolved

Rate limiting, exponential backoff, and daily quota checks have been deployed. The campaign orchestrator now pauses gracefully when API quotas are exhausted instead of repeatedly issuing failed requests. SerpAPI and Bedrock quotas reset overnight.
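The retry behavior described in this update can be sketched as follows. This is an illustrative sketch only, not the service's actual code: the names `with_backoff` and `QuotaExhausted` are hypothetical, and it assumes throttled calls surface as exceptions that can be caught and retried with exponential backoff and jitter, while daily-quota exhaustion propagates so the orchestrator can pause.

```python
import random
import time

class QuotaExhausted(Exception):
    """Raised when a provider's daily quota is spent (resets overnight)."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a throttled call with exponential backoff plus jitter.

    Daily-quota errors are NOT retried: they propagate so the caller
    (the campaign orchestrator) can pause until quotas reset.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except QuotaExhausted:
            raise  # quota is gone for the day; pause, don't hammer the API
        except RuntimeError:  # stand-in for a 429 / ThrottlingException
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
    raise RuntimeError("retries exhausted")
```

With this shape, transient 429s are absorbed by backoff, while a `QuotaExhausted` bubbles up and halts the campaign instead of producing a stream of failed requests.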

09:00 AM
identified

Root cause: the AWS Bedrock daily token quota was exceeded (ThrottlingException), the SerpAPI search quota was exhausted, and Hunter.io rate limits were hit. All campaign discovery and AI-driven orchestration requests are returning 429 errors. The service is running, but campaigns are producing zero leads.

08:00 AM
investigating

AI CMO Engine reported as an outage on the status page. Investigating campaign orchestrator failures and API connectivity.

CORS Security Hardening — Gateway Configuration Update

Resolved
4/8/2026
Affected: api-gateway
08:30 AM
resolved

CORS origin whitelist deployed to both dev and production gateways. Security headers (X-Frame-Options, X-Content-Type-Options, Referrer-Policy, HSTS) added. Production route configuration synchronized with all service endpoints.
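The gateway change described above can be sketched roughly as below. This is a hypothetical illustration, not the actual gateway configuration: the whitelist pattern and header values are assumptions based on this update's mention of a `*.truthlocks.com` whitelist and the four security headers.

```python
from fnmatch import fnmatch

# Assumed whitelist pattern; replaces a permissive
# Access-Control-Allow-Origin: * configuration.
ALLOWED_ORIGIN_PATTERN = "https://*.truthlocks.com"

# The four header families named in the update; values are illustrative.
SECURITY_HEADERS = {
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}

def response_headers(origin):
    """Security headers on every response; CORS headers only for
    origins matching the whitelist pattern."""
    headers = dict(SECURITY_HEADERS)
    if origin and fnmatch(origin, ALLOWED_ORIGIN_PATTERN):
        headers["Access-Control-Allow-Origin"] = origin
        headers["Vary"] = "Origin"  # cache per-origin responses correctly
    return headers
```

Echoing the matched origin (rather than `*`) with `Vary: Origin` keeps caches from serving one origin's CORS grant to another.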

07:00 AM
investigating

Scheduled security maintenance: replacing the permissive CORS configuration with a strict origin whitelist for *.truthlocks.com domains.

Platform-Wide Service Outage

Resolved
2/20/2026
Affected: api-gateway, trust-registry, billing-service, signing-service, attestation-service, audit-service, transparency-log, verification-service
04:50 PM
resolved

The internal network routing and JWT authentication issues have been resolved. All services are reporting healthy and the platform is fully operational.

04:30 PM
monitoring

A fix has been rolled out that applies the correct internal DNS health-check endpoints and enables internal ALB routing for the API gateway. We are monitoring the recovery.

11:00 AM
investigating

We identified missing internal configuration URLs and a misconfigured database security rule that caused services to crash. Patches are being applied and services re-deployed.

10:00 AM
investigating

We are investigating widespread 503 Service Unavailable errors affecting the API Gateway and internal service-to-service communication.