System Status

All systems operational.

Last checked · just now
Uptime (90 days) · 99.98%
Active incidents · 0
Avg API response · 142ms
Last incident · Apr 24
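For context, the headline figures above imply a concrete downtime budget: 99.98% uptime over a 90-day window corresponds to roughly 26 minutes of total downtime. A quick back-of-the-envelope check:

```python
# Downtime implied by an uptime percentage over a measurement window.
WINDOW_DAYS = 90
UPTIME = 0.9998  # 99.98%, as reported above

window_minutes = WINDOW_DAYS * 24 * 60            # 129,600 minutes
downtime_minutes = window_minutes * (1 - UPTIME)  # minutes not "up"
print(round(downtime_minutes, 1))                 # ~25.9 minutes
```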
API: REST API (api.silkroutelabs.org) · 99.99% · Operational
Workflow Engine: Workflow execution and scheduling · 99.97% · Operational
TRACE Engine: Document extraction and processing · 99.94% · Operational
Dashboard: Web application (app.silkroutelabs.org) · 100% · Operational
Webhook Delivery: Outbound webhook dispatch · 99.96% · Operational
Storage: Primary data stores (US-East, US-West) · 100% · Operational
CDN: Edge network (global) · 100% · Operational
Authentication: Login, SSO, and API key validation · 100% · Operational
No active incidents. All services are operating normally.
24 Apr 2026
Resolved
Elevated TRACE extraction latency
A subset of document extractions experienced increased processing times of 2 to 4x the normal baseline. No extractions failed or returned incorrect results. The issue was isolated to one processing node in the US-East cluster.
Resolved. Affected node was drained and replaced. All extraction latencies have returned to baseline. No customer data was lost or corrupted.
Investigating elevated p95 extraction latency on TRACE. A single node in US-East is showing degraded performance. Traffic is being routed away from the affected node while we investigate.
We are investigating reports of slower-than-normal document extraction times. Workflows are running and extractions are completing, but with increased delay.
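The update above refers to p95 latency: the value below which 95% of observed extraction times fall. A minimal sketch of the nearest-rank percentile method (illustrative only, not how TRACE computes its metrics) shows why a single slow node drags the tail even when the median stays healthy:

```python
import math

def p95(samples):
    """95th percentile via the nearest-rank method:
    the smallest value >= 95% of the samples."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Mostly-healthy latencies plus two slow outliers from one bad node:
latencies_ms = [120, 125, 127, 128, 130, 132, 135, 140, 410, 500]
print(p95(latencies_ms))  # 500 — the tail reflects the outliers
```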
11 Mar 2026
Resolved
Minor
Webhook delivery delays
Outbound webhook deliveries were delayed by up to 18 minutes for a 40-minute window. Retries were unaffected and all webhooks were eventually delivered. Root cause was a downstream queue consumer restart during a routine deployment.
Resolved. Webhook queue has drained. All delayed deliveries have been sent. We are adding a deployment health gate to prevent this class of issue in future rollouts.
Identified. A webhook queue consumer was inadvertently stopped during a deployment. Consumer has been restarted and the backlog is processing.
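The remediation above mentions a deployment health gate. One common shape for such a gate, sketched here with hypothetical names (`deploy` and `consumer_is_healthy` are placeholders, not Silk Route Labs internals): run the rollout, then poll a consumer health check and refuse to mark the deploy complete until the check passes or a timeout triggers rollback.

```python
import time

def deploy_with_health_gate(deploy, consumer_is_healthy,
                            timeout_s=120.0, poll_s=5.0):
    """Run a deployment, then gate on the queue consumer's health:
    return True once it reports healthy, False on timeout (at which
    point the caller would roll back and alert). Illustrative sketch."""
    deploy()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if consumer_is_healthy():
            return True   # gate passed; rollout may complete
        time.sleep(poll_s)
    return False          # gate failed; roll back
```

A gate like this would have caught the stopped consumer before the rollout was declared done, rather than after webhooks began backing up.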
02 Feb 2026
Resolved
Minor
Dashboard slow load in EU region
Users in Europe reported slow initial page loads on the dashboard. The API was unaffected. The issue was caused by a CDN misconfiguration that routed EU traffic to a distant origin instead of the edge cache.
Resolved. CDN routing rule corrected. EU users are now served from the Frankfurt edge node as intended. Average load time in Europe is back to under 800ms.
Get notified of incidents.
Status updates delivered to your inbox the moment something changes.