Update - AWS is continuing to stabilize `us-east-1`. Out of an abundance of caution, we will continue to avoid routing traffic to this region until their incident is fully resolved.
Oct 20, 2025 - 22:32 UTC
Monitoring - We have removed `us-east-1` from our DNS traffic routing policy for `api.osohq.com` and `cloud.osohq.com`, and we have confirmed that all API traffic is now being routed to other nearby regions. We did not observe errors from our service during this incident, but we proceeded with the failover out of caution and because our visibility into the region's health was inconsistent.
We have some shared services in `us-east-1` that continue to report healthy status, and we have not seen any other impact. We will continue to monitor and will post further updates if anything changes.
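(For context: the failover described above is a DNS-level routing change. Our exact tooling isn't spelled out here, but assuming Route 53 latency-based records with one record set per region, failing away from `us-east-1` would look roughly like the sketch below. Every identifier in it, including the hosted zone IDs, record name, and load balancer alias, is a placeholder rather than our real configuration.)

```python
import boto3

# Hypothetical sketch: remove the us-east-1 latency-based record so DNS
# resolves api.osohq.com to the remaining healthy regions.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone for osohq.com
    ChangeBatch={
        "Comment": "Fail away from us-east-1 during AWS incident",
        "Changes": [
            {
                # A DELETE must match the existing record set exactly.
                "Action": "DELETE",
                "ResourceRecordSet": {
                    "Name": "api.osohq.com.",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",  # one record set per region
                    "Region": "us-east-1",         # latency-based routing
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # example ELB alias zone
                        "DNSName": "example-lb.us-east-1.elb.amazonaws.com.",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ],
    },
)
```

The same change would be repeated for `cloud.osohq.com`; clients then resolve to the nearest remaining region once resolvers honor the record's TTL.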
Oct 20, 2025 - 17:20 UTC
Update - AWS continues to experience networking issues in `us-east-1`, resulting in increased latency for some requests. We have applied an update to Oso Cloud to temporarily route traffic to alternate regions.
Oct 20, 2025 - 16:49 UTC
Investigating - AWS has reported network issues, which coincides with customer reports of gateway timeouts when attempting to reach the Oso Service in `us-east-1`. We are investigating the impact and exploring failover options.
Oct 20, 2025 - 15:42 UTC
Monitoring - Between 07:11 UTC and 09:27 UTC on Oct 20, our system alarms for AWS paged the on-call team. We confirmed from the AWS status page that AWS was having an incident: https://health.aws.amazon.com/health/status
AWS has reported that they have found the root cause and customers should be seeing recovery. Our internal monitoring dashboards have been available throughout this time and report that the Oso Cloud platform has been handling authorization decisions and write traffic without disruption. Customers using Oso Fallback nodes could have observed stale authorization decisions if requests had been served from Fallback instances during this time; however, because Oso Cloud remained responsive, traffic should not have been routed to Fallback instances, and those instances should now be up to date. We currently believe this was the only potentially visible impact for Oso customers.
We will continue to monitor and share any relevant status updates.
Oct 20, 2025 - 07:15 UTC
Resolved - Between 07:11 UTC and 09:27 UTC on Oct 20, our system alarms for AWS paged the on-call team. We confirmed from the AWS status page that AWS was having an incident: https://health.aws.amazon.com/health/status
AWS has reported that they have found the root cause and customers should be seeing recovery. Our internal monitoring dashboards have been available throughout this time and report that the Oso Cloud platform has been handling authorization decisions and write traffic without disruption. Given this, we currently believe there is no visible impact for Oso customers. We will continue to monitor and share any relevant status updates.
Oct 20, 2025 - 07:00 UTC