On May 24, 2024, at approximately 15:00 UTC, we received customer reports of connectivity issues in our Los Angeles datacenter, specifically affecting IPv6 traffic. Investigation into this issue began immediately. On May 25, 2024, at around 18:01 UTC, following the initial investigation, we found that the issue was related to software running on the hardware responsible for that traffic: an illegitimate route was not being cleaned up as expected. At approximately 22:00 UTC, we engaged the manufacturer of this hardware for assistance in resolving the underlying issue while investigating workarounds to mitigate the impact of the devices being in this state.
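For illustration only, the sketch below shows one way a stale route of this kind might be flagged, assuming a generic snapshot of a device's IPv6 route table compared against the set of routes expected to be installed. The affected hardware, its vendor, and its management interface are not identified in this summary, so all names and prefixes here are hypothetical.

```python
# Illustrative sketch only: assumes a generic list of installed IPv6 prefixes
# and a set of expected prefixes; the real data source on the affected
# hardware is not described in this summary.
from typing import Iterable, Set


def find_stale_routes(installed: Iterable[str], expected: Set[str]) -> Set[str]:
    """Return prefixes installed on the device but absent from the expected set,
    i.e. routes that should have been cleaned up but were not."""
    return {prefix for prefix in installed if prefix not in expected}


if __name__ == "__main__":
    # Hypothetical example data; real prefixes would come from the device.
    installed_routes = ["2001:db8:1::/48", "2001:db8:2::/48", "2001:db8:ffff::/48"]
    expected_routes = {"2001:db8:1::/48", "2001:db8:2::/48"}

    stale = find_stale_routes(installed_routes, expected_routes)
    if stale:
        print(f"Stale IPv6 routes still installed: {sorted(stale)}")
```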
On May 26, 2024, at 22:37 UTC, one of the affected devices was rebooted, which cleared the issue on that device and brought traffic back up. We decided to reboot additional affected devices as a temporary mitigation while we continued to investigate the root cause. To allow that investigation to continue, not all devices were rebooted.
Thanks to the assistance of the manufacturer, a bug in the software was identified, and a patch was pushed to remediate the underlying cause on May 27, 2024, at 07:29 UTC. By 10:35 UTC, the fix had been applied to all affected hardware and IPv6 traffic was unsuspended. After a period of monitoring, we observed no further connectivity issues and considered the incident resolved on May 30, 2024, at 12:02 UTC.
This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.