At approximately 22:23 UTC on August 18, 2025, Akamai observed elevated latency caused by one of the machines in the request path for customers in the AP-South (Singapore) region. After preliminary investigation, our teams rebooted the affected machine, which resolved the immediate impact for customers that day.
On August 20, 2025 at approximately 22:23 UTC, Akamai began receiving customer reports of further latency in Singapore. Investigation revealed that one of the systems supporting Object Storage traffic experienced performance degradation due to hardware instability in its host environment. Although the system remained technically online, its unreliable behavior delayed request processing and caused elevated latency for some customers' operations. Because the system was still intermittently responsive, it was not automatically removed from the cluster. Once this state was identified and the system was manually removed, performance improved significantly, with latency dropping from over 40 seconds to under 500 milliseconds. This action was completed at 01:27 UTC on August 21, 2025.
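For illustration only, the sketch below shows in Python why a liveness-only health check can leave an intermittently responsive member in rotation, while a latency-aware policy would flag it for removal. The class, member name, window size, and 500 ms threshold are hypothetical assumptions for this example and do not reflect Akamai's actual implementation.

```python
from collections import deque
from statistics import quantiles

class ClusterMember:
    """Hypothetical cluster member tracking its most recent request latencies."""
    def __init__(self, name: str, window: int = 100):
        self.name = name
        self.recent_latencies = deque(maxlen=window)  # seconds

    def record(self, latency_s: float) -> None:
        self.recent_latencies.append(latency_s)

def liveness_only_healthy(member: ClusterMember) -> bool:
    # A member that answers any probe at all stays in rotation,
    # even if its responses are extremely slow.
    return len(member.recent_latencies) > 0

def latency_aware_healthy(member: ClusterMember, p99_limit_s: float = 0.5) -> bool:
    # Flag members whose recent p99 latency exceeds the limit,
    # even though they are still technically responsive.
    if len(member.recent_latencies) < 10:
        return True  # not enough samples to judge
    p99 = quantiles(member.recent_latencies, n=100)[98]
    return p99 <= p99_limit_s

degraded = ClusterMember("object-store-node-7")  # hypothetical name
for latency in [0.2, 45.0, 0.3, 41.0, 44.0, 0.25, 43.0, 40.5, 0.3, 42.0]:
    degraded.record(latency)

print(liveness_only_healthy(degraded))   # True  -> node stays in the cluster
print(latency_aware_healthy(degraded))   # False -> node would be removed
```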
To prevent recurrence of this issue, Akamai is working on a new reliability feature that will more quickly and automatically remove cluster members that reside on a degraded compute host, as sketched below.
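A minimal sketch of such an ejection policy, assuming a host-level degradation signal and a shortened grace period when that signal is present. The grace-period values and the host_is_degraded flag are illustrative assumptions, not details of the planned feature.

```python
import time

# Hypothetical grace periods: how long an unhealthy member may remain in the
# cluster before ejection, depending on the state of its compute host.
NORMAL_GRACE_S = 300    # standard window for transient issues
DEGRADED_GRACE_S = 30   # much shorter window when the host itself is degraded

def should_eject(unhealthy_since: float, host_is_degraded: bool,
                 now: float | None = None) -> bool:
    """Return True once a member has been unhealthy longer than its grace period."""
    now = time.time() if now is None else now
    grace = DEGRADED_GRACE_S if host_is_degraded else NORMAL_GRACE_S
    return (now - unhealthy_since) >= grace

# Example: a member unhealthy for 60 seconds is ejected if its host is degraded,
# but is still within the grace period if the host is healthy.
unhealthy_since = 1000.0
print(should_eject(unhealthy_since, host_is_degraded=True, now=1060.0))   # True
print(should_eject(unhealthy_since, host_is_degraded=False, now=1060.0))  # False
```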
This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.