Update - We are currently deploying a fix for this issue in phases, which will take several weeks to complete. We will continue to share updates on our progress as the mitigation efforts move forward.
Apr 15, 2026 - 15:12 UTC
Update - We are currently deploying a fix for this issue in phases, which will take several weeks to complete. We will continue to share updates on our progress as the mitigation efforts move forward.
Apr 06, 2026 - 13:39 UTC
Update - We are currently deploying a fix for this issue in phases, which will take several days to complete. We will continue to share updates on our progress as the mitigation efforts move forward.
Mar 19, 2026 - 20:13 UTC
Update - We are currently deploying a fix for this issue in phases, which will take several days to complete. We will continue to share updates on our progress as the mitigation efforts move forward.
Feb 26, 2026 - 15:42 UTC
Update - We are continuing to test the fix before initiating the rollout. Further updates regarding the status of the mitigation efforts will be provided as progress is made.
Feb 19, 2026 - 16:41 UTC
Update - We are continuing to test the fix before initiating the rollout. Further updates regarding the status of the mitigation efforts will be provided as progress is made.
Feb 03, 2026 - 15:34 UTC
Update - We have identified a solution to address this issue and are currently testing the fix prior to initiating the rollout. Further updates regarding the status of the mitigation efforts will be provided as progress is made.
Jan 05, 2026 - 19:30 UTC
Update - We are continuing to work on a fix for this issue, and we will provide an update as soon as the solution is in place.
Jan 05, 2026 - 17:16 UTC
Update - We are continuing to work on a fix for this issue, and we will provide an update as soon as the solution is in place.
Dec 17, 2025 - 13:40 UTC
Update - We are continuing to work on a fix for this issue, and we will provide an update as soon as the solution is in place.
Dec 04, 2025 - 20:51 UTC
Identified - Our team has identified the issue preventing GPU and VPU plans from booting. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Dec 03, 2025 - 16:48 UTC
Investigating - Our team is investigating an emerging service issue preventing GPU plans from booting. We will share additional updates as we have more information.
Dec 03, 2025 - 15:15 UTC
Resolved -
We have completed our investigation and response to the “Copy Fail” Linux kernel local privilege escalation vulnerability (CVE-2026-31431). We have published a documentation article with detailed guidance on available mitigations and recommended actions for affected systems. Customers can find more information and step-by-step instructions here: https://www.linode.com/docs/guides/cve-2026-31431-copy-fail-mitigation/
We encourage all customers to review the article and apply the appropriate mitigations to their environments. If you have questions or need assistance, please contact us at 855-454-6633 (+1-609-380-7100 Intl.) or email support@linode.com.
May 5, 22:19 UTC
Update -
We are continuing to investigate this issue.
May 1, 17:52 UTC
Investigating -
Akamai is aware of the recently disclosed “Copy Fail” vulnerability (CVE-2026-31431). We are assessing the issue and are working to address it across our product portfolio and internal systems. While we have not observed any related malicious exploits targeting our infrastructure, Akamai continuously works to reduce risks and enhance our security posture. We are taking both immediate and longer-term steps to mitigate potential impacts and help ensure the continued confidence of our customers.
Per our Shared Security Model[1], customers are responsible for ensuring that the applications and code installed on their services are securely configured and patched. Given the nature of this vulnerability, all virtual machines running Linux should be assumed to be at risk until patched. We will publish more details as patches are incorporated into the base images we supply, but we strongly recommend customers deploy mitigations on all instances. Furthermore, the nature of the vulnerability suggests that container escapes are possible, so customers allowing untrusted workloads to execute in their containers may need to take additional steps to secure those workloads.
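Since unpatched Linux instances should be treated as at risk, one quick sanity check is to compare the running kernel against the patched version listed in your distribution's security advisory. A minimal sketch follows; the PATCHED value below is a placeholder for illustration, not an actual fixed kernel version for CVE-2026-31431, and you should substitute the version from your distro's advisory:

```shell
# Placeholder: replace with the minimum patched kernel version
# from your distribution's advisory for CVE-2026-31431.
PATCHED="6.8.0"

# Running kernel version, stripped of distro suffixes (e.g. "-generic").
RUNNING="$(uname -r | cut -d- -f1)"

# `sort -V` orders version strings numerically; if the lowest of the
# two is the patched version, the running kernel is at or above it.
if [ "$(printf '%s\n%s\n' "$PATCHED" "$RUNNING" | sort -V | head -n1)" = "$PATCHED" ]; then
    echo "kernel $RUNNING: at or above $PATCHED"
else
    echo "kernel $RUNNING: below $PATCHED -- apply the patched kernel and reboot"
fi
```

Note that a kernel update only takes effect after a reboot, so verify the running version again once the instance comes back up.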
We will provide further information regarding our posture and recommended actions for Akamai Compute customers who may be affected.
Completed -
The scheduled maintenance has been completed.
May 4, 15:00 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 4, 13:00 UTC
Scheduled -
On Monday, May 4th from 13:00 to 15:00 UTC, we will be performing network maintenance in our OSA region. While we do not expect any downtime, a brief period of increased latency or packet loss may occur during this window.
May 1, 06:42 UTC
Resolved -
The network component has been replaced. The incident is now resolved.
May 4, 03:43 UTC
Update -
We identified a solution that added capacity for the Chennai, IN region, and we did not observe any degradation during peak hours on April 29, 2026. We anticipate this will remain the case going forward, but we will continue providing updates until we successfully replace the impacted network component. The replacement of the network component has been briefly delayed. We are working quickly to complete it, and we will provide an update as soon as the solution is in place.
Apr 29, 22:13 UTC
Update -
Our team has identified the issue affecting connectivity in our Chennai (IN-MAA) data center as a degraded network component and is working to expedite its replacement as quickly as possible. At this time there is no customer impact, but due to the reduced capacity resulting from this hardware degradation, we do expect intermittent delays or timeouts during peak traffic hours in the Chennai (IN-MAA) region. The current ETA for replacing this hardware is end of day on April 29, 2026. We will provide an update as soon as the solution is in place.
Apr 28, 21:35 UTC
Identified -
Our team is investigating an emerging service issue affecting the network and connectivity in our Chennai (IN-MAA) data center. Customers may see intermittent delays and timeouts during peak traffic hours in the region.
If you are seeing impact that may be related, please open a Support Ticket.
Apr 28, 17:17 UTC
Completed -
The scheduled maintenance has been completed.
Apr 30, 15:00 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 30, 13:00 UTC
Scheduled -
On Thursday, April 30th from 13:00 to 15:00 UTC, we will be performing network maintenance in our AU-MEL region. While we do not expect any downtime, a brief period of increased latency or packet loss may occur during this window.
Apr 28, 15:44 UTC
Resolved -
We haven’t observed any additional issues with the Object Storage service, and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Apr 29, 19:19 UTC
Update -
We are continuing to investigate this issue.
Apr 29, 12:41 UTC
Update -
We are continuing to investigate this issue.
Apr 29, 11:42 UTC
Update -
Our team is investigating an issue affecting the Object Storage service. During this time, users may experience connection timeouts and errors with this service.
Apr 29, 10:30 UTC
Investigating -
We are aware that some customers have been experiencing 403 (InvalidAccessKeyId) errors when attempting to access object storage, beginning yesterday evening. We will share additional updates as we have more information.
Apr 29, 09:22 UTC
Resolved -
We haven’t observed any additional issues with the LKE and NodeBalancer services and will now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Apr 28, 15:16 UTC
Monitoring -
We corrected the issues affecting the NodeBalancer service at 23:36 UTC on April 24, 2026. We will be monitoring to ensure that the service remains stable. If you continue to experience problems, please open a ticket with our Support Team.
Apr 24, 23:53 UTC
Update -
We are continuing work to apply the identified fix for the issue affecting LKE and newly updated/created NodeBalancer configurations, and we will provide an update as soon as the solution is in place.
Apr 24, 23:06 UTC
Update -
We are continuing work to apply the identified fix for the issue affecting LKE and newly updated/created NodeBalancer configurations, and we will provide an update as soon as the solution is in place.
Apr 24, 22:07 UTC
Update -
Our team has identified the issue affecting the LKE and newly updated/created NodeBalancer configurations. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Apr 24, 21:46 UTC
Update -
Our team has identified that, due to this issue, customers may also experience failures when creating new NodeBalancers. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Apr 24, 20:48 UTC
Identified -
Our team has identified the issue affecting the LKE service. We are working quickly to implement a fix, and we will provide an update as soon as the solution is in place.
Apr 24, 20:19 UTC
Investigating -
Our team is investigating an emerging service issue affecting the Linode Kubernetes Engine (LKE) service across multiple regions. We will share additional updates as we have more information.
Apr 24, 19:27 UTC