Linode Status

Emerging Service Issue - Block Storage - All Regions

Incident Report for Linode

Postmortem

On November 17th, 2025 at 14:25 UTC, we released an update that introduced an unexpected behavior impacting encrypted Block Storage volumes. An encrypted volume could fail to mount if the Linode it was attached to was rebooted; volumes attached to a Linode after boot were unaffected. While a volume was in this state, there was potential for existing data to be overwritten if the volume was written to via direct disk access. The release was rolled back, mitigating the impact at 18:31 UTC on November 21st.

To prevent recurrence of the issue, we are preparing an updated release which will attach encrypted Block Storage volumes correctly at boot.
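
For reference, the failure mode could be detected from inside a Linode before any direct disk writes by checking that the volume's device node was actually present. The sketch below is illustrative only; it assumes the usual /dev/disk/by-id/scsi-0Linode_Volume_<label> device path for Block Storage volumes and uses a placeholder label.

    import os
    import sys

    # Placeholder label; substitute the label of the encrypted volume.
    VOLUME_LABEL = "my-encrypted-volume"
    DEVICE = f"/dev/disk/by-id/scsi-0Linode_Volume_{VOLUME_LABEL}"

    # If the volume failed to attach at boot, its device node will be missing.
    # Stopping here avoids the overwrite risk described above.
    if not os.path.exists(DEVICE):
        sys.exit(f"{DEVICE} not found: detach and reattach the volume "
                 "before performing any direct disk writes.")

    print(f"{DEVICE} is present.")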

This summary provides an overview of our current understanding of the incident given the information available. Our investigation is ongoing and any information herein is subject to change.

Posted Dec 08, 2025 - 16:40 UTC

Resolved

We have finished rolling back the change and have not observed any additional issues with the Block Storage service. We now consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.
Posted Nov 21, 2025 - 20:14 UTC

Monitoring

A recent system update introduced an unexpected behavior impacting encrypted Block Storage volumes configured for direct disk I/O, and we are in the process of rolling back this change. Customers with deployments configured in this fashion will experience problems attaching encrypted volumes to their Linodes and accessing them.

If you believe you may be impacted by this bug, you can mitigate it by either detaching and reattaching your volume while your Linode is powered on, or by rebooting your Linode.
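
For reference, both mitigations can be scripted against the Linode API v4. The Python sketch below is illustrative only: the volume and Linode IDs are placeholders, and it assumes a personal access token in the LINODE_TOKEN environment variable.

    import os
    import time

    import requests

    API = "https://api.linode.com/v4"
    HEADERS = {"Authorization": f"Bearer {os.environ['LINODE_TOKEN']}"}

    VOLUME_ID = 12345   # placeholder: the affected Block Storage volume
    LINODE_ID = 67890   # placeholder: the Linode it attaches to

    # Option 1: detach the volume, wait for the detach to complete, then
    # reattach it while the Linode remains powered on.
    requests.post(f"{API}/volumes/{VOLUME_ID}/detach", headers=HEADERS).raise_for_status()
    while requests.get(f"{API}/volumes/{VOLUME_ID}", headers=HEADERS).json().get("linode_id"):
        time.sleep(5)
    requests.post(
        f"{API}/volumes/{VOLUME_ID}/attach",
        headers=HEADERS,
        json={"linode_id": LINODE_ID},
    ).raise_for_status()

    # Option 2: reboot the Linode instead.
    # requests.post(f"{API}/linode/instances/{LINODE_ID}/reboot", headers=HEADERS).raise_for_status()

Note that rebooting only became a safe option at this stage of the incident; the earlier Identified update below advised against rebooting with the volume attached.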
Posted Nov 21, 2025 - 18:19 UTC

Identified

A recent code change introduced a bug impacting encrypted Block Storage volumes configured for direct disk I/O. Customers may experience problems attaching encrypted volumes to their Linodes and accessing them.

If your deployment is configured in this fashion, or if you believe you may be impacted, you can mitigate this by detaching and reattaching your volume while your Linode is powered on. Please do not reboot your Linode with the volume attached, as this can result in disk corruption and data loss.
Posted Nov 21, 2025 - 17:21 UTC

Update

Investigation into this issue continues. For now, we recommend that users refrain from rebooting Linodes that use direct disk boot with encrypted Block Storage volumes attached, to avoid the risk of disk corruption and data loss.
Posted Nov 21, 2025 - 11:30 UTC

Investigating

Our team is investigating an emerging service issue affecting Block Storage in all regions. We will share additional updates as we have more information.
Posted Nov 21, 2025 - 11:15 UTC
This incident affected: Block Storage (US-East (Newark) Block Storage, US-Central (Dallas) Block Storage, US-West (Fremont) Block Storage, US-Southeast (Atlanta) Block Storage, US-IAD (Washington) Block Storage, US-ORD (Chicago) Block Storage, CA-Central (Toronto) Block Storage, EU-West (London) Block Storage, EU-Central (Frankfurt) Block Storage, FR-PAR (Paris) Block Storage, AP-South (Singapore) Block Storage, AP-Northeast-2 (Tokyo 2) Block Storage, AP-West (Mumbai) Block Storage, AP-Southeast (Sydney) Block Storage, SE-STO (Stockholm) Block Storage, US-SEA (Seattle) Block Storage, JP-OSA (Osaka) Block Storage, IN-MAA (Chennai) Block Storage, BR-GRU (São Paulo) Block Storage, NL-AMS (Amsterdam) Block Storage, ES-MAD (Madrid) Block Storage, IT-MIL (Milan) Block Storage, US-MIA (Miami) Block Storage, ID-CGK (Jakarta) Block Storage, US-LAX (Los Angeles) Block Storage, GB-LON (London 2) Block Storage, AU-MEL (Melbourne) Block Storage, IN-BOM-2 (Mumbai 2) Block Storage, DE-FRA-2 (Frankfurt 2) Block Storage, SG-SIN-2 (Singapore 2) Block Storage, JP-TYO-3 (Tokyo 3) Block Storage).