Updated May 8, 2026

The most common failures with Raff volumes, and the fix that works in each case. If your issue isn’t here, contact support@rafftechnologies.com.
Attach and visibility
Volume attached in dashboard but `lsblk` doesn't show it
- Kernel hasn’t picked up the new device yet; force a re-scan.
- Wrong VM. Volumes attach to one VM at a time — confirm the dashboard shows it attached to this VM.
- Looking at the wrong device path. On Raff, volumes always appear as /dev/vd* (virtio-blk). The OS disk is /dev/vda; the first attached volume is /dev/vdb. If you’re looking for /dev/sd* or /dev/nvme*, you won’t find them — those paths don’t exist on Raff.
- VM in provisioning. A snapshot or backup running on the VM can delay attach. Wait for active/passive and retry.
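The re-scan from the first bullet can be done from inside the VM. This is a sketch for a Linux guest: the sysfs PCI re-scan path is standard, but it requires root, and hot-plugged virtio disks usually appear on their own once udev catches up.

```shell
# Ask the kernel to re-scan the PCI bus so a newly hot-plugged
# virtio-blk disk shows up (needs root; harmless if already visible).
echo 1 > /sys/bus/pci/rescan 2>/dev/null || true
# Let udev finish creating the /dev/vd* node, then look again.
udevadm settle 2>/dev/null || true
lsblk 2>/dev/null || ls /sys/class/block/
```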
Cannot attach — `volume in different region`
Why: a us-east volume can only attach to VMs in us-east.

Fix: to “move” a volume across regions: take a snapshot, restore it in the target region, then delete the original. There is no in-place region migration.
Cannot attach — `VM not in active or passive state`
Why: the VM is in provisioning, booting, failure, or another non-attachable state. Most often caused by a snapshot or backup running.

Fix: wait for the VM to return to active or passive, then retry. If stuck for more than 10 minutes, see Troubleshooting → VM stuck in provisioning.
Volume already attached to another VM
Mount and filesystem
`mount` fails with `wrong fs type, bad option, bad superblock`
The attach succeeds, but mount rejects the device.

Why: the volume isn’t formatted yet. New volumes (or volumes you chose Manually Format & Mount for at create time) come up raw.

Fix: format the volume first, then mount it and add it to /etc/fstab. See Attach to a VM for the full flow.
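The formatting step might look like this. A sketch: ext4 and the device path /dev/vdb are the common case from this guide, not a requirement. It is shown against a scratch image file so the commands can run without root; on the VM you would target the device itself.

```shell
# On the real VM (as root) this would simply be:
#   mkfs.ext4 /dev/vdb
#   mkdir -p /mnt/data && mount /dev/vdb /mnt/data
# Scratch-image version so it runs anywhere:
truncate -s 64M vol.img
mkfs.ext4 -q -F vol.img          # -F because it is a file, not a device
blkid -o value -s TYPE vol.img   # confirm a filesystem now exists
```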
Boot drops to emergency shell after adding a volume to `/etc/fstab`
Why: an /etc/fstab entry references a volume that isn’t present at boot — typo, wrong UUID, or volume detached.

Fix path: see Recover a locked-out VM → Single-user mode via GRUB. Boot in single-user mode, mount root read-write, fix the bad fstab line, and reboot.

Prevention: always include nofail and x-systemd.device-timeout=10 in the fstab options so boot won’t hang on a missing volume.
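A protected entry might look like this (the UUID and mount point are placeholders; substitute your own values):

```
# /etc/fstab: nofail plus a 10-second device timeout keep boot from
# hanging if the volume is missing or detached (placeholder UUID)
UUID=0a1b2c3d-1111-2222-3333-444455556666  /mnt/data  ext4  defaults,nofail,x-systemd.device-timeout=10  0  2
```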
Wrong device name in `/etc/fstab` after a reboot
You have /dev/vdb in fstab, the volume now appears as /dev/vdc (or vice versa), and the mount fails.

Why: device letters can shift if multiple volumes are attached and the order changes.

Fix: always reference volumes in /etc/fstab by UUID instead of /dev/vd*: UUID=b1234… /mnt/data ext4 defaults,nofail 0 2.
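To find the UUID for the fstab entry, blkid can read it off the filesystem. A sketch: on the VM you would query the device directly; here it runs against a scratch ext4 image so it works unprivileged.

```shell
# On the real VM: blkid -o value -s UUID /dev/vdb
# Scratch-image version:
truncate -s 64M scratch.img
mkfs.ext4 -q -F scratch.img
# Print the filesystem UUID to reference in /etc/fstab.
blkid -o value -s UUID scratch.img
```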
Permission denied writing to a freshly-mounted volume
The mount succeeds but the user can’t write to the mount point.

Why: newly-formatted ext4 / xfs volumes are owned by root:root with 0755 permissions. Your application’s user can’t write.

Fix: chown the mount point after mounting, e.g. chown appuser:appuser /mnt/data (substitute your application’s user and mount point).

Resize
Increased size in dashboard but `df -h` shows old size
The block device grew (lsblk confirms it) but the filesystem doesn’t see the new space.

Why: dashboard resize grows the block device. The filesystem inside the VM is unchanged until you tell it to grow.

Fix (Linux): grow the filesystem to match the device.
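A sketch of the grow step, assuming ext4 sitting directly on /dev/vdb with no partition table. For xfs, xfs_growfs on the mount point does the same job, and if the volume is partitioned you grow the partition first (e.g. with growpart). Demonstrated on an image file so it can run unprivileged; the real-VM equivalents are in the comments.

```shell
# Stand-in for the volume: an ext4 image that just "grew" from 64M to 128M.
# On the real VM the dashboard resize already grew the device, and the
# commands below run against /dev/vdb as root:
#   e2fsck -f /dev/vdb && resize2fs /dev/vdb
truncate -s 64M disk.img
mkfs.ext4 -q -F disk.img
truncate -s 128M disk.img   # mimics the dashboard resize

# Grow the ext4 filesystem to fill the (now larger) device.
e2fsck -f -p disk.img
resize2fs disk.img
```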
`Increase size` action is greyed out / missing
Need to make a volume smaller
Detach and delete
`Please wait! VM is not ready to detach volume`
Why: the VM is in provisioning because a snapshot or backup is running.

Fix: wait for the VM to return to active or passive, then retry. See Detach from a VM → If the VM is in provisioning.
`umount` fails with `target is busy`
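`target is busy` means some process still has files open (or a working directory) under the mount point, so the kernel refuses to unmount. One common way to find the culprit; /mnt/data is illustrative, and the commands are guarded here so the sketch runs anywhere:

```shell
# Show which processes keep the mount point busy (run on the VM as root).
fuser -vm /mnt/data 2>/dev/null || true
# Stop or kill those processes, then retry:
#   umount /mnt/data
# Last resort, once writers are stopped, lazy-detach:
#   umount -l /mnt/data
```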
Volume still being billed after I deleted my VM
Deleted a volume by accident — can I recover?
- From a volume snapshot taken before delete — create a new volume from the snapshot.
- From a copy elsewhere (Object Storage, off-platform backup, database dump).
- If neither — the data is unrecoverable. This is why production volumes should run on a snapshot schedule.
State and identity
Two volumes have the same name
Names aren’t required to be unique, so duplicates happen. To tell volumes apart, use a naming scheme like <env>-<purpose>-<n> (e.g. prod-db-data-01).
Volume status is `provisioning` for a long time
The volume has shown Provisioning for several minutes.

Likely causes:
- Large size — initial provisioning takes longer for large volumes.
- Hypervisor pressure — the underlying node is busy.
- Auto-format running — formatting a 1000 GiB volume takes a few minutes.
Still stuck?
Contact support@rafftechnologies.com with:

- The volume ID (from the volumes list URL or the Get volume API response)
- The attached VM ID (if applicable)
- The exact error message and timestamp
- Steps already tried
- Relevant logs from inside the VM (journalctl -xe, dmesg | tail -50)