*Updated May 8, 2026*

The most common issues with Raff VPCs, and the fix that resolves each. Most fall into one of three categories: the dashboard rejecting input (CIDR validation, specific-IP conflicts), the network not behaving as expected after a change (DNS lag, MTU surprises, default-route confusion), or delete/detach blocks (the safety rules in Delete a VPC). If your issue isn't here, contact support@rafftechnologies.com.
## Create and CIDR validation
### `CIDR prefix must be between /16 and /28`
A prefix shorter than /16 (e.g. /15, /8) is rejected — the block is too large; one longer than /28 (e.g. /30, /32) is rejected — too small to be useful (a /28 only has 13 usable IPs as it is).

For most workloads, the Recommended /20 preset (~4,093 IPs) is the right answer. Only use Custom CIDR when the suggested ranges conflict with your on-prem network.
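The usable-IP figures above follow directly from the prefix length. A minimal sketch, assuming three addresses are reserved per VPC (network, gateway, broadcast — as described under "Specific IP rejected" below):

```python
# Usable IPs in a Raff VPC of prefix /N, assuming 3 reserved
# addresses per block (network, gateway, broadcast).
def usable_ips(prefix: int) -> int:
    if not 16 <= prefix <= 28:
        raise ValueError("prefix must be between /16 and /28")
    return 2 ** (32 - prefix) - 3

print(usable_ips(20))  # 4093 — the Recommended /20 preset
print(usable_ips(28))  # 13  — why anything smaller is rejected
```

The `usable_ips` helper is illustrative, not part of any Raff API.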
### `CIDR must be in private range (10.x.x.x, 172.16-31.x.x, 192.168.x.x, or 100.64-127.x.x)`
Accepted ranges:

- `10.0.0.0/8` (RFC 1918)
- `172.16.0.0/12` (RFC 1918)
- `192.168.0.0/16` (RFC 1918)
- `100.64.0.0/10` (RFC 6598 — carrier-grade NAT range, useful when 10/172/192 are exhausted)
Public ranges (`8.0.0.0/8`, `1.0.0.0/8`, etc.) are rejected because the VPC's gateway would conflict with internet routing.
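You can pre-check a candidate CIDR against the four accepted parent ranges before touching the dashboard. A sketch using the standard-library `ipaddress` module (the `in_allowed_range` helper is hypothetical, not a Raff API):

```python
import ipaddress

# The four parent ranges the error message names: RFC 1918 plus RFC 6598.
ALLOWED = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "100.64.0.0/10")]

def in_allowed_range(cidr: str) -> bool:
    net = ipaddress.ip_network(cidr, strict=False)
    return any(net.subnet_of(parent) for parent in ALLOWED)

print(in_allowed_range("10.7.0.0/24"))  # True  — RFC 1918 space
print(in_allowed_range("8.0.0.0/16"))   # False — public space, rejected
```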
### CIDR rejected for alignment — `192.168.1.50/20` won't work
`192.168.1.50/20` is invalid because a /20 block must start on an aligned boundary (third octet a multiple of 16) — the correct form is `192.168.0.0/20`.

Quick rule: for prefix /N, the host bits (the last 32 − N bits) must all be zero. If you're not sure, type the network address you want and let the validator confirm.
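The alignment rule is exactly what `ipaddress` enforces with `strict=True`. A sketch that also shows how to recover the aligned network address (the `network_form` helper is illustrative, not a Raff API):

```python
import ipaddress

# strict=True rejects a CIDR whose host bits aren't all zero —
# the same alignment rule the dashboard validator applies.
def network_form(cidr: str) -> str:
    try:
        return str(ipaddress.ip_network(cidr, strict=True))
    except ValueError:
        # Host bits were set; fall back to the aligned network address.
        return str(ipaddress.ip_network(cidr, strict=False))

print(network_form("192.168.1.50/20"))  # 192.168.0.0/20 — the form to type
```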
### VPC name already exists
VPC names are unique per account. Pick a variant: `prod-2`, `staging-east`, etc.

## VMs in the same VPC can't reach each other
### Two VMs in the same VPC can't ping each other on private IPs
- Firewall rules on the VMs — Raff's Firewall product or the guest's OS-level firewall (`ufw`, `iptables`, Windows Firewall) is dropping ICMP or the port you're testing. Open the relevant port between the VMs' private IPs.
- Wrong VPC NIC selected on the source VM — if the VM is in multiple VPCs, run `ip a` (Linux) or `Get-NetIPAddress` (Windows) to confirm which interface holds which VPC's IP. Bind your test (`ping -I <iface>`, `psping -s <local_ip>`) to the right NIC.
- DHCP didn't run on the new interface — after attaching to a VPC, Linux usually auto-configures `eth1`/`enp1s0`. If the new NIC has no IP, run `dhclient -v <iface>` (Linux) or check the adapter in Network Connections (`ncpa.cpl`) (Windows).
- MTU mismatch on a tunneled / containerized workload — see the MTU accordion below.
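Because ICMP is often firewalled even when the application port is open, testing the actual TCP port is more reliable than `ping`. A minimal sketch (the `tcp_reachable` helper and the example address are hypothetical):

```python
import socket

# Test TCP reachability to a peer VM's private IP rather than relying
# on ping, since ICMP may be dropped while the app port is open.
def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example against a hypothetical peer VM:
# tcp_reachable("10.0.0.5", 5432)
```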
### Two VMs with the same private IP can't reach each other
Two VMs that both hold `10.0.0.5` in different VPCs are isolated by design — that's the feature, not a bug.

Fix: put both VMs in the same VPC, or set up routing through a shared gateway VM that has one NIC in each VPC. There's no way to "peer" the two VPCs today (the Peering tab is coming soon).
### Large packets break, small packets work — MTU issue
Symptom: `ping` works at the default size but fails with `-s 1500`.

Cause: something in the path has its MTU hard-coded to 1500. Raff's VPC interfaces use MTU 8950 (jumbo frames: 9000 minus 50 bytes of VXLAN overhead).

Fixes (pick one):

- Set the guest's interface MTU to 8950 to use jumbo frames end-to-end. Linux: `ip link set <iface> mtu 8950`. Windows: `netsh interface ipv4 set subinterface "<iface>" mtu=8950 store=persistent`.
- Set the guest's interface MTU to 1500 if you can't use jumbo frames (some appliances, some kernels). Slower but always safe.
- For containers and overlays (Docker, Kubernetes CNI) running on top of the VPC — set the bridge's / CNI's MTU explicitly. Docker: `--mtu=8950` or `--mtu=1500`. Kubernetes Calico/Flannel/Cilium: configure it in the CNI manifest.
- For tunnels (WireGuard, IPsec) — subtract the tunnel overhead from 8950 (or 1500) when sizing the inner MTU. WireGuard adds 80 bytes; IPsec varies.
## DNS
### DNS change isn't applied to existing VMs
Symptom: you changed the VPC's DNS, but `/etc/resolv.conf` (or the Windows DNS settings) on existing VMs still shows the old value.

Why: DNS is delivered to VMs over DHCP. Existing VMs only pick up the change when they renew their lease — which can take minutes to hours depending on the image's DHCP client config.

Force renewal inside the guest:

- Linux — `dhclient -r && dhclient` (releases and re-requests). Or `systemctl restart systemd-networkd` / `systemd-resolved`.
- Windows — `ipconfig /renew` from an elevated prompt. Then `ipconfig /flushdns` to clear the cache.
- Either — reboot the VM. Always works.
### VMs can't resolve anything after pointing DNS at an internal resolver
- The resolver VM is down — start it; DHCP only re-checks on lease renewal, so existing VMs won't recover until that runs.
- The resolver VM isn't listening on its VPC IP — bind it to `0.0.0.0` or to the specific VPC IP. Verify with `ss -lnup | grep :53`.
- The resolver VM has a firewall blocking UDP/TCP 53 — `ufw allow 53`, or open it in Raff's Firewall.
- A DNS loop — the resolver itself uses Raff's VPC DNS as upstream, which now points back at it. Set its upstream to a real resolver (`8.8.8.8`, `1.1.1.1`).
Escape hatch: point the VPC's DNS back at `8.8.8.8` from the Edit DNS dialog, force-renew on the VMs, then debug the resolver out-of-band.

## Internet Gateway and port forwarding
### VMs in the VPC have no internet after enabling Platform Router
Symptom: the router is enabled but a VM still can't reach `8.8.8.8`.

Most likely causes:

- The VM has its own Public IP — its default route goes through the public NIC, not the VPC NIC, so the VM uses its public IP for outbound rather than the router. This is usually fine; check that `ping 8.8.8.8` works (it does, just not via the router).
- The VM's default route doesn't point at the VPC gateway — `ip route` should show `default via 10.x.0.1` for the VPC the router sits on. If not, set it: `ip route add default via 10.x.0.1 dev <iface>`.
- The router is still provisioning — the first deploy takes a minute or two. The detail page's gateway badge flips to active when ready.
- DNS isn't set or isn't reachable — see the DNS accordions above.
### Port forwarding rule doesn't work
Symptom: you added `public_port → private_ip:private_port` but external traffic to that port doesn't reach the VM.

Checklist, in order:

- Platform Router enabled? — port forwarding is only available with the Platform Router gateway. The Port Fwd tab is hidden/blank otherwise.
- Private IP is in the VPC's range and currently in use — typo'd IPs silently fail. Confirm the VM's IP via the VMs tab on the VPC detail page.
- The destination VM is listening on `private_port` — `ss -lnt | grep :<port>` (Linux) or `netstat -an | findstr :<port>` (Windows).
- The destination VM's firewall allows inbound on `private_port` — both Raff Firewall rules and the guest's OS firewall must allow it from the router's private IP.
- You're testing from outside the VPC — `curl <router_public_ip>:<public_port>` from your laptop. Testing from inside the VPC takes a different path and may not exercise the rule.
- At the 10-rule limit? — Platform Router supports up to 10 forwarding rules per VPC. Beyond that, deploy a Firewall Appliance (OPNsense) for unlimited rules.
### Firewall Appliance stuck on Deploy
### Lost the OPNsense admin password before saving it
## Attach, detach, and delete blocks
### VM doesn't appear in the Add VM dropdown
- Already attached to this VPC — the dropdown only shows VMs not currently in the target VPC. Check the VMs tab.
- In a different region — VPCs are region-scoped. A `us-east` VPC can only hold `us-east` VMs; cross-region attach is not possible.
- VM is in a transient state — `creating`, `deleting`, `migrating`. Wait for it to settle, then refresh the dropdown.
- Caching — close and reopen the dialog, or refresh the page.
### Specific IP rejected — already in use or in reserved range
- Reserved IPs — `.0` (network), `.1` (gateway), and `.255` (broadcast on `/24` and larger) are always reserved. Pick anything else in the Available range.
- In use — another VM in the same VPC already holds that IP. The dialog shows the conflict; pick a different one or detach the holder first.
- Outside the CIDR — typing `10.7.1.5` into a `10.7.0.0/24` VPC is silently outside the range. Stay inside the Available range shown in the dialog.
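For scripting, the three reserved addresses can be computed from the CIDR. A sketch with the standard-library `ipaddress` module (the `reserved_ips` helper is illustrative, not a Raff API):

```python
import ipaddress

def reserved_ips(cidr: str):
    """The three always-reserved addresses: network, gateway, broadcast."""
    net = ipaddress.ip_network(cidr)
    return {str(net.network_address),      # .0   — network
            str(net.network_address + 1),  # .1   — gateway
            str(net.broadcast_address)}    # .255 on a /24 — broadcast

print(sorted(reserved_ips("10.7.0.0/24")))
# ['10.7.0.0', '10.7.0.1', '10.7.0.255']
```

Everything in the block except these three addresses is fair game for a Specific IP request, provided no other VM already holds it.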
### Detach blocked — `VM has no other interfaces`
This VPC holds the VM's only network interface, so detaching would leave it unreachable. Either:

- Attach a Public IP to the VM directly, or
- Attach the VM to another VPC first, then come back and detach from this one.
### `Delete (has VMs)` — delete action greyed out
- Detach every VM (VPC detail → VMs tab → Detach on each row) if you want them to keep running on other interfaces.
- Delete the VMs from the Compute page if you’re tearing down. Use the VM-delete dialog’s “delete attached VPCs” option to delete the VPC at the same time as the last VM — saves a step.
### Delete still fails after the VMs are gone
Symptom: the VPC shows 0 VMs but the API or the detail page still rejects the delete.

Possible causes:

- A pending Internet Gateway provision — if you clicked Enable on Platform Router or Deploy on Firewall Appliance and the gateway VM is still creating, the VPC is locked. Wait for it to finish or fail, then retry.
- A locked operation — a rename or DNS update in flight. Refresh the page and retry.
- Stale dashboard cache — the API is the source of truth. Refresh, or call the API directly to confirm.
## "Coming soon" features
### VPC Peering tab is empty / unclickable
Peering isn't available yet. In the meantime:

- Route through a shared gateway VM with one NIC in each VPC, manually configured to forward traffic
- Have both source and destination VMs hold public IPs, and route over the public internet
- Re-architect to put the workloads in the same VPC
### Services tab shows `Load Balancer & VPN Gateway — coming soon`
- Load balancer — run HAProxy / Nginx / Traefik on a small VM in the VPC; or use Raff Kubernetes (which provides ingress controllers).
- VPN gateway — deploy the Firewall Appliance (OPNsense) on the VPC; it ships with WireGuard, IPsec, and OpenVPN built in.
## Still stuck?
Email support@rafftechnologies.com with:

- VPC name and ID — copy from the detail page URL or the row's ⋮ → Copy CIDR (and add the name)
- The exact error message and the time you saw it
- What you tried from this page