

Updated May 8, 2026

A Raff VPC is a VXLAN segment. That’s the one fact that explains everything else on this page — the CIDR rules, the overlap behavior, the MTU, the per-region scope. Read this once and the rest of the product makes sense.

What VXLAN is

VXLAN (Virtual Extensible LAN, RFC 7348) is the standard way modern clouds build private networks on top of shared physical infrastructure. It works by encapsulating each tenant’s Ethernet frames in UDP packets and tagging every packet with a 24-bit identifier — the VNI (VXLAN Network Identifier). Two VMs on the same physical host, two VMs on different hosts, two VMs in different racks — all of them only see traffic carrying their VPC’s VNI. The hypervisor drops everything else before it ever reaches the guest. That 24-bit VNI gives you ~16 million distinct private networks per region. No two VPCs on Raff share a VNI, ever.
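The 24-bit VNI lives in a fixed 8-byte header defined by RFC 7348. As a minimal sketch (an illustration of the header layout, not code from Raff's data plane), packing it makes the 24-bit limit concrete:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.
    Layout: flags (1 byte, 0x08 = 'VNI present'), 3 reserved bytes,
    24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3s3sB", 0x08, b"\x00\x00\x00",
                       vni.to_bytes(3, "big"), 0)

# A 24-bit field gives ~16 million distinct segments per region:
print(2**24)      # 16777216
hdr = vxlan_header(0x1234)
print(len(hdr))   # 8 — the VXLAN header is 8 of the 50 total overhead bytes
```

The remaining 42 overhead bytes come from the outer Ethernet, IP, and UDP headers that carry this encapsulation across the underlay.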

Why this matters: CIDR overlap is a non-issue

Most cloud users come in expecting the AWS rule: “VPC CIDRs must not overlap if you ever want to peer them.” On Raff, that rule doesn’t apply.
You can have both of these at once, and they work:
  • vpc-a with CIDR 10.0.0.0/24, containing a VM with IP 10.0.0.5
  • vpc-b with CIDR 10.0.0.0/24, containing a different VM that also has IP 10.0.0.5
Both VMs hold the same private IP, in different VPCs, on the same hardware. They cannot reach each other — not because of routing tables, not because of firewall rules, but because their packets are tagged with different VNIs. The encapsulation layer never delivers one VPC’s frames to another VPC’s VM. This is by design. You can name your VPCs prod, staging, dev and use 10.0.0.0/24 for all three without thinking about it. VPC names must be unique inside your account; CIDRs don’t have to be.

What is reserved inside the CIDR

Every VPC reserves three addresses in its range:
  • .0: the network address (the CIDR itself)
  • .1: the gateway; every VM gets this as its default gateway for the VPC subnet
  • .255: broadcast (for /24 and larger; smaller subnets follow standard IPv4 broadcast rules)
So a /24 VPC has 253 usable IPs (256 − 3). The dashboard shows this directly: 1/253 IPs us-east on each VPC card means one IP is in use out of 253 available.
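Python's standard `ipaddress` module can derive all three reserved addresses and the usable count for any range — a quick sketch of the arithmetic above:

```python
import ipaddress

def vpc_addressing(cidr: str):
    """Return the network, gateway (.1), and broadcast addresses of a
    Raff-style VPC range, plus the usable-IP count (total minus the
    three reserved addresses)."""
    net = ipaddress.ip_network(cidr)
    gateway = net.network_address + 1
    usable = net.num_addresses - 3   # network, gateway, broadcast
    return net.network_address, gateway, net.broadcast_address, usable

network, gw, bcast, usable = vpc_addressing("10.0.0.0/24")
print(network, gw, bcast, usable)   # 10.0.0.0 10.0.0.1 10.0.0.255 253
```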

CIDR sizes Raff allows

You pick a prefix between /16 and /28 at create time:
  • /28 (13 usable IPs): tiny; a bastion plus one or two app VMs
  • /24 (253 usable IPs): the default; fits almost every workload
  • /22 (1,021 usable IPs): a mid-size cluster
  • /20 (4,093 usable IPs): a large fleet
  • /16 (65,533 usable IPs): the maximum; the entire address space of one private block
The dashboard surfaces three named sizes — small (/24), medium (/20), large (/16) — when you click “List CIDRs” while creating a VPC. Use them or type a custom CIDR; both go through the same validator. You cannot resize a VPC after creation. If you outgrow your range, create a new VPC and migrate VMs into it.

Allowed private ranges

CIDRs must fall inside a private IPv4 block (RFC 1918) or the carrier-grade NAT block (RFC 6598):
  • 10.0.0.0/8
  • 172.16.0.0/12
  • 192.168.0.0/16
  • 100.64.0.0/10 (rare; useful when you’ve already exhausted RFC 1918)
Public IP ranges are rejected at create time. The validator also enforces that the CIDR is correctly aligned: the network address must have all host bits set to zero. 192.168.1.50/20 is invalid because a /20 must begin on a 4,096-address boundary; the correct form is 192.168.0.0/20.
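The checks described in the last two sections — prefix bounds, alignment, and range membership — can be sketched with `ipaddress`. This is an illustration of the rules as documented, not Raff's actual validator:

```python
import ipaddress

# The four blocks the page says are accepted (RFC 1918 + RFC 6598).
ALLOWED = [ipaddress.ip_network(b) for b in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "100.64.0.0/10")]

def validate_vpc_cidr(cidr: str) -> ipaddress.IPv4Network:
    """Reject a CIDR unless its prefix is /16../28, its host bits are
    zero (alignment), and it sits inside a private or CGNAT block."""
    net = ipaddress.ip_network(cidr, strict=True)  # raises if host bits set
    if not 16 <= net.prefixlen <= 28:
        raise ValueError(f"prefix /{net.prefixlen} outside /16../28")
    if not any(net.subnet_of(block) for block in ALLOWED):
        raise ValueError(f"{net} is not in a private or CGNAT range")
    return net

print(validate_vpc_cidr("192.168.0.0/20"))   # 192.168.0.0/20 — accepted
try:
    validate_vpc_cidr("192.168.1.50/20")     # host bits set — rejected
except ValueError as e:
    print("rejected:", e)
```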

Region scope and the gateway

A VPC lives in exactly one region. VMs from another region cannot join it; the VXLAN layer is bounded by the regional underlay. Every VPC has an internal gateway (the .1 address) that every member VM uses as its default route for the VPC subnet. Inter-VM traffic inside the VPC is L2 — frames go directly between hypervisors over the VXLAN multicast group, no routing hop. Traffic leaving the VPC (to the public internet, to another VPC, etc.) goes through the gateway and out via the host’s NAT or a public IP if the VM has one attached. This is why same-region VM-to-VM traffic inside a VPC is free and high-throughput — it never leaves the underlay.

MTU — the one practical caveat

VXLAN encapsulation adds 50 bytes of overhead to every packet. Raff’s VPC interfaces use MTU 8950 (jumbo frames, 9000 minus 50 for VXLAN headers). For most workloads — TCP between Linux/Windows guests, HTTP/gRPC between services, Postgres replication — this is invisible. The OS negotiates it via Path MTU Discovery and you never see it. Where it occasionally surfaces:
  • Custom kernels or appliances that hard-code MTU 1500 may fragment or drop oversized frames. Set the VM’s interface MTU to 8950 (or 1500 if you want classic-Ethernet behavior — slightly slower but always safe).
  • Containers and overlays inside the VM (Docker, Kubernetes CNI) often default to 1500. They work, but won’t benefit from jumbo frames unless you raise their MTU too.
  • Tunneling on top of the VPC (WireGuard, IPsec) adds yet another header. Subtract that tunnel’s overhead from 8950 when you size the inner MTU.
If you’re seeing strange “works on small payloads, breaks on large” behavior between VMs in the same VPC, MTU is the first thing to check.
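The MTU budget above is simple subtraction, and it's worth doing explicitly when stacking tunnels. A small sketch (the WireGuard overhead figure is a common approximation for IPv4 transport, not a value from Raff's docs):

```python
VXLAN_OVERHEAD = 50   # outer Ethernet 14 + IP 20 + UDP 8 + VXLAN 8 bytes
UNDERLAY_MTU = 9000   # jumbo frames on the physical network

def inner_mtu(*tunnel_overheads: int) -> int:
    """Largest MTU available to payloads after VXLAN plus any extra
    tunnels stacked inside the VM."""
    return UNDERLAY_MTU - VXLAN_OVERHEAD - sum(tunnel_overheads)

print(inner_mtu())    # 8950 — the VPC interface MTU
# WireGuard over IPv4 adds roughly 60 bytes (20 IP + 8 UDP + 32 WG):
print(inner_mtu(60))  # 8890 — a reasonable inner MTU for such a tunnel
```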

What VPCs do not give you (yet)

To set expectations against AWS-style features:
  • VPC peering between two VPCs: coming soon. The Peering tab is in the dashboard but disabled; today, route cross-VPC traffic through public IPs or a shared gateway VM.
  • Load balancer / VPN gateway as a service: coming soon; a placeholder sits under the Services tab.
  • Cross-region VPC: no; each VPC is regional.
  • Transit gateway: not available; build a hub VM if you need a star topology.
  • Custom route tables: not exposed; the gateway handles default routing.
  • VPC endpoints / PrivateLink: not today; same-region traffic to Object Storage and Kubernetes is already on the private network.
For outbound internet access without exposing a public IP, see the Internet Gateway options on each VPC — the Platform Router gives you managed NAT for free, and the Firewall Appliance (OPNsense) gives you full firewall control as a paid VM.

VPC membership is opt-in

A VM created without a VPC has no private network interface — it’s reachable only via its public IP, and it can’t talk privately to any other VM. That’s a perfectly normal configuration for a single web server, a one-off worker, or any VM that doesn’t need peer-to-peer private traffic. When you do want multiple VMs to share a private network, create a VPC first and then attach VMs to it from the create-VM flow (Networking → Private Network) or from the Networking Diagram (→ Add VM on the VPC card). Shared VPCs survive any single VM’s deletion — see the VM delete dialog for the safety rules.

Next steps

  • Create a VPC: pick a CIDR and region.
  • Attach a VM: add an existing VM to a shared VPC.
  • Features & limits: CIDR ranges, MTU, per-account limits.