Core Fabric & Networking

The core fabric is a private, high-speed network mesh that interconnects all nodes in a VergeOS system. It is the backbone of every VergeOS deployment — all internal cluster communication flows over this fabric. The core fabric is never exposed to external traffic.

Traffic types carried by the core fabric include:

  • vSAN replication — Primary and redundant data block writes between storage-participating nodes
  • Cluster coordination — Node health checks, leadership election, and system state synchronization
  • VM live migration — Memory and CPU state transfer when moving running VMs between nodes
  • Control plane — API calls, configuration updates, and management communication between VergeOS services

The core fabric is designed for low latency and high throughput. Because vSAN performance depends directly on the speed and reliability of inter-node communication, the core fabric is the most performance-critical network in a VergeOS system.

For fault tolerance, the core fabric runs across two independent physical networks — referred to as Core Fabric 1 and Core Fabric 2 (or Core 1 / Core 2). Each fabric network is its own isolated Layer 2 broadcast domain.

Every node connects to both fabric networks. If one switch or cable path fails, the other fabric network maintains full inter-node connectivity with no interruption to vSAN replication, live migration, or cluster coordination.
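The failover behavior can be sketched as a simple path-selection rule (a conceptual illustration only, not VergeOS internals — the function name and fabric labels are invented for this sketch):

```python
# Conceptual sketch (illustrative, not VergeOS code): with two
# independent fabric networks, inter-node traffic keeps a usable
# path as long as at least one fabric is healthy.

def usable_fabric(fabric1_up, fabric2_up):
    """Return a fabric usable for inter-node traffic, or None."""
    if fabric1_up:
        return "core-fabric-1"
    if fabric2_up:
        return "core-fabric-2"
    return None  # only a double fault isolates the nodes

# A single switch or cable failure never interrupts connectivity:
assert usable_fabric(True, False) == "core-fabric-1"
assert usable_fabric(False, True) == "core-fabric-2"
```

Only the simultaneous loss of both fabrics (a double fault) would interrupt inter-node communication.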

| Requirement | Detail |
| --- | --- |
| Isolation | Core Fabric 1 and Core Fabric 2 must be on their own dedicated Layer 2 networks, completely isolated from each other and from external traffic |
| Jumbo frames | MTU 9216 or higher on all core fabric switch ports (9216 accommodates the 9000-byte payload plus VLAN tags, headers, and tenant overhead) |
| Zero switch hops | All nodes must be on the same switching fabric with no inter-switch hops in the core fabric path — additional hops introduce latency that degrades vSAN performance |
| Port mode | Access ports (untagged, single VLAN per core fabric network) |
| Spanning tree | Disabled on core fabric ports (not needed for isolated access ports) |
| Speed | 10 Gbps or higher recommended |

Playground note: The Terraform playground uses MTU 9142 for its virtual core fabric networks. Production deployments should use MTU 9216 or higher per the official VergeOS documentation.
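The headroom these MTU values leave above a jumbo payload can be checked with back-of-the-envelope arithmetic (using only the figures stated above — this is not an official VergeOS formula):

```python
# Headroom check: how many bytes each MTU leaves above a 9000-byte
# jumbo payload for VLAN tags, headers, and tenant overhead.

JUMBO_PAYLOAD = 9000    # typical jumbo-frame payload size
PRODUCTION_MTU = 9216   # recommended core fabric switch-port MTU
PLAYGROUND_MTU = 9142   # Terraform playground virtual networks

print(PRODUCTION_MTU - JUMBO_PAYLOAD)  # 216 bytes of headroom
print(PLAYGROUND_MTU - JUMBO_PAYLOAD)  # 142 bytes of headroom
```

The playground's smaller value reflects the extra encapsulation overhead consumed by the host system's virtual network infrastructure.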

On top of the two physical fabric networks, VergeOS creates a virtual core network — a logical overlay with the address range 100.96.0.0/24. This core network provides each node with a stable internal IP address used by VergeOS services.

The relationship is:

  • Core Fabric 1 and Core Fabric 2 are the physical-layer transports (Layer 2 networks)
  • Core network (100.96.0.0/24) is the logical overlay that rides on top of both fabric switches

The core network abstracts the underlying dual-path redundancy so that VergeOS services communicate using a single address per node, regardless of which physical fabric is active.

| Network | Node 1 | Node 2 | Node 3+ |
| --- | --- | --- | --- |
| Core network | 100.96.0.2 | 100.96.0.3 | 100.96.0.(N+1) |
| Core Fabric 1 | 172.16.1.1 | 172.16.1.2 | 172.16.1.N |
| Core Fabric 2 | 172.16.2.1 | 172.16.2.2 | 172.16.2.N |
| External | Static (configured) | Static (configured) | DHCP or static |
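The addressing pattern in the table above can be sketched as a small function (illustrative only — the helper name is invented here, and this is not how VergeOS assigns addresses internally):

```python
# Sketch of the per-node addressing pattern from the table above.
import ipaddress

def node_addresses(n):
    """Return the expected addresses for node number n (1-based)."""
    return {
        "core_network":  ipaddress.ip_address(f"100.96.0.{n + 1}"),
        "core_fabric_1": ipaddress.ip_address(f"172.16.1.{n}"),
        "core_fabric_2": ipaddress.ip_address(f"172.16.2.{n}"),
    }

addrs = node_addresses(2)
assert str(addrs["core_network"]) == "100.96.0.3"
assert str(addrs["core_fabric_1"]) == "172.16.1.2"
# Every node's overlay address falls inside the core network range:
assert addrs["core_network"] in ipaddress.ip_network("100.96.0.0/24")
```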

Every VergeOS node has a minimum of three network interfaces, each serving a distinct purpose:

| NIC | Connection | Purpose |
| --- | --- | --- |
| NIC 1 (e.g., enp1s1) | External network | Management UI/API access, user traffic, internet connectivity |
| NIC 2 (e.g., enp1s2) | Core Fabric 1 | Primary path for all inter-node traffic |
| NIC 3 (e.g., enp1s3) | Core Fabric 2 | Redundant path for all inter-node traffic |

In production deployments with 4+ NICs per node, the external network typically uses two bonded NICs (LACP or active-backup) for redundancy, while the two core fabric NICs remain dedicated and unbonded — each connected to its own independent switch.

External Network vs. Core Fabric Separation


VergeOS enforces a strict separation between external-facing traffic and internal cluster traffic. These are two fundamentally different network domains.

External networks:

  • Connect VergeOS to existing LAN/WAN infrastructure
  • Carry user-facing traffic: management UI access, VM workload connectivity, internet access
  • Use standard MTU (1500) unless workloads require jumbo frames
  • Configured as VLAN trunks (802.1Q tagged) to support multiple VLANs for tenant and workload separation
  • Typically bonded (LACP or active-backup) for redundancy
  • Can have multiple external networks per system (e.g., management VLAN, production VLAN, DMZ VLAN)

Core fabric networks:

  • Completely private — never exposed to external traffic or users
  • Carry all inter-node system traffic (vSAN, migration, coordination)
  • Require jumbo frames (MTU 9216+) for storage efficiency
  • Configured as access ports (untagged, single VLAN per fabric)
  • Always two independent fabrics for redundancy
  • Must have zero switch hops between nodes for low latency

VergeOS automatically creates a DMZ network during installation. The DMZ serves as the central connection point for all virtual networks in the system. Every VergeOS cloud (whether the host system or a tenant) has exactly one DMZ network.

The DMZ provides Layer 3 routing between networks — when an internal network needs to communicate with another internal network or reach an external network, the traffic passes through the DMZ. This architecture enables fine-grained network rules and firewall control at the routing boundary.
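The routing role of the DMZ can be sketched as a lookup against the networks attached to it (an assumed toy topology — the network names and address ranges below are invented for illustration, not VergeOS internals):

```python
# Minimal sketch of routing at a central Layer 3 hub: traffic between
# internal networks, or bound for the outside, passes through the DMZ,
# which is also where network rules and firewalling can be applied.
import ipaddress

# Hypothetical internal networks attached to the DMZ:
routes = {
    ipaddress.ip_network("10.10.0.0/24"): "internal-a",
    ipaddress.ip_network("10.20.0.0/24"): "internal-b",
}

def next_hop(dst):
    """Route via the DMZ: deliver to an attached internal network
    if the destination matches one, otherwise forward externally."""
    addr = ipaddress.ip_address(dst)
    for net, name in routes.items():
        if addr in net:
            return name
    return "external"

assert next_hop("10.20.0.5") == "internal-b"
assert next_hop("8.8.8.8") == "external"
```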

In production, VergeOS supports several network design models depending on the number of NICs per node and the external network requirements. All models maintain the dual core fabric for redundancy.

The standard production configuration uses 4 NICs per node:

| NIC | Assignment | Configuration |
| --- | --- | --- |
| NIC 1 | Core Fabric 1 | Access port, dedicated VLAN, MTU 9216 |
| NIC 2 | Core Fabric 2 | Access port, dedicated VLAN, MTU 9216 |
| NIC 3 | External 1 (bond primary) | Trunk port, LACP, MTU 1500 |
| NIC 4 | External 2 (bond secondary) | Trunk port, LACP, MTU 1500 |

This provides full redundancy on both the core fabric (two independent paths) and external network (bonded pair).

For smaller deployments, proof-of-concept, or edge sites, a 2-NIC model combines core fabric and external traffic on the same physical ports using VLAN tagging:

| NIC | Assignment | Configuration |
| --- | --- | --- |
| NIC 1 | Core Fabric 1 + External VLANs | Native VLAN for core, tagged VLANs for external |
| NIC 2 | Core Fabric 2 + External VLANs | Native VLAN for core, tagged VLANs for external |

This sacrifices external network bonding but maintains the dual core fabric redundancy.

In the Terraform playground, the core fabric is modeled as two vergeio_network resources on the host VergeOS system:

  • core_fabric_1 — Layer 2 network, MTU 9142, no DHCP
  • core_fabric_2 — Layer 2 network, MTU 9142, no DHCP

Each node VM gets three NICs:

  1. NIC 1 → External network (management and user access)
  2. NIC 2 → Core Fabric 1
  3. NIC 3 → Core Fabric 2

The playground uses MTU 9142 (instead of the production-recommended 9216) because the virtual network infrastructure of the host system introduces additional overhead. The core network overlay (100.96.0.0/24) rides on top of both fabric networks, just as it does in production.

| Concept | Summary |
| --- | --- |
| Core fabric | Private inter-node mesh — carries vSAN, migration, coordination, and control-plane traffic |
| Dual redundancy | Two independent fabric networks (Core 1 + Core 2) on separate Layer 2 domains |
| MTU requirements | 9216+ in production (9142 in the playground) — jumbo frames are mandatory |
| Core network overlay | Logical 100.96.0.0/24 network riding on both fabric switches, providing stable internal IPs |
| 3 NICs per node | Minimum: 1 external + 2 core fabric. Production typically uses 4+ NICs with bonded external |
| Zero switch hops | Core fabric ports must be on the same switching fabric — no inter-switch hops allowed |
| Traffic isolation | Core fabric is never exposed to external traffic; external networks are completely separate |
| DMZ network | Auto-created Layer 3 routing hub connecting all virtual networks in the system |

Now that you understand how VergeOS nodes are interconnected, the next topic covers how those nodes are organized into clusters: Clusters & Node Types →