Proxmox VE Hyperconverged Cluster OVS (Open vSwitch) Virtual Machine Network Isolation


The Need

In a well-provisioned Proxmox VE hyperconverged cluster, virtual machines often need to be isolated from one another at the network level according to specific requirements, so that resources are fully utilized and larger-scale scenarios can be supported.

Basic Conditions for Network Virtualization

A network switch that supports VLANs.

Proxmox VE with Open vSwitch installed.

Servers with multiple network interfaces.

Switch Configuration

I used a Cisco Catalyst 4500 switch. The specific steps are as follows:

1. Create VLAN 80, VLAN 81, VLAN 82, and VLAN 83. In global configuration mode, execute the command:

STYF1(config)#vlan 80-83

2. The three node servers are cabled to switch ports GigabitEthernet1/1 through 1/3. Configure these three ports as trunk ports, disable trunk negotiation (DTP), and exclude VLAN 1 from the trunk. Execute the commands:

STYF1(config)#interface GigabitEthernet1/1

STYF1(config-if)#switchport mode trunk

STYF1(config-if)#switchport nonegotiate

STYF1(config-if)#switchport trunk allowed vlan 2-4094

STYF1(config-if)#interface GigabitEthernet1/2

STYF1(config-if)#switchport mode trunk

STYF1(config-if)#switchport nonegotiate

STYF1(config-if)#switchport trunk allowed vlan 2-4094

STYF1(config-if)#interface GigabitEthernet1/3

STYF1(config-if)#switchport mode trunk

STYF1(config-if)#switchport nonegotiate

STYF1(config-if)#switchport trunk allowed vlan 2-4094
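On IOS platforms that support it, the three identical per-port blocks above can be collapsed into one using `interface range` (shown here as a shortcut for the same GigabitEthernet1/1 through 1/3 ports; verify availability on your software version):

STYF1(config)#interface range GigabitEthernet1/1 - 3

STYF1(config-if-range)#switchport mode trunk

STYF1(config-if-range)#switchport nonegotiate

STYF1(config-if-range)#switchport trunk allowed vlan 2-4094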

3. Verify that the configuration took effect by executing the command:

STYF1#show running-config

Example output:

Building configuration…

Current configuration : 2889 bytes

!

version 15.0

vtp mode transparent

… (omitting part of the output) …

vlan internal allocation policy ascending

!

vlan 80-83

!

!

interface FastEthernet1

ip vrf forwarding mgmtVrf

no ip address

speed auto

duplex auto

!

interface GigabitEthernet1/1

switchport trunk allowed vlan 2-4094

switchport mode trunk

switchport nonegotiate

!

interface GigabitEthernet1/2

switchport trunk allowed vlan 2-4094

switchport mode trunk

switchport nonegotiate

!

interface GigabitEthernet1/3

switchport trunk allowed vlan 2-4094

switchport mode trunk

switchport nonegotiate

!

… (omitting part of the output) …

4. In privileged EXEC mode, enter the command `write memory` (or `copy running-config startup-config`) to permanently save the configuration.

Operations on the Proxmox VE Hyperconverged Cluster

The operations include software installation, network configuration, and configuration validation. Below are the steps.

Install Open vSwitch on the Proxmox VE Hyperconverged Cluster

Every node in the cluster needs Open vSwitch installed. Installation is simple: just enter the command:

apt install openvswitch-switch

Network Planning and Configuration

Each cluster node has four network interfaces. Interface `eno3` carries the management address, and interface `eno4` is virtualized to create two network segments with VLAN IDs 80 and 81. Log in to the Proxmox VE cluster web management interface and perform the following operations on each node.

Step 1: Create an OVS Bridge. Select the node's “Network” submenu, click the “Create” button at the upper left of the page, choose “OVS Bridge,” and fill in “eno4” as the bridge port along with the other required information.

Step 2: Create an OVS IntPort. Fill in the VLAN tag created on the switch (for example, 80) and assign the pre-planned IP address to the interface.

Repeat this step to create the remaining OVS IntPorts using the same OVS Bridge, and configure the network addresses. Be careful not to set multiple default gateways.

Step 3: Set the VLAN trunk on the physical interface, specifying which VLAN traffic is allowed. Edit the network interface that serves as the OVS bridge port, and fill in the VLAN IDs configured on the switch.
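For reference, the three GUI steps above end up writing stanzas like the following to `/etc/network/interfaces` on each node. This is a sketch, not the exact output: the bridge name `vmbr1`, the IntPort names `vlan80`/`vlan81`, and the `.10` host addresses are assumptions consistent with the plan in this article; only `eno4` and the VLAN IDs 80/81 come from the text.

# Physical interface enslaved to the OVS bridge, trunking VLANs 80 and 81
auto eno4
iface eno4 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_options trunks=80,81

# The OVS bridge itself
auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eno4 vlan80 vlan81

# Internal port giving the host an address on VLAN 80
auto vlan80
iface vlan80 inet static
    address 172.16.80.10/24
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=80

# Internal port giving the host an address on VLAN 81
auto vlan81
iface vlan81 inet static
    address 172.16.81.10/24
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=81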

Step 4: Apply the virtual network settings. Click the “Apply Configuration” button at the top of the page; the changes take effect immediately (older Proxmox VE versions required rebooting the host).

Step 5: Verify the configuration. Log in to the host system (Debian) of any Proxmox VE cluster node and ping the virtual IP address assigned on another node, for example `ping 172.16.80.20`. If the ping succeeds, the configuration is correct.

Virtual Machine Network Isolation

On each node in the Proxmox VE cluster, create two virtual machines. Set the network interfaces to VLAN 80 and VLAN 81, respectively, and assign corresponding IP addresses after the virtual machines are installed. For example, on node `pve1`, virtual machine `s80-101` has the address `172.16.80.101`, and virtual machine `s81-101` has the address `172.16.81.101`. On node `pve2`, virtual machine `s80-102` has the address `172.16.80.102`, and virtual machine `s81-102` has the address `172.16.81.102`.
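The VLAN assignment for a virtual machine's network interface can also be done from the command line with Proxmox's `qm` tool instead of the web interface. A sketch, assuming VM IDs 101 and 102 for `s80-101` and `s81-101` and the bridge name `vmbr1` (neither ID nor bridge name is stated in the article):

# Attach the first VM's NIC to the OVS bridge with VLAN tag 80
qm set 101 --net0 virtio,bridge=vmbr1,tag=80

# Attach the second VM's NIC to the same bridge with VLAN tag 81
qm set 102 --net0 virtio,bridge=vmbr1,tag=81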

Log into any virtual machine, for example, log into `172.16.80.101`, and use the command `ping 172.16.80.102` (another virtual machine on a different physical node). If you can ping it, the setup is normal. Then log into the virtual machine `172.16.81.102` and use the command `ping 172.16.81.101`. If you can ping it, the setup is normal.

On the same node, for example on `pve1`, create a virtual machine, set its VLAN tag to 81, and set its IP address to `172.16.80.103`. Log into virtual machine `172.16.80.101` or `172.16.80.102` and run `ping 172.16.80.103`. If the ping fails, isolation is working: this virtual machine shares the same physical network interface as the others, but its traffic is tagged with a different VLAN.

