By configuring a Nutanix VM Network we can manage not only the connections between the Nodes and the CVM but also dictate how virtual machines communicate with each other and with the outside world.
In this article we’ll explore the available options and create a dedicated Nutanix VM Network. We’ll use a sample configuration in our homelab to test and learn more about the Nutanix CE platform.
We can use both a command line and a graphical tool built into the main Dashboard to create and edit the Nutanix VM Network configurations. For the purpose of this article we’ll leave the defaults in the Controller VM Interfaces with their original setup, namely the Management LAN, Hypervisor LAN and Backplane LAN.
Our focus here is on the user-created networks that VMs use to communicate internally and externally from the Nutanix Cluster.
The process and concept behind creating a Nutanix VM Network are very simple. In essence, the virtual NIC on the Nutanix guest is associated with a virtual network name, which in turn is associated with the Nutanix VM Network we are going to create. Each Nutanix VM Network is assigned either to a custom VLAN or to the default one with VLAN ID “0”.
For each Nutanix VM Network we can also manage specific settings directly from the Nutanix Cluster, such as DHCP and domain services. This includes the option to specify a TFTP server to use in conjunction with a PXE server when booting machines from the network, for example to run operating system deployment tasks.
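The NIC-to-network association described above can also be made from the Acropolis CLI (aCLI) on any CVM. A minimal sketch, where the VM and network names are placeholders for this example:

```shell
# On a CVM, attach a new NIC to an existing VM and bind it to a
# VM Network by name ("MyVM" and "MyNetwork" are placeholders).
acli vm.nic_create MyVM network="MyNetwork"

# Inspect the VM to verify the NIC and its network association.
acli vm.get MyVM
```

Anything done in the Prism wizard below can be mirrored with equivalent aCLI commands, which is handy for scripting lab rebuilds.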
So let’s take a quick look at how to configure a Nutanix VM Network.
Manage Virtual Machines with Nutanix VM Network
After logging in to Nutanix CE, from the main dashboard let’s click on the settings wheel on the top right and select the Network Configuration option. A new wizard opens; let’s move to the “User VM Interfaces” section.
From here we can create the Nutanix VM Network configurations we want to use in our environment. My homelab will be provisioned with a couple of VM Network configurations to separate the virtual machines in Production from the ones running for Development.
So all we have to do here is choose a friendly name for the Nutanix VM Network and assign a VLAN ID. By setting the VLAN ID to “0” (untagged traffic) we are essentially letting the VMs also talk to the external world. This is what I want to test at the moment, as these VMs will be used for other purposes as well. More on this in dedicated articles.
Of course, if we already have VLANs created in our environment we can use the same identifiers as well. Let’s make sure all network ports on both virtual and physical switches are configured for the same VLAN.
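The same networks can be created from aCLI on a CVM. A sketch assuming the names used in this article and a placeholder VLAN ID for the Development network:

```shell
# Create an untagged VM Network named "VM Prod" (VLAN ID 0).
acli net.create "VM Prod" vlan=0

# Create a tagged network for Development; VLAN 20 is a placeholder
# and must already be trunked on the physical switches.
acli net.create "VM Dev" vlan=20

# List the configured VM Networks to confirm.
acli net.list
```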
So at this point we have created our “VM Prod” Nutanix VM Network.
We can repeat the same steps to create other VM Networks. Each one will be associated with a single VLAN.
If we check the option “Enable IP Address Management”, the hypervisor (AHV in this case) will control IP address management, including the IP range, gateway, domain settings, TFTP and network pools.
This configuration also includes the option to override the built-in DHCP server and use the existing one in our network, if already available.
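For reference, IPAM can also be enabled at creation time from aCLI. A hedged sketch with placeholder addressing (the 10.10.10.0/24 subnet, gateway and pool range are assumptions for this example):

```shell
# Create a managed network: ip_config is "<gateway>/<prefix>" and
# enables AHV's built-in IPAM for this network.
acli net.create "VM Prod" vlan=0 ip_config=10.10.10.1/24

# Add a DHCP pool of addresses AHV may hand out to guest VMs.
acli net.add_dhcp_pool "VM Prod" start=10.10.10.50 end=10.10.10.100
```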
What about the Controller VM interfaces? By default there are three of them. Management, Hypervisor and Backplane LANs.
Management LAN is used for all the administrative traffic, including traffic to and from Prism, SSH, remote logging and SNMP, and, last but not least, the communication between VMs and the Nutanix CVM.
Hypervisor LAN is used to segment all traffic specific to the Hypervisor.
Backplane LAN is used for intra-cluster communication: traffic between CVMs, traffic between CVMs and hosts, and storage traffic.
For advanced scenarios it is possible to segment and physically separate traffic types by using VLANs and different network cards. In fact, by default the Hypervisor LAN already uses a separate NIC and IP subnet (192.168.5.x/24).
When we take a look at the Management LAN, by default it uses a dedicated NIC (eth0), and both the Nutanix Nodes and the CVM are on the same network.
The Hypervisor LAN uses a separate NIC (eth1) and a separate IP subnet.
And finally, the Backplane LAN can be enabled with a separate NIC (eth2) and VLAN. The CVM is a multihomed VM and can effectively communicate on both networks, eth0 and eth1, depending on the traffic type. The default configuration is “unsegmented”. In a separate article we’ll cover the steps to create “segmented” network traffic.
The Nutanix CE platform also includes a network visualizer we can use to easily understand the current network configuration and view more detailed information. The first example shows a view by VM type (User and CVM); this can also be done by Host and Power State.
Per VM we can check detailed information simply by choosing the desired NIC. In this case, the first VM NIC is associated with management traffic.