With this article we’ll start a new series focusing on VMware virtual Distributed Switches. This first article shows the initial steps of creating virtual Distributed Switches. The idea is not only to migrate the networking configuration from virtual Standard Switches (vSS): in our home lab we’ll also be able to review and use the advanced settings that virtual Distributed Switches (vDS) include in their configuration.
There are many advantages to choosing virtual Distributed Switches over Standard ones. As a matter of fact, when managing an environment with multiple Hosts it can be a time-consuming task to make sure all settings are exactly the same across the different Hosts.
Leveraging virtual Distributed Switches helps solve this scenario and a lot more. In particular, the extra features included in their configuration give us more control over monitoring, shaping and managing various types of traffic.
How do virtual Distributed Switches actually work? How do they compare to virtual Standard Switches?
Virtual Standard Switches operate only at the Host level. The configuration of a vSS consists of two layers, Management and Data, both of which reside on the same Host.
With virtual Distributed Switches, the Management layer is centralized and operated by VMware vCenter. All configurations, including changes to the Data layer on each Host, are performed through a Proxy component that resides on the Host; effectively, the Data layer can also be called the Host Proxy switch. All network changes and configurations are pushed to the Host Proxies automatically, so every Host shares the same configuration.
In addition, virtual Distributed Switches introduce two new components:
Uplink Port Group: a container for the physical network connections we want to use with all Distributed Port Groups or with specific ones. This is very useful when creating policies that define which physical connections the Port Groups should use, depending on traffic type and on teaming and fail-over requirements.
Distributed Port Group: provides network connectivity to groups of Virtual Machines and to different types of VMkernel traffic. Ideally, for best performance, we want to separate Management traffic from vMotion and Provisioning traffic. Likewise, with Distributed Port Groups we can create groups for Production, DMZ or external access. We can also create dedicated groups for IP Storage, with their own set of uplinks, to ensure network availability.
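To make the relationship between these two components concrete, here is a minimal sketch in plain Python (the class and attribute names are hypothetical illustrations, not a VMware API) modeling a vDS whose Distributed Port Groups each draw their uplinks from the shared Uplink Port Group:

```python
from dataclasses import dataclass, field

@dataclass
class DistributedPortGroup:
    name: str
    active_uplinks: list                                 # uplinks carrying this traffic type
    standby_uplinks: list = field(default_factory=list)  # used on fail-over

@dataclass
class DistributedSwitch:
    name: str
    uplink_port_group: list                              # container for all physical connections
    port_groups: list = field(default_factory=list)

    def add_port_group(self, pg: DistributedPortGroup):
        # Every uplink a Port Group references must exist in the shared container.
        for uplink in pg.active_uplinks + pg.standby_uplinks:
            if uplink not in self.uplink_port_group:
                raise ValueError(f"unknown uplink: {uplink}")
        self.port_groups.append(pg)

vds = DistributedSwitch("Infrastructure", uplink_port_group=["Uplink1", "Uplink2", "Uplink3"])
vds.add_port_group(DistributedPortGroup("Management Traffic", ["Uplink1"], ["Uplink2"]))
vds.add_port_group(DistributedPortGroup("vMotion Traffic", ["Uplink2"]))
```

The point of the sketch is the containment rule: Port Groups never own physical connections directly, they only reference uplinks that live in the Uplink Port Group.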
The article series will show all the steps with specific screenshots. The journey consists of the following steps:
- Create virtual Distributed Switches
- Create Distributed Port Groups
- Free up physical network cards for the vDS
- Add Hosts to vDS and manage their configuration
- Migrate from virtual Standard Switch to virtual Distributed Switch
- Migrate vCenter to virtual Distributed Switch
- Migrate Storage Network to a virtual Distributed Switch
The idea is to cover all the aspects mentioned above and raise the stakes in our home lab!
VMware virtual Distributed Switch setup
The first requirement for creating virtual Distributed Switches is an existing Data Center object in our vCenter environment. If one is not available yet, we can follow the easy steps in this article to build a Data Center in VMware vCenter.
From the Data Center level, the context menu gives us the option to create a new virtual Distributed Switch. For my home lab I have opted to create virtual Distributed Switches supporting two main scenarios: Infrastructure and Storage Management.
The Infrastructure vDS will serve the following Distributed Port Groups:
- Management Traffic
- vMotion Traffic
- Provisioning Traffic
- VM Production Network
- VM External Network
The Storage vDS will instead serve the following Distributed Port Groups:
- Primary iSCSI Storage Network connection
- Secondary iSCSI Storage Network connection (for redundancy and fail-over purposes)
Creating separate virtual Distributed Switches for these types of traffic also gives us the option to use different settings, such as the MTU size. When the physical switch supports Jumbo Frames, for example, we can configure this setting on the Storage vDS.
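A useful rule of thumb here: the MTU configured on a vDS should never exceed what the physical switch supports end-to-end, otherwise frames get dropped or fragmented. A small sketch of that sanity check (the function and values are illustrative, not part of any VMware tooling):

```python
# Typical MTU values: 1500 bytes for standard Ethernet, 9000 for Jumbo Frames.
STANDARD_MTU = 1500
JUMBO_MTU = 9000

def pick_vds_mtu(wants_jumbo: bool, physical_switch_max_mtu: int) -> int:
    """Choose the vDS MTU, never exceeding what the physical switch supports."""
    desired = JUMBO_MTU if wants_jumbo else STANDARD_MTU
    return min(desired, physical_switch_max_mtu)

# Infrastructure vDS: standard frames are enough.
print(pick_vds_mtu(False, 9216))   # 1500
# Storage vDS behind a Jumbo-capable switch (many allow up to 9216 bytes).
print(pick_vds_mtu(True, 9216))    # 9000
# Storage vDS behind a switch without Jumbo support: fall back to standard.
print(pick_vds_mtu(True, 1500))    # 1500
```

In other words, enabling Jumbo Frames on the Storage vDS only pays off when every physical hop in the storage path supports them too.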
The first virtual Distributed Switch I’m going to create is the one for the Infrastructure.
Next we select the version of the vDS. Unless we are working in a mixed environment with Hosts running older versions of VMware vSphere, let's go for the latest version available. Older vDS versions remain compatible with newer Hosts, just in case.
From here we can define the number of Uplinks. The Uplinks map to the physical connections available in the Host. It is good practice for multiple Hosts to have similar, if not identical, configurations, for the sake of consistency.
The wizard also offers to automatically create a default Port Group. I prefer to do this on my own, so I un-check this box. Additionally, if the environment supports Network I/O Control, we can enable this functionality here.
Finally, a summary shows the main settings before committing the configuration.
As soon as the virtual Distributed Switch is created, a right click lets us edit its settings. In the General section we can change the number of available Uplinks and also give them user-friendly names. In my case I have a total of 5 physical network cards. The first 3 are dedicated to the Infrastructure virtual Distributed Switch, whereas the remaining two will work with the other vDS to carry the Storage traffic.
To make things easier I rename them to Management, vMotion and Provisioning.
When adding physical NICs to the Host, they will be mapped something like this:
- vmnic0 >> Management Traffic
- vmnic32 >> vMotion Traffic and Management fail-over in case vmnic0 is down
- vmnic33 >> Provisioning Traffic
In particular, the Management traffic will leverage two NICs, or I should say Uplinks. We'll see how to configure this in the next article.
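The mapping above can be expressed as a simple active/standby table. This is only a sketch using this lab's NIC names; in practice the teaming and fail-over order is configured per Port Group in the vSphere Client:

```python
from typing import Optional

# Per-traffic-type teaming order: first NIC is active, the rest are standby.
teaming = {
    "Management Traffic":   ["vmnic0", "vmnic32"],  # vmnic32 takes over if vmnic0 is down
    "vMotion Traffic":      ["vmnic32"],
    "Provisioning Traffic": ["vmnic33"],
}

def effective_nic(traffic_type: str, down_nics: set) -> Optional[str]:
    """Return the NIC that would carry this traffic given a set of failed NICs."""
    for nic in teaming[traffic_type]:
        if nic not in down_nics:
            return nic
    return None  # no healthy uplink left

print(effective_nic("Management Traffic", set()))        # vmnic0
print(effective_nic("Management Traffic", {"vmnic0"}))   # vmnic32
```

Note that vMotion has no standby entry in this sketch: if vmnic32 fails, vMotion traffic is simply down until the NIC recovers, which is an acceptable trade-off in a home lab.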
In the Advanced section we can set more options, including the MTU size for network packets, as mentioned earlier. We'll use a different value (Jumbo Frames) on the Storage virtual Distributed Switch.
Another interesting setting is the switch Discovery Protocol. Both Cisco CDP and LLDP are supported, so owners of non-Cisco switches can still leverage LLDP when available in a multi-vendor environment. Although we are configuring this on a virtual switch, some options still require feature parity with the physical ones; at some point the VMs need to communicate with other hosts on the network!
By default a new container for the Uplinks is created. With a right click on it we can review and edit its settings. For now I would suggest changing the name to something descriptive. From here we can also control other popular settings, such as VLANs, configuration monitoring and traffic shaping. This is useful, for example, for auditing purposes when monitoring traffic to a particular network or VM.
The first part, creating the virtual Distributed Switches, is complete. In the next article we'll review the steps to create the Distributed Port Groups.