Let’s take a look in this article at how to migrate VMkernel adapters on a vSphere Host. This is a follow-up to the previous step, where we reviewed the main details for configuring the physical network cards in VMware vSphere. Also, in my home lab environment I have the luxury of playing with multiple physical network adapters, thanks to the excellent network drivers that let me use USB network adapters on my vSphere Hosts.
In this article we’ll review the steps on how to migrate a VMkernel adapter from a virtual Standard Switch to a vSphere virtual Distributed Switch. The main advantage is that we can pretty much keep all the settings associated with the existing VMkernel configuration and bring them over to the vSphere vDS, keeping the environment consistent.
In a nutshell the VMkernel is a “software network layer” which provides connectivity to the Hosts for different types of traffic by means of the physical network cards, or vmnics, associated with them. This means we can physically separate and isolate Management, Provisioning, vSAN, IP Storage, Replication and virtual machine traffic, each on its own broadcast domain and even on different wires!
Following this example in my home lab I have something similar to this:
- vmnic0 > vmk0 > VMkernel for Management Traffic
- vmnic32 > vmk1 > VMkernel for vMotion Traffic
- vmnic33 > vmk2 > VMkernel for Provisioning Traffic
- vmnic34 > vmk3 > VMkernel for iSCSI-01 Traffic
- vmnic35 > vmk4 > VMkernel for iSCSI-02 Traffic
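As a quick way to double-check this mapping outside the vSphere Client, a minimal sketch using pyVmomi (the vSphere Python SDK) like the one below can list the VMkernel adapters and physical NICs of every host. The vCenter address and credentials here are hypothetical placeholders for a lab like this one.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical lab vCenter and credentials - adjust to your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    # Physical adapters (vmnics) seen by the host
    print("  vmnics:", [pnic.device for pnic in host.config.network.pnic])
    # VMkernel adapters (vmk) with their port group and IP address
    for vnic in host.config.network.vnic:
        print(f"  {vnic.device}: portgroup={vnic.portgroup or '(dvPort)'} "
              f"ip={vnic.spec.ip.ipAddress}")
view.Destroy()
Disconnect(si)
```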
In addition I’m using vmnic32 to provide redundancy to vmnic0 for the Management Traffic, and vmnic0 as the uplink for the virtual machines in the so-called external network. The Production virtual machines have no direct internet access or associated vmnic: all the connections are routed through a firewall running ClearOS.
This is just a sample setup and of course we also have the flexibility to change the configuration based on current requirements. That’s why it is essential to make sure the VMkernel adapters are configured correctly, so that the routing between the different networks works as expected.
This is extremely important because when working with different vSphere Host versions there might be different behaviors, namely the ability to keep one or multiple routing tables in memory. We’ll touch upon this topic in a separate article.
As for VMkernel setup and configuration, we can leverage the default TCP/IP stacks for the Management, Provisioning and vMotion traffic types. In the large majority of installations these are perfectly fine. Should it be required, it is also possible to create a custom TCP/IP stack and associate a VMkernel adapter with it.
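To see which TCP/IP stack each VMkernel adapter is currently bound to, a small pyVmomi sketch along these lines can be used (same hypothetical connection details as in the earlier listing):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)  # hypothetical credentials
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    # TCP/IP stacks available on the host (defaultTcpipStack, vmotion, ...)
    print("  stacks:", [ns.key for ns in host.config.network.netStackInstance])
    # Which stack each VMkernel adapter uses
    for vnic in host.config.network.vnic:
        print(f"  {vnic.device} -> {vnic.spec.netStackInstanceKey}")
view.Destroy()
Disconnect(si)
```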
At this point we are ready to walk through the steps to migrate the VMkernel adapters in our vSphere environment.
How to migrate VMkernel in VMware vSphere
From the virtual Distributed Switch we created for the infrastructure, let’s right-click to start the wizard and manage the host networking. Let’s go for the second option as per the screenshot below.
In this step let’s add all the vSphere Hosts we want to manage in one go, ideally all of them from the same VMware Cluster. It is also possible to add standalone vSphere Hosts.
Additionally we can also enable the Template Mode and then choose the Template Host on the next screen. This is very useful when all the vSphere Hosts have identical configurations.
At this point from the wizard let’s choose the option to manage and migrate VMkernel adapters.
For each vSphere Host the wizard now shows the current settings, including the names of the virtual switches and the Port Groups. In this example we’ll migrate the VMkernel adapter from vSwitch0 using the “Management Network” Port Group.
The next step is to click on “Assign Port Group”.
In my example I have already created a “DPort-Management” Port Group on the virtual Distributed Switch.
Let’s repeat the same steps on all the vSphere Hosts where we intend to migrate the VMkernel adapter for the Management Traffic.
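The wizard is the supported and safest path, but for reference this is roughly what happens under the hood. The sketch below, again with hypothetical vCenter, host, switch and port group names, reuses the adapter’s existing IP settings and simply points it at the distributed port group instead of the standard one:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical names used purely for illustration.
VCENTER, USER, PWD = "vcsa.lab.local", "administrator@vsphere.local", "VMware1!"
HOST_NAME, VMK = "esxi-01.lab.local", "vmk0"
DVS_NAME, DVPG_NAME = "vDS-Infrastructure", "DPort-Management"

ctx = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

host = find(vim.HostSystem, HOST_NAME)
dvs = find(vim.DistributedVirtualSwitch, DVS_NAME)
dvpg = find(vim.dvs.DistributedVirtualPortgroup, DVPG_NAME)
vnic = next(v for v in host.config.network.vnic if v.device == VMK)

# Keep the existing IP and MTU, only change where the adapter connects.
spec = vim.host.VirtualNic.Specification(
    ip=vnic.spec.ip,
    mtu=vnic.spec.mtu,
    distributedVirtualPort=vim.dvs.PortConnection(switchUuid=dvs.uuid,
                                                  portgroupKey=dvpg.key))
host.configManager.networkSystem.UpdateVirtualNic(VMK, spec)
Disconnect(si)
```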
As a next step the wizard analyzes the traffic and determines whether the changes will have any impact. In particular, this step makes sure the Storage traffic is not affected. Failing this we might lose access to our virtual machines, including the vCenter or VCSA!
As per the screenshot below, the wizard determines there is no impact.
We are now ready to review the main information in the summary before committing the changes.
If we now check the settings for the virtual Distributed Switch we can see that vmnic32 (used for Management Traffic fail-over) is associated with the newly created distributed Port Group. Effectively the Management Traffic is now running on vmnic32 on the virtual Distributed Switch.
If we take a look at the virtual Standard Switch, vmnic0 is no longer in use, as expected. This means we can now remove vmnic0 from the virtual Standard Switch and make it available for the Distributed one.
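If we want to verify this per host from a script as well, a quick pyVmomi check (same hypothetical connection details as before) can print which vmnics are currently claimed by each standard switch and by the distributed switch proxy on the host:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)  # hypothetical credentials
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    # Uplinks still claimed by the standard switches
    for vsw in host.config.network.vswitch:
        print(f"  vSS {vsw.name}: {[key.split('-')[-1] for key in vsw.pnic]}")
    # Uplinks claimed by the distributed switch proxy on this host
    for psw in host.config.network.proxySwitch:
        print(f"  vDS {psw.dvsName}: {[key.split('-')[-1] for key in psw.pnic]}")
view.Destroy()
Disconnect(si)
```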
Let’s go into the properties of vSwitch0 and manage the network adapters. As per the screenshot below we can see a list of unused adapters, or vmnics. It is safe to select them and hit the red remove button to take them out of the switch configuration. The vmnic0 can now be used by the virtual Distributed Switch. Also, let’s make sure to repeat this step on all the intended vSphere Hosts.
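Purely as a sketch of the same removal done via the API (hypothetical names again, and only safe once the vmnic really is unused), the standard switch spec can be updated so that vmnic0 is dropped from its uplink bond:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)  # hypothetical credentials
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-01.lab.local")  # hypothetical host
view.Destroy()

vsw = next(s for s in host.config.network.vswitch if s.name == "vSwitch0")
spec = vsw.spec
remaining = [nic for nic in spec.bridge.nicDevice if nic != "vmnic0"]
# Keep the bond bridge with the remaining uplinks, or drop it entirely if none are left.
spec.bridge = (vim.host.VirtualSwitch.BondBridge(nicDevice=remaining)
               if remaining else None)
host.configManager.networkSystem.UpdateVirtualSwitch("vSwitch0", spec)
Disconnect(si)
```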
This concludes a quick overview on how to migrate a VMkernel adapter from a virtual Standard Switch to a virtual Distributed Switch. The same considerations apply to the remaining Port Groups managing other types of network traffic. For details specific to the VMkernel adapters associated with the Storage network traffic there is a separate article.
Hopefully this article covered the basics we need to know for our home lab. In the next step we’ll review how to migrate the vCenter to a virtual Distributed Switch.