VMware vSphere: Setup VMkernel Network

After reviewing the settings and configuration of additional Physical Network Adapters and how to set up the VMware TCP/IP Stacks, it's now time to take a quick look at the configuration details for VMkernel adapters. This will also allow us to complete the configuration of the custom TCP/IP Stack we created to isolate the iSCSI Storage traffic. As previously stated, the combination of a VMkernel adapter together with a TCP/IP Stack provides network connectivity to the Host and will accommodate system traffic for vMotion, IP Storage and more.

VMkernel adapters need to be created on each vSphere Host participating in the cluster, for example when sharing the same storage. The same principle applies to a destination vSphere Host serving Replication services. In our home lab there will be at least 4 separate VMkernel adapters with the following functions:

  • VMkernel 0: will serve the Management traffic
  • VMkernel 1: will serve the Network Storage traffic
  • VMkernel 2: will serve the vMotion traffic
  • VMkernel 3: will serve the Cold Provisioning traffic
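For reference, once these adapters exist they can also be tagged for their intended services from the ESXi shell with esxcli. This is only a sketch: the adapter names vmk0–vmk3 are an assumption based on creation order, and Network Storage traffic needs no tag (it is steered by the TCP/IP stack and routing instead):

```shell
# Tag each VMkernel adapter with the service it should carry.
# Adapter names (vmk0-vmk3) are assumptions based on creation order.
esxcli network ip interface tag add -i vmk0 -t Management
esxcli network ip interface tag add -i vmk2 -t VMotion
esxcli network ip interface tag add -i vmk3 -t vSphereProvisioning
```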

As per usual, by clicking on the Add Host Networking icon we can start the wizard and select the first option

vmware-vmkernel-01

We can now choose which vSwitch this VMkernel adapter will live on. This depends on the current requirements: spreading VMkernel adapters over different vSwitches can give more flexibility in the configuration, but also adds a level of complexity. There are plenty of combinations available depending on the available resources and the desired state. For my home lab I would suggest 2 separate vSwitches, Management and Storage:

  • Management
    • Host Management
    • VM Traffic
    • vMotion
  • Storage
    • IPStorage service
    • Provisioning service

One other element to take into consideration is the number of physical switches available for "uplinks" and the features they support. A dedicated post will cover this topic in more detail

vmware-vmkernel-02

So let's give a name to the VMkernel adapter and specify the custom TCP/IP Stack created earlier. For now I will not use VLAN tagging, as I want to cover this topic together with other advanced settings in a separate article
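The same two steps can be scripted with esxcli. This is a hedged sketch: the stack name "iSCSI-Stack", the port group name "IPStorage" and the adapter name vmk1 are all placeholders for whatever was created earlier in this lab:

```shell
# Create the custom TCP/IP stack (the name is an example for this lab)
esxcli network ip netstack add -N "iSCSI-Stack"
# Create the VMkernel adapter on that stack, attached to an existing port group
esxcli network ip interface add -i vmk1 -p "IPStorage" -N "iSCSI-Stack"
```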

vmware-vmkernel-03

Let's provide a static IP Address. I always recommend having DNS records and name resolution working properly, along with FQDNs
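The static addressing step can also be done from the shell; the address and netmask below are example values, not the ones used in this lab:

```shell
# Assign a static IPv4 address to the VMkernel adapter (example values)
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.57.10 -N 255.255.255.0
```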

vmware-vmkernel-04

And finally a summary before committing the configuration changes

vmware-vmkernel-05

At this point, if we take a look at the VMkernel Adapters view, we notice a new entry similar to the one in the screenshot below. Very importantly, other useful information can be checked from this view, for example which VMkernel adapters serve which services. The vMotion service can be enabled on only one TCP/IP Stack at a time: it starts on the Default System Stack, and once it is configured on a separate TCP/IP Stack it is disabled on the first one, and vice versa
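The same information is available from the shell. These commands list the VMkernel adapters with their port groups and TCP/IP stacks, the configured stacks, and the services tagged on a given adapter (vmk1 is just an example name):

```shell
# Show every VMkernel adapter, its port group and its TCP/IP stack
esxcli network ip interface list
# Show the configured TCP/IP stacks
esxcli network ip netstack list
# Show which services are tagged on a given adapter (vmk1 as an example)
esxcli network ip interface tag get -i vmk1
```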

vmware-vmkernel-06

So now that a VMkernel adapter is associated with the custom TCP/IP Stack, let's finish the configuration of that Stack by providing the DNS settings and making sure the Routing is configured and working as expected
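Routing for a custom stack is held separately from the default stack, so it can be set and verified per stack. A sketch, assuming the stack name "iSCSI-Stack" and an example gateway address:

```shell
# Set the default gateway for the custom stack (gateway address is an example)
esxcli network ip route ipv4 add -N "iSCSI-Stack" -n default -g 192.168.57.1
# Verify the routing table of the custom stack
esxcli network ip route ipv4 list -N "iSCSI-Stack"
```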

vmware-vmkernel-07

Now, if we want to dedicate a specific physical network adapter, by clicking on the 3rd icon from the left we can start the wizard to manage the physical NICs associated with the vSwitch, which is currently working with the first available network card, "vmnic0"

vmware-vmkernel-08

In this case I'm adding "vmnic36"
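Adding an uplink to a standard vSwitch can also be scripted; the vSwitch name below is an assumption for this lab:

```shell
# Attach the physical NIC to the storage vSwitch (vSwitch name is an example)
esxcli network vswitch standard uplink add -v "vSwitch1" -u vmnic36
```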

vmware-vmkernel-09

Once saved, the refreshed view shows that both physical NICs will work as a NIC team. This is not what I want, as I want to dedicate named network adapters to specific traffic

vmware-vmkernel-10

By selecting the IP Storage Port Group served by the iSCSI VMkernel adapter and editing the Teaming and Failover properties, I can simply change the order of the adapters, making sure "vmnic36" is in the list of Active ones. Essentially, for this Port Group we are overriding the default settings inherited from the vSwitch configuration level
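This port-group-level override can be expressed with esxcli as well. A sketch, assuming "IPStorage" is the port group name and vmnic0 is the other uplink to be moved to standby:

```shell
# Override the vSwitch teaming policy for this port group only:
# vmnic36 becomes the active uplink, vmnic0 (assumed) goes to standby
esxcli network vswitch standard portgroup policy failover set -p "IPStorage" -a vmnic36 -s vmnic0
```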

vmware-vmkernel-11

And now, after a save and refresh, the IP Storage traffic is using the intended Physical Network Adapter with a custom TCP/IP Stack

vmware-vmkernel-12

In this article we have seen how many configurations are possible, both for our home lab and for real scenarios. Ideally we want to probe every single component that will be part of our design, making sure we don't leave single points of failure. vSphere Networking definitely provides the options and tools needed to cover all scenarios with regards to Clustering, Load Balancing and Failover requirements. Next in this series we'll take a look at putting all these things together by rolling out our first Distributed vSwitches. The home lab has never been more entertaining than this!

Michele Domanico

Passionate about Virtualization, Storage, Data Availability and Software Defined Data Center technologies. The aim of Domalab.com is sharing with the Community the knowledge and experience gained with customers, industry leaders and like minded peers. Always open to constructive feedback and new challenges.
