
VMware vSphere TCP/IP Stack configuration

A vanilla installation of a VMware vSphere ESXi Host creates three TCP/IP Stacks by default: Default, Provisioning and vMotion. This article covers the configuration of the VMware vSphere TCP/IP Stacks.

The purpose of the VMware vSphere TCP/IP Stack configuration on ESXi Hosts is to define the networking parameters that allow communication between the Hosts themselves, the Virtual Machines, other Virtual Appliances and, last but not least, the Network Storage. Each of the built-in TCP/IP Stacks (System Stacks from now on) defines a separate “traffic profile”. Together with the VMkernel configuration, they allow traffic separation between services such as Management, Hot and Cold Provisioning and Storage Traffic, to name a few, over separate Physical Network Adapters. Different Physical Network Adapters can then be combined in scenarios providing NIC Teaming with Load Balancing or Fail-over Networks.
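As a quick reference, the TCP/IP Stacks currently defined on a Host can be listed from the ESXi command line. This is a read-only check and safe to run on any Host:

esxcli network ip netstack list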

The good news is that VMware vSphere also supports custom TCP/IP Stacks. One good reason to create a custom TCP/IP Stack is to isolate the network traffic that connects the ESXi Hosts to the Network Storage. This makes a lot of sense for environments leveraging Shared Storage configurations where the VMware Datastores are hosted on a SAN or NAS. All the network traffic pertaining to the connections between the Hosts and the Storage Providers, by means of iSCSI or NFS, can be isolated with the combination of a dedicated TCP/IP Stack and a dedicated VMkernel adapter.

In this post I would like to provide an overview of the System and Custom VMware TCP/IP Stacks. In my home lab, since I will be leveraging iSCSI Shared Storage, I will use the built-in “Default” Stack for Management Traffic and a custom “iSCSI” Stack to manage the iSCSI communications between the Hosts and my NAS.

 

Configure VMware vSphere TCP/IP stack

So let’s start with the System Stack using the “Default” profile.

[Screenshot: VMware vSphere TCP/IP Stack configuration]

Some fields, like the Name in this case, are not editable.

[Screenshot: VMware vSphere TCP/IP Stack name]

We can provide the vSphere Hostname. Let’s also make sure that both forward and reverse FQDN name resolutions are working as expected.
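For a quick sanity check from the ESXi shell, the configured DNS servers can be listed and a test lookup performed. The hostname below is just a placeholder; use your own Host FQDN:

esxcli network ip dns server list
nslookup your-esxi-host.yourdomain.local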

[Screenshot: VMware vSphere TCP/IP Stack DNS configuration]

Let’s provide the Gateway IP Address. In my case I’m only using an IPv4 network.
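The resulting routing table can also be verified from the ESXi shell. The command below shows the routes for the Default Stack; a specific stack can be selected with the -N option once it exists:

esxcli network ip route ipv4 list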

[Screenshot: VMware vSphere TCP/IP Stack routing]

Finally, we can choose the congestion control algorithm, either “New Reno” or “CUBIC”, together with the desired maximum number of connections.
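The same advanced settings can also be adjusted from the command line. This is just a sketch: it assumes the --ccalgo and --maxconnections options of the esxcli netstack namespace and the usual internal name of the Default System Stack (defaultTcpipStack); verify with esxcli network ip netstack set --help on your build before using it:

esxcli network ip netstack set -N "defaultTcpipStack" --ccalgo cubic --maxconnections 11000
esxcli network ip netstack get -N "defaultTcpipStack"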

[Screenshot: VMware vSphere TCP/IP Stack advanced settings]

At this point, in order to create the custom TCP/IP Stack for the iSCSI Storage Traffic, we can leverage the command line as shown in the screenshots below by issuing the following esxcli command:

esxcli network ip netstack add -N "YourNetStackName"

[Screenshot: creating a custom VMware vSphere TCP/IP Stack]

This will create the desired Custom Stack. To get more details from the command line:

esxcli network ip netstack get -N "NetStackName"

[Screenshot: custom VMware vSphere TCP/IP Stack details]

Although it is possible to view the details of the custom VMware vSphere TCP/IP Stack, it is not possible to edit some of its settings from the GUI until we associate this TCP/IP Stack with the intended VMkernel adapter, which will be covered in the next article.
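As a preview of that step, a new VMkernel adapter can be associated with the custom Stack from the command line as well. The interface name, port group name and IP details below are purely examples from my lab and need to be adapted; the port group must already exist on a Standard Switch:

esxcli network ip interface add -i vmk2 -p "iSCSI-PG" -N "iSCSI"
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.50.10 -N 255.255.255.0 -t static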

Another point to keep in mind is that, although the creation of Custom Stacks helps with the isolation of different traffic types, they have to be created manually on each vSphere Host that will participate in that traffic. So, for example, the “iSCSI” Custom Stack needs to be created on each vSphere Host sharing the Datastores hosted on the Shared Storage. From this perspective, and for this particular example, it is also beneficial to set up Port Binding for the Storage Devices associated with the “iSCSI” TCP/IP Stack. Failing to do so will result in a loss of connectivity to the intended Storage, which is ultimately the situation you want to avoid, especially when configuring a vSphere Cluster where all resources are expected to be redundant. I will cover this in more detail in a dedicated article.
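For reference, once the VMkernel adapter on the “iSCSI” Stack exists, Port Binding can also be configured from the command line. The adapter name vmhba64 is just an example; list the adapters first to find the name of your Software iSCSI adapter:

esxcli iscsi adapter list
esxcli iscsi networkportal add -A vmhba64 -n vmk2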

Following the same principles, and depending on the available Hardware, we can also use a System Stack to provide network connectivity for Hot and Cold Provisioning. This ensures no impact on the other networks during VM Migration, Snapshot and Cloning operations.
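Assigning a VMkernel adapter to the Provisioning System Stack follows the same pattern used above for the custom Stack. The internal name of that stack is typically vSphereProvisioning, and the interface and port group names below are again just examples:

esxcli network ip interface add -i vmk3 -p "Provisioning-PG" -N "vSphereProvisioning"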
