Here we are with a new article on how to migrate the Storage Network from a virtual Standard Switch to a virtual Distributed Switch. The steps are very similar to those covered in the first article of the series.
In particular, in this instance we'll configure a virtual Distributed Switch dedicated to Storage Traffic. In essence, this gives us the ability to separate the various traffic types, like Management, hot and cold Provisioning or even VM Traffic, from the traffic dedicated to reading the actual content of the data blocks sitting in the VMFS file system.
By doing so we can not only improve the overall performance, but at the same time harden our environments even more with regards to security, simply by adding the option to migrate Storage traffic to a dedicated network.
In simple terms, this means we can isolate and separate the Storage Network traffic, leveraging all the features available at the virtual Distributed Switch and Port Group levels.
As per the considerations in the previous article, at this step we are now ready to focus on the creation of a virtual Distributed Switch to migrate the Storage Network. In the next steps we'll also visit the advanced settings options to further optimize performance, security and fail-over scenarios.
Create a new Distributed Switch to migrate the Storage Network
From the Data Center object in the vCenter console, let's do a right-click to create a new virtual Distributed Switch. This will start the wizard to create our Storage vDS.
In this step let's go for the latest version available, unless we have specific needs due to a mixed vSphere Host environment. For an existing vDS it is also possible to upgrade to the latest version after upgrading the vSphere Hosts.
Let's specify the number of uplinks available to the Hosts. Ideally, all vSphere Hosts should have the same physical network cards available. For my home lab I'm currently using 2 separate physical network cards. They will serve a Primary and a Secondary Network to support redundancy scenarios.
Let’s review the main settings and commit the changes in the wizard.
Next is to edit some basic settings for the virtual Distributed Switch. For example, I like to name the uplinks for easier management. For this purpose, the 2 physical network cards (vmnic34 and vmnic35 in my case) will provide connectivity for the Primary network on iSCSI-1 and the Secondary network on iSCSI-2 for fail-over scenarios.
In this step we can also change advanced settings like, for example, the MTU (Maximum Transmission Unit). By default it is set to 1500 Bytes. We can raise this value to 9000 and use the so-called Jumbo Frames. In order for this to work, it is imperative that all the physical switches and physical network cards used for the Storage Network support Jumbo Frames end to end. Failing this, the oversized frames might be fragmented, causing delays, or even dropped completely depending on the physical switch Network Policy. So it is better to check the documentation of the physical switches in use.
In my case I'm currently using 4 small 8-port NetGear Gigabit Switches, and Jumbo Frames are supported.
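If we script it, the uplink names and the MTU live in the same reconfigure call. Here is a minimal sketch continuing from the previous one ('si' and 'content' already connected); the uplink names are simply the labels I use in my lab.

```python
from pyVmomi import vim
# Continues from the previous sketch: 'content' is already retrieved.

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == 'Storage-vDS')

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion  # required by ReconfigureDvs_Task
spec.maxMtu = 9000                             # Jumbo Frames: end to end only!
spec.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=['iSCSI-1-Primary', 'iSCSI-2-Secondary'])
dvs.ReconfigureDvs_Task(spec)
```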
We are now ready to create our first Port Group for the Primary Storage Network connection. As per usual, a right-click on the Storage vDS initiates the wizard to add the Distributed Port Group. I'll call mine iSCSI-1.
We can leave the default settings, unless we have specific requirements like changing the VLAN configuration or other advanced settings. All of them are automatically inherited from the parent virtual Distributed Switch upon creation. Of course, we can change them at any time.
Review the settings and finish the wizard.
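Both Port Groups can also be created in a single call. A short sketch under the same assumptions as above ('dvs' already looked up); the values match the wizard defaults.

```python
from pyVmomi import vim
# Continues from the previous sketches: 'dvs' is the Storage vDS.

pg_specs = [
    vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name,
        type='earlyBinding',  # static binding, the wizard default
        numPorts=8)
    for name in ('iSCSI-1', 'iSCSI-2')]
dvs.AddDVPortgroup_Task(pg_specs)
```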
Configure the virtual Distributed Switch to migrate the Storage Network
In this second part of the article we now select the newly created vDS to migrate the Storage Network settings. From the Storage virtual Distributed Switch, let's do a right-click to manage the vSphere Hosts and start the configuration wizard. Let's make sure we also check the template mode option and add the desired vSphere Hosts.
At this point we can select the desired Host to use as a template.
As per screenshot below we can start managing the physical adapters for the uplinks and the associated VMkernel adapters.
In this instance vmnic35 is currently not in use. So let's move this one to the virtual Distributed Switch and assign the Uplink. Of course, before proceeding let's make sure to apply the changes to all the Hosts below.
We need to associate the physical network card with the intended uplink. In my home lab, vmnic35 will serve the iSCSI-2 network connection, essentially the one for redundancy and fail-over. We'll move this one first since it is not currently in use.
As per usual, let's verify the uplink port is correct and then apply the changes to all the vSphere Hosts below.
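Scripted, the "apply to all Hosts" part becomes a loop over the Host objects. A hedged sketch continuing from the earlier ones; it assumes vmnic35 is idle on every Host, as it is in my lab.

```python
from pyVmomi import vim
# Continues from the previous sketches; assumes vmnic35 is idle on every Host.

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
hosts = list(view.view)

spec = vim.DistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
for host in hosts:
    spec.host.append(vim.dvs.HostMember.ConfigSpec(
        operation='add',  # use 'edit' if the Host is already a vDS member
        host=host,
        backing=vim.dvs.HostMember.PnicBacking(
            pnicSpec=[vim.dvs.HostMember.PnicSpec(
                pnicDevice='vmnic35',
                # A free port in the uplink port group is picked automatically;
                # set uplinkPortKey instead to target a specific named uplink.
                uplinkPortgroupKey=dvs.config.uplinkPortgroup[0].key)])))
dvs.ReconfigureDvs_Task(spec)
```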
We are now ready to migrate the VMkernel used on the virtual Standard Switch and associate it with the newly created one. In my home lab, vmnic35 is associated with vmk4. I will make this setting consistent on the new Storage Switch as well, as per the screenshot below. Next, let's apply the change to all vSphere Hosts.
As per previous considerations we’ll choose the iSCSI-2 Port Group.
At this point, since the VMkernel carries the TCP/IP stack settings, we are required to specify the IP settings for the vSphere Hosts whose VMkernels will be configured. In the case of multiple vSphere Hosts, it is possible to specify the starting IP address together with the number of Hosts to configure.
The screenshot below shows that the same configuration will be applied to Hosts number "2" and "3". Effectively, Host number "1" is used as the template!
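The VMkernel migration itself boils down to a single call per Host. A sketch with my lab's values (vmk4 and the iSCSI-2 Port Group); since we only change the port backing, the adapter keeps its existing IP configuration.

```python
from pyVmomi import vim
# Continues from the previous sketches; vmk4 / iSCSI-2 are my lab's values.

pg = next(p for p in dvs.portgroup if p.name == 'iSCSI-2')

for host in hosts:
    nic_spec = vim.host.VirtualNic.Specification(
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs.uuid,
            portgroupKey=pg.key))
    # Only the port backing changes; the existing IP settings stay in place.
    host.configManager.networkSystem.UpdateVirtualNic('vmk4', nic_spec)
```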
Next, the wizard will analyze the impact of migrating the VMkernels and Uplinks responsible for carrying the Storage Traffic. In this case no impact is detected on any vSphere Host. Failing this, we might lose connectivity to the VMs, including the vCenter or vCenter Server Appliance. So proceed with caution and read the information carefully in case of issues at this step!
The wizard now shows all the basic information we can review before the final commit.
The wizard will kick off the task in the background and we should still be able to access our virtual machines including the vCenter Server.
As a next step, if we now take a look at the iSCSI-2 Port Group Related Objects tab, we can clearly see the associated vSphere Hosts.
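The same check can be done programmatically; under the assumptions of the earlier sketches, the Port Group object exposes its backing Hosts directly.

```python
# Continues from the previous sketches: 'pg' is the iSCSI-2 Port Group.
for host in pg.host:
    print(host.name)
```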
And this concludes this part on how to migrate the Storage Network to a virtual Distributed Switch.
Final considerations
In order to successfully migrate Storage Network Traffic let’s make sure we observe a few rules that can help us with this task:
- Let's make sure we have free physical network cards we can use for this purpose.
- Let's create the vDS and Port Groups beforehand with the desired settings to make the job easier.
- On the vSS, the 2 network uplinks should be configured with Round Robin as per the Multi-Path configuration (a verification sketch follows this list).
- On the vSS, the 2 network uplinks should be configured on dedicated Port Groups. They cannot both be active at the same time for the same Port Group.
- Let's make sure the proper NIC Binding configuration is in place for the vSS.
- Only non-active uplinks and VMkernels can be moved at any one time. So let's determine which ones are in use (carrying I/O) and migrate them later.
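For the Round Robin point, here is a minimal verification sketch under the same assumptions as the earlier ones; it prints the path selection policy per LUN so we can confirm VMW_PSP_RR before touching the uplinks.

```python
# Continues from the previous sketches; lists each LUN's path selection policy.

for host in hosts:
    multipath = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
    for lun in multipath.lun:
        # Round Robin shows up as 'VMW_PSP_RR'.
        print(host.name, lun.id, lun.policy.policy)
```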