
Migrate Storage Network to VMware virtual Distributed Switches

Here we are with a new article on how to migrate the Storage Network from a virtual Standard Switch to a virtual Distributed Switch. The steps are very similar to the ones covered in the first article of the series.

In particular, in this instance we'll configure a virtual Distributed Switch dedicated to Storage traffic. This gives us the ability to separate the various traffic types, like Management, hot and cold Provisioning or even VM traffic, from the one dedicated to reading the actual data blocks sitting in the VMFS file system.

By doing so we can not only improve overall performance, but also further harden our environments from a security standpoint, simply by moving the Storage traffic to a dedicated network.

This in simple terms means we can isolate and separate the Storage Network traffic leveraging all the features available at the virtual Distributed Switch and Port Group levels.

Following the considerations in the previous article, we are now ready to focus on creating a virtual Distributed Switch to migrate the Storage Network. In the next steps we'll also visit the options for advanced settings to further optimize performance, security and fail-over scenarios.


Create new Distributed Switch to migrate Storage Network

From the Data Center object in the vCenter console let's right-click to create a new virtual Distributed Switch. This starts the wizard to create our Storage vDS.

Migrate Storage Distributed Switch wizard

In this step let's go for the latest version available, unless we have specific needs due to a mixed vSphere Host environment. An existing vDS can also be upgraded to the latest version after upgrading the vSphere Hosts.

Migrate Storage distributed switch version

Let's specify the number of uplinks available to the Hosts. Ideally all vSphere Hosts should have the same physical network cards available. In my home lab I'm currently using 2 separate physical network cards. They will serve a Primary and a Secondary Network to support redundancy scenarios.

Migrate Storage distributed switch uplinks

Let’s review the main settings and commit the changes in the wizard.

Migrate Storage distributed switch summary

Next we can edit some basic settings for the virtual Distributed Switch. For example, I like to name the uplinks for easier management. For this purpose the 2 physical network cards (vmnic34 and vmnic35 in my case) will provide connectivity for the Primary network on iSCSI-1 and the Secondary network on iSCSI-2 for fail-over scenarios.
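The naming convention can be captured as a simple mapping. A minimal sketch, using the vmnic and uplink names from this lab (they are purely illustrative, not VMware defaults):

```python
# Lab-specific mapping of physical NICs to named vDS uplinks.
uplink_names = {
    "vmnic34": "iSCSI-1",  # Primary storage network
    "vmnic35": "iSCSI-2",  # Secondary network, used for fail-over
}

for vmnic, uplink in sorted(uplink_names.items()):
    print(f"{vmnic} -> {uplink}")
```

Keeping a map like this documented makes it much easier to spot cabling or assignment mistakes later, when the same uplink names appear across every Host.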

Migrate Storage uplink settings

Here as well we can change advanced settings, like for example the MTU (Maximum Transmission Unit), which controls the largest packet size sent on the network. By default it is set to 1500 Bytes. We can raise this value to 9000 and use the so-called Jumbo Frames. For this to work it is imperative that all physical switches and physical network cards used for the Storage Network support Jumbo Frames. Failing this, the oversized packets might be fragmented, causing delays, or even dropped completely depending on the physical switch network policy. So it is better to check the documentation of the physical switch in use.
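Before committing to an MTU of 9000 it is worth verifying the path end to end. A common check is a ping with the don't-fragment bit set and a payload sized so the packet exactly fills the MTU. A small sketch of the arithmetic behind the payload size, assuming the standard IPv4 and ICMP header sizes:

```python
# Payload size for a ping that exactly fills a given MTU.
# The MTU counts the IP packet (headers plus payload), not the Ethernet frame header.
IP_HEADER = 20    # bytes, IPv4 header without options
ICMP_HEADER = 8   # bytes, ICMP echo header

def max_ping_payload(mtu: int) -> int:
    """Largest ICMP payload that fits in a single unfragmented packet."""
    return mtu - IP_HEADER - ICMP_HEADER

print(max_ping_payload(1500))  # 1472, for the standard MTU
print(max_ping_payload(9000))  # 8972, for Jumbo Frames
```

So a don't-fragment ping with an 8972-byte payload from one VMkernel to another is a quick way to confirm Jumbo Frames work across the whole path; if any device in between is still at MTU 1500, that ping fails.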

In my case I'm currently using four small 8-port NetGear Gigabit switches, and Jumbo Frames are supported.

Migrate Storage distributed switch settings

We are now ready to create our first Port Group for the Primary Storage Network connection. As usual, a right-click on the Storage vDS initiates the wizard to add the Distributed Port Group. I'll call mine iSCSI-1.

Migrate Storage Port Group Production

We can leave the default settings, unless we have specific requirements like changing the VLAN configuration or other advanced settings. All of them are automatically inherited from the parent virtual Distributed Switch upon creation, and of course we can change them at any time.

Migrate Storage Port Group settings

Review the settings and finish the wizard.

Migrate Storage Port Group summary


Configure virtual Distributed Switch to migrate Storage Network

In this second part of the article we select the newly created vDS and migrate the Storage Network settings to it. From the Storage virtual Distributed Switch let's right-click to manage the vSphere Hosts and start the configuration wizard. Let's also make sure we check the option for template mode and add the desired vSphere Hosts.

Migrate Storage manage Hosts

At this point we can select the desired Host to use as a template.

Migrate Storage select template Host

As per screenshot below we can start managing the physical adapters for the uplinks and the associated VMkernel adapters.

Migrate Storage select network adapter

In this instance vmnic35 is currently not in use, so let's move this one to the virtual Distributed Switch and assign the uplink. Before proceeding, let's make sure to apply the changes to all the Hosts below.

Migrate Storage manage physical network

We need to associate the physical network card to the intended uplink. In my home lab vmnic35 will serve the iSCSI-2 network connection, essentially the one for redundancy and fail-over. We'll move this one first as it is not currently in use.

Migrate Storage select uplink

As usual, let's review that the uplink port is correct and then apply the changes to all the vSphere Hosts below.

Migrate Storage apply to all uplink settings

We are now ready to migrate the VMkernel used on the virtual Standard Switch and associate it with the newly created one. In my home lab vmnic35 is associated with vmk4. I will keep this setting consistent on the new Storage switch as well, as per the screenshot below. Next let's apply to all vSphere Hosts.

Migrate Storage manage VMkernel

As per previous considerations we’ll choose the iSCSI-2 Port Group.

Migrate Storage assign Port Group

At this point, since the VMkernel carries the TCP/IP stack settings, we are required to specify the IP settings for the vSphere Hosts on which the VMkernels will be configured. In the case of multiple vSphere Hosts it is possible to specify the starting IP address together with the number of Hosts to configure.

The screenshot below shows that the same configuration will be applied to Hosts number "2" and "3"; Host number "1" is effectively used as the template!
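The "starting IP plus number of Hosts" behaviour can be sketched with Python's ipaddress module. The subnet and starting address below are made-up lab values, not from the wizard:

```python
import ipaddress

def vmkernel_ips(start: str, count: int) -> list:
    """Return `count` consecutive IPv4 addresses starting at `start`,
    mirroring how sequential VMkernel IPs are assigned host by host."""
    first = ipaddress.IPv4Address(start)
    return [str(first + i) for i in range(count)]

# Host 1 is the template; hosts 2 and 3 receive the next addresses in sequence.
print(vmkernel_ips("10.0.20.11", 3))  # ['10.0.20.11', '10.0.20.12', '10.0.20.13']
```

When planning this range it is worth double-checking that none of the generated addresses collide with anything already on the storage VLAN.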

Next the wizard will analyze the impact of migrating the VMkernels and uplinks responsible for carrying the Storage traffic. In this case no impact is detected on any vSphere Host. Failing this, we might lose connectivity to the VMs, including the vCenter Server or vCenter Server Appliance. So proceed with caution and read the information carefully in case of issues at this step!

Migrate Storage analyze impact

The wizard now shows all the basic information we can review before final commit.

Migrate Storage wizard summary

The wizard will kick off the task in the background and we should still be able to access our virtual machines including the vCenter Server.

As a next step if we now take a look at the iSCSI-2 Port Group Related Objects tab we can clearly see the associated vSphere Hosts.

Migrate Storage Port Group associated Hosts

And this concludes this part on how to migrate storage network to a virtual Distributed Switch.


Final considerations

In order to successfully migrate the Storage Network traffic, let's make sure we observe a few rules that can help us with this task:

  • Let's make sure we have free physical network cards we can use for this purpose.
  • Let's create the vDS and Port Groups in advance with the desired settings to make our job easier.
  • On the vSS the 2 network uplinks should be configured with Round Robin as per the Multi-Path configuration.
  • On the vSS the 2 network uplinks should be configured on dedicated Port Groups. They cannot both be active at the same time for the same Port Group.
  • Let's make sure the proper NIC binding configuration is in place for the vSS.
  • Only non-active uplinks and VMkernels can be moved at a time. So let's determine which ones are carrying I/O and migrate those last.
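The last rule, moving the idle uplink and VMkernel first and the active pair afterwards, can be sketched as a small ordering helper. The vmnic and vmk names below are the illustrative ones from this lab:

```python
# Order uplink/VMkernel pairs so idle ones migrate before active ones.
uplinks = [
    {"vmnic": "vmnic34", "vmk": "vmk3", "active_io": True},   # currently carrying iSCSI I/O
    {"vmnic": "vmnic35", "vmk": "vmk4", "active_io": False},  # idle, safe to move first
]

# False sorts before True, so idle adapters come first in the migration order.
migration_order = sorted(uplinks, key=lambda u: u["active_io"])
print([u["vmnic"] for u in migration_order])  # ['vmnic35', 'vmnic34']
```

Once the idle pair is confirmed working on the vDS, the storage path fails over to it and the previously active pair becomes safe to move in a second pass.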

About the author

Michele Domanico

Passionate about Virtualization, Storage, Data Availability and Software Defined Data Center technologies. The aim of Domalab.com is sharing with the Community the knowledge and experience gained with customers, industry leaders and like minded peers. Always open to constructive feedback and new challenges.
