
Home lab Network overview: Putting all together

An easy guide on how to configure VMware vSphere networking in your home lab to achieve redundancy and performance

In the previous articles we visited the options and configurations needed to set up the networking for our home lab environment. The journey started with the physical setup of the host (in my case an Intel NUC 6i5SYH), including the installation of the ESXi hypervisor. We then saw how to add extra network cards to the vSphere host in order to use them for different purposes, namely separating traffic types and increasing security.

From a hardware perspective we have covered almost everything needed for the first steps. From a software point of view we have also covered the main building blocks in the previous articles.

It is now time to put everything together and get the most out of our home lab.

The possible configurations vary and depend heavily on the available resources and, ultimately, on the desired requirements. The purpose of this article is to walk through a sample configuration, analyse the possible points of failure and show how to avoid them.

Let’s assume, for example, a configuration with two vSphere hosts, two physical switches and a shared storage appliance (a Synology NAS in my case), and that ideally we would like to satisfy the following criteria:

  • Ensure the best network performance available
  • Provide connectivity to Shared Storage from multiple Hosts
  • Separate traffic by type (Management, Hot and Cold Provisioning, Storage)
  • Provide network redundancy
  • Provide network Load Balancing

One possible configuration that satisfies all the criteria above is shown in the diagram below:

Diagram: VMware Standard Switch configuration for the home lab

So let’s go step by step:

Each host uses VMware virtual Standard Switches:

  1. Across hosts the virtual Standard Switches have exactly the same names, configurations and Port Groups. This is a requirement for vSphere HA and vMotion operations to work without issues.
  2. It is good practice to create separate switches based on purpose: for example, one for Management and VM traffic types and one dedicated to Storage traffic. This is also a good technique to isolate Production VMs on a switch with no uplink network adapter.
  3. By dedicating a separate vSwitch to Storage traffic it is possible to manage finer settings, such as Jumbo Frames and/or different network policies. A scripted sketch of this layout follows this list.
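
For those who prefer to script this rather than click through the vSphere Client, below is a minimal sketch of how the two Standard Switches could be created consistently on each host with pyVmomi (the vSphere Python SDK). The host name, credentials, switch names and uplink names (vmnicX) are assumptions for illustration only; adapt them to your own lab.

```python
# Minimal sketch, assuming pyVmomi is installed. Host name, credentials,
# switch names and uplink names below are placeholders, not the exact lab values.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect directly to one ESXi host; repeat the same calls against the second host
# so that switch and Port Group names stay identical, as vSphere HA and vMotion expect.
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="***",
                  sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
ns = host.configManager.networkSystem

# Management / VM traffic switch with two uplinks (active/standby is set per Port Group).
ns.AddVirtualSwitch(
    vswitchName="vSwitch-Mgmt",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"])))

# Dedicated storage switch, with Jumbo Frames enabled at the switch level.
ns.AddVirtualSwitch(
    vswitchName="vSwitch-Storage",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        mtu=9000,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"])))
```

Running the very same calls against the second host keeps names and settings identical across hosts, which is exactly what point 1 above requires.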

On the Management Switch:

  1. Create different Port Groups based on traffic type. This applies to Management, vMotion, Provisioning and other VM traffic types. Another useful separation is between Production VMs and DMZ or external LANs.
  2. By default each Port Group inherits all of its settings from the virtual Switch. For each Port Group it is possible to override these settings: the network policies (such as security and traffic shaping), but also which of the physical uplinks available to the virtual Switch should be used and how (load balancing and fail-over).
  3. When using multiple VMkernel adapters, they should be connected to separate physical network switches. In the diagram above the Management network is redundant: if either vmk0 or vNIC0 experiences issues communicating with physical switch 1, Host 1 can still communicate with the other host and the rest of the network by using vmk1 and vNIC32, which are mapped to physical switch 2. Of course this works both ways: if physical switch 1 itself fails, Host 1 can still reach the Management network through physical switch 2.
  4. In the case of the Management Port Group the fail-over policy settings will look similar to the screenshot below, where vNIC32 is used in case of failure of the default uplink vNIC0 (see the sketch after this list for a scripted equivalent).
  5. Hot Provisioning uses a dedicated VMkernel adapter, vmk1, mapped to vNIC32 for vMotion and Storage vMotion operations. Using a dedicated physical network ensures no impact on the Management network. Moreover, it is highly suggested to configure and use a dedicated vSphere TCP/IP stack for the vMotion network, which can be configured as per the previous article here. The good news is that vSphere already ships with three TCP/IP stacks fit for the purpose: Default (used by the Management network), vMotion and Provisioning. It is just a matter of configuring them with the desired broadcast domain and VLANs when required, and making sure both routing and FQDN name resolution work as expected, since VMware vSphere services rely on working FQDN name resolution.
  6. Cold Provisioning uses a dedicated VMkernel adapter mapped to vNIC33 for operations related to moving or copying snapshots and powered-off virtual machines. As the amount of data can be considerable, it is a good idea to dedicate a physical adapter along with a separate broadcast domain. Again, routing and FQDN name resolution should be tested before applying these configurations. vSphere 6 also supports a single routing table, so the entries should be tested and kept consistent. For those environments where multiple physical networks are not available, it is also possible to leverage VLANs to logically separate networks into separate segments. The highest degree of isolation, separation and security, of course, comes from configuring separate physical networks combined with VLANs.
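
As a companion to the points above, here is a hedged sketch of how the Management-side Port Groups and VMkernel adapters could be scripted, continuing from the previous pyVmomi example. The Port Group names, IP addresses and the vmnic-to-role mapping are assumptions for illustration; the explicit fail-over order mirrors the active/standby layout shown in the screenshot.

```python
# Minimal sketch, continuing from the previous example (same 'host' and 'ns' objects).
# Port Group names, VLAN IDs, uplink names and IP addresses are placeholders.
from pyVmomi import vim

def add_portgroup(name, active, standby, vlan=0):
    # Override the switch-level NIC teaming with an explicit fail-over order.
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=active, standbyNic=standby))
    ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName="vSwitch-Mgmt",
        policy=vim.host.NetworkPolicy(nicTeaming=teaming)))

def add_vmkernel(portgroup, ip, role):
    # Create the VMkernel adapter; AddVirtualNic returns the new device name, e.g. "vmk1".
    # To use the dedicated vMotion/Provisioning TCP/IP stacks instead of service tags,
    # set netStackInstanceKey on the VirtualNic specification.
    vmk = ns.AddVirtualNic(
        portgroup=portgroup,
        nic=vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0")))
    host.configManager.virtualNicManager.SelectVnicForNicType(role, vmk)
    return vmk

# Management: vmnic0 active, vmnic1 standby (vmk0 normally already exists and is migrated here).
add_portgroup("Management", active=["vmnic0"], standby=["vmnic1"])
# Hot provisioning (vMotion) prefers the other uplink, so it never competes with Management.
add_portgroup("vMotion", active=["vmnic1"], standby=["vmnic0"])
add_vmkernel("vMotion", "192.168.20.11", role="vmotion")
# Cold provisioning on its own Port Group and broadcast domain.
add_portgroup("Provisioning", active=["vmnic1"], standby=["vmnic0"])
add_vmkernel("Provisioning", "192.168.30.11", role="vSphereProvisioning")
```

The same calls, with mirrored IP addresses, would be repeated on the second host to keep the configuration consistent.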

On the Storage Switch:

  1. Create one or more dedicated VMkernel network adapters (vmk3 and vmk4) connected to different physical switches by means of different physical uplink adapters (vNIC34 and vNIC35) for redundancy. As shown in the diagram, iSCSI 1 uses the first physical switch, while iSCSI 2 points at the second available physical switch. Should a failure occur on the host side or on a physical switch, the VMs can still reach the Shared Storage and operate regularly. When editing the Port Group fail-over settings, they should look similar to this.
  2. In this case the second uplink adapter should be set to Unused rather than Standby. The very next step is to configure Network Port Binding for iSCSI, as previously covered in this article. This ensures that storage traffic (iSCSI in our case) does not use the Management network (see the sketch at the end of this section for a scripted equivalent).
  3. Another step that can make a big difference in performance when accessing the home lab storage is the multipathing configuration, as per the screenshot below. For the path selection policy there are three options:
    1. Most Recently Used (VMware)
    2. Round Robin (VMware)
    3. Fixed (VMware)

Round Robin ensures load balancing by distributing I/O across all the paths available at the same time, rather than simply failing over to the next available one.
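
To complete the picture, below is a minimal scripted sketch of the storage side, again continuing from the previous pyVmomi examples: two iSCSI Port Groups each pinned to a single active uplink (the other left unused), the VMkernel adapters bound to the software iSCSI adapter, and the path selection policy switched to Round Robin. The software iSCSI adapter name (vmhba33), IP addressing and uplink names are assumptions for illustration only.

```python
# Minimal sketch, continuing from the previous examples (same 'host' and 'ns' objects).
# The software iSCSI adapter name, uplinks and IP addresses are placeholders.
from pyVmomi import vim

def add_iscsi_portgroup(name, active_nic, ip):
    # One active uplink only; the other uplink is simply left out of the NIC order,
    # which marks it as 'unused' (a requirement for iSCSI port binding).
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=[active_nic]))
    ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=0, vswitchName="vSwitch-Storage",
        policy=vim.host.NetworkPolicy(nicTeaming=teaming)))
    # Jumbo Frames end to end: the physical switches must also be set to MTU 9000.
    return ns.AddVirtualNic(portgroup=name, nic=vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0"),
        mtu=9000))

vmk3 = add_iscsi_portgroup("iSCSI-1", "vmnic2", "10.10.10.11")
vmk4 = add_iscsi_portgroup("iSCSI-2", "vmnic3", "10.10.10.12")

# Bind both VMkernel adapters to the software iSCSI adapter (network port binding),
# so storage traffic stays off the Management network.
iscsi = host.configManager.iscsiManager
iscsi.BindVnic(iScsiHbaName="vmhba33", vnicDevice=vmk3)
iscsi.BindVnic(iScsiHbaName="vmhba33", vnicDevice=vmk4)

# Switch the path selection policy to Round Robin. Applied to every multipathed LUN
# here for brevity; in a real setup filter on the Synology LUN only.
ss = host.configManager.storageSystem
rr = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")
for lun in ss.storageDeviceInfo.multipathInfo.lun:
    ss.SetMultipathLunPolicy(lunId=lun.id, policy=rr)
```

As always, double-check the bound adapters and the active paths in the vSphere Client after applying changes like these.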

The second vSphere host in our home lab has essentially a mirrored configuration, with exactly the same network settings on its virtual Standard Switches. Again, this is very important for vSphere HA operations to go smoothly.

The remaining ports on the physical switches can be used to connect, with redundancy, the storage appliances providing the Shared Storage where our virtual machines sit. In my home lab I’m using a Synology NAS. Interestingly enough, Synology supports bonding multiple NICs, which means two separate network adapters can share the same IP address. Ideally the two network cards are not mounted on the same controller on the back of the NAS enclosure!

As we can see from this picture, two separate network cards respond to the same IP address and are connected to separate physical switches to provide redundancy. Another great feature Synology NAS supports is LACP, both static and dynamic. I want to cover these in more detail in a separate post, including VLANs, Jumbo Frames and other advanced configurations. Last but not least, another important piece of this configuration, as shown in the diagram, is the uplink port between the physical switches. This is something to take into consideration as well, to avoid the switches creating a loop.

Conclusions

This is just an example of how redundancy and a high level of security can be achieved, together with high performance, in the network configuration of a vSphere home lab environment. There are indeed many possible configurations, depending on the requirements and the available resources. On purpose I did not include other features like virtual Distributed Switches, VLANs, Jumbo Frames, LACP and other advanced configurations, as I would like to cover them in a separate topic. I hope this article is useful and provides ideas for your personal home lab. I’m open to feedback and suggestions.
