In the previous articles we explored the options and configurations for setting up the networking of our home lab environment. The journey started with the physical setup of the host (in my case an Intel NUC 6i5SYH), including the installation of the ESXi hypervisor. We then saw how to add extra network cards to the vSphere host, with the purpose of separating traffic types and increasing security.
From a hardware perspective we have covered almost everything regarding the first steps. From a software point of view we have also covered the following:
- how to set up physical network adapters
- how to configure the VMware TCP/IP Stack
- how to set up VMkernel network adapters
- how to configure network port binding for iSCSI traffic
It’s now time to put everything together and get the most out of our home lab.
The number of possible configurations varies and depends heavily on the available resources and, ultimately, on the desired requirements. The purpose of this article is to cover a sample configuration, analysing the possible points of failure and how to avoid them.
Let’s assume for example we have the following configuration:
- 2x vSphere Hosts
- 2x 1Gb 8-port network switches
- Shared Storage connected through iSCSI
and ideally we would like to satisfy the following criteria:
- Ensure the best network performance available
- Provide connectivity to Shared Storage from multiple Hosts
- Separate traffic by type (Management, Hot and Cold Provisioning, Storage)
- Provide network redundancy
- Provide network Load Balancing
One possible configuration that satisfies all the criteria mentioned above is shown in the diagram below:
So let’s go step by step:
Each Host is using VMware virtual Standard Switches:
- Across hosts, the virtual Standard Switches have exactly the same names, configurations and Port Groups. This is a requirement for vSphere HA to avoid issues with vMotion operations.
- It is good practice to create separate switches based on purpose. For example, one for Management and VM traffic types and one dedicated to storage traffic. This is also a good technique to isolate production VMs on a switch with no uplink network adapter.
- By dedicating a separate vSwitch to storage traffic it is possible to manage finer settings, like using Jumbo Frames and/or different network policies.
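As a sketch of the steps above, the dedicated storage vSwitch can also be created from the ESXi shell with esxcli. The switch name, uplink names and MTU value below are assumptions for illustration (the diagram's vNIC labels correspond to the host's vmnic devices):

```shell
# Create a dedicated standard vSwitch for storage traffic
# (vSwitch1 is an assumed name; vSwitch0 carries Management/VM traffic)
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Attach the physical uplinks reserved for storage (assumed vmnic numbers)
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Enable Jumbo Frames (MTU 9000) on the storage vSwitch only,
# leaving the Management vSwitch at the default MTU of 1500
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
```

Keeping the MTU change scoped to the storage vSwitch is exactly the kind of finer tuning that a dedicated switch makes possible; remember that Jumbo Frames only help if every hop (vSwitch, physical switch, NAS) is configured end to end.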
On Management Switch:
- Create different Port Groups based on traffic type. This applies to Management, vMotion, Provisioning and other VM traffic types. Another useful separation is between production VMs and DMZ or external LANs.
- By default, each Port Group inherits all the settings from the virtual switch. For each Port Group it is possible to override these settings: not only network policies like security and traffic shaping, but also which of the physical uplinks available to the virtual switch should be used and how (load balancing and fail-over).
- When using multiple VMkernel adapters, they should be connected to separate physical network switches. In the diagram above the Management network is redundant: if either vmk0 or vNIC0 experiences issues communicating with physical switch 1, Host 1 can still reach the other host and the network by using vmk1 and vNIC32, which are mapped to physical switch 2. Of course this works both ways: should physical switch 1 experience issues, Host 1 can still communicate using its own Management network served by physical switch 2.
- In the case of the Management Port Group, the fail-over policy settings will look similar to this, with vNIC32 used in case of failure of the default vNIC0
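The same active/standby fail-over order can be applied from the command line. A minimal sketch, assuming the Port Group is named "Management Network" and that vNIC0/vNIC32 appear to the host as vmnic0/vmnic1:

```shell
# Set an explicit fail-over order for the Management Port Group:
# vmnic0 carries traffic, vmnic1 takes over only on failure
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" \
    --active-uplinks=vmnic0 \
    --standby-uplinks=vmnic1

# Verify the resulting policy
esxcli network vswitch standard portgroup policy failover get \
    --portgroup-name="Management Network"
```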
- Hot Provisioning uses a dedicated VMkernel adapter, vmk1, mapped to vNIC32 for vMotion and Storage vMotion operations. Using a dedicated physical network ensures no impact on the Management network. Moreover, it is highly recommended to configure and use a dedicated vSphere TCP/IP stack for the vMotion network. This can be configured as per the previous article here. The good news is that vSphere already ships with three TCP/IP stacks fit for the purpose: Default (used by the Management network), vMotion and Provisioning. It’s just a matter of configuring them with the desired broadcast domains and VLANs when required, and making sure both routing and FQDN name resolution work as expected, since VMware vSphere services rely on working FQDN name resolution.
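Placing vmk1 on the dedicated vMotion TCP/IP stack can be done with esxcli as well. This is a sketch under assumptions: the Port Group name "vMotion" and the IP address/netmask are illustrative values, not part of the original setup:

```shell
# The vmotion netstack ships with vSphere 6; if it is not listed by
# "esxcli network ip netstack list", create it first:
esxcli network ip netstack add --netstack=vmotion

# Create the VMkernel adapter on the vMotion TCP/IP stack
# (a VMkernel adapter can only be assigned to a netstack at creation time)
esxcli network ip interface add --interface-name=vmk1 \
    --portgroup-name="vMotion" --netstack=vmotion

# Assign a static IPv4 address on the vMotion broadcast domain (assumed subnet)
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.20.11 --netmask=255.255.255.0 --type=static
```

Note that an existing VMkernel adapter cannot simply be moved between stacks: it has to be removed and re-created on the desired netstack.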
- Cold Provisioning uses a dedicated VMkernel adapter mapped to vNIC33 for operations related to moving/copying snapshots or powered-off virtual machines. As the amount of data can be considerable, it is a good idea to dedicate a physical adapter along with a separate broadcast domain. Again, routing and FQDN name resolution should be tested before applying these configurations. In vSphere 6 each TCP/IP stack has its own routing table, so the entries should be tested and kept consistent. For those environments where multiple physical networks are not available, it is also possible to leverage VLANs to logically separate networks into separate segments. The highest degree of isolation, separation and security, of course, comes from configuring separate physical networks combined with VLANs.
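The Cold Provisioning adapter follows the same pattern, this time on the built-in Provisioning stack. Again a hedged sketch: the Port Group name and addresses are assumptions, and "vSphereProvisioning" is the key vSphere uses for the Provisioning netstack:

```shell
# Create the VMkernel adapter on the Provisioning TCP/IP stack
# (assumes a "Provisioning" Port Group mapped to vNIC33 already exists)
esxcli network ip interface add --interface-name=vmk2 \
    --portgroup-name="Provisioning" --netstack=vSphereProvisioning

# Static IPv4 address on the dedicated broadcast domain (assumed subnet)
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.30.11 --netmask=255.255.255.0 --type=static

# Each netstack keeps its own routing table: verify the entries per stack
esxcli network ip route ipv4 list --netstack=vSphereProvisioning
```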
On Storage Switch:
- Create one or more dedicated VMkernel network adapters (vmk3 and vmk4) connected to different physical switches by means of different physical uplink adapters (vNIC34 and vNIC35) for redundancy. As shown in the diagram, iSCSI 1 uses the first physical switch, while iSCSI 2 points at the second physical switch. Should a failure occur on the host side or on a physical switch, the VMs can still reach the Shared Storage and operate regularly. When editing the Port Group fail-over settings, it should look similar to this
- In this case the second uplink adapter should be set to Unused rather than Standby. The very next step is to configure the network port binding for iSCSI, as previously covered in this article. This ensures that all storage traffic (iSCSI in our case) will not use the Management network.
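The port binding step can also be scripted. A minimal sketch, assuming the software iSCSI initiator shows up as vmhba64 (the actual adapter name varies per host, so list it first):

```shell
# Find the software iSCSI adapter name (e.g. vmhba64 on this host)
esxcli iscsi adapter list

# Bind each storage VMkernel adapter to the software iSCSI initiator;
# this requires the 1:1 VMkernel-to-uplink mapping described above
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk3
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk4

# Verify the bindings and their compliance status
esxcli iscsi networkportal list --adapter=vmhba64
```

The binding is what gives the storage stack two independent paths to the NAS, one per physical switch, instead of letting iSCSI traffic fall back onto whichever VMkernel adapter the routing table would otherwise pick.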
- Another step that can make a big difference in performance when accessing the storage is the multipathing configuration, shown in the screenshot below. In the path selection policy there are 3 options:
- Most Recently Used (VMware)
- Round Robin (VMware)
- Fixed (VMware)
Round Robin ensures load balancing by actively using all available paths at the same time, rather than keeping traffic on one path and only failing over to the next available one.
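The policy can be switched per device with esxcli. The device identifier below is a hypothetical placeholder; list the devices first to find the real naa identifier of the iSCSI LUN:

```shell
# List NMP devices with their current path selection policy
esxcli storage nmp device list

# Switch the iSCSI LUN to Round Robin (VMW_PSP_RR)
# naa.xxxxxxxxxxxxxxxx is a placeholder for the real device identifier
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
```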
The second vSphere host has essentially a mirror configuration, with the very same network settings on its virtual Standard Switches. Again, this is very important for vSphere HA operations to go smoothly.
The remaining ports on the physical switches can be used to connect, with redundancy, the storage appliances providing the Shared Storage where our virtual machines are sitting. In my home lab I’m using a Synology NAS. Interestingly enough, Synology supports multiple NICs in a bonding configuration, which means two separate network adapters can share the same IP address. Hopefully both network cards are not mounted on the same controller on the back of the NAS enclosure!
As we can see from this picture, two separate network cards respond to the same IP address and are connected to separate physical switches to provide redundancy. Another great feature Synology NAS supports is LACP, both static and dynamic. I want to cover these in more detail in a separate post, including VLANs, Jumbo Frames and other advanced configurations. Last but not least, another important piece of this configuration, as shown in the diagram, is the uplink port between the physical switches. This also needs to be taken into consideration, to avoid switching loops.
This is just an example of how redundancy, a high level of security and high performance can be achieved together in the network configuration of a vSphere environment in our home lab. Indeed, many other configurations are possible, based on different requirements and available resources. On purpose I did not include other features like virtual Distributed Switches, VLANs, Jumbo Frames, LACP and other advanced configurations, as I would like to cover them in a separate topic. I hope this article can be useful and provide ideas for your personal home lab. I’m open to feedback and suggestions.