One of the recurring configurations in Production environments is Network VLANs. Virtual Local Area Networks create logical network layers on top of the existing physical network. The primary advantage is reducing the cost of additional switches for extra Network Ports: VLANs simply reuse the existing ones with an independent configuration. Moreover, VLANs offer the opportunity to further “physically segregate” and secure the traffic over specific networks, for example into separate broadcast domains. TPlink VLANs are no different, and the purpose of this article is to cover the basic setup with a quick overview using two network switches simulating a Production and a Fail-over scenario. This particular homelab is based on VMware running on Intel NUCs.
In order to make things a bit more interesting for a homelab and take advantage of the virtual network configurations, this article explores the basic implementation of TPlink VLANs to accommodate and separate traffic from a VMware-based homelab. In particular, the idea is to segment the traffic types based on:
- VMware vSphere Management Traffic
- VPN Traffic
- Hot Provisioning (vMotion)
- Cold Provisioning
- VM Traffic (Production)
- VM LAB
- VM Nested
- Storage Traffic (Primary and Secondary)
These are only a few examples, and the list can definitely get a lot longer depending on specific requirements. While setting up VLANs might look like a daunting task at the beginning, it actually makes more sense once the terminology and the flow of the components become more familiar. The big advantage is that the same Network Ports can support multiple VLAN configurations at the same time, thus reducing the cost of additional hardware switches. Configuring TPlink VLANs helps isolate and segregate the network traffic into different silos. Since the switch operates as a Layer 2 network device, no routing happens between separate VLANs. When routing between separate VLANs is required, it is achieved with Layer 3 devices like routers and Layer 3 switches. In a separate article series, this homelab includes pfSense as the main router managing the VLAN traffic, the Firewall and more.
How do VLANs work?
Simply put, in a VLAN network the packets traversing a specific Network Port are tagged with a VLAN ID. If the port is configured with that ID, the packets go through; otherwise they can be rejected, depending on the configuration. There are two types of VLAN: Port based and Network Protocol based. Whereas the former allows packets based on the port number (and is limited to the total number of physical ports in the switch), the latter leverages the 802.1Q protocol, which allows up to 4094 IDs, hence 4094 different VLANs. In addition, there are also Private VLANs (PVLANs), which can further extend and separate the network configuration by using VLANs in Promiscuous, Community and Isolated modes. Essentially, a way to determine which Private VLANs are allowed to talk with other Private VLANs. Whilst the majority of modern network switches support VLANs, not all of them include support for Private VLANs. It is better to check this beforehand, as all components taking part in the VLAN configuration need to have the same level of support. The TPlink T2600G-18TS supports several VLAN types, excluding the Private VLAN: the configurable options include MAC, Protocol, VPN and GVRP VLANs. Still powerful for a homelab, as will be covered in dedicated articles.
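To make the 802.1Q numbers concrete, the tag is just four bytes inserted into the Ethernet frame: a 16-bit TPID followed by priority, drop-eligible and VLAN ID fields. The 12-bit VLAN ID field is exactly what yields 4094 usable IDs (values 0 and 4095 are reserved). A minimal Python sketch, not any vendor's API:

```python
import struct

TPID = 0x8100  # EtherType value that identifies an 802.1Q-tagged frame

def make_dot1q_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: 16-bit TPID, then 3-bit priority (PCP),
    1-bit drop-eligible indicator (DEI) and 12-bit VLAN ID (VID)."""
    if not 1 <= vid <= 4094:  # 0 and 4095 are reserved by the standard
        raise ValueError("VLAN ID must be between 1 and 4094")
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID, tci)

print(make_dot1q_tag(21).hex())  # tag for the vMotion VLAN used later: 81000015
print(2 ** 12 - 2)               # 12-bit VID field minus the 2 reserved values: 4094
```

This also explains the hard limit mentioned above: no 802.1Q switch, TPlink or otherwise, can go beyond 4094 distinct VLANs.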
How to set up a VMware homelab to use VLANs?
Since network packet tagging happens at the Network Ports, VMware supports VLAN tagging in three locations:
- External Switch Tagging (EST)
- Virtual Switch Tagging (VST)
- Virtual Guest Tagging (VGT)
The most popular configuration is Virtual Switch Tagging (VST), where the packets are tagged with specific VLAN IDs before leaving the virtual switch uplink to the physical network. The big advantage of this configuration is that there is no need to touch any network configuration for the network cards of each individual VM on that particular VMware Port Group: the whole tagging process is done centrally at the virtual switch level. With External Switch Tagging (EST), the physical Network Switch adds the tags to the packets. In the case of Virtual Guest Tagging (VGT), the actual VM Guests add the VLAN tag. It is indeed a more granular option, but not all guest OSes support this configuration, depending on the network drivers used by the virtual network adapter of the VM Guest. VST offers the flexibility of controlling the VLAN tagging and other policies directly on the Port Groups attached to the VMware virtual switch.
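As a rough sketch of what VST does on the wire, the virtual switch inserts the 802.1Q tag on frames leaving the uplink and strips it before delivering frames to the guest, so the VM never sees the VLAN header. The helper names below are illustrative, not VMware APIs:

```python
import struct

def vst_tag_egress(frame: bytes, vlan_id: int) -> bytes:
    """Insert the 4-byte 802.1Q tag right after the destination and source
    MAC addresses (6 bytes each), as a vSwitch in VST mode does on the uplink."""
    tci = vlan_id & 0x0FFF  # keep only the 12-bit VLAN ID
    return frame[:12] + struct.pack("!HH", 0x8100, tci) + frame[12:]

def vst_untag_ingress(frame: bytes) -> bytes:
    """Strip the tag before delivering the frame to the guest: with VST the
    VM never sees the VLAN header."""
    if frame[12:14] == b"\x81\x00":
        return frame[:12] + frame[16:]
    return frame

plain = bytes(12) + b"\x08\x00payload"     # minimal untagged IPv4 frame
tagged = vst_tag_egress(plain, 21)
print(tagged[12:16].hex())                 # 81000015
print(vst_untag_ingress(tagged) == plain)  # True
```

With EST the same insertion would happen on the physical switch port instead, and with VGT inside the guest's own network stack.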
The rest of the article below offers a quick overview of a sample configuration using the VLANs in conjunction with TPlink network switches and a VMware homelab based on Intel NUC 7i7 DNHE series.
TPlink VLANs homelab setup in VMware
To help visualize the network setup, the image below references the main components. Essentially there are two TPlink T2600G-18TS units acting as Production and Fail-over network switches. Each network switch connects two Intel NUC 7i7 DNHE units. The Storage Network (iSCSI) is made redundant on the other network switch. In addition, each network switch has a “path” to the Synology NAS (not included in this picture), providing a primary and a secondary connection. The idea is to use color coding to group different VLANs based on the traffic type:
- Blue (VLAN ID “0” or native)
- Green (VLAN “21” for vMotion and 22 for Provisioning)
- Orange (VLAN “30” for VM Prod, “31” for VM LAB, “32” for VM Nested and “33” for VSAN traffic)
- Red (VLAN ID “0” or native)
- Purple (VLAN ID “0” or native)
- Yellow (VLAN ID “0” or native – internet traffic)
- Grey (VLAN trunks)
- Black/Pink/White (VLAN ID “0” or native – Primary and Secondary traffic to Synology)
Each Intel NUC is configured with 5 network cards, all of them configured in exactly the same way:
- VMnic0 – Management traffic
- VMnic32 – vMotion/Provisioning traffic
- VMnic33 – VM Prod/VM LAB/VM Nested/VSAN traffic
- VMnic34 – Primary iSCSI traffic to Synology NAS (DS416Play/DS916/DS620)
- VMnic35 – Secondary iSCSI traffic to Synology NAS (DS416Play/DS916/DS620)
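The vmnic layout above maps naturally to a small table of Port Groups and VLAN IDs. The port group names in this sketch are assumptions for illustration; the VLAN IDs follow the color coding listed earlier:

```python
# Hypothetical Port Group to VLAN mapping for this homelab; the port group
# names are assumed, the VLAN IDs come from the diagram's color coding.
PORT_GROUP_VLANS = {
    "PG-Management":   0,   # native / untagged, on VMnic0
    "PG-vMotion":      21,  # VMnic32
    "PG-Provisioning": 22,  # VMnic32
    "PG-VM-Prod":      30,  # VMnic33
    "PG-VM-LAB":       31,  # VMnic33
    "PG-VM-Nested":    32,  # VMnic33
    "PG-VSAN":         33,  # VMnic33
    "PG-iSCSI-A":      0,   # VMnic34, native traffic to the Synology NAS
    "PG-iSCSI-B":      0,   # VMnic35, native traffic to the Synology NAS
}

# Sanity check: every ID must fall in the 802.1Q range (0 = untagged/native).
assert all(0 <= vid <= 4094 for vid in PORT_GROUP_VLANS.values())
```

Keeping a mapping like this written down before touching the switches makes the later per-port configuration much quicker to verify.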
From a VMware perspective there are two virtual Distributed Switches called vDS Infrastructure and vDS Storage. The first one covers all the Management and VM traffic on different Port Groups with different VLANs. The second vDS is just specific to Storage traffic. This should provide a basic overview of the components involved. A separate article will share more details on the VMware configuration. This image has been edited with Diagrams.net (formerly known as draw.io) and a copy is available here.
Production VLAN configuration
Using the web console to access the TPlink switch configuration, the L2 Features menu offers access to the Layer 2 features of the seven-layer ISO-OSI network model. In this case the VLAN setup is based on the 802.1Q protocol. From here it is possible to review, create and edit the running configuration on the switch. By default all Network Ports are configured with the default VLAN (0 or 1, depending on the vendor). The table also shows which ports are members of the same VLAN configuration. The Port Config button also shows the packet policy, as covered later.
The first step is to add a new VLAN configuration. Based on the previous image this will be VLAN 21 for the vMotion traffic, which is specific to VM live migrations. For each VLAN it is possible to specify which ports are associated and whether they are Tagged or Untagged. A Tagged port accepts by default all incoming connections where the packets carry the corresponding VLAN ID, so generally it is used for incoming connections. In this case all the vMotion packets are coming from the VMware Port Group with VLAN ID 21, so it makes sense to enable VLAN ID 21 on this port. Untagged ports, in general, are for packets leaving the Network Port on the switch, where the VLAN ID tag is removed. In this particular case no vMotion connections leaving from other ports are required. Ports 3 & 4 are used respectively by Intel NUC 05 and 06 for the vMotion traffic (Green cable). Ports 12 and 15 are used to bring the traffic to the other switches, and the packets need to leave with the VLAN tag still on. Once done, it is just a matter of hitting Create to save the first VLAN configuration.
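The Tagged/Untagged behaviour described above can be modelled in a few lines. This is a simplified sketch of 802.1Q egress handling, with the port membership mirroring the VLAN 21 example:

```python
def egress_action(vid: int, port: int, members: dict) -> str:
    """Model 802.1Q egress on a switch: tagged member ports forward the frame
    with the VLAN tag intact, untagged members strip the tag before forwarding,
    and non-member ports drop the frame entirely."""
    tagged, untagged = members[vid]
    if port in tagged:
        return "forward tagged"
    if port in untagged:
        return "forward untagged"
    return "drop"

# VLAN 21 (vMotion): NUC uplinks on Ports 3 & 4 plus the inter-switch links on
# Ports 12 & 15 are all tagged members; no untagged members in this setup.
members = {21: ({3, 4, 12, 15}, set())}
print(egress_action(21, 12, members))  # forward tagged
print(egress_action(21, 5, members))   # drop
```

The "drop" case is what provides the isolation: a vMotion frame simply never leaves a port that is not a member of VLAN 21.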
By repeating the same steps it is now possible to create the VLAN configuration for the other traffic types, in this case for cold Provisioning traffic. This applies, for example, when copying VMs to a new location, maybe with a different CPU architecture from Intel to AMD and vice versa, or when the application needs to be shut down before any migration takes place. The principle is the same, and the same Network Ports can be used with a different VLAN. vMotion and Provisioning traffic will use separate broadcast domains and separate networks. As part of Layer 2, these won't be able to see each other, keeping broadcast messages to a minimum.
The next VLAN configurations can cover, for example, the different types of VM traffic to isolate from each other, such as the traffic for VMs in Production. The considerations are exactly the same as those covered earlier. In this case the Network Ports used are different (Ports 5 & 6) and color coded Orange.
With a similar configuration, and using a different VLAN ID again, it is possible to add the VLAN for the VM LAB traffic, for example to cover Dev and Testing environments completely isolated from the Production environment.
Another configuration that is nice to add as part of the homelab is the ability to run nested hypervisors, to test particular features rather than using additional hardware. VMware provides a very flexible approach and is the most suited to run as a test-bed also for other hypervisors like Microsoft Hyper-V and Nutanix Acropolis. Certainly a good reason to isolate the traffic!
Last but not least, it is a good idea to isolate the VSAN traffic, considering that, like the vMotion and Provisioning traffic, VSAN can have a dedicated VMkernel adapter and associated VMnic. The considerations for creating this VLAN are exactly the same as for the previous ones, so nothing specific.
After adding all the desired VLANs, the final result should look similar to the one below.
Optionally, moving to the Port Config section, it is also possible to restrict the Policy to Tagged Only. The recommendation is to leave the default value of “Admit All”, test the VLAN, and then, if needed, restrict the Frame Policy.
Another interesting detail is the list of active VLANs per Port. For example, Ports 3 & 4 show the System VLAN plus the vMotion and Provisioning ones: a total of 3.
As per the earlier configuration, Ports 5 & 6 show the System VLAN plus the VM traffic ones: a total of 5. Once the VLAN configurations are working as expected, it is also possible to remove the System VLAN as part of the network switch hardening. Be careful not to lock out the Port used to manage the switch, otherwise a hard reset and a fresh start will be the only option!
Fail-over VLAN configuration
Similarly to the Production switch, it is now time to move to the Fail-over switch and create the same VLAN configurations, paying attention to the Network Ports where the other Intel NUCs are connected and also to the port used to connect to the Primary switch. In this case no Untagged Ports are required.
In the Tagged Ports section it is now a matter of identifying the required ones. The preference is to go for a “mirrored” configuration, as it makes it easier to set up and troubleshoot later. Most of all, it is a good idea to have a design of the network topology before starting. For the vMotion VLAN operated by the second pair of Intel NUCs, the VLAN ID is still the same “21”, using Network Ports “3 and 4”. Port “14” is used to interconnect with the primary or Production switch to pass the vMotion packet data, for example when live migrating a VM from NUC-05 on the Production switch to NUC-07 on the Fail-over switch. Since Network Port “14” will carry other VLANs' traffic, this port is effectively running in Trunk mode.
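The Trunk behaviour of Port “14” can be summarized in a couple of lines. The set of VLANs below is an assumption based on the configurations covered so far, not an exported switch config:

```python
# Assumed set of VLANs crossing the inter-switch link: Port 14 must be a
# tagged member of each of them to behave as a trunk.
TRUNK_PORT = 14
trunk_vlans = {21, 22, 30, 31, 32, 33}

def crosses_trunk(vid: int) -> bool:
    """A frame only traverses the inter-switch link if its VLAN ID is among
    those the trunk port carries."""
    return vid in trunk_vlans

print(crosses_trunk(21))  # True: vMotion between NUC-05 and NUC-07
print(crosses_trunk(99))  # False: unknown VLANs stay on the local switch
```

If a live migration between switches ever stalls, checking that the VLAN in question is actually a tagged member of the trunk port is a quick first diagnostic.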
After saving the first VLAN in the secondary switch the result is similar to the one below.
After adding the other TPlink VLAN configurations, the final result will look similar to the one below.
Again, also in this case it is possible to restrict the Acceptable Frame Policy to tagged frames only and automatically discard the others. It is better to test these one at a time and evaluate the resulting effect.
At this point, if the configuration is working as expected, it is a good idea to save the “running configuration” on the switch, making it persistent across switch reboots. Of course, the same operation needs to be done on both switches.
Another important step is to take a backup of the configuration. Should the switch be reset, importing the backup file is the quickest way to restore the latest working configuration for all the configured TPlink VLANs.