
VMware home lab: 2020 easy and fun setup


It’s a new year, so why not start building a new VMware home lab? The previous lab relied on 4x Netgear GS108v3 managed switches for the networking side of things; building on that experience, this setup uses a slightly different physical and logical topology and will be documented in a new article series.

The goal is to provide easy, step-by-step articles with simple screenshots from start to finish, and to use this article as a placeholder for all future links and updates. At least for 2020!

What is the intended purpose for the new setup?

The main idea is to leverage the new hardware and the software features generally used in enterprise or large deployments: VLANs, LAGs, custom firewall rules, software-defined storage, traffic separation and a lot more. These are only a few of the topics this article series will touch upon for the VMware home lab. The purpose is to create an affordable environment serving as a sandbox for learning, and for improving the main setup used in real production environments.

What’s next?

Moving from an existing setup based on the Intel NUC 6i5 series, Netgear GS108 v3 switches and Synology DS416 and DS916 NAS units, which has been working great for the last three years, the idea is to add a level of sophistication and get the VMware home lab closer to a real deployment. At the time of writing the priority is the ability to segregate different types of traffic and to create a sandbox environment for testing vSAN and vVols among other things, while keeping all of this separate from nested hypervisors and other HCI solutions. So in summary, the main purpose of this new VMware home lab is to:

  • Separate different types of VMware VM traffic
    • The 20.1 version (January 2020) provides segregated networks for:
      • VM Prod
      • VM LAB
      • VM Nested
      • vSAN traffic
  • Provide a Lab for testing new solutions
    • This is a dedicated network used to test and verify backups and replicas leveraging the Veeam DataLab: an independent bubble where it is possible to play with a copy of the production data without touching the production environment.
  • Provide a sandbox environment for Nested hypervisors
  • Provide a sandbox environment for VMware vSAN and vVols
    • vSAN will run on nested ESXi hosts in VMware

These are only a few of the topics and configurations that will leverage and benefit from the new VMware home lab setup. More will be added, updated and improved during deployment.
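To make the traffic separation above more concrete, below is a minimal sketch of how VLAN-backed port groups for the different VM networks could be created on a standard vSwitch from the ESXi shell. The port group names, vSwitch name and VLAN IDs are illustrative placeholders, not necessarily the actual values used in this lab.

  # Hypothetical example: one port group per traffic type, each tagged with its own VLAN
  esxcli network vswitch standard portgroup add --portgroup-name "VM Prod" --vswitch-name vSwitch1
  esxcli network vswitch standard portgroup set --portgroup-name "VM Prod" --vlan-id 10
  esxcli network vswitch standard portgroup add --portgroup-name "VM LAB" --vswitch-name vSwitch1
  esxcli network vswitch standard portgroup set --portgroup-name "VM LAB" --vlan-id 20
  esxcli network vswitch standard portgroup add --portgroup-name "vSAN" --vswitch-name vSwitch1
  esxcli network vswitch standard portgroup set --portgroup-name "vSAN" --vlan-id 40

Repeating the same port groups (with matching VLAN IDs) on every host is what keeps the building-block approach consistent across the lab.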

VMware home lab physical topology

How to put all this together? There are different components involved and surely multiple ways to accomplish the end result. The aim is to go the extra mile and reproduce real environments without over-complicating the setup. Another important aspect is to use a “scalable approach” by simply replicating a building block: a scale-out rather than scale-up approach, which allows more flexibility and cost control (based on the cost of the extra hardware). Ultimately this is the approach used for the current VMware home lab. In particular, the building block consists of:

  • 1x Intel NUC with extra USB NIC adapters (used for separate networks and VLANs)
  • 1x network switch
  • 1x network storage

The current VMware home lab simply scales out by adding the pertinent hardware in order to provide redundancy at:

  • Compute level (add more Intel NUCs)
  • Network level (add a secondary switch for failover scenarios)
  • Storage level (add storage controllers / storage paths)

Rome wasn’t built in a day and home labs shouldn’t be either! The amount of hardware all together might be a bit daunting; in fact, this setup is the result of different components purchased over a span of three years.

[Diagram: domalab.com VMware home lab physical topology]

By using the picture above as a reference, the design idea behind the VMware home lab project is to accomplish the following:

  • Each Intel NUC has 5 network adapters, presented as distinct VMnics:
    • All Intel NUCs share the same configurations and patch levels
    • All Intel NUCs have the same VMkernel configuration; the naming convention and configurations are consistent across the VMware vSphere hosts (a minimal configuration sketch follows this list)
    • VMnic0: used for VMware Management Network
    • VMnic32: used for both cold and hot traffic such as VMware Provisioning and vMotion. Each traffic type uses a separate network subnet and a dedicated VLAN
    • VMnic33: used for all VM traffic types including Production, LAB, Nested, vSAN and more. Each VM traffic type sits on a separate network subnet with a dedicated VLAN
    • VMnic34: primary connectivity to the iSCSI storage as presented to VMware vCenter. At the time of writing it shares the Management network but uses a dedicated VMnic; in the future it will move to a dedicated VLAN
    • VMnic35: secondary connectivity to the iSCSI storage as presented to VMware vCenter. At the time of writing it shares the Management network but uses a dedicated VMnic; in the future it will move to a VLAN different from the one used by the primary connection on VMnic34
  • All Network Switches share the same configurations
    • All switches are updated to the same firmware and patching levels
    • Same VLANs are created across both Primary and Secondary switches
    • VLAN port assignments differ based on the switch role (Primary / Secondary)
  • All Storage boxes are upgraded to the latest software version available
    • Network storage runs on the Management network (the next update will move the DS916 and DS620 to a dedicated VLAN)
    • Each storage unit has 2 network connections, one to each switch, for redundancy
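As an illustration of the host-side configuration described above, here is a minimal sketch of how a vMotion VMkernel interface and an iSCSI port binding could be created from the ESXi shell. The port group names, vmk numbers, IP addresses and the vmhba adapter name are placeholders and not necessarily the values used in this lab.

  # Hypothetical example: VMkernel interface for vMotion on its own port group and subnet
  esxcli network ip interface add --interface-name vmk1 --portgroup-name "vMotion"
  esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.20.11 --netmask 255.255.255.0 --type static
  esxcli network ip interface tag add -i vmk1 -t VMotion

  # Hypothetical example: VMkernel interface for iSCSI, bound to the software iSCSI adapter
  esxcli network ip interface add --interface-name vmk2 --portgroup-name "iSCSI-A"
  esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 192.168.1.21 --netmask 255.255.255.0 --type static
  esxcli iscsi networkportal add --adapter vmhba65 --nic vmk2

Repeating the same commands (with host-specific IP addresses) on each NUC keeps the VMkernel layout identical across the cluster, as described above.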

About the author

Michele Domanico

Passionate about Virtualization, Storage, Data Availability and Software Defined Data Center technologies. The aim of Domalab.com is sharing with the Community the knowledge and experience gained with customers, industry leaders and like-minded peers. Always open to constructive feedback and new challenges.

31 Comments


  • Hello Michele,

    I am interested in a new homelab setup. Your setup makes me rethink my choices for the renewal. I have always used VMware Workstation for development.

    Can you explain the benefit of a physical setup over a virtual build?

    How well do the VMs run over a 1Gb Ethernet connection to the NAS solution? Does it perform well enough? Is it fast enough to run NSX and Automation appliances?

    Hope you have the time to answer

    • Hi Borg,

      Thanks a lot for your comment. I personally love the physical setup as the combination of Intel NUC, network switches and NAS gives you the option to get your “hands dirty” and understand more about the deployment in all aspects.

      Despite the limited VM density per NUC (perfectly fine for home use), these are great for a 24/7 homelab. I personally leave my homelab always on and I have barely noticed any change to the electricity bill (to give you an idea, each NUC consumes at most 0.4/0.5 kWh/day and both NAS units 1.4 kWh/day combined). A 1Gb network throughout Mgmt, Backup, VM Traffic, iSCSI and Nested LABs is working like a champ (lowest speed during backup 70MB/s, top 95MB/s). I guess a lot depends on the storage configuration as well 🙂 Personally I don’t see the need for 10Gb or even 40Gb networks for now. VM backups (through Veeam) are pretty quick and efficient. Avg VM size is 40GB. Running about 55 VMs and growing. Some of them are VSAs (NetApp, Dell, HPE, Nutanix, Quantum and others).

      On the other hand if you have a powerful box you could virtualise everything and run nested Hosts and VMs. It might make sense if the “big box” has a decent power consumption when left running 24/7. Otherwise get ready for a big energy bill!

      In the end “your mileage might vary”! Yeah I know it is very generic.. and the number of possible scenarios/requirements is massive.

      The beauty of virtualisation is that it allows very sophisticated scheduling of the resources from the hosts. The VMs consume CPU, RAM, disk and network: it is very unlikely in a homelab that you’ll get high contention of resources unless these are not properly shared or are set to unneeded values. My Exchange for example uses 4GB of RAM. SharePoint the same. AD and SQL are constantly below 2GB.
      Of course I did a bit of config on these applications to reduce general consumption.

      One of the next projects in my homelab is to include VSAN, VVOL, NSX and vCloud as there are nice integration features with Veeam.

      Last but not least: my physical setup is based on a concept similar to a converged infrastructure, so all the components are the same and configured the same. I like to use this approach to “scale” my homelab and it is something I have built over the years (3 in a row now and the first hardware is still rock solid). At times it can be a pain and has forced me to do/learn things the right way! A virtualised build avoids all this. Thing is, I personally have more fun with the former than the latter, so I’m a bit biased. Everyone is different 🙂

      Hopefully this was informative and not too long.

      Kind Regards,
      Michele

  • Which NUC model is sufficient for this type of setup? I want to build a home lab and budget is the absolute most important thing at this point. I would probably end up only purchasing one NUC and maybe just one Synology to start off with.

    • Hi Nicholas,

      Thanks for your comment! I would say all the latest models have enough CPU power for a decent VMware homelab. Ideally any i5 and higher is recommended. My NUC 6i5SYH units are still running the majority of VMs and I have never had a single issue. The newer NUC7i7DNHE is in my opinion the best “compromise” for cost/CPU/performance/energy in watts. For storage, M.2 SSDs are great/cheap/fast and don’t dissipate too much heat. For RAM, all modern NUCs (from version 6 and above) can run 64GB without problems: https://domalab.com/intel-nuc-64gb/

      The Synology DS620slim is a lovely NAS too and its memory can be expanded as well for additional apps and storage options: https://domalab.com/synology-ds620slim/

      Hope this helps,
      Michele

  • Michele,

    As many others have stated, I would love to set up a lab similar to yours with my QNAP 1277s and Cisco switches.
    This being said, I am looking for the most powerful Intel NUCs, or something with a similar footprint, that I can get. Do you have any recommendations as to which NUC kit I should be looking at?

    • Hi JC,
      Thanks for your question and welcome to the Community! Intel NUCs are a great kit and very popular. They might be a bit pricey in terms of VM density (assuming this is what you are using them for) compared to other solutions, but the overall cost is way smaller in the long run (for example the electricity bill if you keep them on 24/7). At the moment the Intel NUC Gen10 models are available and they are great. The only thing I don’t like (if I want to be picky 🙂 ) is the core base frequency of 1.1 GHz with a 4.7 GHz max frequency. This might be annoying with some OVA deployments and some VMs, even though it’s enough for a homelab. Gen8 has 2.7 base / 4.5 max – really a nice one. Gen7 has 1.9 base / 4.2 max and only 15W TDP! All these values are for i7 CPUs. Personally I have both the 6i5 and 7i7 series and love them. Also consider that Gen11 with Tiger Lake might be available by the end of the year. It really depends on how much “POWER” you think you need. Also consider that the beauty of virtualization is effectively scheduling the HW resources between different VMs. This means a less powerful socket, in favour of more RAM / storage allocation on the host, can allow way more configurations. In the end it all depends on what you’ll be using this for. Take a look at this link to get an idea of the apps and platforms I have installed and run on mine. The list is not complete and I’m planning to update it soon 😉
      https://domalab.com/build-homelab-setup-idea/

      Hope this helps,
      Michele

      • Michele,

        Thanks for the prompt response. I have not purchased the Intel NUCs at this point as I am doing more research on what would be the best approach as I would certainly like to have “powerful” systems that are above the usual “OK for home lab”. I plan on starting with 3 x NUCs with 128GB NVMe drives and 64GB RAM. I was looking at the NUC10i7FNH Core i7 Frost Canyon, but may wait for the Gen11 if they will indeed be released this year.

        This being said, my current needs for my home lab are as follows:

        VMware ESXi 6.7 or 7
        Citrix XenServer 7.x
        Hyper-V Core 2019
        Nutanix CE
        Linux CentOS
        Linux RedHat Enterprise
        Microsoft Windows Server 2012 R2
        Microsoft Windows Server 2016
        Microsoft Windows Server 2019
        DELL EMC Isilon
        DELL EMC Unity
        HPE 3PAR Simulator
        HPE StoreOnce
        HPE StoreVirtual
        NetApp ONTAP Simulator
        NetApp Virtual Storage Console
        StarWind Cloud VTL for AWS
        StarWind Virtual SAN
        Microsoft Active Directory
        Microsoft Exchange
        Microsoft Office 365
        Microsoft SharePoint
        Microsoft SQL Server
        Oracle Database 19C
        Veeam Backup & Replication
        Veeam ONE
        VMware VCSA 6.7

        Thanks for your time and the excellent write-ups. Lots of ideas and learning to do! 🙂

      • Hi Julio,
        Thanks for your comment. Where possible try to go for the i7 family. The i5 works great, but having that extra kick makes things better. Also the combination of a NAS with SSDs is working great for my setup and is definitely something I recommend. DAS storage on each host is not redundant and could also be expensive. I run this list of apps and servers absolutely fine on VMDK shared storage. This also helps keep the temperature low inside the NUCs. Citrix, Hyper-V and Nutanix are all nested in VMware!
        Hope this helps, and enjoy your future homelab 🙂

  • Hi Michele,

    I’ve just discovered your amazing “Homelab publication”, thanks a lot.

    A few weeks ago I decided to upgrade my own “home lab” to the Intel architecture; it was built on ARM (a heterogeneous pico cluster of 2 Raspberry Pi 4s and 3 Jetson Nanos) and was running Ubuntu/MAAS. The root cause is: Intel is mandatory to try and learn VMware!!!!!

    Until yesterday my short-listed shopping list for Black Friday was based on 4 x NUC 10 gen:
    – Intel NUC 10 Performance Kit BXNUC10i7FNH2 i7-10710U (EU version of your NUC), each of them with:
    – 2 x 32 GB,
    – 970 EVO PLUS 2 TB,
    – 2 x StarTech USB 3.0 to Dual Port Gigabit Ethernet Adapter NIC (with USB port),

    Unfortunately, I’ve read on a hardware forum that the NUC 11th gen was announced for [2020/Q4 – 2021/Q1], which fits my calendar exactly, so my NUC 10th gen purchase is at least frozen, or aborted, as the NUC 11th gen will come with:
    – Tiger Lake CPU (4 cores/8 threads)
    – vPro available with i5 and i7 CPU
    – PCIe x4 Gen 4 NVMe
    – Dual Thunderbolt 4
    – USB 3.2 Gen2
    – Intel I225-LM Ethernet adapter with 2.5GBASE-T support
    – Expansion slot with a second Intel i225-LM adapter
    – DDR4-3200 SODIMM
    – RS232 serial port header
    – Qualified for 24/7 operation

    That, from my side, is a major upgrade over the 10th generation, even if I have to shrink my homelab from 4 to 3 nodes (as I will buy outside the Black Friday calendar).

    I am also looking at AMD, as the ASRock 4X4 BOX-4800U SoC is very sexy too:

    – AMD Ryzen 4000U-Series
    – 2 x 260-pin SO-DIMM up to 64GB DDR4 3200 MHz
    – 1 x M.2 (KEY M, 2242/2260/2280) with PCIe x4 and SATA3 for SSD
    – 3 x USB 3.2 Gen2, 2 x USB 2.0, 1 x M.2 Key M, 1 x M.2 Key E, 1 x SATA3
    – 1 x Realtek 1 Gigabit LAN, 1 x Realtek 2.5 Gigabit LAN (Support DASH)
    – Supports Quad display, 1 x HDMI 2.0a, 3 x DP 1.2a (2 from Type C)

    But it seems that the Intel NUC 11th gen has network trouble with VMware, as noted by https://www.virten.net/2020/09/11th-gen-nuc-first-details-on-intels-tiger-canyon-nuc/

    “Unfortunately, there is a drawback. The I225-LM, and in general all 2.5GBASE-T / 5GBASE-T adapters are not supported with ESXi and due to the native driver requirement in vSphere 7.0 it has become harder to get community supported drivers. The I225-LM is suspected to come with PCI Device ID 8086:15f2 if you want to check for drivers.”

    Perhaps you can help me with the following questions:

    Q1) Do you think VMware will support this 2.5 Gb/s NIC of the 11th generation NUC (is it just a question of time (2021/Q1))?
    Q2) Do you know if AMD Ryzen 4800U NUC-like systems are, or will be, on the VMware HCL?
    Q3) Is Ubuntu/KVM running nested VMware ESXi a good mitigation plan for the NIC limitation?
    Q4) What would be your ordered preference between NUC10 / NUC11 / 4800U?
    Q5) Which Ethernet switch would you buy to allow 2.5 Gb?
    Q6) If you had to rebuild your network (1Gb/s), would you continue with the TP-Link 2600, or switch to the 1700 or 2700 series, or something else?

    PS: English is not my native language; apologies if sometimes my sentences are “French oriented”.

    Fred,

    • Hi Fred,

      Thanks for your comment. Apologies, but for some reason I couldn’t see this one before in the notifications page in WP!

      Quickly on the answers below:
      1) VMware might support 2.5Gb adapters, typically the “enterprise” ones; for consumer-grade ones (like the NUCs and similar) it is very unlikely. I did a quick check for the Intel I225-LM on ESXi 7.0 U1 with device ID 8086 and sub-ID 15f2 and it is not on the official VMware HCL yet. Without the physical hardware it is difficult to say, and not supported does not mean it doesn’t work. So of course until you have the NUC v11 in your hands it is difficult to say 😉 If you want to check directly on the HCL, here is the official link https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io

      2) Same as above. Actually there is a great article covering the 4800U available at virten: https://www.virten.net/2020/09/esxi-on-amd-ryzen-based-asus-pn50/#more-23161

      3) My personal preference is to run environments nested with VMware as the base for everything. On top of VMware 6.7, for example, I have “smaller labs” running Hyper-V, Nutanix AHV and VMware ESXi for quick and dirty tests. For example you might use VMware 6.7 as a base, PCI pass-through the Intel network card to the nested VMware 7.0 U1 and check the native/community/Fling drivers. Personally I’m not very familiar with KVM specifics, so it might take me extra time to understand what to check during troubleshooting. And of course this is my preference; your experience/knowledge can be different! Another thing to consider is the community support you get for the unknown/exotic questions. Both Ubuntu KVM and ESXi are very popular platforms, so ultimately the choice is yours!

      4) Good and difficult question! Probably #1 NUC10 – #2 NUC11 – #3 4800U. The only thing I personally don’t like about the NUC10 is the low CPU base rate. This can be annoying when deploying some VSAs and similar vApps which require/reserve at least 2.0 GHz, forcing you to bypass/disable the option in the VM configuration. Other than that you can go up to 6 cores / 12 threads and get built-in support for the native network card, which is one less thing to think about for future updates, or at least until VMware (in this case) drops official support for deprecated hardware. BTW, if you are thinking of using an external USB network adapter for the Management traffic, simply don’t! You are better off using auxiliary USB network cards, with all the VLANs / IP storage you need, just for the other traffic types. It is way more stable, and performance on a 1Gbit network is great for general homelab use (based on 1Gbit networks of course!!).

      5) Not sure I can help here. I’m very happy with the TP-Link/Anker 1Gb USB adapters based on the Realtek 8152/3 chipset. VLANs and iSCSI IP storage work like a champ! With VMware 7.0 U1 stick to the list supported by the VMware Fling USB driver https://flings.vmware.com/usb-network-native-driver-for-esxi#requirements. It might add support for upcoming 2.5/5/10Gbit network cards!

      6) Difficult to say. I chose mine because I had a voucher to spend on Amazon and found the TP-Link 2600 series feature-rich and price-competitive when compared to better-known brands. Overall they are doing great. I would suggest carefully downloading the user manuals of each model and comparing which options are available in the firmware. I made this mistake (reading too quickly!) and found out that PVLANs are not available in the model I have, only in the bigger one. As a trade-off I have more “homelab” features I can still use with my models, so I am happy in the end. The software is super easy to use, probably too much so, with little explanation in the GUI, but they have a big community forum, so in case of questions you should be fine. They also have low power consumption.

      One final suggestion I would like to share, if I can. When choosing components, think about the cost of replacing/upgrading them. Surely 2.5/5/10Gbit networks and more are very tempting, but if you need to make the same purchase multiple times (e.g. the same adapter x number of ESXi hosts) it all adds up on the final bill. At the moment I’m very happy with a 1Gb network. In my homelab I run about 50 VMs of mixed OS types and specs.

      Another thing to consider is network latency. Try to choose network adapters with solid drivers that help reduce latency. Surely things can be faster, but what really made the difference was switching to SSD RAIDs. Yes, SSD. Nowadays the cost of SSDs is not prohibitive and they are super silent compared to HDDs, to the point where they could be used for storing backup data as well. Maybe there is no need to buy new HDDs anymore! In terms of VM data to move around, is it big or small? Always remember to compare these numbers against the available network throughput. In general for a homelab, 1Gbit is a decent option for carrying dozens or hundreds of GBs in a relatively small time. I use Veeam to run backups and in general jobs take 5-10 mins to complete the incremental changes across all VMs. I have seen transfer rates up to 980/990 MB/s, consistent on a 1Gbit USB adapter! Massive credit to Jose Gomes and his drivers; take a look at the article on this blog for install instructions and links. Since VMware 7.0, I will use only drivers from Flings due to the new kernel model.

      So when planning your future purchases, try to consider the usability/cost in the long term. All in all I have been building this homelab for more than 3 and a half years and am loving it. Of course everything depends on expectations and the available budget, so for this reason I don’t necessarily go for the very latest in everything!

      PS. Careful with dual/tri-port USB adapters. They might need special drivers and it can be tricky finding them for ESXi. Maybe just try one before investing in multiple units. Also consider that USB 3.0 has up to 5Gbit/s of bandwidth.

      Thank you very much for your kind words, and your English is great. I’m Italian and can fully understand it! 😉
      All the best with your choices and I would be happy to see what you pick in the end!
      Thanks,
      Michele

      • Hello Michele,

        I couldn’t postpone my new lab to 2021, so my credit card is burning on Amazon,
        and 2 NUC10i7 units + 64GB RAM and EVO 970 Plus SSDs arrived this morning.

        I have some issues with the product download and the license: I could get vSphere for the home lab without restriction, but vCenter seems to be available under a 60-day trial license only.

        Even if I register … only a 60-day trial version is available.

        I think I am missing something, but as I am a complete VMware novice, if anyone can share the URL and step-by-step process to download / install / configure from scratch,
        my objective being ESXi 7.0 + vCenter and finally my first VM, I will be very happy.

        let it be as simple as possible:
        1 NUC 10i7:
        – 64RAM
        – 2TB EVO970plus SSD
        – 1 internal NIC (MNGT LAN)
        – 2 USB / Ethernet adapters (StarTech.com) – not yet connected; let’s keep just a one-NIC configuration, I’ll add more complexity later …

        target configuration: ESXi 7.0 U1 + vCenter

        already downloaded:
        – Rufus,
        – ESXi-Customizer-PS.ps1 (2.8.1)
        – ESXi-7.0U1b-17168206-standard.iso
        – ESXi701-VMKUSB-NIC-FLING-40599856-component-17078334.zip
        – VMware-VCSA-all-7.0.1-17004997.iso (which I downloaded on a 60 day trial license agreement).

        Network configuration:
        GW 192.168.10.1
        MGT IP of ESX: 192.168.10.100/24
        domain: mylab.local
        DNS: 192.168.10.1
        Please help, do not let my Friday shift to the dark side.

        Fred.

      • Hi Fred,
        Great start and congratulations on the purchase. VMware provides free ESXi, which comes with the built-in ESXi web client to manage it. vCenter (or better, VCSA these days) is available only as a 60-day trial or a purchase. If you don’t want to use the trial (BTW, you can always backup/restore the config to a new one) I would suggest going for VMUG Advantage, which includes, for a fixed cost, access to many products and a lot more.
        I have specific sections on this blog with step-by-step guides on vCenter and VCSA deployment and install. For the IP addressing scheme, put everything on paper first, start simple and then build up. For example, use the same network for all components. Make sure all gateway configurations are correct. This will save you a lot of time later trying to understand what to fix/change. Speaking from experience!
        Enjoy your homelab,
        Michele

      • Hi Michele,

        Thanks a lot for the information,
        I’ve just received the stack of 2 T1700G-28TQ switches and 4 StarTech.com USB 3.0 dual Gigabit Ethernet adapters to upgrade to 5 NICs per NUC.

        I ran a test: the USB adapters are operational with vSphere 7.0 U1, and a 5-NIC configuration is OK.

        Now I am going to deep dive into a hybrid architecture to get a vSAN cluster on 2 Intel (NUC) nodes and a witness on ARM, as it seems it is possible to install ESXi on a Raspberry Pi 4B with 8GB of RAM!

        So the next steps are:
        1) Find documentation, draw some diagrams and build specifications.
        2) Download and install ESXi on ARM.
        3) Subscribe to VMUG and get 365-day licences for vSAN and VCSA.
        4) Build my own vSAN cluster.

  • Hello Michele,

    Very nice work and very nice writing! 🙂

    I am also at the building stage of my own homelab, using 3x NUC8i7BEH which I got quite cheap from a retailer. In your (nice) network diagram, you have 5 NICs on each NUC. I guess you are using a USB hub for connecting the Anker 1Gb adapters?
    Do you have any negatives about using a USB hub with these adapters? Are you satisfied with the speed?

    Thanks a lot for sharing your Info!

    Cheers,
    Tobias

    • Hi Tobias,
      Thanks for your kind comment and welcome to the Community! It’s great one more member is enjoying building a homelab too 🙂
      Yes, I’m using a combination of both Anker and TP-Link USB adapters, all based on the Realtek 8152/3 chipset. I’m also using Jose Gomes’ drivers with 6.7 U3 and they work like a champ. With VLANs and vDS I can see full speed up to 980/990 MB/s. Backups with Veeam are simply flawless. Also the connections to the Synology storage units are working great. Pretty much 1Gbit iSCSI connections everywhere. For VMware 7 you might want to use the excellent USB network adapter drivers from VMware Fling. For my usage I don’t see myself upgrading to a 10Gbit network for a long time.
      No negatives so far and I can definitely recommend them.
      Out of curiosity where did you get your NUCs from? I might think of getting a new one as a little project for this xmas 🙂
      thanks,
      Michele

      • Ah, I see now. Your USB NICs are directly connected to all of your NUC ports. Because I only have 2 USB ports left on each NUC, I will give the dual-port USB NIC (Delock 62583) a try. They are based on Realtek and hopefully supported by the Fling driver (I am running ESXi 7.0.1).
        So I will use the internal Intel NIC for management, and the 2×2 USB NICs for the rest (vMotion, vSAN, …).

        The NUCs are from cyberport.de and cost me 240€ each when I ordered them.

        cheers,
        Tobias

  • Hello Michele, you state that you have some 50-ish VMs running on your NUCs.
    Can you provide configurations… # of CPUs, RAM, perhaps storage, OS and their function?
    I’m trying to determine if these will work going forward on vSphere 7 with VCSA tiny, etc.

    • Hi J Mcelhiney,

      Thanks for your comment. The great benefit of virtualization is efficient scheduling of resources! And VMware does a great job!
      My standard VM setup is 2x vCPU, 4GB RAM and a 40/60 GB disk for Windows/Linux/other OS. I have an average of 6/7 VMs per NUC. I have 4x 6i5SYH with 32GB RAM each and 4x 7i7DNHE with 64GB RAM each. All VMs use thin provisioning and sit on an iSCSI VMware datastore on Synology LUNs. End to end I use a 1Gbit network.
      The only exceptions are the VSA machines, which might require different specs (typically RAM up to 12GB). This is the case for the NetApp and Dell VSAs. Once the install is done these appliances rarely use more than 4GB. Of course it depends on how you use them.
      At the moment I have upgraded the homelab to VMware 7.0 U1c and I am having issues with the USB NIC drivers (from the VMware Fling site) causing a PSOD when multiple NICs on the same VMware vDS are used. Hope to get this one solved ASAP. Performance is great (for a homelab) with VMware 6.7 U3 and 7.0 U1c. If you want to know more about my homelab check https://domalab.com/build-homelab-setup-idea/

      Hope this helps,
      Michele

  • Hello Michele,

    I have a setup of two ESXi 6.7 hosts on Intel NUC7i7DNHE. When I run the lab, vCenter Server shows the error message “Host TPM attestation alarm” for both hosts. I think I need to change some settings in the BIOS, but I am not sure what exactly I have to do to fix this issue?

    Hope you have the time to answer the question.

    • Hi Rav,
      there are 2 options:
      1. Uncheck the option under Devices > Onboard Devices > Trusted Platform Module 2.0 Presence. Depending on the BIOS version, on some NUC units the alarm still shows up in the VMware vSphere Client.

      2. In VMware vSphere, just disable the “Host TPM attestation alarm” in the alarm definitions section at the top of the vCenter hierarchy.

      I went for the second option, as disabling some features in the NUC BIOS might affect others.

      BTW, I’m preparing articles covering the NUC BIOS and the upgrade to VMware 7.0.1.

      Hope this helps,
      Michele

      • I’m running 7.0 U1 on multiple NUC 7i5BNH units. The biggest concern is that the supplied ne1000 driver is VMware approved but fails, which results in having to downgrade this VIB. I’m also experiencing intermittent high CPU with vCenter. Please share any comments.

  • Hello Michele,

    I have almost the same setup as you: 2 Intel NUCs running vSphere vCenter Server 6.7, with the USB 3.0 Ethernet Adapter (NIC) driver installed on the ESXi servers. The problem is that when I shut down my lab and restart it the next day, I can see only the one default uplink connected; the USB NICs (5) do not show as connected and I have to connect them manually every time. Does that happen to you as well when you restart after a shutdown? If not, do you have any workaround? Thanks in advance.

    • Hi Rav,
      Thanks for your comment. I’m not sure which drivers you are using. For VMware 6.7 U3 and below I would recommend the drivers from Jose Gomes; I have articles on my blog. Going forward, VMware changed the driver model with “USB native” and a new set of drivers is required. Search for “VMware USB network driver fling” and make sure to install the latest, v1.8 I think. The Fling site includes info on how to make settings persistent across reboots. It didn’t work for me, so I created an ESXi script that loads settings and configurations at boot time. I’m planning to create an article and maybe even a tool to generate the script, but lately I’ve been very busy. Also, when adding multiple USB adapters make sure to set the option that disables the full scan at boot time, as otherwise it can cause a PSOD! All info is on the Fling site.
      Thanks,
      Michele

    • Check out William Lam’s post: ESXi does not natively support USB NICs and upon a reboot the USB NICs are not picked up until much later in the boot process, which prevents them from being associated with the VSS/VDS and their respective port groups. To ensure things are connected properly after a reboot, you will need to add something like the following to /etc/rc.local.d/local.sh, which re-links the USB NIC along with the individual port groups, as shown in the example below.

      https://williamlam.com/2016/11/usb-3-0-ethernet-adapter-nic-driver-for-esxi-6-5.html
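
      For reference, here is a minimal sketch of such a local.sh addition along the lines of that post, assuming a single USB NIC named vusb0, a standard switch called vSwitch1 and a port group called "VM Traffic" (all placeholder names to adapt to your own setup):

      # Wait for the USB NIC to show up after boot, then re-link it to the vSwitch and port group
      vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
      count=0
      while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
      do
          sleep 10
          count=$(( $count + 1 ))
          vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
      done

      if [ "${vusb0_status}" = "Up" ]; then
          esxcfg-vswitch -L vusb0 vSwitch1
          esxcfg-vswitch -M vusb0 -p "VM Traffic" vSwitch1
      fi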

      • Hi Rav, Borg,
        please refer to this link to download the latest USB drivers: https://flings.vmware.com/usb-network-native-driver-for-esxi#summary
        The instructions section includes more info on how to install.
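
        As a rough example, assuming the component ZIP has been copied to a datastore (the path is a placeholder and the filename below is the 7.0 U1 build mentioned earlier in this thread; newer releases will differ), the install on ESXi 7.x looks something like:

        # Install the USB NIC Fling component, then reboot the host
        esxcli software component apply -d /vmfs/volumes/datastore1/ESXi701-VMKUSB-NIC-FLING-40599856-component-17078334.zip
        reboot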

        In case of an ESXi PSOD when multiple USB NIC adapters are attached, use a command similar to the one below, replacing the placeholder values with the actual MAC addresses of the USB adapters:

        esxcli system module parameters set -p "usbBusFullScanOnBootEnabled=0 vusb0_mac=00:00:00:00:00:00 vusb1_mac=00:00:00:00:00:00 vusb2_mac=00:00:00:00:00:00 vusb3_mac=00:00:00:00:00:00" -m vmkusb_nic_fling

        Reboot and the NIC mappings should be persistent. If not, I would advise creating a script in /etc/rc.local.d/local.sh as per the Fling instructions site. It didn’t work for me, so I replaced it with a custom script which maps all NICs, their status and the iSCSI config with multiple datastores at every boot, and I have never had a problem since.

        Hope this helps,
        Michele

  • Outstanding. It is 2022 and this article still resonates. Going with a NUC alternative – the ASRock 4X4 BOX-4800U. Size, power consumption and ‘burn-in (vetted)’ are just a few reasons. The Intel NUCs are nice, but now we have Spectre v2? And I do not want the Intel ME. It would be nice to see some real benchmarks with updated microcode. It boggles the mind when the OS, and not the hardware manufacturer, deals with CPU microcode issues.
