I’m excited to write a new article on how to upgrade vSphere to the latest version, 6.5 Update 1 (currently on the “.g” release). While I was running some quick tests in my home lab, I was greeted by the news that VMware had published, on the very same day, the updated versions of both VMware vSphere 6.7 and VMware vCenter 6.7.
Of course, apart from the initial excitement about the latest releases, this was also an opportunity to learn about upgrade paths. My home lab is based on four Intel NUC 6i5SYH units, and unfortunately this hardware is no longer officially supported according to the VMware Hardware Compatibility List.
To be fair, to a certain extent this was true for previous versions of VMware vSphere as well. In reality I have been running vSphere ESXi 6.0 for a long time and never had significant issues to deal with. This is possible because the “inbox” drivers in the standard image already support the hardware in the Intel NUC kits.
The only exception was the additional network cards. Since the Intel NUC 6i5SYH ships with a single physical network card (the other one is wireless), I used the excellent drivers from Jose Gomes to install additional USB network adapters, pumping up each Intel NUC to 5 separate physical NICs. Not bad for a home lab. In addition, they have been working consistently, and that’s a big bonus.
So, coming back to the upgrade task, I was really tempted to see if I could upgrade not just VMware vCenter to the latest version but also VMware vSphere ESXi. I have collected the most significant screenshots in this article, hoping to help anyone willing to go through the same process. The main benefit of upgrading vSphere in place is of course keeping the same security and network configurations; alternatively, it is also possible to do a clean install and reconfigure these again.
One really important thing I would like to mention, apart from the obvious backup of VMs, is to make sure the vSphere host is not using the VMware software iSCSI adapter to connect to the storage. My recommendation is to temporarily use the built-in physical NIC as an uplink. The reason is that VMware vSphere 6.5 creates a “vmhba64” adapter to connect to iSCSI targets instead of “vmhba32”, and this can lead to communication issues since the “new” iSCSI software adapter is not known. I learned this the hard way and will document the recovery steps in a separate article.
On a different note, it is a good exercise to better understand how to query and install drivers when these are not included in the default image. It is also rewarding to find “Community” drivers in additional depots, so hopefully VMware vSphere 6.5 will not be the very last version we can install on our beloved Intel NUCs.
At this point we are ready to start. Let’s take a look at how to:
- query hardware devices
- discover drivers
- inject these into a custom image
- upgrade vSphere on Intel NUC
How to upgrade vSphere to the latest 6.5 Update 1
The first thing we want to do is understand the hardware devices we can see on our vSphere host, an Intel NUC in this case. Let’s connect to a host over SSH and run the command
“lspci”
which will show all the installed hardware devices. Our attention of course goes first to the Mass Storage Controller “vmhba0” and the built-in network card “vmnic0”.
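As a side note, on ESXi the PCI listing annotates each device with its vmkernel alias in square brackets, which makes it easy to filter out only the devices that received a “vmhba” or “vmnic” name. The sketch below runs against a hypothetical two-line sample (device descriptions are illustrative), so it can be tried on any machine:

```shell
# Hypothetical, abridged lspci-style output from an ESXi host
sample='0000:00:17.0 Mass storage controller: Intel Corporation AHCI Controller [vmhba0]
0000:00:1f.6 Network controller: Intel Corporation Ethernet Connection I219-V [vmnic0]'

# Keep only lines carrying a vmkernel alias, print PCI address and alias
echo "$sample" | grep -E '\[vm(hba|nic)[0-9]+\]' | awk '{print $1, $NF}'
# → 0000:00:17.0 [vmhba0]
# → 0000:00:1f.6 [vmnic0]
```

On a real host, pipe the actual command output into the same filter instead of the sample variable.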
If we are using iSCSI storage we can run a different command to discover its adapter:
“esxcli iscsi adapter list”
which lists the adapters used to connect to iSCSI storage. In this case there is the default one, called “vmhba34”.
If we want to understand which hardware device a driver is operating with, we can use the “vmkchdev” command. In the example below we limit the output to show just the information for the Mass Storage Controller “vmhba0”.
As per the screenshot below we can run
“vmkchdev -l | grep vmhba0”
This command returns four important pieces of information:
- Vendor ID 8086
- Dev ID 9d03
- SubVendor ID 8086
- SubDev ID 2063
These numbers are really important because they uniquely identify a hardware device, and we can use this information to check for native support in the VMware vSphere standard image or to look for the appropriate drivers.
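The four IDs can also be pulled out of the command output programmatically. The sketch below assumes a typical one-line format (PCI address, Vendor:Device, SubVendor:SubDevice, module, alias) and uses the example values from this article as a hypothetical sample line:

```shell
# Hypothetical "vmkchdev -l" output line for vmhba0, with the IDs from the article
line="0000:00:17.0 8086:9d03 8086:2063 vmkernel vmhba0"

# Field 2 is VendorID:DeviceID, field 3 is SubVendorID:SubDeviceID
vid_did=$(echo "$line" | awk '{print $2}')
svid_sdid=$(echo "$line" | awk '{print $3}')

echo "Vendor:Device       = $vid_did"
echo "SubVendor:SubDevice = $svid_sdid"
```

These are exactly the four values to search for in the VMware Compatibility Guide.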
The next step is to verify these numbers against the VMware Compatibility Guide. As the screenshot shows, the Vendor and Device IDs are both supported. Unfortunately, this is not true for the SubVendor and SubDevice IDs. That means the built-in drivers might work, but are potentially not fully supported.
To be fair, the ones we are after are the second entry in the row. In theory they were not supported in versions prior to 6.5 either; in reality they have been working fine. So “not supported” does not necessarily mean “not working”. It is also a matter for VMware to keep this list clean and updated with current and new enterprise-grade hardware. For a home lab environment, my experience has been very rewarding.
In a production environment, of course, things are different: you want to make sure not only that the current hardware is fully supported by VMware, but also to use the customised vSphere ESXi images coming directly from vendors like Cisco, Dell EMC and HPE for their hardware, just to name a few.
Similarly, we can also list the physical network cards we use as uplinks. From a different session on a host with the same type and configuration, running
“esxcli network nic list”
will list all physical NICs with their “vmnic” names, drivers and other info. As we can notice from this screenshot, we have a few USB network cards working with the “r8152” driver.
Let’s find out more about the built-in driver by running
“esxcli network nic get -n vmnic0 | grep -A 3 Driver”.
And the same applies also for the custom additional Driver:
“esxcli network nic get -n vmnic32 | grep -A 3 Driver”.
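If you want just the driver name and version rather than the full `grep -A 3` context, a small awk filter over the “Driver Info” section does the job. The sample output below is abridged and hypothetical (only the Driver and Version fields matter here), so the sketch runs without an ESXi host:

```shell
# Abridged, hypothetical sample of: esxcli network nic get -n vmnic32
sample='Driver Info:
      Bus Info: usb
      Driver: r8152
      Version: 2.0.0
      Firmware Version: n/a'

# Print only the Driver and Version values; "Firmware Version" is excluded
# because the pattern anchors "Version:" right after leading whitespace
echo "$sample" | awk -F': ' '/^[[:space:]]*Driver:/ {print "driver="$2} /^[[:space:]]*Version:/ {print "version="$2}'
# → driver=r8152
# → version=2.0.0
```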
Now for this one let’s make sure we get the latest driver version.
Similarly to what we have done before, we can now check the VMware HCL against the network devices. In my case, for the Intel NUC 6i5SYH, this is the “Intel Ethernet Connection I219-V”.
Now that we have all the info about the most important drivers, the next step is to create an ISO we can use to upgrade vSphere, including the missing drivers, in particular the ones for the USB network adapters.
To do this we can use the excellent ESXi-Customizer-PS PowerShell script from the V-Front project, which leverages VMware PowerCLI to create the installation ISO.
To create the custom ISO I have a folder structure like the following:
- \NUC\VMware ESXi 6.5 Offline Bundle.zip
Let’s download the VIB and place it in the appropriate folder. Next, download the latest standard “Offline Bundle” image from the VMware website.
As per the ESXi-Customizer-PS instructions, let’s create the custom image with a command like this:
“ESXi-Customizer-PS-v2.5.1.ps1 -izip 'Offline Bundle.zip' -pkgDir 'your path' -ipvendor 'as desired, optional field'”
Depending on the execution policy, we may need to confirm that we want to run this script and continue.
As soon as the custom ISO file is created, the next step is to make it bootable; we can use the Rufus utility to achieve this. We need a USB key of at least 1 GB. It will be formatted and the content of the ISO copied onto it. Should a warning appear about an old version of the bootable files, let’s download the new one and proceed.
We are now ready to upgrade vSphere. We can boot from the USB key, quickly pressing F10 to choose the boot option.
Let’s select the USB disk and start the VMware vSphere ESXi installer.
The wizard will detect the previous installation. We can choose the option to upgrade the existing installation, which also preserves the VMFS datastore.
When the install is completed let’s remove the USB key and press Enter to reboot.
The procedure to upgrade vSphere is completed successfully.
As a next step we can verify the installed version with this command:
“vmware -vl”
which will show something similar to the output below.
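For scripted checks across several hosts, the version and build can be extracted from that output. The sample below is hypothetical (the build number shown is the one I would expect for 6.5 Update 1, so treat it as illustrative):

```shell
# Hypothetical output of "vmware -vl" after the upgrade
sample='VMware ESXi 6.5.0 build-5969303
VMware ESXi 6.5.0 Update 1'

# Extract just the version and build token from the first line
echo "$sample" | head -n 1 | awk '{print $3, $4}'
# → 6.5.0 build-5969303
```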
Since vSphere 6.5 introduces a new driver model for USB adapters, we need to disable the new module, which otherwise prevents our custom driver from loading the additional USB network cards.
All we need to do is run a command to disable the module and reboot.
“esxcli system module list | grep -i usb”
returns the name of the module, and
“esxcli system module set -m vmkusb -e FALSE”
disables the vmkusb module.
Finally, to make this change effective we need to reboot the host with a simple
“reboot”
Once back in the vSphere console, if we run the command to list the physical NICs again, we’ll see something similar to this:
We can now use the additional NICs to divide the traffic among separate virtual switches.
I hope this article is helpful for those willing to learn more and upgrade vSphere in their home lab!
Hi thanks for the tutorial. Just wanted to point out a typo that caught me out:
As pr screenshot below we can run
“vmchdev -l | grep vmhba0”
I think this should be vmkchdev -l |grep vmhba0
I was trying to run that command but then noticed the screenshot was a different command.
thanks for your comment! I have tried both commands again and they look fine, with the same result. Now tested on 6.5 U1.
Let me know how you are gettin’ on. Also I’m planning on upgrading to 6.7 directly as it looks like the current hardware should work fine.
Watch this space 🙂