In this quick article we’ll review the simple steps to upgrade Nutanix to the latest version. In particular I will be using Nutanix CE nested inside VMware, but the steps largely apply to the other Nutanix editions running on physical hardware as well. Upgrading the Nutanix platform is a very easy and simple process.
This is the beauty of the software-defined approach intrinsic to hyperconverged platforms, where Nutanix offers good options by making the “infrastructure invisible”.
The entire process can be run through either the command line or the GUI; the latter is definitely a lot easier to use. When upgrades are deployed through the Prism GUI they are called 1-click upgrades.
In this article we’ll see the steps to upgrade the Acropolis OS (AOS) and Acropolis Hypervisor (AHV).
Upgrade Nutanix Acropolis to latest version
Generally speaking, each node in the cluster runs AOS, and it is essential to have all nodes on the same version. While one node is being upgraded, all the other nodes keep running the VMs and the other operations. In addition, the Nutanix platform provides a live upgrade mechanism which runs in the background and allows the rest of the operations to carry on, so the cluster keeps running continuously. In our example the installation is a single-node cluster.
One nice feature of the AOS upgrade is that the new installer checks the RAM on the host node. When the host has more than 64 GB, the installer automatically increases the RAM of the Controller VM (CVM) running on that host in 4 GB increments, up to a maximum of 32 GB. With less than 64 GB of RAM on the host node, the installer does not change the memory settings.
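That rule can be sketched roughly as follows; the variable names and sample values are purely illustrative, not anything from the installer itself:

```shell
# Sketch of the CVM memory bump rule described above (illustrative only)
host_ram_gb=128   # example: host with more than 64 GB
cvm_ram_gb=24     # example: current CVM memory

if [ "$host_ram_gb" -gt 64 ] && [ "$cvm_ram_gb" -lt 32 ]; then
  cvm_ram_gb=$((cvm_ram_gb + 4))               # bump in a 4 GB increment
  [ "$cvm_ram_gb" -gt 32 ] && cvm_ram_gb=32    # never beyond the 32 GB cap
fi
echo "CVM memory after upgrade: ${cvm_ram_gb} GB"
```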
Another thing to take into consideration is that the AOS upgrade restarts the cluster, so when working with a single-node cluster the Prism environment might become unresponsive for a while.
As a last prerequisite, before proceeding let’s make sure the Nutanix cluster has access to the internet, and in particular to the following addresses:
These addresses allow the download of the updates. In our case we’ll provide the updates manually. At this point we are ready to start.
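If the cluster is meant to download the updates automatically, connectivity can be verified from a CVM with a quick curl loop. The hostname below is only a placeholder example, not the official list of update servers:

```shell
# Check reachability of the update servers from the CVM
# (replace the placeholder host with the actual addresses from the list above)
for host in download.nutanix.com; do
  if curl -sSf --max-time 10 "https://${host}" >/dev/null 2>&1; then
    echo "${host}: reachable"
  else
    echo "${host}: NOT reachable"
  fi
done
```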
From Home > Settings > Upgrade Software we can start the wizard to upgrade the Nutanix platform components. The first one to upgrade in this case is the Acropolis OS (AOS). We can enable the automatic download of the AOS software; new downloads will appear here.
Alternatively we can manually provide the binary and the pertinent metadata file, both downloadable from the Nutanix Next portal.
At this point let’s point the wizard at the metadata file and the binary, as shown in the screenshot below.
If the file is valid, the wizard will start the upload and show the relative version.
After the file upload we can run the procedure to upgrade Nutanix. As per the picture below we can choose between the Pre-upgrade and Upgrade Now options.
Let’s go for a Pre-upgrade check and confirm the selection.
The wizard will now emulate the upgrade steps and show the details. Depending on the hardware this procedure can take quite a while. For the impatient there’s a nice surprise under the Nothing to do link!
At this point, assuming we had no errors, we can repeat the same steps and proceed with the real upgrade.
In a similar fashion the screenshot below shows the Nutanix upgrade status.
And the screenshot below concludes this first part of the Nutanix AOS upgrade. The next step is the AHV component.
Upgrading Acropolis Hypervisor AHV
To upgrade Nutanix Acropolis Hypervisor (AHV) to the latest version we can re-run pretty much the same steps we have seen for the Nutanix AOS.
Let’s point to the correct AHV update binary and metadata files as per the screenshot below.
And upload these files to the Nutanix file system. By default they are stored in /home/nutanix/software_downloads/ on the current CVM.
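To double-check that the upload landed, the directory can be inspected over SSH on the CVM; this uses the default path mentioned above, and the exact subfolder layout may vary by component:

```shell
# List the uploaded upgrade packages on the CVM
# (default location per the text above; subdirectories differ per component)
ls -lhR /home/nutanix/software_downloads/
```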
As with AOS, in the case of the AHV hypervisor we also have the option to run a pre-upgrade action.
Let’s confirm and continue.
The wizard now runs the pre-upgrade checks and flags any issue that might be encountered. To make the process of upgrading Nutanix components even smoother, I would recommend upgrading the Nutanix Cluster Check (NCC) component and running it from the command line before any upgrade. We’ll dedicate an article to this topic.
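For reference, once NCC is up to date the full health-check suite can be launched from an SSH session on the CVM with:

```shell
# Run the complete NCC health-check suite from the CVM before upgrading
ncc health_checks run_all
```

The run takes a few minutes and produces a summary of passed, failed and warning checks.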
At this point we are ready for the actual upgrade of the Nutanix AHV component.
Let’s confirm and proceed.
The wizard runs a pre-upgrade and then an upgrade action, showing the steps as per the screenshot below.
Restarting Nutanix Cluster after upgrade
After upgrading Nutanix AHV we need to restart the Nutanix cluster before accessing the Prism GUI. As the screenshot shows, all cluster services are down. It couldn’t be any easier: all we have to do is open an SSH session to the Controller VM (CVM) and issue the command:
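From the CVM, the standard command to bring all cluster services up is cluster start; cluster status can then be used to confirm that every service reports UP:

```shell
# Start all Nutanix cluster services from the CVM...
cluster start

# ...and verify that every service is back up
cluster status
```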
After a few seconds all the Nutanix cluster services are up and running. We can now access the Prism GUI.
The process to upgrade Nutanix is very straightforward. At the time of writing the latest version of the Nutanix Community Edition is 2018.01.31.
If you are running Nutanix CE nested inside VMware, the upgrade option might not work as expected and produce errors. This is due to a configuration setting in the new version. The good news is that the community managed to identify the issue from the error logs and propose a solution. The solution works and will be addressed as an update in the next release of the Nutanix Community Edition.
The next screenshots show the issue. Essentially the Controller VM fails to start. The same issue can be reproduced with a clean install.
All we have to do to solve this issue is to edit the Default.xml file located in:
We can use the nano utility.
The first thing is to change the value on the 8th line:
- from: pc
- to: pc-i440fx-rhel7.2.0
Next, add <pmu state='off'/> under the <pae/> option as per the screenshot below.
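After both edits, the relevant fragments of Default.xml should look roughly like the sketch below; the arch value and surrounding lines are illustrative, and only the two touched settings matter:

```xml
<!-- machine type changed from "pc" to the RHEL 7.2 i440fx type -->
<type arch='x86_64' machine='pc-i440fx-rhel7.2.0'>hvm</type>

<!-- pmu disabled, added right under the pae option -->
<pae/>
<pmu state='off'/>
```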
At this point we can save the file and exit.
At the login prompt let’s enter “install” and repeat the procedure. The end result now looks more promising!
In fact we can now list the running CVM instance with the command:
“virsh list --all”
A big thank you to all the users in the Nutanix Community (free registration required to access the link https://next.nutanix.com/discussion-forum-14/impossible-to-deploy-ce-5-5-nested-on-vmware-vsphere-6-5-27459) who spotted and solved this issue, so we can get the latest goodies from the Nutanix CE platform in our home lab!