I manage a few Nutanix clusters, and they are all-flash, so the normal tiering of data does not apply. In a hybrid configuration, which has both spinning and solid-state drives, the SSDs are used as a read and write cache, with "cold" data moved down to the slower spinning drives as needed. The other day one of the nodes' local drives was running out of free space, and it made me wonder: what happens if they fill up?
Nutanix tries to keep everything local to the node. This provides low-latency reads, since data never has to cross the network, but writes still do. The reason is that you want at least two copies of the data: one local and one remote. So when a write happens, it is committed synchronously to the local node and to a remote node. Those remote copies are spread across all nodes in the cluster, so in the event of a lost node, every remaining node can participate in rebuilding its data.
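To make that concrete, here is a toy sketch of replica placement under replication factor 2. The function name, node names, and placement logic are all hypothetical illustrations, not Nutanix's actual implementation: one copy stays on the writing node for locality, and the other lands on some other node in the cluster.

```python
import random

def place_replicas(write_node, nodes, rf=2):
    """Hypothetical sketch of RF2 placement: one replica stays on the
    node issuing the write (data locality), the rest go to other nodes."""
    remotes = [n for n in nodes if n != write_node]
    return [write_node] + random.sample(remotes, rf - 1)

nodes = ["node-a", "node-b", "node-c", "node-d"]
targets = place_replicas("node-a", nodes)
print(targets[0])  # first copy is always local: node-a
```

Because the remote copy is chosen from the whole cluster rather than a fixed partner node, a rebuild after a node failure can pull data from many nodes at once.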
When the drives do fill up, nothing dramatic happens. Everything keeps working and there is no downtime. The local drives become read-only, and new writes are sent to at least two different remote nodes, preserving data redundancy.
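A toy sketch of that fallback behavior, again with a hypothetical placement function and a made-up free-space threshold: while the local drives have room, one copy stays local; once they are effectively full, every copy goes to a remote node.

```python
import random

def place_replicas(write_node, free_gb, nodes, rf=2, min_free_gb=10):
    """Hypothetical sketch: skip a full local disk and write all copies
    remotely. Threshold and data structures are illustrative only."""
    remotes = [n for n in nodes if n != write_node]
    if free_gb[write_node] >= min_free_gb:
        # normal case: one copy local, the rest remote
        return [write_node] + random.sample(remotes, rf - 1)
    # local drives full (read-only): every copy lands on a remote node
    return random.sample(remotes, rf)

nodes = ["node-a", "node-b", "node-c", "node-d"]
free_gb = {"node-a": 2, "node-b": 500, "node-c": 500, "node-d": 500}
targets = place_replicas("node-a", free_gb, nodes)
print(targets)  # two remote nodes; node-a is excluded
```

The VM keeps running either way; the cost of a full local tier is that its reads and writes now depend on the network.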
To check the current utilization of your drives, look under Hardware > Table > Disk.
So it is best practice to "right size" your workloads: try to make sure each VM's storage needs can be met by its node's local drives. HCI is a great technology; it just has a few different caveats to consider when designing for your workloads.
If you want a deeper dive, check out Josh Odgers' post on the topic.