DFW VMUG February 3 – Cohesity Truck

Come check out the Cohesity Data Center on wheels at the DFW VMUG on February 3. The invite is below.

Register Now!!!

Cohesity Truck

Test drive Cohesity technology and get answers straight from the experts!

What: Keep on Truckin’: Cohesity Mobile Demo Center

Where: Lucchese Bootmaker, 6635 Cowboys Way #125, Frisco, TX 75034

When: 2/3/20

Time: 11:30am-1:30pm

The 18-wheeler Cohesity truck has live demos showing how Cohesity solves the Mass Data Fragmentation problem for your backup, recovery, files, cloud, and more.

Agenda

  • 11:30 am – 12:30 pm | Truck Tour Open House
  • 12:30 pm – 1:00 pm | Cohesity presentation & Raffle – $300 Lucchese gift card!
  • 1:00 pm – 1:30 pm | Truck Tour Open House

We’ll have tables and chairs set up next to the truck so that folks can enjoy Cane Rosso pizza, then hop inside the truck for a private demo/tour during “Truck Tour Open House!”

VMware Project Pacific at Tech Field Day Extra 2019

This post has been in my draft folder for way too long. VMworld seems like just a short time ago, but so much time has already passed. It was a great time meeting up with old friends and making new ones. While I was there I was lucky enough to be part of the Tech Field Day crew and sit in on some of the presentations from VMware. One of the most exciting announcements of the week was Project Pacific, though I think it should have been called vPacific to match the rest of the VMware products. The project brings native support for Kubernetes into vSphere. I see this as a real game changer for the enterprise because it brings containers out of the development side over to the operations side. Kubernetes seems to be the next big thing, and I think with VMware behind it, it is about to really take off in the enterprise.

The session kicked off with Jared Rosoff (who I think is trying to look like Bob Ross), one of the co-creators of Project Pacific, talking about the challenges present in the modern data center: how can developers build rapidly, and how can Ops keep the infrastructure running optimally, all while keeping costs down, ensuring uptime, and keeping everything secure?

Kubernetes is more than just a container orchestration platform; it is a platform for all kinds of orchestration, and at the core of that is controlling desired state. Project Pacific set out to build a new kind of cloud platform, one able to deploy Kubernetes clusters, virtual machines, and serverless environments, and to change how developers use the cloud. One way it accomplishes this is by moving from managing at the VM level to managing at the logical application layer, by integrating the Kubernetes namespace into vSphere. Instead of managing thousands of VMs, you manage only at the application layer.

I am really excited to see what VMware is doing with Kubernetes. It is a powerful product, but in its current state it is hard for most enterprises to take advantage of it. VMware is in a great position to really push Kubernetes into the enterprise, and I look forward to being able to use it soon.

This has been just a short overview of what Project Pacific is. If you would like to learn more about this and other sessions, you can see the entire presentation at Tech Field Day.

Deploy OVA with PowerCLI

It has been a while since I have written anything for my blog. Life has become crazy busy for me the last few months. Between normal work and family time I do not have a lot of free time. The following meme really sums up my life right now.

I have had to deploy a few OVAs recently and wanted to give those who are a little less technical a quick, easy way to accomplish this. It’s not that complicated of a script, but it works and gets the job done. Using Read-Host allows user input into a PowerShell script, which can really expand PowerShell’s capabilities.

# Connect to vCenter
Connect-VIServer "Your vCenter"

# Prompt the user for the path to the OVA file
$vmSource = Read-Host -Prompt "File path to OVA"

# Cluster, host (pick one host in the cluster), and datastore to deploy to
$vmCluster = Get-Cluster -Name "Name of Cluster you want to use"
$vmHost = $vmCluster | Get-VMHost | Select-Object -First 1
$vmDatastore = Get-Datastore -Name "Name of Datastore to be used"

# Prompt the user for the name of the new VM
$vmName = Read-Host -Prompt "What to name VM?"

# Deploy the OVA
$vmHost | Import-VApp -Source $vmSource -Location $vmCluster -Datastore $vmDatastore -Name $vmName -Force
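If the OVA you are deploying exposes OVF properties (hostname, IP settings, network mappings, and so on), PowerCLI can fill those in too with Get-OvfConfiguration. Here is a rough sketch of how that might look; the property paths and names below are placeholders I made up, so dump the configuration first and use whatever your particular OVA actually defines.

# Read the OVF properties the OVA exposes
$ovfConfig = Get-OvfConfiguration -Ovf $vmSource

# See what properties are available for this OVA
$ovfConfig.ToHashTable()

# Fill in values as needed (these paths are examples only and vary per OVA)
# $ovfConfig.Common.hostname.Value = "myappliance01"
# $ovfConfig.NetworkMapping.VM_Network.Value = "VM Network"

# Hand the completed configuration to Import-VApp along with the rest of the parameters
$vmHost | Import-VApp -Source $vmSource -OvfConfiguration $ovfConfig -Location $vmCluster -Datastore $vmDatastore -Name $vmName -Force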

Etherchannel, LACP and VMware

Recently I have had some discussions about using LACP and static EtherChannel with VMware. The conversations have mainly revolved around how to get them set up and what the different use cases are. The biggest question was what exactly the difference is between the two. Are they the same thing with different names, or are they actually different?

EtherChannel and LACP are used to accomplish the same thing, but they do it in slightly different ways. Both form a link aggregation group (LAG) out of multiple physical links to connect networking devices together, combining the links without creating the kind of loop that Spanning Tree Protocol would otherwise have to block. So what is the real difference between the two? LACP negotiates the channel and has two modes, active and passive: as long as at least one side is set to active, the channel forms (two passive sides will never negotiate). Static EtherChannel does no negotiation at all; both sides simply have to be configured for the channel with matching settings, or no channel will form. Seems fairly simple but…

The reason all of this matters is that the virtual switches in VMware cannot form a loop, so by setting up LACP or EtherChannel you are mostly just adding operational cost and network complexity. It requires close coordination with the networking team to make sure LACP or EtherChannel is configured with exactly the same settings on both ends; if the distributed switch and the physical switch do not match, the link will not come up. LACP and EtherChannel do offer different forms of load balancing. This is accomplished with hashes based on things such as source IP or source MAC, and there are quite a few options to choose from. Once the hash is calculated, the packet is sent down the link that the hash selects. This creates a constraint: every packet in that flow goes down the same link and keeps using it until the link fails and traffic is forced onto another one. So it is entirely possible that two VMs communicating over a LAG push all of their traffic across just one link while the other links sit underutilized. Also keep in mind that LACP is only available on the distributed switch, which requires Enterprise Plus licensing.
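If you do end up running a static EtherChannel to a standard switch, the teaming policy on the vSphere side generally needs to be set to route based on IP hash so both ends agree. Here is a quick PowerCLI sketch, where the host and port group names are just examples to swap for your own:

# Set the port group teaming policy to IP hash to match a static EtherChannel
Get-VMHost "esxi01.lab.local" |
    Get-VirtualPortGroup -Name "VM Network" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP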

If you are able to use the distributed switch, it also supports Load-Based Teaming (LBT). LBT is the only true load balancing method: it sends traffic across all links based on the actual utilization of each link. This is a far superior load balancing feature, and if you are already paying for it you should be using it. There is also the myth that bonding two 10Gb links will give you 20Gb of throughput. As I discussed earlier, the limitation is that a vNIC can only use one link at a time; it cannot split a stream across two links for extra throughput. You only really gain the throughput advantage when multiple VMs are using the links.
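Flipping a distributed port group over to LBT is a one-liner in PowerCLI. A small sketch, with a port group name you would replace with your own:

# Set a distributed port group to "Route based on physical NIC load" (LBT)
Get-VDPortgroup -Name "DPortGroup-Prod" |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased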

As a best practice you should always use trunk ports down to your hypervisor hosts. This lets the host use multiple VLANs, as opposed to placing the switch ports in access mode with only one VLAN; customers who do that often end up re-configuring their network later on, and it is always a pain. I generally recommend setting each port on the physical switch to a standard trunk carrying all the VLANs you need. Then, on the virtual switch, build out all of your port groups and tag the traffic there with the VLAN needed for each port group. By doing this and using LBT you have a simple yet efficient design.
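Building out those tagged port groups on the distributed switch is also quick in PowerCLI. A short sketch, where the switch name, port group names, and VLAN IDs are made-up examples:

# Create VLAN-tagged port groups on the distributed switch
$vds = Get-VDSwitch -Name "Prod-VDS"
New-VDPortgroup -VDSwitch $vds -Name "App-VLAN20" -VlanId 20
New-VDPortgroup -VDSwitch $vds -Name "DB-VLAN30" -VlanId 30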

Now there is one caveat to all of this: vSAN does not support LBT, but it does support LACP, and if you have vSAN you are already licensed for the distributed switch. LACP has one advantage over LBT, and that is failover time, the time it takes for a dead link to be detected and traffic moved to another link. LACP fails over faster than LBT, and with vSAN that failover time could be the difference between a successful write and a failed one. That can limit downtime, although in production there hopefully will not be many links going offline.

VMworld 2018!!!

It is finally that time of year. The greatest time of year. It is time for VMworld!!! August 26-30 is the time when everyone packs up and spends a week in Las Vegas with some of the greatest minds in virtualization.

VMworld is a great opportunity to learn about some of the latest technology in the industry. The show floor will be packed with tons of vendors, some you have heard of and some you haven’t. You may find the vendor that has just the solution you have been looking for. All the vendors will have lots of information about the various products and solutions they offer, and it is a great idea to talk to as many as you can. It is always a chance to learn something new, and they usually have some great prizes and swag!

The sessions will be excellent as always, presented by some of the smartest people you will ever meet. You can take a look at all the sessions here. If you can’t make it to VMworld, most of the sessions will be posted on YouTube shortly after.

They will also be offering training sessions on the various VMware products, and if you are ready for it you can take one of the certification tests. Maybe finally get that VCP or VCAP you have been working toward.

The best part of all of this is the networking and the lifelong friends you will make. Through VMworld and various other social events I have met many great people and friends. It is a great community to be a part of, and I hope this year I will be able to meet up with as many people as I can at the various events.

Updating Prism Central to 5.0

Nutanix recently released the 5.0 code, and it has a lot of really nice new features. In a future post I plan on going over some of those features and detailing why they are so important.

Before you start upgrading all of your Nutanix hosts, you should first upgrade Prism Central to 5.0. This server gives you an overview of all your Nutanix clusters and some management capabilities. I am still fairly new to Nutanix, so I was not sure how to upgrade Prism Central. Usually you can upgrade from within the Nutanix console, but with this being a brand new release you have to download it directly from the website. Sometime soon it will be part of the automatic upgrade within the console.

At first I was a little confused about how to upgrade. You would think there would be a separate upgrade for Prism Central, since it was originally a separate install. Instead, the update is included in the AOS 5.0 download. Download the complete AOS 5.0 bundle and also download the upgrade metadata file.

Once you have everything downloaded, log in to Prism Central. Click the gear icon and then click Upload the Prism Central Binary. Point it to the AOS 5.0 download and the metadata file, click upgrade, and soon you will be running Prism Central 5.0.
