Deploy OVA with PowerCLI

It has been a while since I have written anything for my blog. Life has become crazy busy for me over the last few months; between normal work and family time, I do not have a lot of free time. The following meme really sums up my life right now.

I have had to deploy a few OVAs recently, and I wanted to give those who are a little less technical a quick and easy way to accomplish this. It's not that complicated a script, but it works and gets the job done. By using Read-Host, the script accepts user input, which can really expand a PowerShell script's capabilities.

Connect-VIServer "Your vCenter"

$vmSource = Read-Host -Prompt "File path to OVA"

$vmCluster = Get-Cluster -Name "Name of Cluster you want to use"

$vmHost = $vmCluster | Get-VMHost | Select-Object -First 1

$vmDatastore = Get-Datastore -Name "Name of Datastore to be used"

$vmName = Read-Host -Prompt "What to name VM?"

$vmHost | Import-vApp -Source $vmSource -Location $vmCluster -Datastore $vmDatastore -Name $vmName -Force
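If the OVA ships with configurable OVF properties (network mappings, appliance passwords, and so on), those can be filled in before the import with Get-OvfConfiguration. This is a minimal sketch on top of the script above; the property path and portgroup name are placeholders, since the properties an OVA exposes vary by appliance:

$ovfConfig = Get-OvfConfiguration -Ovf $vmSource

# List the properties this particular OVA actually exposes
$ovfConfig.ToHashTable()

# Example only: map the appliance's network to an existing portgroup
$ovfConfig.NetworkMapping.VM_Network.Value = "Your Portgroup"

$vmHost | Import-vApp -Source $vmSource -OvfConfiguration $ovfConfig -Location $vmCluster -Datastore $vmDatastore -Name $vmName -Force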

It Is Time For Cloud Field Day 5

I am super excited because it is almost time for Cloud Field Day, and as always it is going to be big! If you don't know anything about Cloud Field Day, it is an event where various vendors come together at a top-secret location and show off what they have been up to. As always, the event will be streamed April 10-12 at http://www.techfieldday.com.

There will be a lot of vendors at the event, but there are a few whose progress I am especially excited to see. VMware will probably be showing off what they have been doing with the Amazon cloud through VMC. Then there are others like Pure Storage; I have a lot of experience using their storage arrays, but I am curious what they are doing with the cloud. NGINX will also be presenting, and they are a company that I know has a lot of customers but that I have never been able to spend much time with. So it will be a great time to learn more about them and see what they have in development.

As you can see, there will be a lot of vendors involved in this event, and they will definitely be showing a lot of great content. So make sure to tune in April 10-12 at TechFieldDay.com, and follow me on Twitter @brandongraves08. Reach out to me if there is anything you would like me to ask them.

Transferring Files Using PowerCLI

I recently had a unique issue: I needed to transfer some files to VMs running in a secure environment. This meant there was no network access in or out of the environment except the channel used by VMware Tools. There is a PowerCLI cmdlet that can transfer files by utilizing VMware Tools, and it is very useful even when you are not working in a secure environment.

Copy-VMGuestFile -Source "C:\Path\" -Destination "C:\Path\" -VM VMName -LocalToGuest -GuestUser "domain\user" -GuestPassword "password"

If you want to retrieve a file instead, replace -LocalToGuest with -GuestToLocal.

One issue you may encounter is having multiple VMs with the same name. If they are in different folders, you can point to the correct one by specifying the folder path.

-VM $(Get-VM -Name VMName -Location folder)
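Putting the pieces together, pulling a log file back from a specific VM in a specific folder might look like the sketch below. The paths, VM name, and folder name are all examples:

Copy-VMGuestFile -Source "C:\Logs\app.log" -Destination "C:\Temp\" -VM (Get-VM -Name VMName -Location (Get-Folder -Name folder)) -GuestToLocal -GuestUser "domain\user" -GuestPassword "password"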

My time at VMworld 2018

Wow, how time flies. VMworld was just last week, and it's hard to believe that it's already over. It was a very busy week with all of the announcements from VMware, visiting with all of the vendors, and seeing what new products they are offering. So much happens in such a short amount of time. I never like the process of traveling: all the time spent commuting to the airport just to wait a few hours to board the plane, and with all the possible delays I always fear that I will get trapped in the airport overnight. In the end it is all worth it, because it was an exciting week.

Recap

It was great being able to meet up with all my old friends from the vExpert community. I got to see Jeremy Mayfield again; I had not seen him since .NEXT in Washington, DC. He lives quite far away from me, in the frozen white north of Alaska, and it was great to have someone to hang out with during the week and grab some food at the casino buffet. I also finally got to meet Thom Greene in person. It is always interesting talking to him because we have had such similar career paths.

When I had some free time, I spent most of it at the VMTN booth in the Village. There were a lot of people from the community hanging around it all day, and it was a great place to meet new people and catch up with old friends. During the day, vBrownBag had presentations going on from various people in the community. It is one of the many things that makes VMworld a unique experience.

At night, after all the presentations were over, there was a Hackathon going on. I had a great time at it, even though I was not able to contribute much. There were some really smart people there, and it was amazing to see what the different groups were able to accomplish in just a few hours.

There were two big highlights for me. The first was the vExpert party at the Pinball Museum, where I got to meet fellow vExperts while enjoying great barbecue and playing some pinball. Then on Wednesday night, instead of going to the big party, I went to an influencer dinner put on by the legend Howard Marks. It was at a great Thai place, and I met a lot of great people. I really had some impostor syndrome kicking in while I was there, because the room was full of famous people from the industry whose contributions to the community have really helped me in my career.

Tech Field Day Extra was at VMworld this year, and I was a delegate for two different sessions. Tuesday morning, Dell EMC presented on its data protection portfolio; I enjoyed the presentation since I started my career using those data protection products. Wednesday afternoon, Blue Medora, Barefoot Networks, and NETSCOUT presented. They had a lot of great information, and as always it was a great event to be a part of. I am very thankful to the Tech Field Day crew for inviting me. Over the next few weeks I will be publishing more posts detailing the presentations from these great companies, so keep an eye on this site.

Countdown to VMworld


That time has finally arrived. It is Sunday night, and tomorrow I begin my travels to VMworld 2018. Before the event I wanted to make a quick post about some of the things that will be going on. The last time I was able to attend was 2016, which was my first and only time. I went in not knowing many people, but through the vExpert program and the vCommunity I was able to meet a lot of great people.

This year there are going to be a lot of great things coming from the event, and the Level Up project is one that is really important to me. It is a great way to get more active in the vCommunity. Check out its Twitter here.

I will also be attending Tech Field Day Extra at VMworld this year. It will be going on Monday through Wednesday. The sessions I will be attending are on Tuesday with Dell EMC, and then on Wednesday with Blue Medora, NETSCOUT, and Barefoot Networks. Make sure to check it out here. There will be a lot of content that you will not want to miss.

Remember that this week will be a great time to go to some sessions and learn a few new things, but the real value from this conference is the networking.  Make sure to take the time and meet new people and be an active part of the vCommunity.  So if you see me please say hi.  Maybe we can grab a beer and talk about everything going on.  And I hope that I will…


Nutanix .NEXT Europe

Nutanix .NEXT is coming up soon on November 27-29 in London.  I was lucky enough to make it to .NEXT in New Orleans this year and it was a great experience.  It was great meeting up with Paul Woodward and Ken Nalbone.  Maybe one of these years I will also be able to attend the one in Europe.

The sessions at .NEXT are top notch and cover a wide variety of subjects; there is something there for everyone. If you are planning on getting your NPX anytime soon, there is usually an NPX boot camp the week before the conference. It does mean a lot of time away from home, but it is well worth it. The number one reason to attend any conference is the networking with your peers. Over the years I have met many great people who have helped grow my career, and I have created new friendships. So if you get a chance, do what you can to attend. You will not regret it.

Nasuni at SFD 16

 


Nasuni was a new company to me, but they had a great presentation and I really liked what they presented. They are providing a solution to a real problem that a lot of companies are running into. The cloud is a great answer to many of the problems IT departments encounter, but moving to the cloud is not always as easy as it looks. Nasuni provides a solution that simplifies the distributed NAS.

The first line from the Nasuni website says it best: “Nasuni is primary and archive file storage capacity, backup, DR, global file access, and multi-site collaboration in one massively scalable hybrid cloud solution.” It does this by providing a “unified NAS” on top of public clouds. It is essentially an overlay controlled by an on-premises appliance, either a VM on your current infrastructure or a Nasuni physical appliance, which keeps the data locally cached. In the event internet access goes down, users can continue to access storage, and when it is restored the data is synced back up.

There are no limits on the number of files or on file sizes. Files in a solution like this can be accessed by multiple users and change all the time, so to prevent issues, a file is locked while a user is accessing it, and it stays locked until it has been synced up with the cloud. Through continuous versioning, in the event of malware or similar incidents, files can be rolled back to a version from before the incident occurred. All the data is deduped and compressed for effective storage utilization, and files can also be encrypted to prevent data theft. Managing a solution like this with multiple devices across many sites could be very complex and time consuming, but with Nasuni everything is managed from a single pane of glass, limiting operational costs.


 

Nasuni looks like a great product that really simplifies migration to the cloud. By supporting the big players such as AWS, Azure, and GCP, they give customers plenty of options for which cloud they wish to utilize. With the caching device, they ensure that data is always accessible, even if there are issues reaching the cloud, while limiting the amount of data that has to be transferred.

You can see this presentation from Nasuni and all the other presentations from Storage Field Day 16 at http://techfieldday.com/event/sfd16/

 

NetApp OnCommand Insight

NetApp presented on its product OnCommand Insight at Storage Field Day 16 this year. What made the presentation unique was that it was about an analytics and monitoring tool, the only such presentation at the event. OnCommand Insight is an on-premises appliance that can be set up as a VM in your environment. Once it is fully deployed, it will start reporting information about your environment. Unlike other similar products, it only reports on what it sees in your data center, as opposed to comparing your environment to other environments.


OnCommand Insight is always watching your environment, and if an issue arises it can be set up to automatically generate a ticket and alert the proper team. It exposes a RESTful API, so whatever needs to be done can be scripted out, and licensing is done by raw capacity.

They also spoke about the product Cloud Insights. It is not a direct replacement for OnCommand Insight, but it takes many of its features and builds on top of them. Cloud Insights is designed for the modern hybrid data center: it can monitor both what is on premises and what is running in the cloud. As more and more companies go hybrid, it is imperative to have a tool that can monitor both and give recommendations on where to run a workload.

One of my favorite features is that it is agnostic about what it monitors. Monitoring is done via plugins, and there is a large repository where you can download more. It reminds me a lot of EMC ViPR SRM, which could monitor more than just EMC products, but NetApp has really gone a step further in its capabilities.

Take a look at the presentation from NetApp and the rest of the Storage Field Day 16 presentations here.

Etherchannel, LACP and VMware

Recently I have had some discussions about using LACP and static etherchannel with VMware. The conversations have mainly revolved around how to get it set up and what the different use cases are. The biggest question was what exactly the difference between the two is. Are they the same thing with different names, or are they actually different?


Etherchannel and LACP are used to accomplish the same thing, but each does it in a slightly different way. Both form a link aggregation group (LAG) out of multiple physical links to connect networking devices together, providing redundancy and aggregate bandwidth without creating a loop that Spanning Tree Protocol would otherwise have to block. So what is the real difference between the two? LACP negotiates the channel and has two modes, active and passive: a channel forms as long as at least one side is active (active/active and active/passive both work, but passive/passive will not). Static etherchannel does no negotiation at all; both sides are simply configured on, and if the two ends do not match, you get forwarding problems rather than a cleanly failed negotiation. Seems fairly simple, but…

The reason all of this matters is that VMware virtual switches cannot form a loop, so by setting up LACP or etherchannel you are just increasing your operational cost and the complexity of the network. It requires greater coordination with the networking team to ensure that LACP or etherchannel is set up with the exact same settings on both ends; the distributed switch and the physical switch must use the same settings or a link will not be established. LACP and etherchannel offer different forms of load balancing, accomplished by hashing on attributes such as source IP or source MAC; there are quite a few options to choose from. Once the hash is computed, the packet is sent down the link that the hash selects. This creates a constraint, because every packet in that flow is sent down the same link and will keep using it until the link fails and traffic is forced onto another one. So it is possible that if two VMs are communicating over a LAG, all of their traffic could be going across just one link, leaving the other links underutilized. Also note that LACP is only available on the Distributed Switch, which requires Enterprise Plus licensing.

If you are able to use the Distributed Switch, it also supports Load Based Teaming. LBT is the only true load balancing method: it distributes traffic across all links based on the actual utilization of each link. This is a far superior load balancing feature, and if you are already paying for it, you should be using it. There is also the myth that bonding two 10Gb links will give you 20Gb of throughput. As discussed earlier, the limitation is that a vNIC can only utilize one link at a time; a single stream cannot be split across two links for increased throughput. You only really gain the aggregate throughput when multiple VMs are utilizing the links.


As a best practice, you should always use trunk ports down to your hypervisor hosts. This allows the host to utilize multiple VLANs, as opposed to placing the switch ports into access mode and allowing only one VLAN; customers who do that often end up re-configuring their network later on, and it is always a pain. I generally recommend setting up each port on the physical switch as a standard trunk carrying all the VLANs that you need. Then, on the virtual switch, build out all of your portgroups and tag the traffic there with the VLAN needed for each portgroup. By doing this and using LBT, you have a simple yet efficient design.
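As a rough PowerCLI sketch of that design (the switch name, portgroup name, and VLAN ID are examples), the portgroup is created with its VLAN tag and the teaming policy is switched to route based on physical NIC load, which is LBT:

$vds = Get-VDSwitch -Name "Your-VDS"

# Create a tagged portgroup for one of the VLANs carried on the trunk
New-VDPortgroup -VDSwitch $vds -Name "App-VLAN100" -VlanId 100

# Change the portgroup's teaming policy to Load Based Teaming
Get-VDPortgroup -VDSwitch $vds -Name "App-VLAN100" |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased

Repeat the portgroup creation for each VLAN on the trunk, and no configuration beyond the trunk itself is needed on the physical switch side.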

Now there is one caveat to all of this: vSAN does not support LBT, but it does support LACP, and if you have vSAN you are licensed for the Distributed Switch. LACP has one advantage over LBT, and that is failover time: the time it takes for a dead link to be detected and traffic moved to another link. LACP failover is faster than LBT's, and that failover time could mean the difference between a successful and a failed write with vSAN, limiting downtime. Hopefully, though, a production environment will not have many links going offline.
