Deploying Many VMs From a Template

I recently needed to deploy a lot of VMs for a particular scenario. They all needed to be identical, but have unique names and be joined to the domain. There are a lot of ways to accomplish this, and one is to build a spreadsheet and manually input all the names. That would work, but it would not be optimal, as I wanted to automate as much as I could.

#Create many VMs from a Template
#Version 1.0

#Variables
$vm_count = Read-Host "How many VMs would you like to deploy?"
$template = "Template"
$customspecification = "Join Domain"
$ds = "datastore"
$folder = "VM folder"
$cluster = "cluster"
$vm_prefix = "desktop-"

#Loop
Write-Host "Starting Deployment"
1..$vm_count | ForEach-Object {
    $y = "{0:D1}" -f $_
    $vm_name = $vm_prefix + $y
    # Pick a random connected host in the cluster to spread the VMs out
    $esxi = Get-Cluster $cluster | Get-VMHost -State Connected | Get-Random
    Write-Host "Creation of VM $vm_name started"
    New-VM -Name $vm_name -Template $template -VMHost $esxi -Datastore $ds -Location $folder -OSCustomizationSpec $customspecification -RunAsync
}
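Because the clones are kicked off with -RunAsync, the script returns before the VMs are actually finished. Here is a minimal follow-up sketch of my own (it assumes the clone tasks show up in Get-Task under the name CloneVM_Task and that $vm_prefix is still set from above) to wait for the clones and then power everything on:

#Wait for any running clone tasks to finish, then power on the new VMs
Get-Task | Where-Object { $_.Name -eq "CloneVM_Task" -and $_.State -eq "Running" } | Wait-Task
Get-VM -Name "$vm_prefix*" | Start-VM -Confirm:$false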

Bitfusion at VMworld 2019 with TFDx

AI and ML are the next big thing in IT, and at one time it was all hand crafted by programmers. This has created great demand for processing power, which usually comes from a monolithic machine that is expensive and not used effectively. If you wanted to build a way for a computer to recognize a certain object, say the Internet's favorite, the cat, how would that be done? At one time that would have required a lot of manual work and a lot of processing power. Now it is all done automatically with devices such as GPUs and FPGAs through a process of training and inference.

Bitfusion was recently acquired by VMware. I had never heard of the company before the acquisition, but once I learned about them I quickly realized why VMware made the move. They have created a solution that overcomes all of the issues I mentioned above, and have done for the AI/ML world what VMware did for the storage world. Basically, this is vSAN for AI/ML: creating a large pool of resources from devices that are not in the same chassis but are on the same network. It runs in the software layer, in user space, which makes it very secure. The software breaks workloads out to run across multiple remote nodes so that all available resources are used effectively, with an overhead of less than 10%. I can see this being a very cost-effective way to bring more ML workloads into enterprises. It works by intercepting calls at the API layer, which is the “sweet spot” for Bitfusion to run. It can then transfer the data over the network to a remote device such as a GPU for processing, and the application does not even need to be aware of it. All of this is done with Bitfusion FlexDirect, and the following slides do a good job of explaining what FlexDirect is.

It also uses CUDA interception to catch the application's calls. The process then goes down the stack to a remote device over the network for processing. Bandwidth is not an issue with workloads like these; latency is the main concern, and the stack has been optimized to minimize it. Check out the above slide, as it does a great job of explaining how the entire process works.

GPUs can be really expensive, so to be cost effective they need to be used optimally. That is what makes Bitfusion such an interesting product: it can make optimal use of your hardware investment. I could see an organization using GPUs during the day for things such as VDI, while at night they would go idle. Jobs could be scheduled to run at night and fully use all the GPUs.

This is just an overview of what Bitfusion is capable of. If you would like to dive deeper into this, please watch the following embedded videos and check out TechFieldDay.com.

Deploy OVA with PowerCLI

It has been a while since I have written anything for my blog. Life has become crazy busy for me the last few months. Between normal work and family time I do not have a lot of free time. The following meme really sums up my life right now.

I have had to deploy a few OVAs recently, and wanted to give those who are a little less technical a quick, easy way to accomplish this. It's not a complicated script, but it works and gets the job done. Using Read-Host allows user input into a PowerShell script, which can really expand what a script can do.

Connect-VIServer "Your vCenter"

$vmSource = Read-Host -Prompt "File path to OVA"

$vmCluster = Get-Cluster -Name "Name of Cluster you want to use"

#Pick a single connected host from the cluster to run the import on
$vmHost = $vmCluster | Get-VMHost -State Connected | Get-Random

$vmDatastore = Get-Datastore -Name "Name of Datastore to be used"

$vmName = Read-Host -Prompt "What to name VM?"

$vmHost | Import-VApp -Source $vmSource -Location $vmCluster -Datastore $vmDatastore -Name $vmName -Force
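Some OVAs also expect properties such as network mappings or passwords to be filled in before the import will succeed. Here is a rough sketch of how that could be handled with Get-OvfConfiguration; the property path under $ovfConfig varies from OVA to OVA, so the VM_Network mapping below is only a placeholder:

$ovfConfig = Get-OvfConfiguration -Ovf $vmSource
#Placeholder mapping; inspect $ovfConfig to find the real property names for your OVA
$ovfConfig.NetworkMapping.VM_Network.Value = "Your Port Group"
$vmHost | Import-VApp -Source $vmSource -OvfConfiguration $ovfConfig -Location $vmCluster -Datastore $vmDatastore -Name $vmName -Force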

It Is Time For Cloud Field Day 5

I am super excited because it is almost time for Cloud Field Day, and as always it is going to be big! If you don't know anything about Cloud Field Day, it is an event where various vendors come together at a Top Secret location and show off what they have been up to. As always, the event will be streamed April 10-12 at http://www.techfieldday.com

There will be a lot of vendors at the event, but there are a few I am especially curious about. VMware will probably be showing off what they have been up to with VMware Cloud on AWS (VMC). Then there are others like Pure Storage; I have a lot of experience using their storage arrays, but I am curious what they are doing with the cloud. NGINX will also be presenting. They are a company that I know has a lot of customers, but I have never been able to spend much time with them, so it will be a great chance to learn more about them and see what they have in development.

As you can see, there will be a lot of vendors involved in this event, and they will definitely be showing a lot of great content. So make sure to tune in April 10-12 at TechFieldDay.com, and make sure to follow me on Twitter @brandongraves08. Reach out to me if there is anything you would like me to ask them.

Transferring Files Using PowerCLI

I recently had a unique issue: I needed to transfer some files to VMs running in a secure environment. This meant there was no network access in or out of the environment except the connection used by VMware Tools. There is a PowerShell cmdlet that can transfer files by utilizing VMware Tools, and it is very useful even when you are not working in a secure environment.

Copy-VMGuestFile -Source "C:\Path\" -Destination "C:\Path\" -VM VMName -LocalToGuest -GuestUser "domain\user" -GuestPassword "password"

If you want to retrieve a file instead, replace -LocalToGuest with -GuestToLocal.

One issue you may encounter is having multiple VMs with the same name. If they are in different folders, you can point the -VM parameter at the correct one by including the folder path:

-VM $(Get-VM -Name VMName -Location folder)
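Putting it together, here is a rough example of pulling a file back from one specific VM when two VMs share a name; the file paths, folder name, and credentials are only placeholders:

Copy-VMGuestFile -Source "C:\Path\results.log" -Destination "C:\Temp\" -VM $(Get-VM -Name VMName -Location "Production") -GuestToLocal -GuestUser "domain\user" -GuestPassword "password"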

My time at VMworld 2018

Wow, how time flies. VMworld was just last week, and it's hard to believe that it's already over. It was a very busy week with all of the announcements from VMware, visiting with all of the vendors, and seeing what new products they are offering. So much happens in such a short amount of time. I never like the process of traveling: all the time spent commuting to the airport just to wait a few hours to board the plane. With all the possible delays, I always fear that I will get trapped in the airport overnight. In the end it was all worth it, because it was an exciting week.


It was great being able to meet up with all my old friends from the vExpert community. It was great to see Jeremy Mayfield again; I have not seen him since .NEXT in Washington, DC. He lives quite far away from me, in the frozen white north of Alaska. It was great to have someone to hang out with during the week and grab some food at the casino buffet. It was also great to finally meet Thom Greene in person. It is always interesting talking to him because we have had such similar career paths.

When I had some free time I spent most of it at the VMTN booth in the Village.  There were a lot of people from the community hanging around it all day.  It was a great place to meet new people, and catch up with old friends. During the day vBrownBag had presentations going on from various people in the community.  It is one of the many things that makes VMworld a unique experience.

At night, after all the presentations were over, there was a Hackathon going on. I had a great time at it, even though I was not able to contribute much. There were some really smart people there, and it was amazing to see what the different groups were able to accomplish in just a few hours.

There were two big highlights for me. The first was the vExpert party at the Pinball Museum; it was great meeting fellow vExperts while enjoying great barbecue and playing some pinball. Then on Wednesday night, instead of going to the big party, I went to an Influencer dinner put on by the legend Howard Marks. It was at a great Thai place, and I met a lot of great people. I really had some impostor syndrome kicking in while I was there, because the room was full of famous people from the industry whose contributions to the community have really helped me in my career.

Tech Field Day Extra was at VMworld this year, and I was a delegate for two different sessions. Tuesday morning Dell EMC presented on its data protection portfolio; I enjoyed the presentation since I started my career using those data protection products. Wednesday afternoon Blue Medora, Barefoot Networks, and Netscout presented. They had a lot of great information, and as always it was a great event to be a part of. I am very thankful to the Tech Field Day crew for inviting me, and over the next few weeks I will be publishing more posts detailing the presentations from these great companies, so keep an eye on this site.

Countdown to VMworld


That time has finally arrived. It is Sunday night, and tomorrow I begin my travels to VMworld 2018, so before the event I wanted to make a quick post about some of the things that will be going on. The last time I was able to attend was 2016, which was my first and only time. I went in not knowing many people, but through the vExpert program and the vCommunity I was able to meet a lot of great people.

This year there are going to be a lot of great things coming from the event, and The Level Up project is one that is really important to me. It is a great way to get more active in the vCommunity. Check out the Twitter account here.

I will also be attending Tech Field Day Extra at VMworld this year. It will be going on Monday through Wednesday. The sessions I will be attending are on Tuesday with Dell EMC, and on Wednesday with Blue Medora, Netscout, and Barefoot Networks. Make sure to check it out here. There will be a lot of content that you will not want to miss.

Remember that this week will be a great time to go to some sessions and learn a few new things, but the real value of this conference is the networking. Make sure to take the time to meet new people and be an active part of the vCommunity. So if you see me, please say hi. Maybe we can grab a beer and talk about everything going on. And I hope that I will…


Nutanix .NEXT Europe

Nutanix .NEXT Europe is coming up soon, November 27-29 in London. I was lucky enough to make it to .NEXT in New Orleans this year, and it was a great experience. It was great meeting up with Paul Woodward and Ken Nalbone. Maybe one of these years I will also be able to attend the European event.

The sessions at .NEXT are top notch and cover a wide variety of subjects; there is something there for everyone. If you are planning on getting your NPX anytime soon, there is usually an NPX boot camp the week before the conference. It does mean a lot of time away from home, but it is well worth it. The number one reason to attend any conference is the networking with your peers. Over the years I have met many great people who have helped grow my career and become new friends. So if you get a chance, do what you can to attend. You will not regret it.

Nasuni at SFD 16

 


Nasuni was a new company to me, but they gave a great presentation and I really liked what they showed. They are providing a solution to a real problem that a lot of companies are running into. The cloud is a great answer to many of the problems IT departments are encountering, but going to the cloud is not always as easy as it looks. Nasuni provides a solution that simplifies the distributed NAS.

The first line from the Nasuni website says it best: “Nasuni is primary and archive file storage capacity, backup, DR, global file access, and multi-site collaboration in one massively scalable hybrid cloud solution.” It does this by providing a “unified NAS” on top of public clouds. It is essentially an overlay controlled through an on-premises appliance, either a VM on your current infrastructure or a Nasuni physical appliance, which keeps the data locally cached. This means that if internet access goes down, users can continue to access their storage, and when the connection is restored the data is synced back up.

There are no limits on the number of files or on file sizes. The files in a solution like this can be accessed by multiple users and change all the time, so to prevent conflicts a file is locked while a user is accessing it and remains locked until it has been synced back up with the cloud. Through continuous versioning, files can be rolled back to a version from before an incident such as a malware infection occurred. All the data is deduplicated and compressed for efficient storage utilization, and files can also be encrypted to prevent data theft. Managing a solution like this with multiple devices across many sites could be very complex and time consuming, but with Nasuni everything is managed from a single pane of glass, limiting operational costs.


 

Nasuni looks like a great product that really simplifies the migration to the cloud. By supporting the big players such as AWS, Azure, and GCP, they give customers plenty of options for which cloud they wish to utilize. With the caching device they ensure that data is always accessible, even if there are issues preventing access to the cloud, while limiting the amount of data that has to be transferred.

You can see this presentation from Nasuni and all the other presentations from Storage Field Day 16 at http://techfieldday.com/event/sfd16/

 
