Troubleshooting Deploying Server 2019 using Packer

I am working on rebuilding my homelab and was looking for a way to quickly automate building out my templates in my VMware environment. I have used Packer in the past for this process, but it's been so long that I had forgotten how to do anything. With a quick Google search I found https://github.com/guillermo-musumeci/packer-vsphere-iso-windows-v2, which had everything ready to be customized for my environment. When I ran packer build .\win2019.json I received the following error.

I verified I had all my credentials correct in credentials.json and that I was following all of the instructions. After some troubleshooting I found that I needed to pass the variable file with the -var-file flag. The completed command was packer build -var-file .\credentials.json .\win2019.base.json.
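For reference, a Packer variable file is just a flat JSON map of variable names to values. Here is a minimal sketch of what a credentials.json might look like; the variable names and values below are illustrative and must match the variables the template actually declares.

{
  "vcenter-server": "vcenter.lab.local",
  "vcenter-username": "administrator@vsphere.local",
  "vcenter-password": "YourPasswordHere"
}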

Cohesity Event in Dallas

Cohesity has an event coming up March 10th in Dallas, Texas. Here is an excerpt from the event page.

“Led by a member of the Cohesity Technical Advocacy Group, the qualifying lap will give you all the fundamentals of redefined data management. After qualifying each driver will get a Community License to drive the Cohesity Data Platform in their own lab and be invited to celebrate at a Cohesity Gathering immediately following the qualifying.”


https://events.cohesity.com/cohesitycircuitdallas

It looks like it will be a great event with some great content. The best part is that you will get a copy of Cohesity to deploy in your own lab. I hope to see you there!

DFW VMUG February 3 – Cohesity Truck

Come see the Cohesity data center on wheels at the DFW VMUG on February 3. Check out the invite below.

Register Now!!!

Cohesity Truck

Test drive Cohesity technology and get answers straight from the experts!

What: Keep on Truckin’: Cohesity Mobile Demo Center

Where: Lucchese Bootmaker, 6635 Cowboys Way #125, Frisco, TX 75034

When: 2/3/20

Time: 11:30am-1:30pm

The 18-wheeler Cohesity truck has live demos featuring how Cohesity solves the Mass Data Fragmentation problem for your backup, recovery, file, cloud, and more.

Agenda

  • 11:30 am – 12:30 pm | Truck Tour Open House
  • 12:30 pm – 1:00 pm | Cohesity presentation & Raffle – $300 Lucchese gift card!
  • 1:00 pm – 1:30 pm | Truck Tour Open House

We’ll have tables and chairs set up next to the truck so that folks can enjoy Cane Rosso pizza, then hop inside the truck for a private demo/tour during “Truck Tour Open House!”

VMware Project Pacific at Tech Field Day Extra 2019

This post has been in my drafts folder for way too long. VMworld seems like just a short time ago, but so much time has already passed. It was a great time meeting up with old friends and making new ones. I was lucky enough to be part of the Tech Field Day crew and sit in on some of the presentations from VMware. One of the most exciting announcements of the week was Project Pacific, though I think it should have been called vPacific to match the rest of VMware's products. What this project does is bring native support for Kubernetes into vSphere. I can see this as a real game changer for the enterprise, as it brings containers out of the development side over to the operations side. Kubernetes seems to be the next big thing, and I think with VMware behind it, it is about to really take off in the enterprise.

The session kicked off with Jared Rosoff (who I think is trying to look like Bob Ross), one of the co-creators of Project Pacific, talking about the challenges present in the modern data center: how developers can rapidly build, and how Ops can keep the infrastructure working optimally, all while keeping costs down, ensuring uptime, and keeping everything secure.

Kubernetes is more than just a container orchestration platform; it is a platform for all kinds of orchestration, and at the core of it is controlling desired state. Project Pacific set out to build a new kind of cloud platform: one able to deploy Kubernetes clusters, virtual machines, and serverless environments, and to change how developers use the cloud. One way this is accomplished is by going from managing at the VM level to managing at the logical application layer, by integrating the Kubernetes namespace into vSphere. This means going from managing thousands of VMs to managing only at the application layer.

I am really excited to see what VMware is doing with Kubernetes. It is a powerful product, but in its current state it is hard for most enterprises to take advantage of it. VMware is in a great position to really push Kubernetes into the enterprise, and I look forward to being able to use it soon.

This has been just a short overview of what Project Pacific is. If you would like to learn more about this and other events, you can see the entire presentation at Tech Field Day.

Deploying Many VMs From a Template

I recently needed to deploy a lot of VMs for a particular scenario. They all needed to be identical but have unique names and be joined to the domain. There are a lot of ways to accomplish this; one way is to build a spreadsheet and manually input all the names. That would work, but it is not optimal, as I wanted to automate as much as I could.

#Create many VMs from a Template
#Version 1.0

#Variables
[int]$vm_count = Read-Host "How many VMs would you like to deploy?"
$template = "Template"               #Template to clone from
$customspecification = "Join Domain" #OS customization spec that joins the domain
$ds = "datastore"                    #Target datastore
$folder = "VM folder"                #Destination VM folder
$cluster = "cluster"                 #Target cluster
$vm_prefix = "desktop-"              #Name prefix; the loop counter is appended

#Loop
Write-Host "Starting Deployment"
1..$vm_count | ForEach-Object {
    $y = "{0:D1}" -f $_              #Format the counter (use D2 to zero-pad to two digits)
    $vm_name = $vm_prefix + $y
    #Pick a random connected host in the cluster to spread the load
    $esxi = Get-Cluster $cluster | Get-VMHost -State Connected | Get-Random
    Write-Host "Creation of VM $vm_name started"
    New-VM -Name $vm_name -Template $template -VMHost $esxi -Datastore $ds -Location $folder -OSCustomizationSpec $customspecification -RunAsync
}
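Because of -RunAsync, the loop returns as soon as the clone tasks are submitted rather than waiting on each VM. If you want to block until everything finishes and then power the new VMs on, a minimal sketch like the following should work; filtering on every running task is a simplifying assumption, so narrow it down if other tasks may be in flight.

#Wait for the running clone tasks to complete, then power on the new VMs
Get-Task | Where-Object { $_.State -eq 'Running' } | Wait-Task
Get-VM -Name "$vm_prefix*" | Start-VM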

Bitfusion at VMworld 2019 with TFDx

AI and ML are the next big thing in IT, and at one time it was all hand crafted by programmers. This has created great demand for processing power, which usually means a monolithic machine that is expensive and not used effectively. If you wanted to build a way for a computer to recognize a certain object, let's say the Internet's favorite, the cat, how would that be done? At one time that would have required a lot of manual work and a lot of processing power. Now this is all done automatically with hardware such as GPUs and FPGAs through a process of training and inference.

Bitfusion was recently acquired by VMware. I had never heard of the company before the acquisition, but once I learned about them I quickly realized why VMware did it. They have created a solution that overcomes all the issues I mentioned above, and have done for the AI/ML world what VMware did for the storage world. Basically, this is vSAN for AI/ML: creating a large pool of resources from devices that are not in the same chassis but are on the same network. It runs in the software layer, in user space, which makes it very secure. The software breaks workloads out to run across multiple remote nodes so that all available resources are used effectively, all with an overhead of less than 10%. I can see this as a great, cost-effective way to bring more ML workloads into enterprises. It does this by intercepting API calls, as the API layer is the “sweet spot” for Bitfusion to run. It can then transfer the data over the network to a remote device such as a GPU to be processed, and the application does not even need to be aware of this. All of this is done with Bitfusion FlexDirect, and the following slides do a good job of explaining what FlexDirect is.

It intercepts the application's CUDA calls, and then the process goes down the stack to a remote device over the network for processing. Bandwidth is not an issue with workloads such as these; latency is the main concern, and this has been optimized to minimize latency. Check out the above slide, as it does a great job of explaining the entire process of how this all works.

GPUs can be really expensive, so to be cost effective they need to be used optimally. That is what makes Bitfusion such an interesting product: it can make optimal use of your hardware investment. I could see an organization using GPUs during the day for things such as VDI, while at night they would go idle. Jobs could be scheduled to run at night and fully use all the GPUs.

This is just an overview of what Bitfusion is capable of. If you would like to dive deeper into this, please watch the following embedded videos and check out TechFieldDay.com.

Deploy OVA with PowerCLI

It has been a while since I have written anything for my blog. Life has become crazy busy for me the last few months. Between normal work and family time I do not have a lot of free time. The following meme really sums up my life right now.

I have had to deploy a few OVAs recently, and I wanted to give those who are a little less technical an easy, quick way to accomplish this. It's not that complicated a script, but it works and gets the job done. Using Read-Host allows user input into a PowerShell script, which can really expand PowerShell's capabilities.

#Connect to vCenter
Connect-VIServer "Your vCenter"

#Gather input and target resources
$vmSource = Read-Host -Prompt "File path to OVA"
$vmCluster = Get-Cluster -Name "Name of Cluster you want to use"

#Pick one connected host in the cluster to receive the import
$vmHost = $vmCluster | Get-VMHost -State Connected | Get-Random

$vmDatastore = Get-Datastore -Name "Name of Datastore to be used"
$vmName = Read-Host -Prompt "What to name VM?"

#Deploy the OVA
$vmHost | Import-VApp -Source $vmSource -Location $vmCluster -Datastore $vmDatastore -Name $vmName -Force
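Some OVAs also ship with required OVF properties (EULA acceptance, network mappings, appliance passwords) that the plain import above will not fill in. As a rough sketch, PowerCLI's Get-OvfConfiguration can inspect and set these before the import; the NetworkMapping path below is only an example, since the available properties depend entirely on the OVA.

#Inspect the OVF properties the appliance expects
$ovfConfig = Get-OvfConfiguration -Ovf $vmSource
$ovfConfig.ToHashTable()

#Example only: property paths vary per OVA
$ovfConfig.NetworkMapping.VM_Network.Value = "Your Port Group"

$vmHost | Import-VApp -Source $vmSource -OvfConfiguration $ovfConfig -Location $vmCluster -Datastore $vmDatastore -Name $vmName -Force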

It Is Time For Cloud Field Day 5

I am super excited because it is almost time for Cloud Field Day, and as always it is going to be a big one! If you don't know anything about Cloud Field Day, it is an event where various vendors come together at a top secret location and show off what they have been up to. As always, the event will be streamed April 10-12 at http://www.techfieldday.com.

There will be a lot of vendors at the event, and there are a few I am especially excited to see. VMware will probably be showing off what they have been doing with the Amazon cloud and VMC. Then there are others like Pure Storage; I have a lot of experience using their storage arrays, but I am curious what they are doing with the cloud. NGINX will also be presenting, and they are a company that I know has a lot of customers but that I have never been able to spend much time with. So it will be a great time to learn more about them and see what they have in development.

As you can see, there will be a lot of vendors involved in this event, and they will definitely be showing a lot of great content. So make sure to tune in April 10-12 at TechFieldDay.com, and make sure to follow me on Twitter @brandongraves08. Reach out to me if there is anything you would like me to ask them.

Transferring Files Using PowerCLI

I recently had a unique issue in that I needed to transfer some files to VMs running in a secure environment. This meant there was no network access in or out of the environment except the channel used by VMware Tools. There is a PowerCLI cmdlet that can transfer files by utilizing VMware Tools, and it is very useful even outside a secure environment.

Copy-VMGuestFile -Source "C:\Path\" -Destination "C:\Path\" -VM VMName -LocalToGuest -GuestUser "domain\user" -GuestPassword "password"

If you want to retrieve the file instead, replace -LocalToGuest with -GuestToLocal.

One issue you may encounter is having multiple VMs with the same name. If they are in different folders, you can point to the correct folder path:

-VM $(Get-VM -Name VMName -Location folder)
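One more note: passing -GuestPassword in plain text leaves the password sitting in your console history. A safer pattern, sketched below with the same hypothetical paths, is to prompt for the credential instead.

#Prompt for the guest OS credential instead of typing the password inline
$guestCred = Get-Credential -Message "Guest OS credential"
Copy-VMGuestFile -Source "C:\Path\file.txt" -Destination "C:\Path\" -VM VMName -LocalToGuest -GuestCredential $guestCred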

My time at VMworld 2018

Wow, how time flies. VMworld was just last week, and it's hard to believe that it's already over. It was a very busy week with all of the announcements from VMware, visiting with all of the vendors, and seeing what new products they are offering. So much happens in such a short amount of time. I never like the process of traveling: all the time spent commuting to the airport just to wait a few hours to board the plane. With all the possible delays, I always fear that I will get trapped in the airport overnight. In the end it was all worth it, because it was an exciting week.


It was great being able to meet up with all my old friends from the vExpert community. It was great to see Jeremy Mayfield again; I had not seen him since .NEXT in Washington, DC. He lives quite far away from me, in the frozen white north of Alaska. It was great to have someone to hang out with during the week and grab some food with at the casino buffet. It was also great to finally meet Thom Greene in person. It is always interesting talking to him, because we have had such similar career paths.

When I had some free time, I spent most of it at the VMTN booth in the Village. There were a lot of people from the community hanging around it all day. It was a great place to meet new people and catch up with old friends. During the day, vBrownBag had presentations going on from various people in the community. It is one of the many things that makes VMworld a unique experience.

At night, after all the presentations were over, there was a Hackathon going on. I had a great time at it, even though I was not able to contribute much. There were some really smart people there, and it was amazing to see what the different groups were able to accomplish in just a few hours.

The two big highlights for me were the vExpert party at the Pinball Museum, where it was great meeting fellow vExperts while enjoying great barbecue and playing some pinball, and the Influencer dinner on Wednesday night, which I went to instead of the big party. The dinner is put on by the legend Howard Marks; it was at a great Thai place, and I met a lot of great people. I really had some impostor syndrome kicking in while I was there, because the room was full of famous people from the industry, and the contributions they have made to the community have really helped me in my career.

Tech Field Day Extra was at VMworld this year, and I was a delegate for two different sessions. Tuesday morning, Dell EMC presented on its data protection portfolio. I enjoyed the presentation, since I started my career using those data protection products. Wednesday afternoon, BlueMedora, Barefoot Networks, and Netscout presented. They had a lot of great information, and as always it was a great event to be a part of. I am very thankful to the Tech Field Day crew for inviting me, and over the next few weeks I will be publishing posts detailing the presentations from these great companies. So keep an eye on this site over the coming weeks.
