Inspect VMware ESXi logs with a local llama3.1 AI.

In this post I would like to demonstrate the use of the llama 3.1 AI Large Language Model: a locally running llama 3.1 instance will inspect a VMware ESXi vmkernel log.

I am using an Apple MacBook Air M2 with 16GB of RAM. If you are using a Windows or Linux system, the procedure might vary a little bit.

So the first thing to do is to download/install ollama from https://ollama.com. It is shockingly simple. You don’t need to create an account or register your email address. Simply download and install. It couldn’t be easier. Therefore I will not go further into detail about the installation of ollama.

I assume you have Microsoft PowerShell and VMware PowerCLI running on your machine. If not, a quick Google search should lead you to the steps needed to install PowerCLI and, if needed, PowerShell (newer Windows systems have PowerShell installed by default).
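
If PowerCLI is missing, it is distributed via the PowerShell Gallery, so installing it typically comes down to something like this (run from within PowerShell):

Install-Module -Name VMware.PowerCLI -Scope CurrentUser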

On a Mac I have to open a terminal window and start PowerShell by typing pwsh.

Now we connect to the VMware vCenter Server by typing the following (vcenter.lab being the DNS name of the vCenter Server):
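
In PowerCLI that is done with the Connect-VIServer cmdlet, roughly like this:

Connect-VIServer -Server vcenter.lab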

You will be prompted for username and password for the vCenter Server.

Let’s get a VMHost object using the Get-VMHost cmdlet and the ESXi host name, and save it in the $vmhost variable.
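
Something like this, with esx01.lab being just a placeholder for your own ESXi host name:

$vmhost = Get-VMHost -Name esx01.lab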

With the cmdlet Get-LogType you can gather the available log types from the ESXi host.
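
For example, this lists the available log keys (vmkernel among them) for our host:

Get-LogType -VMHost $vmhost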

Now we use the cmdlet Get-Log with the required parameter -Key and the optional parameter -VMHost, and save the result in the variable $vmkernellog.
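
Something along these lines:

$vmkernellog = Get-Log -Key vmkernel -VMHost $vmhost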

The output is not a simple string but of type VMware.VimAutomation.ViCore.Types.V1.LogDescriptor.

To get the log, simply use $vmkernellog.Entries.

Now we use ollama to ask llama3.1 questions about the log file. The command for this is ollama run followed by the LLM, which in this case is llama3.1. llama3.1 is the default 8B model. To use the 70B model the command would be ollama run llama3.1:70b, and for the 405B model the command would be ollama run llama3.1:405b. However, your computer most likely will not be powerful enough to run the 70B or 405B model. Ollama doesn’t have access to your local file system, so you cannot point it at a file. But you can pass the string, which in our case is $vmkernellog.Entries, and then ask your question about it. I have just stopped a VM called photon1 and I am going to ask llama3.1 about it. My question here is “vm photon1 failed, can you determine the reason?”
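
As a sketch, one way to pass the entries is to pipe them into ollama’s standard input together with the question, along these lines:

$vmkernellog.Entries | ollama run llama3.1 "vm photon1 failed, can you determine the reason?"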

As we see, we get an error message back. The reason for this is that llama3.1 can “only” handle 128K tokens. So we need to shorten the log first. The cmdlet Get-Log has a parameter -NumLines, so we could specify, let’s say, 500 lines. However, it would return the oldest 500 lines and I want the latest 500 lines. For that we have to filter the log after the fact. I use select -last 500 to do that and save the result in the $mylog variable.
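
In PowerShell that looks like this:

$mylog = $vmkernellog.Entries | select -last 500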

Now I can run ollama again using $mylog.
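
Again as a sketch, with the shortened log piped in:

$mylog | ollama run llama3.1 "vm photon1 failed, can you determine the reason?"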

Ok, the result is not very convincing, but you get the idea. Of course you are not limited to ESXi log files; this works with pretty much any data that can be represented as a string. Setting this up only takes minutes. I was really impressed by how easy it was to set up ollama, and getting the logs from ESXi is also quick and easy. Everything runs locally on your machine, so you don’t need to worry about your ESXi log data ending up on the internet.

I would love to hear your feedback if you have used this method to have other data reviewed by llama, and whether you got any valuable results.

Published by txusa

VMware Certified Design Expert - VCDX 92. VMware Architect, automation enthusiast.
