[Integrate NSX with Palo Alto] Solve the OVF Import Certificate problem using the OVFTool

In my next post I'll focus on the NSX and Palo Alto integration, and all the improvements it brings to micro-segmentation. For now, let's just focus on importing the Palo Alto Virtual FW VM (NSX version) into the existing vSphere environment.

VMware Environment Details:

ESXi 6.0 on a Physical Host + 5 Nested ESXi 6.0 hosts (deployed in my Demo Center, as explained here)
vSphere 6.0 managing the Compute and Management Clusters
NSX Version 6.2
Palo Alto 7.0.1, Model PAN-PA-VM-1000-HV-E60 (Features: Threat Prevention, BrightCloud, URL Filtering, PAN-DB URL Filtering, GlobalProtect Gateway, GlobalProtect Portal, PA-VM, Premium Support, WildFire License).

IMPORTANT: You will need to be a Palo Alto partner, as their permission is required in order to download their products.

What is OVFTool, and why did I need it?

OVFTool is a multi-purpose VMware command-line tool for various OVA/OVF file operations. I found it really handy on this occasion, while trying to deploy the Palo Alto NSX version of the virtual FW into the existing vSphere 6 environment with NSX 6.2 deployed. The issue was that there was no way to deploy the .OVF due to the certificate error presented below. The original 3 files in the PA7.0.1 folder are the .MF, .OVF and .VMDK files, all with the same name (PA-VM-NSX-7.0.1.*).

I talked to Palo Alto support, and they proposed signing the .OVF manually, due to a possible corruption of the .MF file. Basically, sometimes when you try to deploy an OVA/OVF, the manifest file (.mf) will be missing or corrupt. In that case you need to sign the file "manually". Before you can sign the .OVF, you will need two files: a .PEM and a .MF.

Before you start, you will need to download the OVFTool. To do this, you will need a valid VMware username/password.

Before you start "playing around", I strongly suggest you read a bit about the tool and the operations you can perform in the official VMware OVF Tool User's Guide.

Create a PEM file

To sign a package, you need a public/private key pair and a certificate that wraps the public key. The private key and the certificate, which includes the public key, are stored in a .pem file.

The following OpenSSL command creates a .pem file:

> openssl req -x509 -nodes -sha1 -days 365 -newkey rsa:1024 -keyout x509_for_PA.pem -out x509_for_PA.pem

You will be prompted for the standard X.509 certificate details while doing this. Check that the .PEM file has been successfully created:

MJ-MacPro:VMware OVF Tool iCloud-MJ$ ls | grep pem

MJ-MacPro:VMware OVF Tool iCloud-MJ$ openssl x509  -text -noout -in x509_for_PA.pem
        Version: 3 (0x2)
        Serial Number:
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=es, ST=Madrid, L=Madrid, O=Logicalis, CN=Logicalis/emailAddress=mateja.jovanovic@es.logicalis.com
            Not Before: Oct 20 09:38:14 2015 GMT
            Not After : Oct 19 09:38:14 2016 GMT
        Subject: C=es, ST=Madrid, L=Madrid, O=Logicalis, CN=Logicalis/emailAddress=mateja.jovanovic@es.logicalis.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier:
            X509v3 Authority Key Identifier:

            X509v3 Basic Constraints:
    Signature Algorithm: sha1WithRSAEncryption
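If you want to script this step non-interactively, you can pass the subject with -subj instead of answering the prompts, and then confirm that both the private key and the certificate ended up in one file, which is what OVFTool expects. A sketch, with placeholder subject values:

```shell
# Generate the key and self-signed cert non-interactively
# (the subject values below are placeholders - use your own).
openssl req -x509 -nodes -sha1 -days 365 -newkey rsa:1024 \
  -subj "/C=ES/ST=Madrid/L=Madrid/O=Example/CN=Example" \
  -keyout pa_key.pem -out pa_cert.pem

# OVFTool expects the private key and the certificate in ONE .pem file.
cat pa_key.pem pa_cert.pem > x509_for_PA.pem

# Expect 2 "BEGIN" markers: one key block, one certificate block.
grep -c "BEGIN" x509_for_PA.pem
```

Generating into two files and concatenating avoids any ambiguity about write order when -keyout and -out point at the same file.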

Create a Manifest (.MF) file

To create the manifest file, run the following command for all files to be signed:

openssl sha1 *.vmdk *.ovf > Final-Signed-VM.mf
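The manifest is simply one `SHA1(filename)= <digest>` line per file, which is exactly the format `openssl sha1` emits. A quick sketch with dummy stand-in files (the names are placeholders) to see what the resulting .mf should look like:

```shell
# Create two stand-in files and hash them the same way as the real VM files.
echo "disk"       > demo.vmdk
echo "descriptor" > demo.ovf
openssl sha1 demo.vmdk demo.ovf > demo.mf

# Each line looks like: SHA1(demo.vmdk)= <40-hex-digit digest>
cat demo.mf
```

If a filename in the manifest doesn't match the actual file names in the package, OVFTool will refuse to validate it.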

Once you've created the .MF and .PEM files, you can proceed to sign the OVF using OVFTool. I had the files in the C:\PA7 folder, but to avoid copy-pasting the entire path, I simply copied them to the folder where ovftool.exe lives (C:\Program Files\VMware\VMware OVF Tool in a Windows environment, /Applications/VMware OVF Tool on a Mac).

You may continue the procedure on Linux/Mac; the OVFTool commands are exactly the same. I switched to a Windows environment due to Fusion library errors (details at the end of this post).

Sign the OVF using the OVFTool

The final step is to execute the OVFTool command in order to create the new, signed OVF:

ovftool --privateKey="x509_for_PA.pem" PA-VM-NSX-7.0.1.ovf Final-Signed-VM.ovf

TIP: Beware of CAPITAL/lowercase letter errors in your command:

C:\Program Files\VMware\VMware OVF Tool>ovftool --privatekey="x509_for_PA.pem" PA-VM-NSX-7.0.1.ovf Final-Signed-VM.ovf
Error: Unknown option: 'privatekey'
Completed with errors

C:\Program Files\VMware\VMware OVF Tool>
C:\Program Files\VMware\VMware OVF Tool>
C:\Program Files\VMware\VMware OVF Tool>ovftool --privateKey="x509_for_PA.pem" PA-VM-NSX-7.0.1.ovf Final-Signed-VM.ovf
Opening OVF source: PA-VM-NSX-7.0.1.ovf
The manifest does not validate
Error: Invalid manifest file (line: 1)
Completed with errors

C:\Program Files\VMware\VMware OVF Tool>ovftool --privateKey="x509_for_PA.pem" PA-VM-NSX-7.0.1.ovf Final-Signed-VM.ovf
Opening OVF source: PA-VM-NSX-7.0.1.ovf
The manifest validates
Opening OVF target: Final-Signed-VM.ovf
Writing OVF package: Final-Signed-VM.ovf
Transfer Completed
OPENSSL_Uplink(000007FEEDE66000,08): no OPENSSL_Applink

C:\Program Files\VMware\VMware OVF Tool>

Now we copy the files BACK to the original folder (C:\PA7). Its content is displayed below.

 Volume in drive C has no label.
 Volume Serial Number is B416-28D0

 Directory of C:\PA7

20/10/2015  12:13    <DIR>          .
20/10/2015  12:13    <DIR>          ..
20/10/2015  12:11     1.552.252.928 Final-Signed-VM-disk1.vmdk
20/10/2015  12:11                 0 Final-Signed-VM.cert.tmp
20/10/2015  12:11               121 Final-Signed-VM.mf
20/10/2015  12:11            10.256 Final-Signed-VM.ovf
               4 File(s)  1.552.263.305 bytes
               2 Dir(s)   6.033.895.424 bytes free

You will now be able to deploy the signed .OVF to your vSphere.
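Incidentally, once the package is signed you don't have to click through the Web Client at all: OVFTool can also push the OVF straight into vCenter using a vi:// target locator. A sketch, where the vCenter address, datastore, network name and inventory path are all hypothetical placeholders for your environment (note the @ inside the username is URL-encoded as %40):

```shell
# Hypothetical environment details - replace with your own.
VCENTER="vcenter.lab.local"
TARGET="vi://administrator%40vsphere.local@${VCENTER}/Datacenter/host/Management"

# Build and show the deployment command; uncomment the last line to actually run it.
echo ovftool --datastore=datastore1 --network="VM Network" \
     Final-Signed-VM.ovf "${TARGET}"
# ovftool --datastore=datastore1 --network="VM Network" Final-Signed-VM.ovf "${TARGET}"
```

OVFTool will prompt for the password interactively if it is not embedded in the locator.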

Note: As you probably noticed, I created the .PEM and .MF on my MacBook, and then passed the files to a Windows VM because of a few Fusion library errors I'd been getting.
Error Details (if someone is interested):
VMware Fusion unrecoverable error: (vthread-4), SSLLoadSharedLibraries: Failed to load OpenSSL libraries. libdir is /Applications/VMware OVF Tool/lib A log file is available in "/var/root/Library/Logs/VMware/vmware-ovftool-16747.log".

VMware NSX Home Lab

The required Physical Infrastructure

To prepare for the VCIX-NV Exam, the ideal environment to practice is similar to the one we may find on the Hands-on-Labs:

We are particularly interested in the following 4 HoL-s:
  • HOL-SDC-1403 - VMware NSX Introduction
  • HOL-SDC-1425 - VMware NSX Advanced
  • HOL-SDC-1603 - VMware NSX Introduction
  • HOL-SDC-1625 - VMware NSX Advanced

They all have one thing in common: there are 5 physical hosts (ESXi) distributed into 3 logical clusters:
-        Compute Cluster A (2 hosts)
-        Compute Cluster B (1 host)
-        Management Cluster (2 hosts)

In the ideal case, you would have 5 physical servers to install native ESXi on, plus a physical switch. Since the majority of us do not have an infrastructure like this just lying around, we need an alternative approach: use 1 physical server (packed with RAM, storage and CPU) and build nested ESXi hosts to simulate the target environment.

Before you even start thinking about building your home lab, you should make sure that your Server complies with the bare minimum of the requirements:
  • Minimum 32GB of RAM. With 32 you will be "on the edge"; it's recommended to have between 48 and 64GB.
  • Minimum 12 CPU cores
  • Minimum 512GB HDD (SSD highly recommended). You will need more space for your VMs later if you want to test the functionalities, so it's recommended to have 2x512GB or 1TB.

The server should not run Windows or any other standard-use OS: you will install ESXi directly on the bare metal as the OS. If you choose to use Windows and VMware Workstation instead, you will need the requirements mentioned above plus whatever Windows requires for adequate performance. In this guide I will only focus on a home lab built directly on ESXi as the native OS.

Simulate a physical environment, Step-by-Step

Step 1

First, install ESXi 6.0U1 as the native OS on your physical server. In my case I have an HP Z800 with 48GB of RAM, a 1TB HDD and a 512GB SSD. I use the SSD for the entire home lab, and the 1TB disk later when I install a bunch of Linux VMs to test the features (you need to create a Datastore using that disk, or it won't be visible).
I assigned a static IP to the ESXi host:

Step 2

Install vCenter 6.0 and connect it to the physical ESXi. I assigned a static IP and gateway to the vCenter.

Once you´ve done this, you can now login to vSphere Web Client to add the ESXi host:

You should log in as administrator@vsphere.local, not as root. The difference is that the "root" user has additional privileges to configure authentication. Yes, that is more powerful, but you should get used to the "administrator" account, because in the future you will be working with the Admin set of privileges.

Step 3

You now need to deploy your 5 nested ESXi hosts as VMs using the ESXi 6.0 ISO. Before you start deploying the nested hosts, copy your ESXi .iso to a Datastore (just browse around; you'll figure out how to copy it from a local PC to the Datastore).

IMPORTANT: Be sure to assign at least 4GB of RAM to each host, or it will report weird errors afterwards. Sometimes these errors are random; I once got "Network Adapter not present".
Briefly, you should create a new VM with the Guest OS Type set to Other – Other (64-bit), as shown on the screenshot below:

To boot the ESXi installation on your VM, you need to assign the ESXi .iso that you placed in the Datastore to a CD drive. For the CD drive type, choose the Datastore, and do as shown in the screenshot below:

Step 4

You should now build 3 clusters: MANAGEMENT, COMPUTE A and COMPUTE B (that's EXACTLY like in the HOL). Keep in mind that the management IPs of ALL ESXi hosts should be STATIC; in my case I assigned - 105/24.

IMPORTANT: You're building a nested environment, so you need to set the security settings of the vSwitch on the physical host to Accept promiscuous traffic. If you do not do this, the traffic between your nested ESXi hosts and the physical NIC will be dropped (Rejected).

You do this by following this path and clicking on the small PENCIL icon that opens the Edit Settings menu of the vSwitch:

Inventories -> Hosts and Clusters -> Click on the Physical Host -> Manage -> Networking

You need to set it as in the following Screenshot:
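If you prefer the ESXi shell (SSH) over clicking through the Web Client, the same vSwitch security policy can be set with esxcli on the physical host. A sketch, assuming the default vSwitch name vSwitch0; in a nested lab it is common to also allow forged transmits and MAC changes alongside promiscuous mode:

```shell
# Run in the ESXi shell of the PHYSICAL host (vSwitch0 is an assumption).
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=true \
    --allow-forged-transmits=true \
    --allow-mac-change=true

# Verify the resulting policy.
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```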

Step 5

On the vSwitch of your physical ESXi you need to put all 5 nested ESXi hosts in the same port group. Once this is done, go to your vCenter Server, build the 3 clusters and add the hosts (VMs from your native ESXi's point of view) to them:
-        CLUSTER A: 2 Hosts (.101 and .102)
-        CLUSTER B: 1 Host (.103)
-        MANAGEMENT Cluster: 2 Hosts (.104 and .105)

Once you're done, you should see something like this:

That's it, your environment is ready: you have 5 "physical" hosts to build your NSX lab. For reference, this is how a HOL environment looks, and what we should be working towards with a home lab:

Migrate to the Virtual Distributed Switches (VDS)

The Virtual Distributed Switch is the basis of NSX, and before we start with any NSX-related "activities" we need to deploy the VDS and migrate all 5 hosts' ports to it.
We will need 2 switches:
-        a VDS for the two Compute clusters, which we will call Compute_VDS
-        a VDS for the Management cluster, which we will call Mgmt_Edge_VDS

To create a VDS, you need to follow the simple procedure shown on the screenshot below. When prompted, select the VDS version based on your ESXi version (in my case I did the VDS 6.0), and choose the number of Uplinks that your Hardware allows.

You now need to create the Port Groups, and perform a Migration of the existing Hosts from the vSwitch to the VDS.

Once the port groups are created (Check the Screenshot below to see which Port Groups you need), go to “Add and Manage Hosts”, and migrate all the 5 Hosts to the corresponding VDS. Use the Mgmt Port Groups for the VMkernel network adapters.

Deploy the NSX

Before you install the NSX 6.2, consider your network configuration and resources. You can install one NSX Manager per vCenter Server, one Guest Introspection per ESX™ host, and multiple NSX Edge instances per data center.

First, consider the system requirements:

Memory
NSX Manager: 16 GB (24 GB for certain NSX deployment sizes)
NSX Controller: 4 GB
NSX Edge Compact: 512 MB, Large: 1 GB, Quad Large: 2 GB, and X-Large: 8 GB

Disk Space
NSX Manager: 60 GB
NSX Controller: 20 GB
NSX Edge Compact, Large, and Quad Large: 512 MB, X-Large: 4.5 GB (with 4 GB swap)

Number of vCPU Cores
NSX Manager: 4
NSX Controller: 4
NSX Edge Compact: 1, Large: 2, Quad Large: 4, and X-Large: 6
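As a quick sanity check against the home-lab sizing given earlier, you can total just the NSX management-plane pieces this guide deploys (1 NSX Manager plus 3 Controllers, before any Edges, vCenter, nested hosts or workload VMs); shell arithmetic is enough:

```shell
# RAM in GB, taken from the requirements list.
MANAGER=16
CONTROLLER=4
CONTROLLERS=3

# Total for NSX management components alone.
echo $(( MANAGER + CONTROLLER * CONTROLLERS ))   # 28
```

28 GB before anything else is exactly why 32 GB of host RAM is "on the edge" and 48-64 GB is recommended.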

Once you're sure that your home lab meets the requirements, you may deploy the NSX Manager OVA. Be sure to assign the IP address and set the credentials. In my case, I assigned the IP to the NSX Manager:

Integrate the NSX Manager with your vSphere

The first thing you need to do is log in to the NSX Manager console with the username admin and the password you configured when deploying the OVA.

Keep in mind that you will barely use this portal at all; its only purpose is to integrate NSX Manager with the existing vSphere. Enter "Manage vCenter Registration" and fill in your vCenter details. Once you fill in the credentials and accept the certificate association, you will see your vCenter status as "Connected":

Log out from your vSphere Web Client and log back in. You will see the new icon called "Networking and Security". Congratulations, you have installed the NSX Manager and connected it to your vSphere.

Deploy the NSX Controllers

From the beginning we knew that we wanted to deploy 3 NSX Controllers, and that we wanted to do it in the Management cluster. Since there are 2 hosts in the Management cluster (.104 and .105), we will deploy 2 Controllers on one host and 1 Controller on the other. We need 3 NSX Controllers because the Controllers use a VOTING (majority) principle to avoid split brain. This means that if a Controller ends up alone, without a majority, it will not be able to make any configuration changes.
You need to create a new IP Pool while adding the first Controller. I called mine NSX_Controller_Pool, and assigned the IP range
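The voting rule behind this is simple majority: with n controllers, floor(n/2)+1 votes are needed to make changes, so with 3 controllers 2 must agree, and a single isolated controller can never reach that. A one-line sketch of the arithmetic:

```shell
# Majority needed for a cluster of n controllers: floor(n/2) + 1
n=3
majority=$(( n / 2 + 1 ))
echo "$majority"   # 2: an isolated single controller cannot reach it
```

This is also why controller counts are odd: an even count adds cost without improving the failures you can survive.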

Keep in mind that if you didn't assign enough resources to your ESXi VMs, the NSX Controller deployment will fail. On the screenshot below you can see that I had 10 failed attempts to deploy the NSX Controller. Each time I had to go back and tune the storage and CPUs on the Management cluster hosts.

Once you have successfully deployed all 3 NSX Controllers, you will be able to see something like this:

After this, you may use the HoL Workbooks to test the NSX Features in your environment.

Good luck!!!
