The Required Physical Infrastructure
To prepare for the VCIX-NV exam, the ideal practice environment is one similar to the VMware Hands-on Labs (HOLs). We are particularly interested in the following 4 HOLs:
- HOL-SDC-1403 - VMware NSX Introduction
- HOL-SDC-1425 - VMware NSX Advanced
- HOL-SDC-1603 - VMware NSX Introduction
- HOL-SDC-1625 - VMware NSX Advanced
They all have one thing in common: there are 5 physical hosts (ESXi) distributed into 3 logical clusters:
- Compute Cluster A (2 hosts)
- Compute Cluster B (1 host)
- Management and Edge Cluster (2 hosts)
In the ideal case, you would have 5 physical servers on which to install native ESXi, plus a physical switch. Since most of us do not have an infrastructure like this just lying around, we need to take an alternative approach: use 1 physical server (packed with RAM, storage and CPU) and build nested ESXi hosts to simulate the target environment.
Before you even start thinking about building your home lab, make sure that your server meets the bare-minimum requirements:
- Minimum 32GB of RAM. With 32 you will be "on the edge"; it's recommended to have between 48 and 64GB.
- Minimum 12 cores
- Minimum 512GB HDD (SSD highly recommended). You will need more space for your VMs later if you want to test the functionalities, so it's recommended to have 2x512GB or 1TB.
The server should not run Windows or any other general-purpose OS; you will install ESXi directly on the bare metal. If you choose to use Windows and VMware Workstation instead, you will need the requirements mentioned above plus whatever Windows requires for adequate performance. In this guide I will focus only on a home lab built directly on ESXi as the native OS.
Simulate a physical environment, Step-by-Step
Step 1
You first need to install ESXi 6.0U1 as the native OS on your physical server. In my case I have an HP Z800 with 48GB of RAM, a 1TB HDD and a 512GB SSD. I use the SSD for the entire home lab, and later, when I install a bunch of Linux VMs in order to test the features, I use the 1TB disk (you need to create a Datastore using this disk, or it won't be visible).
I assigned a static IP to the ESXi host: 192.168.1.53/24
Step 2
Install vCenter 6.0 and connect it to the physical ESXi. I assigned a static IP to the vCenter: 192.168.1.78/24, with the GW 192.168.1.1.
Once you've done this, you can log in to the vSphere Web Client to add the ESXi host.
You should log in as administrator@vsphere.local, not as root. The difference is that the root user has the additional privileges to configure authentication. You should nevertheless get used to using the administrator account, because that is the set of privileges you will be working with in the future.
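Throughout this guide I do everything in the Web Client, but if you like to double-check things from a script, here is a minimal pyVmomi sketch (pip install pyvmomi) that verifies the administrator@vsphere.local login against my lab's vCenter IP. The password is a placeholder, so adjust it to yours.

```python
# Minimal login check against vCenter with pyVmomi (pip install pyvmomi).
# The IP matches my lab; the password is a placeholder. The later sketches
# in this guide reuse the resulting `si` connection.
import ssl
from pyVim.connect import SmartConnect

ctx = ssl._create_unverified_context()   # the lab uses self-signed certs

si = SmartConnect(host='192.168.1.78',
                  user='administrator@vsphere.local',
                  pwd='YourVcPassword',
                  sslContext=ctx)

print('Connected to', si.RetrieveContent().about.fullName)
```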
Step 3
You now need to deploy your 5 nested ESXi hosts as VMs using the ESXi 6.0 ISO. Before you start deploying the nested hosts, copy the ESXi .iso to a Datastore (just browse around; you'll figure out how to copy it from a local PC to the Datastore).
IMPORTANT: Be sure to assign at least 4GB of RAM to each host, or it will report weird errors afterwards. Sometimes these errors are random; I once got "Network Adapter not present".
Briefly, you should create a new VM, and the Type should be Other - Other (64-bit), as shown in the screenshot below:
To boot the ESXi installation on your VM, you need to assign the ESXi .iso that you placed in the Datastore to a CD drive. For the CD drive type, choose Datastore ISO File, and do as shown in the screenshot below:
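If you prefer to stamp out the five VM shells from a script instead of repeating the wizard, here is a hedged pyVmomi sketch. It reuses the connection si from the earlier snippet; the VM name and 'datastore1' are placeholders, and the CD drive with the ISO is still attached as described above.

```python
# Sketch: create one nested-ESXi VM shell. Reuses the connection `si`;
# the VM name and 'datastore1' are placeholders. Disks, NICs and the
# CD drive with the ESXi ISO are added afterwards, as described above.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]          # first datacenter
resource_pool = datacenter.hostFolder.childEntity[0].resourcePool

config = vim.vm.ConfigSpec(
    name='nested-esxi-01',
    guestId='otherGuest64',           # "Other (64-bit)", as in the wizard
    memoryMB=4096,                    # at least 4 GB, or weird errors follow
    numCPUs=2,
    files=vim.vm.FileInfo(vmPathName='[datastore1]'),
)

WaitForTask(datacenter.vmFolder.CreateVM_Task(config=config,
                                              pool=resource_pool))
```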
Step 4
You should now build 3 clusters, MANAGEMENT, COMPUTE A and COMPUTE B (that's EXACTLY like in the HOL). Keep in mind that the management IPs of ALL the ESXi hosts should be STATIC; in my case I assigned 192.168.1.101 - 105/24.
IMPORTANT: You're building a nested environment, so you need to set the Security settings of the vSwitch on the physical host to Accept promiscuous mode. If you do not do this, the traffic between your nested ESXi hosts and the physical NIC will be dropped (Rejected).
You do this by following this path and clicking the small PENCIL icon, which opens the Edit Settings menu of the vSwitch:
Inventories -> Hosts and Clusters -> click the physical host (192.168.1.53) -> Manage -> Networking
You need to set it as in the following screenshot:
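If you would rather flip this setting from a script than through the pencil icon, here is a minimal pyVmomi sketch, reusing the connection si from earlier. It also enables Forged Transmits and MAC Changes, which nested labs commonly need; treat that as my assumption rather than a HOL requirement.

```python
# Sketch: enable Promiscuous mode on every standard vSwitch of the physical
# host. Reuses the pyVmomi connection `si` from the earlier snippet; my lab
# host IP is hard-coded. Forged Transmits and MAC Changes are enabled too,
# which nested labs commonly need (my assumption, not a HOL requirement).
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    if host.name != '192.168.1.53':           # the physical host
        continue
    net_sys = host.configManager.networkSystem
    for vsw in net_sys.networkInfo.vswitch:
        spec = vsw.spec
        spec.policy.security.allowPromiscuous = True
        spec.policy.security.allowForgedTransmits = True
        spec.policy.security.allowMacChanges = True
        net_sys.UpdateVirtualSwitch(vswitchName=vsw.name, spec=spec)

view.Destroy()
```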
Step 5
On the vSwitch of your physical ESXi you need to put all 5 nested ESXi hosts in the same port group. Once this is done, go to your vCenter Server, build the 3 clusters, and add the hosts (VMs, from your native ESXi's point of view) to them, as in the sketch after this list:
- CLUSTER A: 2 hosts (.101 and .102)
- CLUSTER B: 1 host (.103)
- MANAGEMENT Cluster: 2 hosts (.104 and .105)
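Here is a hedged pyVmomi sketch of that same layout. The nested hosts' root password is a placeholder, and depending on your certificate setup AddHost_Task may insist on the host's SSL thumbprint in the ConnectSpec.

```python
# Sketch: build the 3 clusters and add the nested hosts to them, mirroring
# the layout above. Reuses the connection `si`; the nested hosts' root
# password is a placeholder. Depending on your certificate setup,
# AddHost_Task may require the host's SSL thumbprint in the ConnectSpec.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]

layout = {
    'Compute Cluster A': ['192.168.1.101', '192.168.1.102'],
    'Compute Cluster B': ['192.168.1.103'],
    'Management Cluster': ['192.168.1.104', '192.168.1.105'],
}

for cluster_name, host_ips in layout.items():
    cluster = datacenter.hostFolder.CreateClusterEx(
        name=cluster_name, spec=vim.cluster.ConfigSpecEx())
    for ip in host_ips:
        connect_spec = vim.host.ConnectSpec(hostName=ip,
                                            userName='root',
                                            password='NestedRootPwd',
                                            force=True)
        WaitForTask(cluster.AddHost_Task(spec=connect_spec, asConnected=True))
```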
Once you're done, you should see something like this:
That's it: your environment is ready, and you have 5 "physical" hosts on which to build your NSX lab. For reference, this is how the HOL environment looks, and what we should be working towards with a home lab:
Migrate to the Virtual Distributed Switches (VDS)
The Virtual Distributed Switch is the basis of NSX, and before we start with any of the NSX-related "activities" we need to deploy the VDS and migrate the ports of all 5 hosts to it.
We will need 2 switches:
- a VDS for the two Compute Clusters, which we will call Compute_VDS
- a VDS for the Management Cluster, which we will call Mgmt_Edge_VDS
To create a VDS, follow the simple procedure shown in the screenshot below. When prompted, select the VDS version based on your ESXi version (in my case, VDS 6.0), and choose the number of uplinks that your hardware allows.
You now need to create the port groups and migrate the existing hosts from the vSwitch to the VDS.
Once the port groups are created (check the screenshot below to see which port groups you need), go to "Add and Manage Hosts" and migrate all 5 hosts to the corresponding VDS. Use the Mgmt port groups for the VMkernel network adapters.
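For reference, a pyVmomi sketch of the VDS and port group creation follows (the pattern comes from the pyvmomi-community-samples project). The port group names are illustrative, so use the ones from the screenshot; I still do the actual host/VMkernel migration through the "Add and Manage Hosts" wizard, because scripting that part is considerably more involved.

```python
# Sketch: create the two VDSes and one port group on each, following the
# pyvmomi-community-samples pattern. Reuses the connection `si`; the port
# group names are illustrative placeholders.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]

for dvs_name, pg_name in [('Compute_VDS', 'Compute_PG'),
                          ('Mgmt_Edge_VDS', 'Mgmt_PG')]:
    spec = vim.DistributedVirtualSwitch.CreateSpec()
    spec.configSpec = vim.DistributedVirtualSwitch.ConfigSpec()
    spec.configSpec.name = dvs_name
    spec.configSpec.uplinkPortPolicy = \
        vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy()
    spec.configSpec.uplinkPortPolicy.uplinkPortName = ['dvUplink1']  # 1 uplink

    task = datacenter.networkFolder.CreateDVS_Task(spec)
    WaitForTask(task)
    dvs = task.info.result            # the newly created switch

    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    pg_spec.name = pg_name
    pg_spec.numPorts = 16
    pg_spec.type = vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding
    WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))
```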
Deploy the NSX
Before you install NSX 6.2, consider your network configuration and resources. You can install one NSX Manager per vCenter Server, one Guest Introspection per ESXi host, and multiple NSX Edge instances per data center.
First, consider the system requirements:
RAM:
- NSX Manager: 16 GB (24 GB with certain NSX deployment sizes)
- NSX Controller: 4 GB
- NSX Edge: Compact 512 MB, Large 1 GB, Quad Large 1 GB, X-Large 8 GB
Disk space:
- NSX Manager: 60 GB
- NSX Controller: 20 GB
- NSX Edge: Compact, Large and Quad Large 512 MB; X-Large 4.5 GB (with 4 GB swap)
vCPU cores:
- NSX Manager: 4
- NSX Controller: 4
- NSX Edge: Compact 1, Large 2, Quad Large 4, X-Large 6
Once you're sure that your home lab meets the requirements, you may deploy the NSX Manager OVA. Be sure to assign the IP address and set the credentials. In my case, I assigned the IP 192.168.1.242 to the NSX Manager:
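To confirm the appliance is up before moving on, you can poke its REST API. The sketch below uses the NSX-v appliance-management endpoint as I remember it from the NSX 6.x API guide, with my lab IP and a placeholder password, so adjust accordingly.

```python
# Sketch: check that the NSX Manager appliance answers on its API.
# Endpoint per the NSX 6.x API guide (appliance management); the IP and
# password are my lab placeholders, and certificate checks are disabled
# because the appliance ships with a self-signed certificate.
import requests

resp = requests.get(
    'https://192.168.1.242/api/1.0/appliance-management/global/info',
    auth=('admin', 'YourNsxPassword'),
    verify=False)
resp.raise_for_status()
print(resp.json())   # version and uptime details of the appliance
```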
Integrate the NSX Manager with your vSphere
The first thing you need to do is enter the NSX Manager console with the username admin and the password you configured when deploying the OVA.
Keep in mind that you will barely use this portal at all: its only purpose is to integrate the NSX Manager with the existing vSphere. Enter "Manage vCenter Registration" and fill in your vCenter details. Once you fill in the credentials and accept the certificate association, you will see your vCenter status as "Connected":
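The same registration can be read back over the NSX REST API. The sketch below queries the vcconfig service (as documented for NSX-v 6.x), again with placeholder credentials.

```python
# Sketch: read the vCenter registration back from the NSX Manager.
# GET /api/2.0/services/vcconfig returns XML describing the registered
# vCenter; credentials and IP are placeholders for my lab values.
import requests

resp = requests.get('https://192.168.1.242/api/2.0/services/vcconfig',
                    auth=('admin', 'YourNsxPassword'),
                    verify=False)   # self-signed certificate in the lab
resp.raise_for_status()
print(resp.text)   # should contain the vCenter IP and the user you entered
```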
Log out of your vSphere Web Client and log back in. You will see a new icon called "Networking & Security". Congratulations, you have installed the NSX Manager and connected it to your vSphere.
Deploy the NSX Controllers
From the beginning we knew that we wanted to deploy 3 NSX Controllers, and that we wanted to do it in the Management Cluster. Since there are 2 hosts in the Management Cluster (.104 and .105), we will deploy 2 controllers on one host and 1 controller on the other. We need 3 NSX Controllers because the controllers use a VOTING (quorum) principle to avoid split brain; this means that a controller that ends up alone cannot make any configuration changes.
You need to create a new IP Pool while adding the first controller. I called mine NSX_Controller_Pool, and assigned it the IP range 192.168.1.200-192.168.1.202.
Keep in mind that if you didn't assign enough resources to your ESXi VMs, the NSX Controller deployment will fail. In the screenshot below you can see that I had 10 failed attempts to deploy the NSX Controller. Each time I had to go back and tune the storage and the CPUs on the Management Cluster hosts.
Once you have
successfully deployed all 3 NSX Controllers, you will be able to see something
like this:
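You can also list the controllers over the REST API. The sketch below queries the NSX-v controller endpoint; each controller entry should report a RUNNING status once deployment succeeds.

```python
# Sketch: list the deployed NSX Controllers.
# GET /api/2.0/vdn/controller returns XML with one <controller> element per
# node; expect three entries. IP and credentials are lab placeholders.
import requests

resp = requests.get('https://192.168.1.242/api/2.0/vdn/controller',
                    auth=('admin', 'YourNsxPassword'),
                    verify=False)
resp.raise_for_status()
print(resp.text)   # each controller should report a RUNNING status
```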
After this, you may use the HOL workbooks to test the NSX features in your environment.
Good luck!!!