Before you even consider getting into NSX, make sure you deeply understand the vSphere, vCenter and ESXi concepts, including vSphere Networking (vSwitch and vDS). If this is not the case, I highly recommend you start with a vSphere networking course.
There are two types of hypervisors, and in the case of VMware they are, as shown on the diagram above:
- Type 1: ESXi, running directly on the bare metal (physical server)
- Type 2: VMware Workstation, installed on top of a native OS
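If you want to confirm what you are working with on a given Type 1 host, the ESXi shell reports the hypervisor version directly (a trivial check, but a handy sanity test). Show the ESXi version and build:
# vmware -v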
The vCenter Server system is a centralised platform for management features, and it manages each of the ESXi hosts. Keep in mind that in vCenter your ESXi servers are managed as Hosts, so from its point of view each ESXi is a single Host.
The
vCenter Server system includes the following features:
- VMware vSphere® vMotion® enables you to migrate running virtual machines from one ESXi host to another without disrupting the virtual machine (a quick host-level check related to vMotion is shown right after this list).
- VMware vSphere® Distributed Resource Scheduler (DRS) provides load balancing for your virtual machines across the ESXi hosts. DRS leverages vSphere vMotion to balance these workloads.
- If configured, VMware vSphere® Distributed Power Management (DPM) can power off unused ESXi hosts in your environment. DPM can also power those hosts back on when they are needed again.
- VMware vSphere® Storage vMotion® allows you to migrate a running virtual machine's hard disks from one storage device to another. vSphere vMotion allows you to migrate a running virtual machine from one ESXi host to another, even during normal business hours, and it can operate without shared storage (you can migrate a running machine even if the ESXi hosts don't have shared storage).
- VMware vSphere® Storage DRS™ automates load balancing from a storage perspective.
- VMware vSphere® Data Protection enables you to back up your virtual machines.
- VMware vSphere® High Availability (HA) restarts your virtual machines on another host if you have a hardware problem.
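vMotion traffic leaves a host through a VMkernel interface that is tagged for the vMotion service. As a quick host-level sanity check (a sketch that assumes ESXi 6.x and that vmk1 is the interface carrying vMotion in your setup), you can display the service tags on that interface from the ESXi shell. Show which services (Management, VMotion, etc.) are tagged on a VMkernel interface:
# esxcli network ip interface tag get -i vmk1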
Storage: vSphere supports Fibre Channel, Fibre Channel over Ethernet (FCoE), iSCSI, and NFS for shared storage. vSphere also supports local storage. The vSphere HA feature does require shared storage between the machines, and this is important for us, because it is currently the only available HA mechanism for NSX Manager (as you will see later, there is a limitation that only one NSX Manager can be deployed per vCenter).
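To verify from the ESXi shell that a host actually sees the shared storage (for example before relying on HA), you can list what is mounted; this is just a quick host-level check, and the names will depend on your environment. List all mounted datastores/filesystems on the host:
# esxcli storage filesystem list
List the NFS mounts only:
# esxcli storage nfs list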
Virtual networking is similar to physical networking. Each virtual
machine and ESXi host on the network has an address and a virtual network card.
These virtual network cards are connected to virtual Ethernet switches. Virtual
switches attach your virtual machines to the physical network.
Virtual switches are of the following types:
- Standard switch architecture (Virtual Switch, vSwitch): manages virtual machine networking at the host level. There is NEVER a direct connection between two vSwitches, and Spanning Tree is OFF, so EAST-WEST traffic is NOT ALLOWED between vSwitches. The only way out of the vSwitch is via UPLINKs (physical interconnections with the physical switch, NIC = vmnic), which are teamed to work as one link. There is a variety of ways of teaming them (Active-Standby, LACP, etc.).
- VMware vSphere® Distributed Switch (vDS) architecture: manages virtual machine networking at the Data Center level. Keep in mind that one vDS only covers one Data Center span; this is not a physical boundary, but the vSphere Datacenter level. Host-level commands for listing both switch types are shown right after this list.
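A quick way to see both switch types from an individual host is the ESXi shell (a small sketch; the switch names will depend on your environment). List the standard vSwitches configured on the host:
# esxcli network vswitch standard list
List the distributed switches (vDS) the host is joined to:
# esxcli network vswitch dvs vmware list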
The aim of all these virtualization techniques is to reach the stage of a Software Defined Data Center (SDDC = SDN + SDS [storage] + SDC), where the Service Providers are decoupled from the physical infrastructure, allowing them to use any x86 servers, any storage, and any IP networking hardware, as shown in the example below.
A software-defined data center is decoupled from the underlying hardware, yet takes advantage of the underlying network, server, and storage hardware. This is where VMware NSX enters the game, among other things because it can do Layer 2, SSL, and IPsec VPNs, which provides business continuity and disaster recovery capabilities that are not otherwise available.
NSX uses the vSphere Distributed Switch (VDS) in vSphere (which requires an Enterprise Plus licence). The VDS is considered the Data Plane, and it needs to be set up before you can later configure the Logical Switch as a VXLAN platform, even though you can run some services on the standard vSwitch. Distributed switches must be in the same vCenter Server system as NSX Manager so that NSX Manager can use the distributed switch.
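Once the hosts are prepared for VXLAN, a common troubleshooting step is to test VTEP-to-VTEP connectivity and MTU directly from the ESXi shell. The sketch below assumes that vmk3 is the VTEP VMkernel interface and that the transport network uses a 1600-byte MTU (hence the 1572-byte, don't-fragment ping); adjust both to your environment. Test VXLAN transport connectivity towards another VTEP:
# vmkping ++netstack=vxlan -I vmk3 -d -s 1572 <remote-VTEP-IP>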
The VDS is actually a second vSwitch included in vSphere, and it allows you to do things more globally, not only at the level of a single host (ESXi) as with a standard vSwitch. The biggest advantage of the VDS is that it can manage all vSwitches in a data center, not only the individual switches per host. Keep in mind that the VM Port Groups are configured on the vDS and the configuration applies to all hosts, while the VMkernel port groups are still configured on each individual host (ESXi in our case), just like with the standard vSwitch.
The VMkernel is the liaison between virtual machines (VMs) and the physical hardware that supports them. VMkernel ports are special constructs used by the vSphere host to communicate with the outside world. The goal of a VMkernel port is to provide some sort of Layer 2 or Layer 3 service to the vSphere host. Although a VM can talk to a VMkernel port, VMs do not consume them directly.
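To see which VMkernel ports a host actually has (interface names and addresses will of course differ in your environment), list the vmk interfaces and their IPv4 configuration from the ESXi shell. List all VMkernel interfaces on the host:
# esxcli network ip interface list
Show their IPv4 addresses:
# esxcli network ip interface ipv4 get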
A dvUplink is the representation in the VDS of the physical links from each host. This way, regardless of how the vmnic uplinks are named at the host level (vmnic1, vmnic2 … vmnic10 …), at the VDS level they are simply shown as dvUplink1 and dvUplink2. This way we can simply define which one is Active and which one is Standby without having to configure each host separately.
TIP: Cisco Nexus 1000v is a VDS alternative, but for now NSX
doesn’t integrate with the 1000v.
There
are features that the standard vSwitch cannot provide, and the VDS can, such
as:
- NIOC (Network I/O Control), to tune load distribution and priorities. NIOC can be used to set limits and shares in order to set priorities, for example to implement QoS with NSX.
- Port Mirroring (SPAN and
RSPAN).
- NetFlow (and there are some free
NetFlow collectors, such as SolarWinds).
- Private VLANs (not really needed in NSX
because of Micro Segmentation and DFW and DLR features).
- Ingress/Egress Traffic
Shaping. The
most common use case is vMotion, and it’s supported by NSX.
Network I/O Control (NIOC) is an advanced feature of the vSphere Distributed Switch that provides traffic management capability. You can use the Network I/O Control features of the Distributed Switch, for example, to limit vMotion traffic between hosts so that it does not crowd out other traffic types. By default, Network I/O Control has 8 predefined Network Resource Pool types:
- NFS Traffic
- Management Traffic
- vMotion Traffic
- Virtual SAN Traffic
- vSphere Replication (VR) Traffic
- iSCSI Traffic
- Virtual Machine Traffic
- Fault Tolerance (FT) Traffic
Assigning Limits and Shares to these various categories of traffic provides an easy way to assign priorities to network traffic. In addition, Network I/O Control provides the flexibility to define your own Network Resource Pool types, which will be explored later in the module.
The VDS needs to cover all the clusters, and it is what actually switches the frames. You need to configure the following VDS attributes:
- Teaming and Failover.
- Security Options.
- Monitoring.
The I/O or Data Plane on the VDS is made up of hidden vSwitches on each vSphere host that is part of the dvSwitch. In order to troubleshoot it, you can log in to the ESXi host via SSH. View all physical NICs attached to the host:
# esxcli network nic list
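From the same SSH session you can dig a bit deeper; a couple of follow-up commands (vmnic0 is just an example name here, use whatever the previous listing shows). Show the details of a specific physical NIC:
# esxcli network nic get -n vmnic0
List the VMs running on this host and the networks they are attached to:
# esxcli network vm list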