In July 2012 VMware acquired Nicira, a company founded by Martin Casado of Stanford University whose product was NVP (Network Virtualization Platform). That acquisition is essentially how VMware started the NSX venture and got into SDN.
NSX enables you to start with your existing network and server hardware in the data center, since it is independent of the underlying network hardware. This does not mean you can use just any hardware; you still need a stable, highly available, and fast network. ESXi hosts, virtual switches, and distributed switches run on top of that hardware.
On the other hand, to avoid physical-network bottlenecks, the design should tend towards a Leaf and Spine architecture. Nowadays you won't find many clients with a Spine and Leaf network already deployed, so the best approach is often to propose a gradual transition (an upgrade, even) in which the traditional 3-tier architecture evolves towards the L3 Spine and Leaf design presented below:
There are two versions of NSX:
- NSX-v, or NSX for vSphere (requires a 100% vSphere environment, no other hypervisors are allowed), which has more features.
- NSX-MH, or NSX for Multi-Hypervisor (still pending some standards being globally accepted).
The SDN concept, like any Software-Defined Service concept, is based on moving the proprietary features (the "intelligence") from hardware to the software layer, turning the hardware into a commodity. Some popular terms are SDS (Storage, such as vSAN), SDC (Compute, such as vSphere and Hyper-V), SDN (Network, such as NSX, Juniper Contrail, and Cisco ACI), and SDDC (Data Center), which includes all of the previous.
SDN introduces a very simple idea: provision and deploy the infrastructure in accordance with the application's needs. The optimization is clear, since we all know how long it takes to provision firewall rules, load balancers, VPNs, VLANs, IP addresses, and so on.
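To make that idea concrete, here is a minimal sketch of programmatic provisioning against the NSX-v REST API; the virtual-wire (logical switch) endpoint /api/2.0/vdn/scopes/{scopeId}/virtualwires is used under that assumption, and the hostname, credentials, and transport zone ID are placeholders:

```python
import requests

# Placeholders - adjust to your environment (hypothetical values).
NSX_MANAGER = "https://nsxmgr.lab.local"
AUTH = ("admin", "password")
TZ_SCOPE_ID = "vdnscope-1"   # transport zone (scope) ID, assumed

# XML spec for a new logical switch (virtual wire) in unicast mode.
payload = """
<virtualWireCreateSpec>
  <name>web-tier-ls</name>
  <tenantId>tenant-1</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"""

# POST the spec to the transport zone; on success the response carries
# the ID of the newly created logical switch.
resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/{TZ_SCOPE_ID}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only: skip certificate validation
)
print(resp.status_code, resp.text)
```

Compare that single API call with the days it can take to get a new VLAN, subnet, and firewall rule approved and configured by hand.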
Any NSX implementation will need a Management Cluster and a Compute Cluster. Both Layer 3 and Layer 2 transport networks are fully supported by NSX; however, to most effectively demonstrate the flexibility and scalability of network virtualization, an L3 topology with different routed networks for the Compute and Management/Edge clusters is recommended.
Best practices when integrating NSX with your infrastructure are to deploy the following components:
- A Management Cluster; this is a really important concept and the best-practice recommendation from VMware.
- The NSX kernel modules installed on the vSphere hosts, with no disruption in service (a quick verification sketch follows this list).
- Logical networking layered on top of a VDS (vSphere Distributed Switch); yes, you have to deploy a VDS.
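As a rough sketch, the kernel-module installation can be verified per host over SSH; the hostname, credentials, and the VIB name patterns (esx-vsip, esx-vxlan, which vary by NSX version) are assumptions here:

```python
import paramiko

# Placeholder host and credentials (hypothetical).
ESXI_HOST = "esxi01.lab.local"
USER, PASSWORD = "root", "password"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ESXI_HOST, username=USER, password=PASSWORD)

# List installed VIBs and keep the NSX-related ones.
# VIB names differ between NSX versions, so treat these patterns as
# assumptions rather than a definitive check.
stdin, stdout, stderr = client.exec_command("esxcli software vib list")
for line in stdout:
    if "esx-vsip" in line or "esx-vxlan" in line:
        print(line.rstrip())

client.close()
```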
Why would a customer deploy NSX? What improvements does it introduce?
- Network abstraction (VXLAN), allowing transparent communication over the entire network, Layer 2 over Layer 3, decoupled from the physical network (a header sketch follows this list).
- Automation, which brings transparency to the vendor variety in the infrastructure by adding a management layer on top.
- DLR (Distributed Logical Routing).
- Edge services, better than the old ones, with more features.
- Distributed Firewall (DFW), to have the firewall span the entire infrastructure and be able to allow/deny communication even between hosts in the same network.
- 3rd-party extensions such as an L7 firewall, or really any network service.
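As a rough illustration of the Layer 2 over Layer 3 encapsulation, the sketch below builds the 8-byte VXLAN header that precedes the inner Ethernet frame inside the outer UDP/IP packet (IANA assigns UDP port 4789 to VXLAN; older NSX deployments commonly used 8472); the sample VNI is arbitrary:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI.

    Layout: 1 byte flags (0x08 = 'VNI present'), 3 reserved bytes,
    3 bytes of VNI, 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08
    # Pack flags plus reserved bytes, then the VNI shifted into the
    # upper 24 bits of the second word.
    return struct.pack("!I", flags << 24) + struct.pack("!I", vni << 8)

# Example: segment 5001 (an arbitrary sample VNI).
header = vxlan_header(5001)
print(header.hex())  # '0800000000138900'
```

The original Layer 2 frame travels untouched behind this header, which is why VM-to-VM traffic can cross routed boundaries without the physical network knowing anything about the overlay.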
NSX components require a number of ports to be open for NSX communications (a quick reachability check is sketched after this list):
- 443 between the ESXi hosts, vCenter Server, and NSX Manager.
- 443 between the REST client and NSX Manager.
- TCP 902 and 903 between the vSphere Web Client and ESXi hosts.
- TCP 80 and 443 to access the NSX Manager management user interface and initialize the vSphere and NSX Manager connection.
- TCP 22 for CLI troubleshooting.
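A minimal sketch for checking that those TCP ports are reachable from a management workstation; the target hostnames are placeholders:

```python
import socket

# Hypothetical targets - replace with your NSX Manager, vCenter and hosts.
CHECKS = [
    ("nsxmgr.lab.local", 443),
    ("nsxmgr.lab.local", 80),
    ("nsxmgr.lab.local", 22),
    ("vcenter.lab.local", 443),
    ("esxi01.lab.local", 902),
    ("esxi01.lab.local", 903),
]

for host, port in CHECKS:
    try:
        # Attempt a TCP connection with a short timeout.
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} NOT reachable ({err})")
```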
NSX Manager is installed as a typical .ova virtual machine, and keep in mind that you need to integrate NSX Manager into the existing platform by connecting it to the desired vCenter Server. NSX uses the management plane, control plane, and data plane model. Components on one plane have minimal or no effect on the functions of the planes below.
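A small sketch of reading back that vCenter registration over the NSX Manager REST API; the /api/2.0/services/vcconfig endpoint, hostname, and credentials are assumptions based on the NSX-v API:

```python
import requests

# Placeholder NSX Manager address and credentials (hypothetical).
NSX_MANAGER = "https://nsxmgr.lab.local"
AUTH = ("admin", "password")

# Read back the vCenter Server registration held by NSX Manager.
resp = requests.get(
    f"{NSX_MANAGER}/api/2.0/services/vcconfig",
    auth=AUTH,
    verify=False,  # lab only: skip certificate validation
)
print(resp.status_code)
print(resp.text)  # XML describing the registered vCenter Server
```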
The NSX Controller is the central control point for all logical switches within a network and maintains information about all virtual machines, hosts, logical switches, and VXLANs (it supports Multicast, Unicast, and Hybrid control plane modes). Unicast mode replicates all BUM (broadcast, unknown unicast, multicast) traffic locally on the host and requires no physical network configuration.
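As a conceptual sketch (not NSX code) of what "replicated locally on the host" means in unicast mode: the source VTEP sends one unicast copy of a BUM frame to each VTEP on its own segment and one copy to a proxy VTEP in every remote segment, instead of relying on multicast in the physical network. The VTEP addresses below are made up:

```python
# Conceptual model of unicast (head-end) replication for BUM traffic.
# VTEPs grouped by IP segment; addresses are made-up examples.
segments = {
    "10.10.1.0/24": ["10.10.1.11", "10.10.1.12", "10.10.1.13"],
    "10.10.2.0/24": ["10.10.2.21", "10.10.2.22"],
}

source_vtep = "10.10.1.11"

targets = []
for segment, vteps in segments.items():
    if source_vtep in vteps:
        # Local segment: replicate directly to every other VTEP.
        targets += [v for v in vteps if v != source_vtep]
    else:
        # Remote segment: send one copy to a proxy VTEP, which
        # re-replicates to the rest of that segment.
        targets.append(vteps[0])

print(f"{source_vtep} sends {len(targets)} unicast copies: {targets}")
```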