In this video, Ryan Johnson demonstrates the failover of the Software-Defined Data Center management, automation, and operations solutions – distributed deployments of vRealize Automation, vRealize Orchestrator, and vRealize Operations – between regions in the IT Automation Cloud validated design. Follow Ryan Johnson on Twitter at @tenthirtyam or on our podcast at vmware.com/go/podcast. Learn more at vmware.com/go/vvd or follow updates on Twitter @VMwareSDDC.
Read more about network virtualization with VMware NSX here: https://www.vmware.com/products/nsx/ Milin Desai of VMware gives an overview of VMware NSX at VMworld 2015. He highlights the 3 most common customer pain points and how VMware NSX addresses them through its value proposition. See a live demo of VMware NSX’s infrastructure security, IT automation, and application continuity in action.
In general, VMware Tools tends to get installed on the guest OS and then left alone after that. While doing some reading and working on some “slowness” issues, I’ve found the Tools CLI to be very handy and powerful.
On the Windows side of things, here are a few “common” commands for using Tools via the command line. First we need to get into the directory where Tools is installed so the toolbox command can be run. The default directory is “C:\Program Files\VMware\VMware Tools”
The command below in the screenshot lists the base commands available with the VMwareToolboxCmd: VMwareToolboxCmd.exe help
I’m not covering all of the commands here, but the documentation from VMware does a good job.
I’ve been using VMwareToolboxCmd.exe stat <subcommand> to see stats within the guest OS, and I’ve included the snippet from the VMware documentation with a little detail for each stat subcommand:
As you can see, it covers many useful areas: you can check whether the VM is having performance issues related to CPU limits, or see if any memory is ballooning or swapping (I’ve also included memres and cpures just to see if your VM has any reservations):
You can manually turn time synchronization with the host on or off, and check its status:
Another useful command is disk with its shrink subcommand, which can actually shrink the virtual disk and reduce the space it takes up. As you can see from the screenshot, my test VM is a linked clone, so this can’t be run against it. It also doesn’t work against thick-provisioned VMs, since the space has already been allocated for the virtual disk and there is nothing to reclaim:
**NOTE: certain versions of Fusion have a “Clean Up Virtual Machine” button, and Workstation has a “Compact” menu command, that will do the same thing.
The commands are pretty much the same within a Linux OS; below is a screenshot from a CentOS VM. The default directory for this is /usr/sbin/ and the command is “vmware-toolbox-cmd”:
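The stat checks above are easy to script. Here is a minimal sketch of a wrapper that loops over the subcommands discussed in this post; the function takes the toolbox binary as an argument, so the same loop works with the Linux vmware-toolbox-cmd or the Windows VMwareToolboxCmd.exe (the binary paths in the comments are the defaults mentioned above):

```shell
# Minimal sketch: dump the guest-side resource stats covered above.
# Pass the toolbox binary as the first argument, e.g.:
#   dump_vm_stats /usr/sbin/vmware-toolbox-cmd
# or, from the Windows Tools install directory:
#   dump_vm_stats ./VMwareToolboxCmd.exe
dump_vm_stats() {
    cmd="$1"
    for sub in balloon swap memlimit memres cpures cpulimit speed sessionid; do
        printf '%s: ' "$sub"
        "$cmd" stat "$sub"
    done
}
```

Run inside a guest with Tools installed, this prints one line per stat, which makes it handy to capture before and after a “slowness” incident for comparison.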
There are many more commands that can be run from within the guest OS; as I stated, I’ve been using these commands (and seeing them used) to track down slowness issues within VMs.
In May of 2015, we did a video around VMware NSX vs. Cisco ACI. As part of that video, we made the prediction that VMware NSX and Cisco ACI would not be an either/or discussion in the future (I also did a webinar on the topic that you can download here). At the time, the common question we were getting from clients was whether they should be using NSX or ACI. My opinion was that Cisco ACI complemented the feature sets of VMware NSX quite well and that one could really support the other.
Now let’s fast forward to last month (February 2016) to Cisco Live Berlin where an announcement was made that supported just that idea. In sessions at the conference, they talked about a number of overlay networks in Cisco ACI and specifically mentioned VMware NSX. So what are these use cases? I’m planning on doing a series of videos to explore the topic further. The next video will discuss heavily utilizing Cisco ACI with an overlay of VMware NSX. After that, we’ll look at the opposite – more heavily leveraging the feature sets of NSX on top of the fabric automation feature sets that exist in ACI.
VMware NSX and Cisco ACI: NSX Now Supported on ACI
NSX has been the acronym on the lips of everyone in the SDN space. I have been studying the VMware NSX software-defined networking platform in preparation for my VCIX exam in the coming months, and I have a few thoughts to share from my study materials about this exciting product from VMware. But what is it, and what does it mean to your organization? VMware NSX is a network virtualization platform from VMware. The software is reportedly able to operate with any hypervisor, and it is a completely non-disruptive solution that can be deployed on any IP network from any vendor – both existing traditional networking models and next-generation fabric architectures. The physical network infrastructure already in place is all that is required to deploy a software-defined data center with NSX.
What are we solving?
i. Physical networks are hard to scale in multi-tenant data center environments; business units, customers, and acquisitions can all benefit from an overlay topology.
ii. Physical networks make VM mobility across data centers tougher when we use complex layer 2 adjacency designs.
Logical networks allow for greater automation and ease of provisioning, since everything is done in software (logical switching, firewalling, routing, load balancing).
iii. With server virtualization, a software abstraction layer (i.e. the server hypervisor) reproduces the familiar attributes of an x86 physical server (e.g. CPU, RAM, disk, NIC) in software. This allows components to be programmatically assembled in any arbitrary combination to produce a unique VM in a matter of seconds.
With NETWORK virtualization, the functional equivalent of a “network hypervisor” reproduces layer 2 to layer 7 networking services (e.g. switching, routing, firewalling, and load balancing) in software. These services can then be programmatically assembled in any arbitrary combination, producing unique, isolated virtual networks in a matter of seconds. With VMware NSX, existing networks are immediately ready to deploy a next generation software defined data center. Customers are using NSX to drive business benefits as shown in the figure below.
The main themes for NSX deployments are Security, IT automation and Application Continuity.
FEATURES OF NSX
Security: NSX can be used to create a secure infrastructure based on a zero-trust security model. Every virtualized workload can be protected with a fully stateful firewall engine at a very granular level. Security can be based on constructs such as MAC addresses, IP addresses, ports, vCenter objects and tags, Active Directory groups, etc. Intelligent dynamic security grouping can drive the security posture within the infrastructure. NSX can be used in conjunction with third-party security vendors such as Palo Alto Networks, Check Point, Fortinet, or McAfee to provide a complete DMZ-like security solution within a cloud infrastructure. NSX has also been widely deployed to secure virtual desktops, some of the most vulnerable workloads residing in the data center, to prevent desktop-to-desktop hacking.
Automation: VMware NSX provides a full RESTful API for consuming networking, security, and services, which can be used to drive automation within the infrastructure. IT admins can reduce the tasks and cycles required to provision workloads within the data center using NSX. NSX is integrated out of the box with automation tools such as vRealize Automation, which can give customers a one-click deployment option for an entire application, including the compute, storage, network, security, and L4-L7 services. Developers can use NSX with the OpenStack platform: NSX provides a Neutron plugin that can be used to deploy applications and topologies via OpenStack.
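As a quick illustration of that RESTful interface, here is a hedged sketch of calling the NSX Manager API from the shell with basic authentication. The hostname, credentials, and the example endpoint are my own assumptions based on typical NSX-v API paths, not something taken from this post:

```shell
# Build a basic-auth header for an NSX Manager REST API call.
# The credentials and endpoint below are illustrative assumptions.
nsx_auth_header() {
    printf 'Authorization: Basic %s' "$(printf '%s' "$1" | base64)"
}

# Against a real manager (self-signed certificate, hence -k):
#   curl -k -H "$(nsx_auth_header 'admin:yourpassword')" \
#        "https://nsx-manager.example.com/api/2.0/vdn/scopes"
```

Anything you can click in the NSX Manager UI can, in principle, be driven this way from automation tooling.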
Application Continuity: NSX provides a way to easily extend networking and security across up to eight vCenter Server instances, within or across data centers. In conjunction with vSphere 6.0, customers can easily vMotion a virtual machine across long distances, and NSX will ensure that the network and the firewall rules are consistent across the sites, essentially maintaining the same view everywhere. NSX Cross-vCenter Networking can help build active-active data centers. Customers are using NSX today with VMware Site Recovery Manager to provide disaster recovery solutions. NSX can extend the network across data centers, and even to the cloud, to enable seamless networking and security.
COMPONENTS OF NSX
Switching: Logical switching enables extension of a L2 segment / IP subnet anywhere in the fabric independent of the physical network design.
Routing: Routing between IP subnets can be done in the logical space without traffic leaving the hypervisor; it is performed directly in the hypervisor kernel with minimal CPU/memory overhead. Routing is done by the Distributed Logical Router (DLR) and is also one of the features of the Edge Services Gateway. It supports static routes and dynamic routing protocols (OSPF, IS-IS, BGP). The DLR provides an optimal data path for traffic within the virtual infrastructure (east-west communication), while the NSX Edge provides an ideal centralized point for seamless integration with the physical network infrastructure, handling communication with the external network (north-south communication) with ECMP-based routing.
Connectivity to physical networks: L2 and L3 gateway functions are supported within NSX to provide communication between workloads deployed in logical and physical spaces.
Edge Firewall: Edge firewall services are part of the NSX Edge Services Gateway (ESG). The Edge firewall provides essential perimeter firewall protection which can be used in addition to a physical perimeter firewall. The ESG-based firewall is useful for building PCI zones, multi-tenant environments, or DevOps-style connectivity without forcing the inter-tenant or inter-zone traffic onto the physical network.
VPN: NSX provides L2 VPN, IPsec VPN, and SSL VPN services to enable L2 and L3 connectivity. The VPN services address the critical use cases of interconnecting remote data centers and providing remote user access.
Logical Load Balancing: L4-L7 load balancing with support for SSL termination. The load balancer comes in two different form factors supporting inline as well as proxy mode configurations. It addresses a critical use case in virtualized environments, enabling DevOps-style workflows that support a variety of workloads in a topology-independent manner.
DHCP & NAT Services: support for DHCP servers, DHCP forwarding mechanisms, and NAT services. NSX also provides an extensible platform that can be used for deployment and configuration of third-party vendor services. Examples include virtual form-factor load balancers (e.g., F5 BIG-IP LTM) and network monitoring appliances (e.g., Gigamon GigaVUE-VM). Integration of these services is simple with existing physical appliances.
In future posts, as I advance in my studies, I will go deeper into this wonderful product from VMware.
To list the image profiles provided by a patch bundle, use the following command:
esxcli software sources profile list -d /path/to/.zip
The output will look like this:
[root@esx01:~] esxcli software sources profile list -d /vmfs/volumes/NFS-SYNOLOG
Name Vendor Acceptance Level
——————————– ———— —————-
ESXi-6.0.0-20160301001s-no-tools VMware, Inc. PartnerSupported
ESXi-6.0.0-20160302001-standard VMware, Inc. PartnerSupported
ESXi-6.0.0-20160301001s-standard VMware, Inc. PartnerSupported
ESXi-6.0.0-20160302001-no-tools VMware, Inc. PartnerSupported
Now you can update the system with a specific profile: esxcli software profile update -d /path/to/.zip -p ESXi-6.0.0-20160302001-standard. Note: you can run an ESXCLI vCLI command remotely against a specific host or against a vCenter Server system.
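If you script the update, the profile name can be pulled out of the listing instead of typed by hand. A small sketch (the depot path is a placeholder) that picks the first profile ending in “-standard” from output like the listing above:

```shell
# Extract the first image profile whose name ends in "-standard"
# from `esxcli software sources profile list` output.
pick_standard_profile() {
    awk '$1 ~ /-standard$/ { print $1; exit }'
}

# On a real host (depot path is a placeholder):
#   PROFILE=$(esxcli software sources profile list -d /path/to/.zip | pick_standard_profile)
#   esxcli software profile update -d /path/to/.zip -p "$PROFILE"
```

This keeps the depot path in one place and avoids copy/paste mistakes with the long build-numbered profile names.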
ESXCLI over PowerCLI
The same can be done via PowerCLI. The code below uses the ESXCLI version 2 (-V2) interface introduced in PowerCLI 6.3 R1.
# Get the ESXCLI v2 object for a particular host (esx01 as an example)
$esxcli = Get-EsxCli -VMHost (Get-VMHost esx01) -V2
# List the image profiles in the patch bundle
$arguments = $esxcli.software.sources.profile.list.CreateArgs()
$arguments.depot = "/path/to/.zip"
$esxcli.software.sources.profile.list.Invoke($arguments)
As in my previous labs, I’ll use Ravello as the main platform to develop a complete stack for a cloud service; whether for private or public use, the stack will be the same.
I’ll begin with the Autolab 2.6 Ravello blueprint to save some time, as illustrated in this first post.
Then I’ll add an NSX component. In my previous post I built 2 clusters, one for management and one for production, whose resources were managed by the first cluster, and NSX resided in the management cluster. So it was double-nested: first by ESXi, and second by ESXi nested in Ravello.
This meant a heavy load on the whole environment.
Now I’ll use the Ravello environment as the management cluster, plus a cluster for production, following the post by Sam McGeown.
Other posts will follow, showing the vCloud Director 8.0 installation and AirVM for management, since vCD 8 doesn’t provide a GUI.
I will skip the initial phase of the Autolab deployment, since it’s the topic of my next post (and of many others around the Net).
The following image shows my lab. Please ignore the last 2 ESXi hosts; I needed them for the previous nested installation of NSX.
Now we’ll begin by starting the first 2 VMs, the NAS and the Domain Controller. As soon as they’re up, we’ll proceed with the remaining 3: the vCenter and 2 ESXi hosts. We’ll turn on just 2 ESXi hosts instead of the 3 in the Autolab blueprint because I don’t want to destroy my previous vCenter environment, made, as described above, of 2 clusters of 2 hosts. Anyway, 2 ESXi hosts will be enough.
Time to download NSX. IMPORTANT: initially I downloaded 6.2: DO NOT! You must use 6.1, since 6.2 won’t start in Ravello, no matter whether you change the NIC or add RAM. It probably depends on the underlying “magic” performed by Ravello. At least, this is what happened to me. You’re warned 🙂
The NSX download from my.vmware.com is an OVA file, which the Ravello upload doesn’t accept. I must extract the OVA into its OVF components with 7-Zip into a folder:
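As an alternative to 7-Zip: an OVA is just a tar archive, so on a Linux or macOS machine you can unpack it with plain tar before importing the OVF. The file names below are placeholders:

```shell
# Unpack an OVA (which is a tar archive) into a folder of
# OVF components (.ovf descriptor, .vmdk disks, .mf manifest).
unpack_ova() {
    ova="$1"
    dest="$2"
    mkdir -p "$dest"
    tar -xf "$ova" -C "$dest"
}

# Example (placeholder file name):
#   unpack_ova VMware-NSX-Manager-6.1.x.ova ./nsx-ovf
```

The resulting folder is what gets fed to the Ravello import tool.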
Then import it into the Ravello Library (if you haven’t already, you must download and install the GUI VM Import Tool).
To make things as simple as possible, I’ll use the same settings that Sam used:
Ready to deploy in our environment: start it up and enter the console to configure it, logging in with admin/default (the same credentials apply to enable mode):
Once rebooted, access is possible from one of the 2 Windows machines, DC or VC:
Logging in with the default credentials – admin/default – we’ll be presented with the home page; choosing “View Summary” brings up the main data screen. Be sure that the first 3 services are running; SSH is not important, since we’ll configure everything from this GUI.
The “Manage” tab at the top right will allow you to configure the appliance, starting with General, where you can set up a syslog server (optional), adjust the NTP server if not already set, and change locale settings.
Moving down the left-side menu, we can set the network (any modification will require a reboot, as shown below), and the SSL certificate section will allow you to create a new certificate to send to a Certification Authority, upload an existing one, or just keep the self-signed one generated during installation.
We can set up an FTP server for backups (optional) and schedule them. Lastly (for this section), there is the Upgrade line, a simple “Upgrade” button:
Now comes the connection with the vSphere elements; if the NSX services are not started, the system won’t allow these settings. The Lookup Service will ask for authentication details for SSO (and acceptance of the server thumbprint): success will be shown with a green LED on the “Status” line. The same procedure applies to the vCenter connection; in this case, in addition to the green LED, we’ll refresh the inventory by clicking the arrows beside it.
The whole NSX installation process ends up adding a new item inside vCenter – visible in the Web Client, since the C# client won’t show it.
Even though I set up AD as the LDAP source in vCenter, with LAB\Administrator as enterprise global administrator, NSX didn’t allow me to make changes unless I was logged in as email@example.com.
In the next part, coming in a few days, we’ll configure NSX to deploy Controllers, prepare the hosts, and deploy VXLAN and Edges. After that, we’ll add vCloud Director and a GUI to manage it.
In this case, I’ll report my experience using Autolab 2.6 for an NSX 6.2 deployment.
I will skip over the steps needed to deploy Autolab in Ravello; I’ll cover them in another post. This first part will get us to the point of deploying NSX inside Autolab’s vCenter.
We have to move away from the standard setup the moment 3 hosts are no longer enough to build 2 clusters; 3 hosts is the Autolab standard.
Luckily the guys at Labguides had a good intuition and added an “install fourth ESXi” entry to the iPXE ESXi menu. So my task was only to deploy a brand-new application from the blueprint, save one of the standard ESXi hosts, delete that application, and deploy this new ESXi in my current installation, modifying all the network and host settings.
So, having installed the fourth host as I did the previous ones, I planned to have 2 clusters: Management and Prod.
The Management cluster will host the NSX Manager; Prod is the resource cluster, so it will take care of the Edges.
I’ll put Host1 and Host2 inside the “Management” cluster, and Host3 into the “Prod” cluster; before moving it, I had to put it in Maintenance Mode.
That’s the new host ready to be inserted in my Prod cluster:
Now proceeding with Add host:
We’ll answer “yes” to the security alert and go straight through with all the defaults:
Now we can watch the import process.
In my case, I had to reboot the host to allow the HA agent to be installed on it.
Since this Add Host wasn’t automated by Autolab like the previous ones, but manual, I had to add NIC1 in teaming to the first vSwitch, create the other vSwitch, and recreate all the storage connections.
First, I had to modify the NIC teaming for the Management network, setting vmnic0 as active and vmnic1 as standby, overriding the switch failover order:
And then create the following portgroups: IPStore2
Next we recreate the second switch, the one dedicated to VMs:
It’s time to sort out the storage connections too. We have to add the following to the new host:
I’ll skip this part, since the purpose of the post is the NSX installation, not recreating a host from scratch to be compliant with the Autolab environment. Anyway, it’s simply a copy-and-paste process from the existing hosts to the new one. Regarding the iSCSI datastores, we’ll have to set up the HBA interfaces.
Time to deploy NSX. After downloading the OVA file, we’ll use vCenter to deploy it on the Management cluster. We’ll use the Web Client rather than the C# client, since the former gives us more options (if we haven’t before, to deploy using the Web Client we need to download the Client Integration Plug-in; the link appears during deployment).
By the way, I don’t know if this is only my problem, but my VC server didn’t start the vSphere Web Client service automatically, although it was set to automatic.
It’s important to check the box accepting the extra configuration options.
Autolab creates datastores no larger than 58GB, which is less than NSX requires for a “thick” deployment. We can use “thin” provisioning on iSCSI2, the largest datastore available. The storage policy will be the default one; we’re using neither vSAN nor VVols.
At this point I encountered an annoying error: “A connection error occurred. Verify that your computer can connect to vCenter Server”:
Since my computer IS the vCenter, this didn’t make sense. Googling the error, I discovered a related KB – https://kb.vmware.com/kb/2053229 – stating that it is caused by a bad DNS configuration. This didn’t make sense either, since I had connected to the Web Client using the DNS name, but I double-checked by pinging that name and realized that my server was resolving it over IPv6. The solution was to disable IPv6 on my VC NIC:
It works now, and I’m able to continue.
The network to map is the “Servers” one, but it’s not important: we only have one physical network, so it doesn’t matter. On the last page we’ll be asked for a password for the default CLI user, a password for privileged (enable) mode, and the network information. We’ll assign NSX the IP 192.168.199.200, also providing a DNS entry for it as nsx.lab.local. The same DC server acts as the NTP server.
In our installation we won’t use IPv6 (ok… shame on me!)
And this is the summary and deployment:
I chose not to start it automatically because I might be forced to modify the resources assigned to NSX: my ESXi hosts can offer 24GB of RAM and 12 CPUs – yes, I modified the default Autolab values.
IMPORTANT: you must change the vNIC from VMXNET3 to E1000, according to Martijn Smit’s post: https://www.vmguru.com/2016/01/ravello-systems-vmware-nsx-6-2-management-service-stuck-in-starting/ . DO IT BEFORE STARTING THE VM – it won’t work if you change it afterwards; I had to redeploy. AND you should do it by SSH’ing into the ESXi host, not by deleting and recreating the NIC from the GUI, because in that case the VM will rename it to eth1.
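A sketch of that .vmx edit over SSH on the ESXi host. The ethernet0.virtualDev key is the standard VMX setting for the vNIC type, but the datastore path in the example is a placeholder for your own layout:

```shell
# Switch the first vNIC in a .vmx file from VMXNET3 to E1000.
# Run this on the ESXi host BEFORE the first power-on of the VM;
# a .bak copy of the original file is kept.
set_e1000() {
    vmx="$1"
    sed -i.bak 's/^ethernet0.virtualDev = "vmxnet3"$/ethernet0.virtualDev = "e1000"/' "$vmx"
}

# Example (placeholder path):
#   set_e1000 /vmfs/volumes/iSCSI2/nsx/nsx.vmx
```

Editing the existing line keeps the NIC identity intact, which is exactly why this beats deleting and recreating the adapter from the GUI.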
At this point, the NSX appliance still doesn’t start:
I had to reduce the assigned RAM from 16GB to 12GB and the CPUs from 4 to 3; otherwise it wouldn’t start.
After the first boot, however, if you shut it down, you’ll be able to go back to 16GB and 4 CPUs, as advised.
And that’s the result of our efforts:
Logging in, this is the main window:
And the summary page:
This is the end of this first part. In the next one we’ll configure the NSX Manager and deploy our Edges and VXLANs.
During the last VMUG UserCon in Milan I had the pleasure to meet Andy Cary, VCI Program Enablement Lead at VMware.
Since I began studying months ago for my VCAP-CIA, and in the meantime it “disappeared”, I was wondering which was the better option: switching to VCAP-DCV or waiting for VCIX (still DCV, since the Cloud path no longer includes vCloud Director – vRA took its place, and I’m really not confident with it).
His advice was to go ahead with the VCAP, for 2 reasons:
first of all, VCIX will not see the light of day before the new year, and even then it will go through a testing/beta period;
second, if you pass even just one of the 2 VCAP exams (DCAdministration or DCDesign) you’ll earn a certification. That exam will be converted into “half” of the VCIX when it is released, so it won’t be wasted time: you’ll only have to pass the other exam to earn the VCIX.
So, back to my old lab for VCAP-DCA; maybe I’ll invest some money in Ravello Systems. There’s no chance of running anything at home if I care about my marriage; I could borrow some unused gear in our Datacenter, but that would waste even more time.
I had the chance to test Ravello Systems by deploying Autolab, and I must say they’re amazing: I needed support for a little trouble and had it solved in minutes.