Tag Archives: Datacenter

Get peace of mind with these simple monitoring tips

If a server falls over in the forest and no one raises an incident, does it actually go down?

As every good VMware administrator knows, there is no good reason why you shouldn’t be using some form of monitoring solution to keep watch over your VMware platforms. As the “VMware guy” you really can’t afford to waste your time keeping a constant watchful eye on things, just in case something bad were to happen. But let’s face it – from time to time bad things do happen!

There are many, many options available in the market to poke and probe your infrastructure to check if it’s all still there, doing what it should be doing. These range from free tools that simply ping devices and alert you if something fails to respond, to monster-sized monitoring and management solutions that cost an arm and a leg. The big comprehensive solutions are great, but they are typically very complex to design, deploy, configure and keep running, and will often only alert you to an issue once it has occurred and the phone is already ringing with your boss saying “has anything changed on the VMware platform today?”. That’s when panic sets in as you realise the production VMware cluster is spiralling into a full meltdown. The two options available to you are a) start troubleshooting the issue and hope you find a solution PDQ, or b) pick up your jacket, exit the building, and start getting your LinkedIn profile up to date because you’re going to need a new job!
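For illustration, here’s a minimal sketch of the simplest kind of tool mentioned above – a ping-and-alert loop. The hostnames are placeholders, and it assumes a Unix-like system where `ping -c` works (the flag syntax varies by OS):

```python
# Minimal sketch of the "just ping it" class of monitoring tool.
# Hostnames are placeholders; assumes a Unix-like OS where `ping -c` works.
import subprocess

HOSTS = ["esxi01.lab.local", "esxi02.lab.local", "vcenter.lab.local"]  # hypothetical names

def is_alive(host: str) -> bool:
    """Send a single ICMP echo request and report whether it was answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for host in HOSTS:
    if not is_alive(host):
        print(f"ALERT: {host} is not responding to ping")
```

Useful as far as it goes – but as the next section argues, knowing a device answers ping tells you nothing about the problems that are quietly building up.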

Taking a proactive approach

Wouldn’t a better approach be to discover potential issues in your environment before they happen? What if you could fix an issue before it brings down your entire VMware cluster? Surely that would see you rise up the ranks to demigod level, and then you could spend more time playing with all the cool new things you wish you had time to try.

I’ve spent the last few months getting to know the Runecast Analyzer product very well, and I can honestly say “Wow! I’m impressed”. From the simplicity of deploying it and having it scan your environment within minutes, to its easy-to-use, great-looking web interface, it really is an excellent tool to have in your toolbox.

Rich capability

What makes Runecast really stand out to me is that it can examine the setup of your VMware platform and check it against three main sources of information:

  1. VMware knowledge base articles
  2. VMware best practice guides
  3. VMware security hardening guides


The expert guys who developed and founded Runecast (VCDX #74, VCAPs, VCPs, etc.) continuously monitor and assess new KBs, best practice guides and security recommendations, and determine how to check for them. These updates are then automatically pulled down into the Runecast Analyzer appliance on a regular basis.

Once a scheduled scan picks up a potential issue in your environment, not only does Runecast flag the issue, it also provides you with a copy of the KB article directly in the web interface, or a link to the best practice/security hardening guide it came from. This means you can fully understand the issue before you decide to address it or choose to ignore it.

For example, your environment may require certain non-standard settings, such as promiscuous mode on a port group. In this instance you can simply choose to ignore the alert by way of the highly configurable filter. Runecast will continue to re-scan your environment on a regular basis (defined by you) to ensure continuous compliance and help protect you against configuration drift.


It also allows you to send the log files from vSphere hosts and Virtual Machines (the VMs’ VMware logs, not logs from inside the guest operating systems or applications, just to be clear) to the Runecast Analyzer appliance and have these checked for issues too. All of this can be configured in the Runecast UI (provided the account you use has sufficient permissions in vCenter to make these changes) with a couple of simple clicks. It couldn’t actually be any easier than that, could it?
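For context, this kind of host log forwarding boils down to pointing each ESXi host’s syslog at the appliance. Below is a hedged pyVmomi sketch of that one underlying step done by hand – the vCenter address, credentials and appliance hostname are all placeholders, and Runecast’s own UI does this for you:

```python
# Hedged sketch: point every ESXi host's syslog at a log-collecting appliance
# by setting the Syslog.global.logHost advanced option. All names and
# credentials below are placeholders for a lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    option = vim.option.OptionValue(key="Syslog.global.logHost",
                                    value="udp://appliance.lab.local:514")
    host.configManager.advancedOption.UpdateOptions(changedValue=[option])
    print(f"{host.name}: syslog now forwarded to the appliance")

Disconnect(si)
```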

And… it does all this onsite, so no data is sent back from the appliance for analysis somewhere in a different country, or stored on a server that you have no control over – so there are no security concerns there.

As I said, I’ve been giving this VM monitoring and troubleshooting solution a really good bash around for the last few months, and I’m well impressed. If you fancy checking it out for yourself you can download a free 30-day trial, and get it up and running in your own environment in minutes. And who knows, perhaps just doing this alone could help save one tree, sorry, server from falling over.

By Stuart McEwan

– See more at: http://www.vifx.co.nz/blog/get-peace-of-mind-with-these-simple-monitoring-tips


VMware Social Media Advocacy


The Practical Path to NSX: Security, Automation, Application Continuity


Read more about network virtualization with VMware NSX here: https://www.vmware.com/products/nsx/ Milin Desai of VMware gives an overview of VMware NSX at VMworld 2015. He highlights the three most common customer pain points and how VMware NSX addresses them through its value proposition. See a live demo of NSX’s infrastructure security, IT automation, and application continuity in action.


VMware Advocacy

A full cloud stack – Autolab 2.6 – Part 1

As in my previous labs, I’ll use Ravello as the main platform to develop a complete stack for a cloud service – it doesn’t matter whether it’s for private or public use, the stack will be the same.

I’ll begin with the Autolab 2.6 blueprint from Ravello to save some time, as illustrated in this first post.

Then I’ll add an NSX component. In my previous post I built two clusters: one for management, the other for production, with the production resources managed by the first cluster and NSX residing in the management cluster. So NSX was double-nested: first inside ESXi, then inside ESXi nested in Ravello.

This put a heavy load on the whole environment.

This time I’ll use the Ravello environment itself as the management cluster, with one cluster for production, following the post by Sam McGeown.

Further posts will follow, covering the vCloud Director 8.0 installation and AirVM for management, since vCD 8 doesn’t provide a GUI.

I will skip the initial phase of the Autolab deployment, since it’s the topic of my next post (and of many others around the net).


The following image is my lab. Please ignore the last two ESXi hosts; I needed them for the previous nested NSX installation.

[Screenshot: my lab in Ravello]

Now we’ll begin by starting the first two VMs, the NAS and the Domain Controller. As soon as they’re up, we’ll proceed with the remaining three: the vCenter and two ESXi hosts. We’ll turn on just two ESXi hosts instead of the three in the Autolab blueprint, because I don’t want to destroy my previous vCenter environment, made, as described above, of two clusters of two. Anyway, two ESXi hosts will be enough.

Time to download NSX. IMPORTANT: initially I downloaded 6.2: DO NOT! You must use 6.1, since 6.2 won’t start in Ravello, no matter whether you change the NIC or add RAM. It probably depends on the underlying “magic” cast by Ravello. At least, this is what happened to me. You’ve been warned 🙂

After downloading NSX from my.vmware.com, I get an OVA file – which the Ravello upload doesn’t accept. I have to unpack the OVA into its OVF contents with 7-zip into a folder:
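If you’d rather script the unpacking than use 7-zip, note that an OVA is simply a tar archive, so a few lines of Python do the same job (the file name below is a placeholder):

```python
# An OVA is a tar archive wrapping the OVF descriptor, manifest and VMDK disks.
# Extract it so the individual files can be uploaded; the file name is a placeholder.
import tarfile

with tarfile.open("VMware-NSX-Manager.ova") as ova:
    print(ova.getnames())        # see what's inside (.ovf, .mf, .vmdk)
    ova.extractall(path="nsx-ovf")
```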

Then import it into the Ravello Library (if you haven’t done so before, you must download and install the GUI VM Import Tool).

To make things as simple as possible, I’ll use the same settings that Sam used:

  • Hostname: nsx
  • IP: 192.168.199.20
  • Subnet: 255.255.255.0
  • Gateway: 192.168.199.1
  • DNS: 192.168.199.4
  • Search: lab.local

It’s ready to deploy in our environment; start it up and enter the console to configure it, logging in with admin/default (the same credentials apply to enable mode):

Once rebooted, it can be accessed from one of the two Windows machines, the DC or the VC:

Logging in with the default credentials – admin/default – we’re presented with the home page; choosing “View Summary” brings up the main data screen. Make sure the first three services are running – SSH is not important, since we’ll configure everything from this GUI.

The “Manage” tab at the top right lets you configure the appliance, starting with General, where you can set up a syslog server (optional), adjust the NTP server if it wasn’t set earlier, and change the locale settings.

Moving down the left-side menu, we can set the network configuration (any modification requires a reboot, as shown below), while the SSL certificate page lets you generate a new certificate request to send to a Certificate Authority, upload an existing certificate, or just keep the self-signed one generated during installation.

We can set up an FTP server for backups – optional – and schedule them. Lastly (for this section), there is the Upgrade line, with a simple “Upgrade” button:

Now comes the connection to the vSphere components – if the NSX services are not started, the system won’t allow these settings. The Lookup Service will ask for SSO authentication details (and acceptance of the server thumbprint); success is shown by a green LED on the “Status” line. The same procedure applies to the vCenter connection – in this case, in addition to the green LED, we’ll refresh the inventory by clicking the arrows beside it.

The whole NSX installation process ends up adding a new item inside vCenter – visible from the Web Client, since the C# client won’t show it.


Even though I set up AD as an LDAP source in vCenter, with LAB\Administrator as enterprise global administrator, NSX didn’t allow me to make changes unless I was logged in as administrator@vsphere.local.

In the next part, coming in a few days, we’ll configure NSX in order to deploy the Controllers, prepare the hosts, and deploy VXLAN and the Edges. After that we’ll add vCloud Director and a GUI to manage it.

NSX 6.2 inside Autolab 2.6 – Part 1

The Ravello Systems blueprint of Autolab 2.6 is a great starting point for any vSphere-based deployment.

In this post, I’ll report my experience using Autolab 2.6 for an NSX 6.2 deployment.


I will skip over the steps needed to deploy Autolab in Ravello; I’ll cover them in another post. This first part goes as far as deploying NSX inside Autolab’s vCenter.

We have to move away from the standard setup at the point where, to build two clusters, three hosts are no longer enough – and three hosts is the Autolab standard.

Luckily the guys at Labguides had good intuition and added an “install fourth ESXi” entry to the iPXE ESXi menu. So my task was only to deploy a brand new application from the blueprint, save one of the standard ESXi hosts, delete that application, and deploy this new ESXi host into my current installation, adjusting all the network and host settings.

So, with the fourth host installed as I did the previous ones, I planned to have two clusters: Management and Prod.

The Management cluster will host the NSX Manager; Prod is the resource cluster, so it will take care of the Edges.

I’ll put Host1 and Host2 inside the “Management” cluster, and Host3 into “Prod” – before moving it, I had to put it into Maintenance Mode.

[Screenshot: Host3 in Maintenance Mode]
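For anyone who prefers scripting it, here’s a hedged pyVmomi sketch of the same two steps – maintenance mode, then the move into the cluster. The host, cluster and credential names are placeholders:

```python
# Hedged sketch: put a host into maintenance mode, then move it into another
# cluster via pyVmomi. Names and credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vc.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

def find(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

host = find(vim.HostSystem, "host3.lab.local")
prod = find(vim.ClusterComputeResource, "Prod")

WaitForTask(host.EnterMaintenanceMode_Task(timeout=300))
WaitForTask(prod.MoveInto_Task(host=[host]))   # host must be in maintenance mode
WaitForTask(host.ExitMaintenanceMode_Task(timeout=300))

Disconnect(si)
```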

That’s the new host ready to be inserted in my Prod cluster:

[Screenshot: the new host ready to join the Prod cluster]

Now proceeding with Add host:

We’ll answer “yes” to the security alert and go straight through with all the defaults:

Now we can watch the import process.

In my case, I had to reboot the host to allow the HA agent to be installed on it.

Since this Add Host step wasn’t automated by Autolab like the previous ones, but done manually, I had to add NIC1 as a teamed uplink on the first vSwitch, create the other vSwitch, and recreate all the storage connections.


To begin with, I had to modify the NIC teaming of the Management network, setting vmnic0 as active and vmnic1 as standby, overriding the switch failover order:

[Screenshot: failover order with vmnic0 active and vmnic1 standby]
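A hedged pyVmomi sketch of that failover-order override; it assumes a `host` object looked up as in the earlier snippet, and note it replaces the port group’s whole policy override, which is fine for a lab:

```python
# Hedged sketch: override the failover order on the Management port group,
# vmnic0 active and vmnic1 standby. Assumes `host` was found as shown earlier.
from pyVmomi import vim

netsys = host.configManager.networkSystem

# Locate the existing Management Network port group
pg = next(p for p in netsys.networkInfo.portgroup
          if p.spec.name == "Management Network")

spec = pg.spec
# Replaces the whole policy override with just the teaming order (lab shortcut)
spec.policy = vim.host.NetworkPolicy(
    nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=["vmnic0"],
            standbyNic=["vmnic1"],
        )
    )
)
netsys.UpdatePortGroup(pgName="Management Network", portgrp=spec)
```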

And then create the following port groups: IPStore1, IPStore2, FT, and vMotion.

We’re going to recreate the second switch, the one dedicated to VMs; a scripted sketch of both steps follows:
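Here is a hedged pyVmomi sketch of recreating the port groups and the second vSwitch. The names and VLAN IDs are placeholders, so check them against your own Autolab build sheet:

```python
# Hedged sketch: recreate the port groups and the second vSwitch with pyVmomi.
# VLAN IDs, names and port counts are placeholders; assumes `host` as before.
from pyVmomi import vim

netsys = host.configManager.networkSystem

# Port groups on the existing first vSwitch
for name in ["IPStore1", "IPStore2", "FT", "vMotion"]:
    spec = vim.host.PortGroup.Specification(
        name=name, vlanId=0, vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy())
    netsys.AddPortGroup(portgrp=spec)

# Second vSwitch, dedicated to VM traffic
netsys.AddVirtualSwitch(
    vswitchName="vSwitch1",
    spec=vim.host.VirtualSwitch.Specification(numPorts=128))
```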

It’s time to sort out the storage connections too. We have to add the following to the new host:

[Screenshot: the datastores to add to the new host]

I’ll skip this part, since the purpose of the post is the NSX installation, not recreating a host from scratch to be compliant with the Autolab environment. Anyway, it’s simply a “copy & paste” process from the existing hosts to the new one. For the iSCSI datastores, we’ll have to set up the HBA interfaces.

Time to deploy NSX. After downloading the OVA file, we’ll use vCenter to deploy it on the Management cluster. We’ll use the Web Client rather than the C# client, since the former gives us more options (if we haven’t done so before, we need to download the Client Integration Plug-in to deploy from the Web Client – the link appears during deployment).

Using IE11 I wasn’t able to use the plug-in, nor Windows Authentication. Following advice from several forums (https://communities.vmware.com/thread/515713?start=0&tstart=0 is one of them), I downloaded and used Chrome.

By the way, I don’t know if this is just my problem, but my vCenter server didn’t start the vSphere Web Client service automatically, even though it was set to Automatic.

It’s important to check the box accepting the extra configuration options.

Autolab creates datastores no larger than 58 GB, which is less than NSX requires for a “thick” deployment. We can use “thin” provisioning, and iSCSI2, which is the largest datastore available. The storage policy will be the default one – we’re using neither vSAN nor VVols.

At this point I ran into an annoying error: “A connection error occurred. Verify that your computer can connect to vCenter server”.


Since my computer IS the vCenter, this didn’t make sense. Googling the error I found a related KB – https://kb.vmware.com/kb/2053229 – stating that it is caused by a bad DNS configuration. That didn’t make sense either, since I had connected to the Web Client using the DNS name, but I double-checked by pinging the name, and realised that my server was using IPv6 – the solution was to disable IPv6 on my vCenter NIC:
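The same check is easy to reproduce in Python: resolve the vCenter’s name and see which address families come back, and in what order (the hostname is a placeholder):

```python
# Diagnose which address families a hostname resolves to, and in what order.
# The hostname is a placeholder for your vCenter's DNS name.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("vc.lab.local", 443):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
# If an IPv6 address is listed first, clients may prefer it -- which is what
# broke the Web Client connection here until IPv6 was disabled on the NIC.
```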

It works now, and I’m able to continue.

The network to map is the “Servers” one, but it’s not important: we only have one physical network, so it doesn’t matter. On the last page we’ll be asked for a password for the default CLI user, a password for privileged (enable) mode, and the network details. We’ll assign NSX the IP 192.168.199.200, adding a DNS entry for it too, as nsx.lab.local. The same DC server acts as the NTP server.

In our installation we won’t use IPv6 (ok… shame on me!)

And this is the summary and deployment:

I chose not to start it automatically because I might be forced to modify the resources assigned to NSX: my ESXi hosts can offer 24 GB of RAM and 12 CPUs – yes, I modified Autolab’s default values.

IMPORTANT: you must change the vNIC from VMXNET3 to E1000, according to Martijn Smit’s post: https://www.vmguru.com/2016/01/ravello-systems-vmware-nsx-6-2-management-service-stuck-in-starting/ . DO IT BEFORE STARTING THE VM – changing it afterwards won’t work; I had to redeploy. AND you should do it by SSH’ing into the ESXi host, not by deleting and recreating the NIC from the GUI, because if you do, the VM will rename the interface to eth1.
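In practice the SSH route means editing the VM’s .vmx file on the datastore while the VM is powered off. A hedged sketch of that edit (the datastore path is a placeholder):

```python
# Hedged sketch: flip ethernet0 from vmxnet3 to e1000 in the VM's .vmx file.
# Run with the VM powered off; the datastore path is a placeholder.
from pathlib import Path

vmx = Path("/vmfs/volumes/iSCSI2/nsx/nsx.vmx")
lines = vmx.read_text().splitlines()
lines = ['ethernet0.virtualDev = "e1000"'
         if line.startswith("ethernet0.virtualDev") else line
         for line in lines]
vmx.write_text("\n".join(lines) + "\n")
print("vNIC type set to e1000; reload or re-register the VM before powering on")
```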

At this point, the NSX VM doesn’t start:

I had to reduce the assigned RAM from 16 GB to 12 GB and the CPUs from 4 to 3, otherwise it wouldn’t start.

After the first boot, though, if you shut it down, you’ll be able to go back to 16 GB and 4 CPUs, as recommended.

And that’s the result of our efforts:

[Screenshot: the NSX Manager VM up and running]

After logging in, this is the main window:

[Screenshot: NSX Manager main window]

And the summary page:

[Screenshot: NSX Manager summary page]

This is the end of this first part. In the next one we’ll configure the NSX Manager and deploy our Edges and VXLANs.

Thank you for following!

 

Zerto: DRaaS in the cloud

This is my second post about Zerto, and I’d like to focus on the opportunity it offers to take advantage of the cloud – in particular VMware vCloud Director, or even AWS. We’ll talk about vCD here.

This is the case where a customer has his own infrastructure on premises but wants a parachute in the cloud. Or the opposite: production runs in the cloud, but he wants to protect his investment by using the on-premises infrastructure for DR.

First of all I’d like to thank Vladan Seget, who wrote a post about the Zerto lab setup here. I’m adding the third case: recovery to/from the cloud.

The architecture in this case is slightly different: there’s one more piece, ZCM (Zerto Cloud Manager), which sits above the ZVMs and the vCloud Director(s). The installation is as simple as the ZVM’s: a light Windows server and a package to install, with very few options to set.


We’ll access the GUI using the server’s credentials.

[Screenshot: ZCM login]

And this is what we get. I apologise for the black boxes; there will be many of them, since this is a production environment. This page summarises all our customers: the name (ZORG: Zerto Organization), a CRM ID for internal purposes (billing, among others), the number of organizations they have in vCloud Director, and the number of on-premises sites they connect.

Here we should make a distinction: there’s one more option, which Zerto calls In-Cloud. The customer can have everything in the cloud: one organization for production, and another one, usually in a different DC, for replication. And nothing at home. Or a combination of these. The only limit is that a single VM can’t be replicated to two places. But the same organization can have some vApps replicated in the cloud and others on premises, for example.

When we create a new ZORG, this is the information requested:

[Screenshot: new ZORG form]

Now, let’s take an existing ZORG to explain what’s inside.

[Screenshot: ZORG details]

As before: the ZORG, the ID and, most importantly, the path where VMDKs can be uploaded to preseed large VMs.

[Screenshot: ZORG permissions tab]

On the Permissions tab, we can choose what the customer can do, plus the credentials for accessing the ZSSP (Zerto Self Service Portal), which we’ll see later. This portal is needed in In-Cloud cases, and when the customer’s vCenter suffers a disaster, so that he can reach the Zerto panel through a browser.

[Screenshot: customer organizations tab]

The organization(s) belonging to the customer are displayed here, with the cloud site (a physical one, each with its own vCD), the name of the organization as it appears in vCD and, among other info, note the Max limits: we can cap each customer either by number of VMs or by storage GB.

To connect the on-premises infrastructure, we need a connector:

[Screenshot: Zerto Cloud Connectors]

ZCC, the Zerto Cloud Connector – in this case the customer has two vCenters connected to his organizations. The connector is a simple VM, deployed automatically by ZCM, with two NICs: one connected to the customer’s organization network, the other to the main management network. The easiest way to understand the connector’s role is as a proxy: it’s what makes Zerto multitenant, so the customer can see and manage only his own VMs.

The last two tabs display the groups of replicated VMs (VPGs, which Vladan will cover in his next post) and the errors for that customer:

[Screenshots: VPGs and errors tabs]

And lastly, the ZSSP: the portal where the customer manages his replication. This is the login page, where we enter the credentials seen earlier:

[Screenshot: ZSSP login]

after which we land here:

[Screenshot: ZSSP main page]

Here we can edit or delete a VPG, and see its peer site (vCenter or vCD), status and RPO.

Everything you need after a catastrophic event (or even something far less severe…)

Thank you again Vladan.

Join this expert-led webcast on 2/18 to…

Join this expert-led webcast on 2/18 to discover how to improve your mobile and data center security protocol to better protect your data center resources and limit user access to internal resources:



VMware Advocacy


EMC and VMware introduce VxRail, a new hyper-converged appliance


As most of you know I’ve been involved in Virtual SAN in some shape or form since the very first release. The reason I was very excited about Virtual SAN is that I felt it would give anyone the ability to develop a hyper-converged offering. Many VMware partners have already done this, and with the VSAN […] “EMC and VMware introduce VxRail, a new hyper-converged appliance” originally appeared on Yellow-Bricks.com. Follow me on twitter – @DuncanYB.


VMware Advocacy

V2D updated with new NSX integration

Following my last post about VMware Validated Design, I’m happy to review a post by Nikhil Kelshikar on his blog, allowing us to build a compliant design that includes NSX features.


Well, NSX was already in the latest design references, but this integration includes new features such as:

  • Routing Design
  • Security Policy Design
  • Sizing Guidance

I don’t want to duplicate Nikhil’s post, so I’ll just write down my first impressions of this update.

First of all, it covers vSphere 6.0, which is mandatory if you’re planning a new SDDC.

I think that a best-practices section will help all of those (us) involved in practical SDDC design, allowing plans to be adapted to those rules. The theory explained in the previous release is great, but a practical guide is even better.

In a few words, we can consider this update a series of recommendations: a kind of validation of the theory in terms of routing and security, a strong help for architects, and a good read for anyone who wants to better understand how NSX works in a real environment – maybe even a smart aid for VCP-NV study.