V2D updated with new NSX integration

Following my last post about VMware Validated Design, I’m happy to review Nikhil Kelshikar‘s post on his blog, which allows us to build a compliant design including the NSX features.


Well, NSX was already present in the previous design references, but this integration includes new content such as:

  • Routing Design
  • Security Policy Design
  • Sizing Guidance

I don’t want to duplicate Nikhil’s post, so I’ll just write down my first impressions of this update.

First of all, it covers vSphere 6.0 – and if you’re planning a new SDDC, that’s mandatory.

I think that a best-practices section will help everyone (us included) involved in a practical SDDC design, allowing plans to be adapted according to those rules. The theory explained in the previous release is great, but a practical guide is even better.

In a few words, we can consider this update a series of recommendations: a validation of the theory in terms of routing and security, a strong help for architects, and a good read for anyone who wants to better understand how NSX works in a real environment – maybe even a smart aid for VCP-NV study.

New All-Flash Array for Tintri

Another post about Tintri. But this is news I can’t ignore: its T5000 all-flash array line has a new member today. A little brother, the T5040, with up to 18 TB and 1,500 VMs in a 2U box.
Moreover, there’s a new version of the operating system, Tintri OS 4.1, supporting the XenServer hypervisor among other new features.
Last, “Tintri Analytics”: a predictive system working at the VM and application level.


This new all-flash array, at a street price of $125k, opens this technology to companies with lower budgets, and is aimed at applications with heavy workloads like large persistent databases or persistent VDI.
The raw capacity is 5.76 TB, made up of 24 x 240 GB SSDs driven by dual controllers.
At the moment Tintri offers three all-flash models (the T5000 series) and three hybrid models (the T800 series).
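As a quick sanity check on those numbers (the figures come from the announcement, while the data-reduction ratio is my own inference, not a Tintri specification): 24 drives of 240 GB give the 5.76 TB raw, so the “up to 18 TB” figure presumably refers to effective capacity after dedupe and compression.

```python
# Back-of-the-envelope math for the T5040 figures quoted above.
ssd_count = 24
ssd_size_gb = 240

raw_tb = ssd_count * ssd_size_gb / 1000    # 5.76 TB raw
effective_tb = 18                          # "up to 18 TB" from the announcement

# If 18 TB is effective capacity, it implies roughly a 3x data-reduction
# assumption (dedupe + compression) over the raw flash -- my inference only.
implied_reduction = effective_tb / raw_tb

print(f"raw: {raw_tb:.2f} TB, implied data reduction: {implied_reduction:.1f}x")
```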

This newcomer will be managed by the new OS 4.1, announced in December in a webinar by co-founder and CTO Kieran Harty. The most important feature is support for another hypervisor, XenServer, but there are also file-level restore, shared snapshot deletion, OpenStack/Cinder support and more.


Tintri Analytics goes deep inside the customer’s VMs and applications to forecast future needs in terms of space and performance – it will complement the already comprehensive built-in analytics. And you won’t run out of space, since the analytics will predict when and what to buy.
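Just to illustrate the kind of prediction involved, here is a minimal sketch of a linear space forecast – my own simplification, not how Tintri Analytics is actually implemented:

```python
from datetime import date, timedelta

def days_until_full(samples, capacity_tb):
    """Naive linear forecast: derive the growth rate from the first and last
    samples and extrapolate the day used space reaches capacity.
    samples: list of (date, used_tb) tuples, oldest first."""
    (d0, u0), (d1, u1) = samples[0], samples[-1]
    daily_growth = (u1 - u0) / (d1 - d0).days
    if daily_growth <= 0:
        return None  # not growing, no exhaustion date to predict
    return d1 + timedelta(days=int((capacity_tb - u1) / daily_growth))

# Hypothetical usage history of a datastore, in TB used per month.
history = [(date(2016, 1, 1), 9.0), (date(2016, 2, 1), 10.5), (date(2016, 3, 1), 12.0)]
print(days_until_full(history, capacity_tb=18.0))  # projected exhaustion date
```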


Another likely success for the Californian company.

Disaster recovery: Zerto’s vision for a virtual world

Several times in my career I have faced a system issue that needed a “time machine” to solve. Usually a restore from backup was enough, provided a backup had been scheduled. In other situations, if and when the company I worked for could afford a traditional DR solution, I was lucky enough to rebuild the system from its mirror. Many times I had to rebuild the whole system, losing data that was usually not critical enough to have been backed up.


Then virtualization came. Snapshots were a godsend: the main thing was to schedule them, and even if they could have an impact on performance, the pros far outweighed the cons.

After some time, when virtualization became the norm and not just the exception, snapshots started to become annoying. In recent years, having a backup was no longer enough for a critical production system – too many hours of work could be lost. And snapshots were no longer adequate either: they were taken either frequently – with a performance impact – or occasionally, between one backup and the next – in which case a lot of data could be lost.

At VMworld 2012 I visited Zerto‘s booth. I was amazed by their solution, because it was near-synchronous, not snapshot based, needed no dedicated alternative datacenter (either hot or warm) and, above all, was hypervisor based.


Over the years the product has evolved until covering, today, even hybrid-hypervisor environments. So it is no longer just hardware agnostic, but hypervisor agnostic too.

We recently upgraded to the latest version, 4.0, which is a major release.

Before reviewing the latest version, I have to say that the upgrade process took very little time and, above all, very few operations – it was a real walk in the park even in a complex environment like ours.

First of all, the architecture. We’ll talk about the VRA (Virtual Replication Appliance), the ZVM (Zerto Virtual Manager), the ZCM (Zerto Cloud Manager) and the VPG (Virtual Protection Group), just to name some acronyms.

At the lowest layer are the VRAs: small Linux appliances, one per hypervisor host, that take care of every single VM – all its pointers, all its network data, where its storage is, and so on.

Just above lies the ZVM: it manages all the VRAs, applies the user’s settings via an intuitive (and nice) GUI, and connects to its paired ZVM at the recovery site.


In the case of cloud providers, or simply in environments where more than one ZVM is needed, you’ll have to install a ZCM: an orchestrator for the ZVMs and, above all, a tool that offers cloud multitenancy, exposing a self-service interface for the end user (the ZSSP) and handling the installation of the Connector – a proxy that acts as the paired ZVM for the customer, while guaranteeing complete privacy by showing only that customer’s organization and keeping all the other customers’ organizations separate.
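To fix the acronyms in my head, here is a minimal sketch of how I picture the hierarchy – a purely illustrative toy model, with nothing to do with Zerto’s actual code or API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Purely illustrative model of the hierarchy described above;
# the class names mirror the acronyms, not any real Zerto object.

@dataclass
class VRA:            # one per hypervisor host, tracks its VMs' writes
    host: str
    protected_vms: List[str] = field(default_factory=list)

@dataclass
class VPG:            # a group of VMs protected (and recovered) together
    name: str
    vms: List[str]

@dataclass
class ZVM:            # one per site: manages the VRAs, pairs with the recovery-site ZVM
    site: str
    vras: List[VRA] = field(default_factory=list)
    vpgs: List[VPG] = field(default_factory=list)
    paired_with: Optional["ZVM"] = None

@dataclass
class ZCM:            # cloud manager: orchestrates many ZVMs, one tenant view each
    zvms: List[ZVM] = field(default_factory=list)

# Example: a customer site paired with the recovery site at a provider.
prod = ZVM(site="customer-dc", vras=[VRA(host="esxi-01", protected_vms=["db01"])])
dr = ZVM(site="provider-dc")
prod.paired_with, dr.paired_with = dr, prod
```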


Some of the benefits experienced over these years – and not only “marketing words”:

  • RTO of minutes (hence the name Zerto – Zero RTO), RPO of seconds
  • Native multitenant architecture
  • HW agnostic
  • Full vCloud Director compatibility and integration
  • Zero impact on production VMs
  • Full automation of failover, failback and test
  • Partial recovery

The main difference I found compared with the other solutions on the market was the RTO/RPO timing (low enough to call it not only DR but BC too), and the absence of snapshots, thanks to a journal in which every modification written to a protected vDisk is replayed on the replicated one and consolidated, one by one, once the chosen history window has elapsed.
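A minimal sketch of the journal idea as I understand it – my own simplification for illustration, not Zerto’s implementation:

```python
from collections import deque

class JournaledReplica:
    """Toy model of journal-based replication: writes are appended to a
    journal and consolidated into the replica once they age beyond the
    configured history window. Purely conceptual, not real Zerto code."""

    def __init__(self, history_seconds):
        self.history_seconds = history_seconds
        self.journal = deque()   # entries of (timestamp, block, data)
        self.replica = {}        # consolidated state: block -> data

    def write(self, block, data, now):
        # Every write to the protected vDisk lands in the journal first.
        self.journal.append((now, block, data))
        self._consolidate(now)

    def _consolidate(self, now):
        # Fold entries older than the history window into the replica.
        while self.journal and now - self.journal[0][0] > self.history_seconds:
            _, block, data = self.journal.popleft()
            self.replica[block] = data

    def recover(self, point_in_time):
        # Rebuild the disk as it was at any point still covered by the journal.
        state = dict(self.replica)
        for ts, block, data in self.journal:
            if ts <= point_in_time:
                state[block] = data
        return state

# Example: 1-hour journal; recover the state as of an earlier point in time.
disk = JournaledReplica(history_seconds=3600)
disk.write(block=0, data=b"v1", now=0)
disk.write(block=0, data=b"v2", now=5400)   # 90 minutes later, v1 is consolidated
print(disk.recover(point_in_time=3600))     # {0: b'v1'}
```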

Compared to a traditional solution the difference is even wider: the possibility to recover a single VM and not an entire storage volume, no need for a mirrored datacenter, and the resulting savings.

Now, what this solution lacks, from my point of view:

  • Since version 3.5 Zerto has implemented extended recovery: a middle way between DR and backup. I’d like to reduce the number of vendors for many reasons, so if backup could be covered by Zerto I’d be satisfied. But today we can’t use it for backup for two main reasons: it doesn’t dedupe, and restore from backup isn’t integrated into vCloud Director the way recovery from DR is. So the customer wouldn’t be autonomous.
  • From a CSP point of view, another missing capability is the possibility of connecting to customers hosted on other CSPs.
  • It is a virtual-only solution, but this is not a limitation for our needs.
  • It does not replicate IDE disks – some virtual network appliances are based on IDE disks, and WatchGuard is one of them.
  • It doesn’t replicate vShield Edge rules, neither the org ones nor the vApp ones. The first case can be worked around by replicating those rules by hand before any disaster occurs, but the vApp ones cannot, since the vApp is created only when a failover starts.

My opinion, anyway, is that this is a real enterprise solution, whose benefits far outweigh the disadvantages. A particular mention goes to the support guys. They’re highly professional, responsive at any hour of the day (and night), and if a WebEx is needed they won’t leave you until the issue is solved – no sentences like “I’ll call you back once my senior engineer has evaluated the issue”: that senior engineer will be online for your issue right away.

Be aware, anyway, that Zerto is a great tool, but it’s a TOOL, not a DR plan. You need a plan to decide objectively when to declare a disaster and which procedures have to be followed; Zerto will be one part of that plan. Don’t make this mistake, as some of our customers did.

I’m sure I’ve forgotten some important feature; I hope Zerto will forgive me. The comments section below will welcome any suggestions and corrections.

V2D: VMware Validated Designs

In the past few days I’ve been wondering how hard it would be to build a new datacenter from scratch. I was thinking about the lower layer, the hardware, but maybe even more about the upper layer: the virtualization platform and, above it, all the related services.


I supposed the hardware part could be addressed more or less easily with the many available models, from vendor references to best practices, since it’s a relatively static environment – yes, every day there are new technologies, but they are often improvements of existing ones – whereas the software layer was, for me, the real challenge. All the mutual implications in terms of compatibility between all the versions made me think this could be the hardest part to get right.

Assuming VMware as the virtualization platform, the hardware isn’t hard to choose, as long as it’s compliant with the HCL.
I’m talking only about the case of a “build your own” solution; it doesn’t apply to converged or hyperconverged infrastructures.
Once all the servers, storage and network devices are cabled together, the real question is: I’d like to have the latest version of vSphere. Even that is quite simple. The difficulty comes afterwards: which versions of the surrounding components will be compatible? Until now we had matrices to look at, several of them, and the only way to be sure everything worked was to set up a test environment.


Then the guys at VMware had an idea: set up these test environments for us. And V2D was born.
Once you – or your vendor, or your host – complete the hardware layer, you can pick a version of vSphere – the latest, but not necessarily – and, thanks to VMware Software Manager, a free tool, you’ll get as output the versions of all the other related components that will be compatible. It doesn’t simply read and process all those matrices for you: through a nice GUI it provides the list and the download links, along with new-release detection and, last but not least, the integrity check that is usually skipped.
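Conceptually, the tool does the matrix lookup we used to do by hand – something along these lines (a trivial sketch with placeholder version numbers, not how VMware Software Manager actually works):

```python
# Toy compatibility matrix: for a chosen vSphere version, which versions of
# the surrounding components are validated to work together.
# The version numbers below are placeholders, not real guidance.
COMPATIBILITY = {
    "vSphere 6.0": {"vCenter Server": "6.0", "NSX": "6.2", "vRealize Operations": "6.1"},
    "vSphere 5.5": {"vCenter Server": "5.5", "NSX": "6.1", "vRealize Operations": "6.0"},
}

def compatible_stack(vsphere_version):
    """Return the component versions validated for the chosen vSphere release."""
    try:
        return COMPATIBILITY[vsphere_version]
    except KeyError:
        raise ValueError(f"no validated stack for {vsphere_version}")

print(compatible_stack("vSphere 6.0"))
```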

But this is only the first step, just an automation. The real topic of this post is the availability of templates, blueprints, called V2D.
These were presented in depth in several sessions at the last VMworld (SDDC5440, SDDC5609) and in the Italian VMUG session by Andrea Siviero, in addition to a post by Kelly Dare. A nice video can also be viewed here.

These designs are the result of VMware’s test environments, ensuring that all the operations, architecture and application versions are compatible and work correctly with each other.
Let’s think of the designs as models, core structures. The core is designed; on top of the core you can choose to apply services, not necessarily VMware-branded (I’m thinking of Zerto for DR, for example, or Veeam for backup), applied following those alternative vendors’ setup guides.

Currently there are two of these templates available:

It’s a kind of statement from VMware: “if you follow this template, you have my assurance that it works, and it works well”.

That’s not all: whenever you need an update or upgrade, the designs themselves will be updated too, so you’ll keep the same compliance if you follow that architecture.

That said, being helped by a VMware engineer isn’t a bad idea. You’ll get a double check – of the design and of any mistake you might make. But you don’t need a full team as you did before; moreover, you can upgrade/update without help while keeping VMware compliance.

I will come back to this topic, since it’s too wide to be covered in a single post. For now, just to pin down some useful links.