Moving the control plane to multi-cloud with new vRealize Operations

How can you manage a complex virtual environment and be sure every single bit is under control? One answer is VMware vRealize Operations (or, simply, vROps).

Tech Field Day 19 was a great opportunity for me to gain a deeper understanding of this product's features, and Taruna Gandhi gave us a clear overview of it, focusing on self-driving operations.

Self-driving operations are based on four main tenets. The first is Continuous Performance Optimization, perhaps the most important, since it addresses all the anomalies that could arise in a datacentre, preventing them before they occur. This tenet includes operations like automatic workload balancing through predictive DRS, which uses machine learning on the back end. The same mechanism is also able to find the right placement to avoid resource contention.


The second tenet is dedicated to Capacity Management. Customers are very concerned with just-in-time provisioning, just-in-time utilization and cost management and, most importantly, with the risk of running out of capacity. When that happens, it is vital to know what actions are available: reclaiming resources, engaging shadow customers or, simply, planning to contact hardware suppliers or rent capacity in the cloud as quickly as possible.
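vROps does this kind of forecasting with machine learning; as a back-of-the-envelope sketch of the underlying idea (the linear model and all names here are mine, not the product's):

```python
# Toy illustration of "out of capacity risk": project how many days remain
# before a cluster is full, assuming a simple linear consumption trend.
# vROps uses ML-based forecasting; this is only the naive version.
def capacity_runway(total_gb, used_gb, daily_growth_gb):
    """Days left before capacity is exhausted, assuming linear growth."""
    if daily_growth_gb <= 0:
        return float("inf")  # consumption is flat or shrinking
    return (total_gb - used_gb) / daily_growth_gb

# e.g. a 10 TB cluster, 8 TB used, growing 51.2 GB/day -> 40 days of runway
days = capacity_runway(10_240, 8_192, 51.2)
```

Forty days of runway is the signal that tells you whether to start reclaiming resources now or to call your hardware supplier.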

The third tenet brings Intelligent Remediation to our attention. In a multi-cloud environment the cloud is the new silo, replacing the traditional ones of compute, storage and networking. This means the previous tenets must be considered not only at the SDDC level but also at a higher one, and applied from apps down to infrastructure.

Finally, the last tenet covers Configuration and Compliance Management, an integrated discipline that should be part of self-driving operations.

Starting from the bottom of this self-driving operations platform, I was surprised: I was used to finding the components of an SDDC here, and they are indeed there, but not alone. There's VMC on AWS, there are VMware Cloud Providers and there are the public cloud providers. The platform's analytics engine collects all the underlying metrics, logs and events; it discovers this information, maps the dependencies, builds the topology and feeds everything into the machine learning platform. It's a huge amount of data.


The output is amazing. The four tenets mentioned above drive a continuous optimization that covers not only operational intent but business intent as well.

The business intent is particularly interesting. Through simple settings and tags in vCenter, a customer can define policies and rules about compliance, workload placement that respects it, consolidation based on OS and host, and different tiers of service. These are simple examples that Taruna explained, but they're illuminating use cases: the engine makes these decisions on a continuous basis, driven by the intents we set; it optimizes performance, capacity and costs, assures compliance and remediates when needed, covering the four tenets described above.
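To make the tag-driven placement idea concrete, here is a hypothetical sketch: tags on a VM express the business intent, and only hosts whose tags satisfy it are placement candidates. The tag names and the rule are invented for illustration; in the product the intents are defined in vCenter/vROps, not in code.

```python
# Intent-driven placement, toy version: a host is compliant if its tag set
# covers every tag the VM requires.
def compliant_hosts(vm_tags, hosts):
    """Return names of hosts whose tags satisfy all of the VM's tags."""
    return [name for name, tags in hosts.items() if vm_tags <= tags]

hosts = {
    "esx-01": {"tier:gold", "compliance:pci"},
    "esx-02": {"tier:gold"},
    "esx-03": {"tier:bronze", "compliance:pci"},
}
# A PCI-scoped, gold-tier VM may only land on esx-01
candidates = compliant_hosts({"tier:gold", "compliance:pci"}, hosts)
```

The engine's real job is to re-evaluate this continuously as tags, workloads and capacity change.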

The Quick Start screen is organized around the same four tenets of self-driving operations. At a glance, it gives the whole picture of what's happening inside the virtual environment, what should be done immediately, hints on what deserves a closer look, and several levels of alerts.


Talking of new features introduced in 7.5, following the four pillars introduced with vROps 6.7: the first pillar gained placement optimization for vSAN driven by a storage intent definition (in addition to the two main ones, business and operational). A lot of work went into capacity management, introducing Allocation Aware Capacity, since many customers act as internal service providers; planning scenarios from migration to HCI adoption, comparing private with public cloud; and, finally, new cost drivers (HCI, monthly OpEx and so on). A good amount of effort went into Intelligent Remediation, with many new features for OS and application monitoring, relationship widgets, metric correlation and supermetrics, ServiceNow integration, NSX-T support and management via Skyline. On the Configuration and Compliance side, the spotlight is on custom compliance standards for particular customers and on automated configuration management through vRO integration.

In conclusion, vROps extends its control across the on-premises SDDC and the public cloud, maximizing performance and capacity utilization, predicting and preventing issues, and making troubleshooting as quick as possible. vSAN and HCI operations, multi-cloud observability, app-aware operations and configuration & compliance complete the product.


I'm observing VMware moving more and more towards a multi-cloud world. Companies need it for resiliency and for differentiating OpEx costs; the market is mature, and comparison is a daily task. In my opinion VMware's approach is the right one: not fighting the public cloud (that would be silly) but integrating it as much as possible through its products, from vROps to NSX-T, just to name two. With vROps the customer can aggregate business and operational metrics across the whole workload, no matter where it runs, and make decisions with a global overview.

Using Pipelines for Administrators in Code Stream

When VMware presented its products at the last TFD 19, which I followed remotely, one in particular attracted me: Code Stream. Coding lies outside my comfort zone, so, OK, let's have a look at it.

Code Stream is one of the services inside the cloud automation portal. The new version, 3.0, is focused not just on application delivery but also on infrastructure delivery: an API-backed platform built around Pipelines.


Cody De Arkland started with a little story: when he was an admin, he updated templates monthly with security patches, and it was a pretty rough process. No matter how hard he tried to automate it, he still had to deploy the image, boot it up, verify it and validate it: a long, tedious process. The solution was to structure a pipeline that did it for him. The real point was to create pipelines for administrators, since developers were already used to them for application testing and result notification.

The pipelines meant for vAdmins are the so-called "infrastructure pipelines", built on a sequential execution engine: a way of executing jobs step by step.
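A sequential execution engine can be sketched in a few lines; this is a toy model of the idea (task names and the True/False convention are mine, not Code Stream's data model):

```python
# Minimal "sequential execution engine": run tasks in order and halt at the
# first failure, which is the essence of step-by-step pipeline execution.
def run_pipeline(tasks):
    """tasks: list of (name, callable) pairs; a callable returns True on success."""
    executed = []
    for name, task in tasks:
        executed.append(name)
        if not task():
            return executed, False  # halt the pipeline on failure
    return executed, True

steps = [
    ("deploy",   lambda: True),
    ("validate", lambda: False),  # fails, so "publish" never runs
    ("publish",  lambda: True),
]
ran, ok = run_pipeline(steps)
```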

During the session Cody showed the "vSphere Template Builder" pipeline. It deployed a blueprint in Cloud Assembly that spun up a build machine, pulled down Packer from HashiCorp, built an Ubuntu image from a Git-hosted template, ran that builder, deployed the result into the environment, converted it to a template, updated CAS via API, then deployed and tested the new image to verify everything was as expected. Impressive! This is a great example of an infrastructure pipeline driving the steps of building and testing systems.
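As a sketch of just the Packer step of such a pipeline, here is how a task might assemble the CLI invocation. The template and var-file names are invented; `packer build` and its `-var-file` flag are real Packer CLI usage.

```python
import shlex

# Build the `packer build` command line a pipeline task would execute.
def packer_build_cmd(template, var_file=None):
    cmd = ["packer", "build"]
    if var_file:
        cmd.append(f"-var-file={var_file}")  # external variables, e.g. secrets
    cmd.append(template)
    return cmd

cmd = packer_build_cmd("ubuntu.json", "secrets.json")
# a pipeline task would then hand this to subprocess.run(cmd, check=True)
print(shlex.join(cmd))
```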


Cody moved on to another, more cloud-native pipeline that runs against a Kubernetes cluster. Here he presented the concept of endpoints made of several tasks. This makes it possible to nest several pipelines inside a master pipeline, establish remote calls, make a DNS reservation through a REST call, and so on; creativity is the only limit. He also showed, as an example, the bash code of a task inside an endpoint. We can be notified via Slack, for example, about the result of a single operation or of the whole pipeline (or both).
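A notification task of that kind is easy to picture. The message format below is my own invention, but Slack's incoming webhooks really do accept a JSON POST of the form `{"text": ...}`:

```python
import json
from urllib import request

# Build the JSON body for a Slack incoming-webhook notification about a
# pipeline task's outcome.
def slack_payload(pipeline, task, status):
    return {"text": f"[{pipeline}] task '{task}' finished: {status}"}

def notify(webhook_url, payload):
    """POST the payload to a Slack incoming webhook (fire-and-forget)."""
    body = json.dumps(payload).encode()
    req = request.Request(webhook_url, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # real code would check the response status

payload = slack_payload("k8s-deploy", "dns-reservation", "SUCCESS")
```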


At this point Cody, aware that people want to interact with the outer world, decided to build a hook from scratch: creating an endpoint, configuring a trigger based on a Git webhook, specifying the pipeline to work against and, at last, hitting "create". This action binds that Git repo to the pipeline above.
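The trigger side of such a hook can be sketched as a small decision function. The payload shape below follows the common GitHub-style `ref` convention for push events, but it is an assumption here, not Code Stream's actual contract:

```python
import json

# Decide whether an incoming Git push webhook should trigger the bound
# pipeline: only pushes to the watched branch count.
def should_trigger(raw_body, watched_branch="master"):
    event = json.loads(raw_body)
    return event.get("ref") == f"refs/heads/{watched_branch}"

push = json.dumps({"ref": "refs/heads/master"}).encode()
trigger = should_trigger(push)
```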

The Code Stream main page also shows pipeline stats and metrics in the dashboard, either default or custom. The same dashboard is useful from the infrastructure point of view too, not only the application one: we just have to keep an eye on it.


Rollback is a great feature: should the pipeline not work as expected, or break during execution leaving garbage behind, rollback will clean the whole namespace, allowing a new execution from scratch.
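The bookkeeping behind that kind of cleanup can be modeled simply: each step that creates something registers an undo action, and on failure the undo actions run in reverse order, leaving nothing behind. This models the idea, not Code Stream's implementation:

```python
# Rollback bookkeeping, toy version: pair every "do" with an "undo" and
# unwind the undo stack in reverse order when a step fails.
def run_with_rollback(steps):
    """steps: list of (do, undo) callables; do() returns True on success."""
    undo_stack = []
    for do, undo in steps:
        if do():
            undo_stack.append(undo)
        else:
            for undo_fn in reversed(undo_stack):
                undo_fn()  # clean up everything created so far
            return False
    return True

log = []
steps = [
    (lambda: log.append("create-ns") or True, lambda: log.append("delete-ns")),
    (lambda: log.append("deploy") or True,    lambda: log.append("undeploy")),
    (lambda: False,                           lambda: None),  # failing step
]
ok = run_with_rollback(steps)
```

After the failing third step, the log shows the two successful steps undone in reverse order.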

The examples shown were interesting, and there can never be enough of them. But the core concept is always the same: automation via pipelines, with pipelines built at a high level as chains of smaller operations that together reach the goal. The real secret is to break a big, complex, important function into simple little chunks: it makes the whole pipeline much more elastic and agile, and simpler to troubleshoot and improve.

I think I should wander out of my comfort zone more often, especially onto the grass of code, and especially with such tools on the scene.

VMware Cloud on AWS – Getting Started as a…

Ever considered #VMConAWS for your workloads, maybe for redundancy or DR alongside your current provider?


What is VMware Cloud on AWS? If you're a VMworld regular or follow our announcements you've probably heard of VMware Cloud on AWS, but if you haven't, let me summarise. At its heart, it is a service (and this is an important point) which allows you to consume VMware Cloud Foundation (VCF) running on bare metal. The post VMware Cloud on AWS – Getting Started as a vSphere Admin appeared first on the VMware vSphere Blog.


VMware Social Media Advocacy