VMware SDDC with NSX Expands to AWS: The Best of Both Worlds

With VMC on AWS, customers can leverage the best of both worlds: the leading compute, storage, and network virtualization stack, enabling enterprises to build an SDDC.


VMware Social Media Advocacy


Enterprise Data as a Service (DaaS): Actifio’s new paradigm

I’m back talking about Actifio after my previous visit to their offices, this time joining the other delegates at TFD15 in San Jose. In a previous post I wondered what I would find this time compared with what I learnt last time. What I found was better than I had imagined.


The main concept at Actifio is that “data is a strategic asset”.

Something I had in mind was presented by Ash Ashutosh in a clear way, at least for me: some years ago, business was driven by IT. IT moved at its own speed, useful for cost saving, and the business followed it. Today the process is inverted: business drives IT, cost is no longer the driver, speed and agility are. Speed is set by the business, and an army of in-house developers can meet those needs by deploying on the cloud. This last part can be summarized by an overused word, DevOps, together with analytics.

The difference between the two models is shown in the following image:

[image: the two models compared]

Analytics on the developed applications bring huge value to the company, and the same goes for DevOps. Ash gave a clear example: it’s Friday night and the system is down. In the first model, all you can do is restore a few applications, either via DR or, in the worst case, from archived data. In the second model, I have plenty of developers available, and my concern is the business applications, not the system ones: developers can check and manage the business applications via API, without relying on the on-premises infrastructure.
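To make the “manage via API” idea concrete, here is a minimal sketch of what such a developer-driven check could look like; the endpoint URL and payload are hypothetical, not any specific product’s API:

```python
import requests

APP_API = "https://app.example.com/api/v1"  # hypothetical endpoint

def check_and_recover(token: str) -> None:
    """Probe the business application and redeploy it via API if unhealthy."""
    headers = {"Authorization": f"Bearer {token}"}
    health = requests.get(f"{APP_API}/health", headers=headers, timeout=5)
    if health.ok and health.json().get("status") == "up":
        return  # application is fine, nothing to do
    # The app is unhealthy: ask the platform to redeploy it via API,
    # without touching the underlying infrastructure at all.
    requests.post(f"{APP_API}/actions/redeploy", headers=headers, timeout=5).raise_for_status()
```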

This is the new paradigm, following the previous example: the cloud is no longer simply the place where applications reside, a datacentre, but a platform to deliver applications.

This paradigm consists of three important elements, each of which has (well… “should” have) its place in the cloud:

  • Infrastructure, moving to IaaS – easy to move
  • Applications, moving to SaaS – easy to move
  • Data: it still lacks a corresponding service in the cloud, yet it’s the real core of the whole stack, the “lifeblood of the business”

The solution is a so-called “Enterprise Data as a Service” (DaaS) that keeps data stateful, as before, but makes it easy to move instantly all around the cloud, that is, all around the world, and that connects the previous two elements.

All the functions to be operated on our data stack can be split across several different cloud providers, choosing for each one the provider that, in our opinion, is better at it: one for DR, one for backup, another for Windows applications, and so on.

[image]

This shows a kind of serverless infrastructure, but also a storageless one.
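As a toy illustration of this “best provider per function” idea (the mapping and helper below are an assumption of mine for the sake of the example, not Actifio’s model):

```python
# Hypothetical mapping of data functions to the provider we judge best for each.
DATA_FUNCTIONS = {
    "dr":      "aws",
    "backup":  "google",
    "windows": "azure",
}

def provider_for(function: str) -> str:
    """Return the cloud provider chosen for a given data function."""
    if function not in DATA_FUNCTIONS:
        raise ValueError(f"no provider selected for '{function}'")
    return DATA_FUNCTIONS[function]

print(provider_for("dr"))      # -> aws
print(provider_for("backup"))  # -> google
```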

Of course, this whole discussion applies to hybrid cloud. We all agree that the companies switching completely to a public cloud are just a small percentage, for several reasons outside the scope of this discussion.

That’s the new stuff I found after my last visit to Actifio. Besides tools that improve resiliency and availability, and besides making it easier to build high-quality applications faster, the new capability is enabling an enterprise hybrid cloud.

Sky Platform 8.0 was built around this new concept. It’s cloud native (it couldn’t be otherwise), currently supporting AWS, IBM Bluemix, Azure, Google and Oracle. It allows cloud mobility (no lock-in). It uses cloud object storage, and this is revolutionary: think of an Oracle DB using object storage for its tables. It can use a cloud catalog, driven by metadata and accessible via API, to support scaling. Lastly, if I, the user, add great value to the data loaded in the cloud, that value can be exposed through cloud engagement.
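Actifio’s actual interfaces weren’t shown in detail, but the general pattern of “data in object storage, described by metadata and queried via API” can be sketched like this, using AWS S3 and its user-defined object metadata purely as an illustration (bucket and key names are made up):

```python
import boto3

s3 = boto3.client("s3")  # requires AWS credentials to actually run

# Write a data chunk to object storage, tagging it with catalog metadata.
s3.put_object(
    Bucket="my-daas-bucket",        # hypothetical bucket
    Key="oracle/orders/part-0001",  # hypothetical key
    Body=b"...table data...",
    Metadata={"app": "oracle", "dataset": "orders", "version": "42"},
)

# A catalog lookup is then just a metadata read: no data is moved at all.
head = s3.head_object(Bucket="my-daas-bucket", Key="oracle/orders/part-0001")
print(head["Metadata"])  # {'app': 'oracle', 'dataset': 'orders', 'version': '42'}
```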

I consider this new data paradigm a game changer. I’m sure that in the near future we’ll be able to distinguish those who adopt a DevOps model like this one from those who stick to a traditional one, just as today we can distinguish those who use DevOps from those who don’t.

I want to go deeper technically in my next post, so keep an eye on my blog. I’m sure you’ll be at least as amazed as I was.

Ixia CloudLens: diving into public cloud traffic

During the last TFD15 in San Jose, I had the opportunity to attend an Ixia session presenting their solution for cloud network visibility. Honestly, it was the first time I had heard of Ixia, and it was impressive.


I learnt it is now part of Keysight, a spin-off of HP/Agilent. I’d like to spend a few words on the company before the tech dive: their main task is simplifying the complexity of securing the billions of mobile devices out there, testing every single component from the chip to the cloud, plus monitoring, with a focus on network and cloud. Ixia is #1 in the network test market and #2 in network visibility.

The network lifecycle to consider:

[image: the network lifecycle]

A huge variety of network devices, from traditional to mobile to security, can bombard your network.

CLOUDLENS: ANYTIME, ANYWHERE CLOUD VISIBILITY

In the old datacenter, all the workloads resided on the same infrastructure.

[image: the old datacenter]

Now data is distributed, especially across virtualization and public clouds (maybe more than one), which blinds visibility.

CloudLens provides insight into this traffic end to end, to and from the branch office, the datacenter, or the cloud, at the packet level.

CloudLens is a platform that exposes visibility into all the network streams in a multicloud environment: private, hybrid, and public. For the public cloud, the solution is not simply the one adopted for the other two carried over, but a new architecture using cloud-native services in a serverless design.

The most commonly experienced problems in these environments are:

  • Data access: in a public cloud infrastructure you don’t have access to packet-level data; L1 and L2 are completely unavailable, and L3 is limited, because the cloud provider keeps the platform agnostic
  • Elasticity and scale: any policies written for the traditional DC are no longer valid in the presence of autoscaling; VLANs and IPs move across components, just to give an example (see the sketch after this list)
  • Infrastructure churn
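To illustrate the elasticity point: a policy keyed on fixed IPs breaks as soon as autoscaling replaces instances, while selecting data sources by tag keeps working. A minimal sketch with boto3 (the tag name is an arbitrary example, not something CloudLens prescribes):

```python
import boto3

ec2 = boto3.client("ec2")  # requires AWS credentials to actually run

# Select monitored sources by tag rather than by IP: instances replaced
# by autoscaling carry the same tag and are picked up automatically.
resp = ec2.describe_instances(
    Filters=[{"Name": "tag:role", "Values": ["webserver"]}]
)
current_ips = [
    inst.get("PrivateIpAddress")
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
]
print(current_ips)  # always reflects the current fleet, whatever autoscaling did
```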

How CloudLens solves these critical points:

[image: the CloudLens components]

There are two components working together to give visibility into the public cloud.

First, a SaaS management portal built using cloud-native services in a serverless architecture.

Second, a Docker-based container sensor for Linux instances and an agent-based solution for Windows instances.

Starting from the bottom, using a container sensor allows the solution to be cloud agnostic: anywhere a container can be installed, this sensor will work. The same goes for the Windows agent.

When our traffic starts to flow, the agents send metadata about this traffic to the management component. Once this is done, the platform has enough information to build advanced intelligence at the source (the sensor), so it can autonomously send the data to the relevant tool, filtering based on those patterns.
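The source-side intelligence can be pictured like this; a toy model of the concept (not CloudLens code): the sensor classifies each packet against metadata-derived patterns and forwards only the matching traffic to the relevant tool:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Packet:
    dst_port: int
    payload: bytes

# Metadata-derived filters: each pattern maps matching traffic to a tool endpoint.
FILTERS: List[Tuple[Callable[[Packet], bool], str]] = [
    (lambda p: p.dst_port == 443, "tls-analyzer.tools.internal"),  # hypothetical tool
    (lambda p: p.dst_port == 80,  "http-monitor.tools.internal"),  # hypothetical tool
]

def send_over_tunnel(packet: Packet, tool: str) -> None:
    print(f"-> {tool}: {len(packet.payload)} bytes")  # stand-in for the real tunnel

def forward(packet: Packet) -> None:
    """Send the packet to every tool whose pattern matches; drop the rest at the source."""
    for matches, tool in FILTERS:
        if matches(packet):
            send_over_tunnel(packet, tool)

forward(Packet(dst_port=443, payload=b"\x16\x03\x01..."))  # -> tls-analyzer
```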

This is the simple concept. Now let’s try to extend it to the many, many workloads running in the cloud.

The users build that intelligence from the metadata. An example is a webserver:

[image: webserver example]

Source group: my webservers. Based on what I want to monitor, I’ll build a specialized bucket of tools (the tool group), and the traffic matching the metadata from the first group is passed to the second via a tunnel.

So, the user’s tasks are:

  • Log in to the SaaS platform
  • Install the sensors and the tool instances
  • Create the groups: with a simple drag and drop, both kinds of groups are created:

[image: group creation]

As you can imagine, using groups has the big advantage of scaling. Just think of the AWS autoscaling feature applied to one or many webservers all sharing the same functionality: no manual intervention is needed. The same holds for the tools: as traffic increases, the solution rebalances the analysis of this traffic automatically.
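A toy way to picture this rebalancing (my own sketch, not the actual mechanism): membership in a group determines where traffic goes, and a deterministic spread over the current tool instances reassigns sources automatically whenever the fleet changes:

```python
import hashlib

def assign_tool(source_id: str, tool_instances: list) -> str:
    """Deterministically spread sources across the tool instances currently alive."""
    digest = int(hashlib.sha256(source_id.encode()).hexdigest(), 16)
    return tool_instances[digest % len(tool_instances)]

tools = ["tool-1", "tool-2"]
print(assign_tool("web-42", tools))

# Traffic grows and a new tool instance is added: sources are re-spread
# across the enlarged group with no manual intervention.
tools.append("tool-3")
print(assign_tool("web-42", tools))
```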

Please note that there’s no additional infrastructure to run the solution.

In some cases, tool vendors may not be ready for the cloud and still run on premises: a hybrid cloud environment. In this case, instead of sending data to the cloud tool groups, it can be sent on premises, totally or partially. Moreover, if you want to monitor an on-premises data source without redirecting its stream to your public cloud environment, you can ask the SaaS platform to look at this data instead of, or together with, the data coming from public cloud sources.
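Conceptually, this hybrid choice is just a per-source-group routing decision, total or partial; a minimal sketch with made-up names:

```python
# Hypothetical routing table: the share of each source group's monitored
# traffic that goes to on-premises tools vs. the cloud tool group.
ROUTES = {
    "webservers": {"on_prem_tools": 0.0, "cloud_tools": 1.0},  # all to the cloud
    "databases":  {"on_prem_tools": 1.0, "cloud_tools": 0.0},  # all on premises
    "app-tier":   {"on_prem_tools": 0.5, "cloud_tools": 0.5},  # partial split
}

def destinations(source_group: str) -> dict:
    """Return only the destinations that actually receive traffic for this group."""
    return {dest: share for dest, share in ROUTES[source_group].items() if share > 0}

print(destinations("app-tier"))  # {'on_prem_tools': 0.5, 'cloud_tools': 0.5}
```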

This kind of mix is also possible between multiple public clouds: you could have your data source instances running in one public cloud provider, sending traffic to be analysed by a tool group running in another.

This is an example of drag and drop used to direct certain traffic to a certain tool:

[image: drag-and-drop traffic steering]

The next step will be understanding how visibility is achieved in a container environment. More to come, stay tuned!

Getting started with Hybrid Cloud Extension (HCX) on VMware Cloud on AWS

I had been hearing a lot of cool things about VMware’s Hybrid Cloud Extension (HCX) but had never tried the solution myself, nor had a good understanding of what it actually provided. With the recently announced Hybrid Cloud Extension (HCX) on VMware Cloud on AWS (VMWonAWS) offering now available, I thought this was a great way to get hands […]


VMware Social Media Advocacy

The New vSphere Beta Just Got Refreshed! Join Now

We are excited to share that the new vSphere beta has just been refreshed. We have added a significant number of new capabilities in this Dec 2017 beta download for vSphere, on top of the features in the Oct 2017 beta release. As indicated in the recent announcement blog post, this new VMware vSphere […]


VMware Social Media Advocacy