During the recent Tech Field Day 15 (TFD15) in San Jose, I had the opportunity to attend an Ixia session presenting their solution for cloud network visibility. Honestly, it was the first time I had heard of Ixia, and I came away impressed.
I learnt it is now part of Keysight, a spin-off of HP/Agilent. Before the tech dive, I'd like to spend a few words on the company: its main focus is simplifying the complexity of securing the billions of connected devices out there, testing every single component from the chip to the cloud. Plus: monitoring. The focus: network and cloud. Ixia is #1 in the network test market and #2 in network visibility.
The network lifecycle to consider spans a very wide variety of network devices, from traditional to mobile to security appliances, all of which can bombard your network with traffic.
CLOUDLENS: CLOUD VISIBILITY, ANYTIME AND ANYWHERE
In the old DC, all the workloads resided on the same infrastructure.
Now data is distributed, especially with virtualization and public cloud (sometimes more than one), which blinds visibility.
CloudLens provides insight into this traffic end-to-end, from and to branch office, data center, or cloud, at the packet level.
CloudLens is a platform that exposes visibility into all the network streams in a multicloud environment: private, hybrid, public. And for public cloud, the solution is not simply the one adopted for the previous two brought into the public cloud, but a new architecture built on cloud-native services in a serverless design.
The most commonly experienced problems in these environments are:
- Data access: in a public cloud infrastructure you don't have access to packet-level data; L1 and L2 are completely unavailable, and L3 access is limited by the cloud provider, so the platform must stay provider-agnostic
- Elasticity and scale: any policies written for a traditional DC are no longer valid in the presence of autoscaling; VLANs and IPs move between components, just to give an example
- Infrastructure churn
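To make the elasticity problem concrete, here is a minimal Python sketch (all names are hypothetical, this is not CloudLens code) contrasting a traditional IP-based policy, which silently breaks when autoscaling replaces an instance, with a tag-based policy that keeps matching the same logical workload:

```python
# Conceptual sketch: why static IP policies break under autoscaling.
# All names and data are hypothetical; this is not CloudLens code.

def ip_policy(monitored_ips, instance):
    """Traditional DC-style policy: match on a fixed IP list."""
    return instance["ip"] in monitored_ips

def tag_policy(required_tags, instance):
    """Cloud-style policy: match on instance metadata/tags."""
    return required_tags.issubset(instance["tags"])

# A web server as originally deployed...
web1 = {"ip": "10.0.1.5", "tags": {"role:webserver", "env:prod"}}
# ...and the same logical workload after autoscaling replaced it.
web1_respawned = {"ip": "10.0.3.42", "tags": {"role:webserver", "env:prod"}}

monitored = {"10.0.1.5"}
needed = {"role:webserver"}

print(ip_policy(monitored, web1))            # True
print(ip_policy(monitored, web1_respawned))  # False: the policy silently broke
print(tag_policy(needed, web1))              # True
print(tag_policy(needed, web1_respawned))    # True: still matches
```

The second policy survives churn because it keys on what the instance *is*, not where it happens to live.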
How CloudLens solves these critical points:
There are two components working together to give visibility into the public cloud.
First, a SaaS management portal built using cloud-native services in a serverless architecture.
Second, a Docker-based container sensor for Linux instances, and an agent-based solution for Windows instances.
Starting from the bottom: using a container sensor makes the solution cloud agnostic, since anywhere a container can be installed, this sensor will work. The same goes for the Windows agent.
When traffic starts to flow, the agents send metadata about this traffic to the management component. With that, the platform has enough information to build intelligence at the source (the sensor), so it can autonomously send the right data to the relevant tool, filtering based on the configured patterns.
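The source-side filtering step can be sketched in a few lines of Python (hypothetical names, not the CloudLens API): the sensor reduces traffic to metadata, the management plane compiles a pattern into a filter, and the sensor then forwards only matching packets to the tool:

```python
# Conceptual sketch of filtering at the source.
# Hypothetical names and data; not the CloudLens API.

def extract_metadata(packet):
    """Sensor side: reduce a packet to the metadata it reports upstream."""
    return {"src": packet["src"], "dst": packet["dst"], "dport": packet["dport"]}

def build_filter(pattern):
    """Management side: compile a pattern into a predicate the sensor applies."""
    return lambda meta: all(meta.get(k) == v for k, v in pattern.items())

packets = [
    {"src": "10.0.1.5", "dst": "10.0.2.9", "dport": 443, "payload": b"tls"},
    {"src": "10.0.1.5", "dst": "10.0.2.9", "dport": 22,  "payload": b"ssh"},
]

# Only HTTPS traffic is of interest to this tool.
wanted = build_filter({"dport": 443})
forwarded = [p for p in packets if wanted(extract_metadata(p))]
print(len(forwarded))  # 1
```

The key point is that the filtering decision runs at the sensor, so uninteresting traffic never leaves the source instance.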
This is the simple concept. Now let's try to extend it to the many, many workloads running in the cloud.
The user builds this intelligence from the metadata. An example with a web server:
Source group: my web servers. Based on what I want to monitor, I'll create a specialized bucket of tools (a tool group), passing the data coming from the first group to this one via a tunnel.
So, the user's tasks are:
- Log into the SaaS platform
- Install sensors and tool instances
- Create the groups: with a simple drag and drop, both kinds of groups are created
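Under the hood, a group is essentially a named selection of instances by tag, and the drag-and-drop step boils down to recording a source-group-to-tool-group mapping. A minimal Python sketch of that idea (hypothetical names, not CloudLens code):

```python
# Conceptual sketch of group creation and linking.
# Hypothetical names and data; not CloudLens code.

def make_group(name, tag, instances):
    """Select every instance carrying the given tag into a named group."""
    return {"name": name, "members": [i for i in instances if tag in i["tags"]]}

instances = [
    {"id": "i-1", "tags": {"role:webserver"}},
    {"id": "i-2", "tags": {"role:webserver"}},
    {"id": "t-1", "tags": {"role:ids-tool"}},
]

source_group = make_group("my-webservers", "role:webserver", instances)
tool_group = make_group("ids-tools", "role:ids-tool", instances)

# The drag-and-drop gesture amounts to recording this link.
tunnels = [(source_group["name"], tool_group["name"])]
print(len(source_group["members"]), len(tool_group["members"]))  # 2 1
```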
As you can imagine, using groups has the big advantage of scale. Just think of the AWS autoscaling feature applied to one or many web servers all sharing the same functionality: no manual intervention is needed. The same goes for the tools: as traffic increases, the solution rebalances the analysis of this traffic automatically.
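The rebalancing behaviour can be illustrated with a toy distribution function (hypothetical, not CloudLens internals): whenever group membership changes, source streams are simply re-spread over whatever tool instances currently exist.

```python
# Conceptual sketch of automatic rebalancing across a tool group.
# Hypothetical; this is not how CloudLens is implemented internally.
from itertools import cycle

def assign_streams(sources, tools):
    """Distribute source streams round-robin over the available tool instances."""
    pool = cycle(tools)
    return {src: next(pool) for src in sources}

# Two web servers, one tool instance.
print(assign_streams(["web-1", "web-2"], ["tool-1"]))
# After autoscaling adds web servers and a second tool instance, the same
# call rebalances the load with no manual intervention.
print(assign_streams(["web-1", "web-2", "web-3", "web-4"], ["tool-1", "tool-2"]))
```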
Please note that there is no additional infrastructure to run the solution.
In some cases, tool vendors may not be ready for the cloud and still run on premises: a hybrid cloud environment. In this case, instead of sending data to the tool groups in the cloud, it can be sent on premises, totally or partially. Moreover, if you want to monitor an on-premises data source without redirecting its stream to your public cloud environment, you can ask the SaaS platform to look at this data instead of, or together with, the data coming from public cloud sources.
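The delivery options described above reduce to a per-stream routing decision. A small sketch of that decision (hypothetical names, not CloudLens code): each stream can be copied to the cloud tool group, to the on-premises tools, or to both.

```python
# Conceptual sketch of hybrid delivery routing.
# Hypothetical names and policies; not CloudLens code.

def route(stream, policy):
    """Return the list of destinations a stream should be copied to."""
    destinations = []
    if policy.get("to_cloud"):
        destinations.append("cloud-tool-group")
    if policy.get("to_onprem"):
        destinations.append("on-prem-tools")
    return destinations

print(route("web-traffic", {"to_cloud": True, "to_onprem": True}))
# ['cloud-tool-group', 'on-prem-tools']
print(route("db-traffic", {"to_onprem": True}))
# ['on-prem-tools']
```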
This kind of mix is also possible between multiple public clouds: you could have your data source instances running in one public cloud provider, sending traffic to be analysed by a tool group running in another public cloud provider.
This is an example of the drag & drop used to direct certain traffic to a certain tool:
The next step will be understanding how visibility is achieved in a container environment. More to come, stay tuned!