Radar against Ransomware: how Rubrik stops the threats

One of the latest innovations from Rubrik (and there have been a lot, I have to say) is a feature built on top of Polaris, the new SaaS platform presented a few months ago.

The technology powering this weapon is machine learning, based on models that evolve by following the behaviour observed on the protected VMs and of their users.

Ransomware is the main target: Radar can recover all the affected volumes in a matter of seconds.


The evolution of ransomware attacks is matched by the evolution of Radar itself, thanks to those evolving models. The intelligence applied to these attacks is reinforced and supported by several actions: analysing the data across the whole environment; identifying unusual behaviours (outside the previously mentioned models); detecting these anomalies through continuous monitoring of the environment and of user behaviour (monitoring focuses mainly on metadata, in order to be faster and more reactive in spotting anomalies); and, finally, recovering the affected data easily and quickly, with no loss, or at worst a minor one.
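Rubrik obviously doesn't publish the internals of Radar, but to give an idea of what anomaly detection on backup metadata can look like, here is a minimal, purely hypothetical sketch in Python: compare the number of files changed in each snapshot against a baseline learned from the previous snapshots and flag the outliers. Every name and threshold here is my own invention, not Rubrik's.

```python
# Hypothetical illustration only, not Rubrik's actual Radar implementation.
# Flag a snapshot as suspicious when the number of changed files deviates
# strongly from the rolling baseline built from past snapshots.
from statistics import mean, stdev

def is_anomalous(change_history, current_changes, min_samples=5, threshold=3.0):
    """Return True if current_changes is an outlier vs. the historical baseline."""
    if len(change_history) < min_samples:
        return False  # not enough history yet to build a baseline
    baseline = mean(change_history)
    spread = stdev(change_history) or 1.0  # avoid division by zero
    zscore = (current_changes - baseline) / spread
    return zscore > threshold

# Example: a steady ~100 changed files per snapshot, then a sudden burst of 5000
history = [95, 102, 110, 98, 105, 99]
print(is_anomalous(history, 5000))  # True -> could trigger an alert or a recovery workflow
```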

I haven't yet had a chance to put my hands on the solution personally, but after following some webinars and blog posts I became seriously interested in digging deeper into the topic. What excites me is the intelligence applied to the detection of attacks: not simply adopting fixed schemes and models, but modifying them according to behaviour, both of the attack and of the users. Machine learning and artificial intelligence have always attracted me, bringing me a little closer to my beloved sci-fi books and movies.

Automation is another point of attention. Today APIs are used for every single device, home appliances included. In this case they help with several tasks, from monitoring to managing an adequate reaction workflow to prevention, not necessarily in this order.

Once again, Rubrik isn't simply a backup solution. With Radar it becomes an important and active component of the enterprise security stack.


Scheduling feature now available for VMworld 2018 Europe!


VMworld 2018 Europe takes Barcelona by storm this November. Meet experts, learn about industry hot topics, preview new hands-on labs, and attend networking events.


VMware Social Media Advocacy

Python SDK for Rubrik – joining all the latest features

As announced on its blog, Rubrik welcomes a new SDK for Python to manage its platform.

As far as I remember, a first approach was made by Aaron Lewis in a post on his blog, reporting an example of API management via Python scripts.


After a long series of launches, Rubrik is now officially announcing Python as the main language to integrate all the vendors accessing the platform, with a single code base.

The choice is intuitive: Python is a high-level language that is easy to understand and to write, modern and very flexible.

Considering the simplicity that Rubrik applies to backup, adopting Python is the natural consequence of this characteristic: simple, as Rubrik is in all its aspects.

Download and installation are easy as well. Download the SDK from https://github.com/rubrik-devops/rubrik-sdk-for-python (the same link also hosts the documentation).

As described in the “Quick Start” section, you can install the SDK via pip or directly from source.
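From the Quick Start, getting going looks roughly like this. It's a minimal sketch assuming the `rubrik_cdm` module name and the `Connect()` helper described in the repository's docs, so double-check the syntax against the current README:

```python
# Sketch based on the SDK's Quick Start; verify names against the repo's documentation.
# Install with: pip install rubrik_cdm

import rubrik_cdm

# Credentials can also be supplied via environment variables instead of arguments.
rubrik = rubrik_cdm.Connect(
    node_ip="cluster.example.local",   # hypothetical cluster address
    username="admin",
    password="changeme",
)

# Simple read-only call to confirm the connection works
print(rubrik.cluster_version())
```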

Well… on to the next announcement, Rubrik! In the meantime I'll reinforce my (still weak) knowledge of Python using your API.

Storage is not just a bunch of disks, said SoftNAS

Attending my first Cloud Field Day (CFD4) in San Jose, I had the chance to learn about an innovative storage integration product from SoftNAS. I will cover it in two posts; this is the first one.

The introduction was led by the CEO, Rick Braddy, who gave us a company overview.


He has long experience with Citrix, where he noticed that virtual desktops running Windows needed a very large number of IOPS to feed the system. This was (and is) also true in a VMware environment, but, as you can imagine, this kind of storage is quite expensive.

The concept on which SoftNAS is based is that “storage is not just a bunch of disks”.

The business model is subscription based, so no upfront investment is needed.

Data is no longer datacenter-centric: it moves across several datacenters around the globe, SaaS applications and multiple cloud providers, not necessarily public ones. Managing and controlling this data is hard and critical: “FABRIC for BUSINESS DATA” is the paradigm.

SoftNAS Cloud is a “data control and management platform”, meaning a pre-integrated set of capabilities and tools to manage a very large amount of data natively in the cloud rather than locally. This set is a dedicated data on-ramp to the cloud (ideally a full cloud, more realistically a hybrid solution). I'd like to underline that it's based on open source projects.

The product takes care of four kinds of problems:

[SoftNAS slide]

SoftNAS Cloud integrates the following storage sources using the technology shown in the ring:

[SoftNAS slide]

This image is really full of technologies; integrating all of them has been impressive work, especially when paired with ease of management through a kind of drag & drop, Java-driven interface – which comes from one of the open source projects.

A summary of what was previously shown, in a layered view:

[SoftNAS slide]

So, the core is an open architecture; SoftNAS leverages it by automating and integrating all the single components. APIs are part of this process.

Twitter, HTTP, HTTPS, Hadoop, web services, Redshift, S3, Azure… the list is long, and all of them can contribute to the amount of data to manage.

After Rick, John-Marc Clark took the stage, illustrating the product in deeper detail.

The SoftNAS Cloud product is based on the OpenZFS file system (a good post about it by Stephen Foskett here), an open source project, and on Apache NiFi, another open source project: a Java-based system built to automate the flow of data between platforms.

First goal: avoid cloud vendor lock-in, giving the user the freedom to choose the best kind of storage at the price and performance he needs, including any on-premises storage.

A lot of the problems that companies encounter when approaching the cloud are file-system related, mainly performance degradation and the cost of solving it. IoT and machine learning add another level of complexity when moving this data to the cloud – this is much more Data as a Service than Storage as a Service. As Joep stated during the presentation, it's more a consulting solution than a technology solution, helping companies move their workloads (not just storage, but data more generally) to the cloud.

One of the goals of Ultrafast is to make this migration from and to the cloud, and from and to on-premises systems, faster. The same applies across clouds.

Another primary goal accomplished by the product is making object storage perform nearly as well as block storage.

To summarize, customers encounter different problems when moving to the cloud; from SoftNAS's point of view, these are addressed through three different editions of the product: Essentials, Enterprise and Platinum. Details are shown below:

[SoftNAS slide]

In very few words, the Essentials edition is dedicated to companies that need to move backups, DR, object storage and so on to the cloud. Enterprise is for customers that choose to move their main data, the production data, to the cloud with no recoding of their applications. Last, Platinum is for all data: a complete integration and automation of the company premises with the cloud, with a view to moving data across cloud and on-premises – a complete hybrid solution.

We could consider this product as a Private Cloud NAS.

(Second and last part coming soon).


HA Admission control: How can I check how much reserved resources are used?


I had this question twice in the past three months, so somehow this is something which isn’t clear to everyone. With HA Admission Control you set aside capacity for fail-over scenarios, this is capacity which is reserved. But of course VMs also use reservations, how can you see what the total combined used reserved capacity […] The post HA Admission control: How can I check how much reserved resources are used? appeared first on Yellow Bricks.


VMware Social Media Advocacy

Dataset life cycle: Multi-cloud at Cohesity

In my previous post of the series I talked about the consumption of enterprise data.
In this one, the second about Cohesity presenting at CFD4, I'll focus on cloud adoption trends and the session driven by Sai Mukundan and Jon Hildebrand.
Since this part also includes a demo, it was quite exciting (as is any session where a demo is performed).
We saw Jon handling PowerShell commands as I expected he would, and I mean, greatly.


Proceeding with the life cycle of the previously mentioned DataPlatform, a customer typically starts his journey to the cloud by approaching the “big ones”: AWS, Azure, Google Cloud. The use case presented here is long-term retention and VM migration, with the cloud used to offload the on-prem storage. But if the platform on site is, e.g., VMware, adopting one of the previous three for long-term retention also means moving the VMware VMs onto those platforms: different formats.
The demo driven by Jon shows how simple it is for the end user to manage this conversion.


Walking down the life cycle path, the second use case for the customer is using the cloud for test/dev. This is application-based: the real mobility of the application from and to the different clouds. In this way he can reproduce the behaviour of an app for testing purposes, or use it for further development, and finally put it into production directly from the cloud. As was asked, some of the networking aspects are also replicated if needed.

The next step is backing up these migrated, or simply hosted, applications. That's accomplished by the DataPlatform using APIs, in a cloud-native mode. Sometimes (in most cases, actually) this isn't enough: the customer also asks for full disaster recovery of them, regardless of which destination is involved from a technical point of view (on-prem, AWS, Azure, etc.).

To close the cycle, there is the possibility to move across clouds (multi-cloud mobility), which means being vendor-agnostic and, consequently, having a wider horizon when evaluating all the cloud options economically.

Now, the demo. It starts from the first step of the life cycle, “Long-term Data Retention & Granular Recovery”. Archiving data in the cloud allows rehydrating it at the point where the backup was taken, but also into another environment in the cloud or into a totally new on-prem environment: the platform, in all these cases, remains the same.

Before the backed-up data is sent to the cloud archive, it is deduplicated and indexed to simplify recovery through a search. The index is also needed because the recovery could be performed in an environment different from the original one, carrying over the same metadata collected and created during backup. The following datasets (incremental backups) aren't sent in full: only the modified blocks are sent, and they are indexed accordingly.
Building this reference and managing datasets in this way noticeably reduces network traffic – which isn't free in almost all public clouds.
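Cohesity hasn't shared its internal format, of course, but just to illustrate the “only the modified blocks” idea, here is a minimal sketch in Python: hash fixed-size blocks, compare them with the previous snapshot's hashes and ship only the blocks whose hash changed. The block size and all names are my own choices, purely for illustration.

```python
# Illustrative sketch only (not Cohesity's implementation): ship only the
# blocks whose content hash changed since the previous snapshot.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks, arbitrary choice for the example

def block_hashes(data: bytes) -> list[str]:
    """Split data into fixed-size blocks and return one SHA-256 digest per block."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def changed_blocks(previous: list[str], current_data: bytes) -> dict[int, bytes]:
    """Return {block_index: block_bytes} for blocks that differ from the previous snapshot."""
    current = block_hashes(current_data)
    return {
        i: current_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
        for i, digest in enumerate(current)
        if i >= len(previous) or previous[i] != digest
    }

# Example: the second snapshot differs only in the first block
prev = block_hashes(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE)
delta = changed_blocks(prev, b"X" * BLOCK_SIZE + b"B" * BLOCK_SIZE)
print(sorted(delta))  # [0] -> only block 0 needs to be shipped to the archive
```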


Granularity is the first use case: the customer needs a specific file at a specific point in time. The source depends on retention; if the data is present only in the cloud, it will be picked up from there, and vice versa. Again, recovery will only transfer the modified blocks.

The demo run by Jon showed the creation of a new job on a VM to be backed up and then archived to a public cloud. From the definition of the public clouds to the creation of the SLA, everything is available both through the GUI and the APIs (PowerShell in Jon's case). A new job is responsible for the whole operation.

The indexing engine is very powerful, and it acts in a Google-like way: whatever the search key is, the resulting index includes all the items with that key, no matter whether they are VMs, vCenters or datastores.
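To give an idea of how this “Google-like” behaviour works, here is a toy inverted index in Python (my own illustration, nothing to do with Cohesity's actual engine): a single key returns every item that contains it, be it a VM, a vCenter or a datastore.

```python
# Toy illustration only: a single inverted index maps a search key
# to every object that contains it, regardless of the object's type.
from collections import defaultdict

objects = [
    {"type": "vm",        "name": "sql-prod-01"},
    {"type": "vcenter",   "name": "vcenter-prod"},
    {"type": "datastore", "name": "prod-datastore-01"},
]

index = defaultdict(list)
for obj in objects:
    for token in obj["name"].replace("-", " ").split():
        index[token].append(obj)

# One key, heterogeneous results: VMs, vCenters and datastores alike
print([o["name"] for o in index["prod"]])
# ['sql-prod-01', 'vcenter-prod', 'prod-datastore-01']
```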

Immediately after the backup completes, Cohesity sends the archived data simultaneously to all three configured public clouds. Jon then went for a granular search of files and folders. Again, the index engine is impressive – it proposes suggestions while you type in the search field, and it can also filter the results by customized words.

Since the requested data is present both on-prem and in all three clouds, the choice of where to pick it up is left to the customer.

Another aspect of the life cycle is VM migration, and the following demo focused on it, using CloudSpin. This is performed through policies. During this second demo, two of the delegates asked for new features – this is one of the reasons I love these events: direct contact and feedback with the vendor, and interaction with the vendor's key people, who are able to answer in a technical way, not only with marketing stuff.

My first consideration is that multi-cloud today is invaluable – letting the customer move his data where and when he wants is a critical feature for whoever manages that data.
Second, the cloud itself should also be considered in on-prem environments. Backups of on-prem environments taken and archived on the same premises don't cover all the cases of disaster, dev/test and other situations where a duplicate of the same data set is needed.

I'll keep an eye on Cohesity, since its development is getting a boost…