While following Tech Field Day 19 remotely, one of the products VMware presented caught my attention in particular: Code Stream. Coding lies outside my comfort zone, so... okay, let's have a look at it.
Code Stream is one of the services inside the Cloud Automation Services (CAS) portal. The new version, 3.0, is focused not just on application delivery but also on infrastructure delivery: an API-backed planning platform built around pipelines.
Cody De Arkland started with a little story: when he was an admin he updated templates monthly with security patches, and it was a pretty rough process. No matter how hard he tried to automate it, he still had to deploy the image, boot it up, verify it, and validate it – a long, tedious process. The solution was to structure a pipeline that did it for him. The point, really, was to create pipelines for administrators, since developers were already used to them for running application tests and getting notified of the results.
The kind of pipelines vAdmins are meant to adopt are the so-called "infrastructure pipelines", a sequential execution engine: a way of executing jobs step by step.
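The "sequential execution engine" idea can be sketched in a few lines. This is my own illustration, not Code Stream's actual implementation: each task runs in order, and the first failure stops the pipeline.

```python
# Minimal sketch of a sequential execution engine: tasks are (name, fn)
# pairs executed in order; a failure halts the run. Task names and the
# structure are illustrative, not Code Stream's API.

def run_pipeline(tasks):
    """Execute (name, fn) tasks in order; stop at the first failure."""
    completed = []
    for name, fn in tasks:
        try:
            fn()
        except Exception as exc:
            print(f"Task '{name}' failed: {exc}")
            return completed, False
        completed.append(name)
    return completed, True

if __name__ == "__main__":
    tasks = [
        ("deploy blueprint", lambda: print("deploying...")),
        ("validate image",   lambda: print("validating...")),
    ]
    done, ok = run_pipeline(tasks)
    print(done, ok)
```

The point of the step-by-step model is exactly this: every job either finishes and hands off to the next one, or the run stops where the problem is.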
During the session Cody showed the "vSphere Template Builder" pipeline. It deployed a blueprint in Cloud Assembly that spun up a machine, pulled down Packer from HashiCorp, built an Ubuntu image from a Git-hosted template, ran that build, deployed the result into the environment, converted it to a template, updated CAS via API, then deployed the new image and tested that everything was as expected. Impressive! This is a great example of an infrastructure pipeline driving the steps of building systems and testing them.
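To make the flow concrete, here is a dry-run sketch of what those stages might look like as shell commands inside pipeline tasks. The URLs, file names, and helper script are placeholders of mine, not the actual contents of Cody's pipeline:

```python
# Hypothetical stages of a template-builder pipeline, each paired with
# the kind of shell command its task might run. Everything here is an
# illustrative assumption (URLs, file names, the convert helper).

TEMPLATE_BUILDER_STAGES = [
    ("fetch packer",   "curl -LO https://releases.hashicorp.com/packer/..."),
    ("clone template", "git clone https://git.example.com/ubuntu-template.git"),
    ("build image",    "packer build ubuntu.json"),
    ("convert",        "./convert_to_template.sh"),            # hypothetical helper
    ("update CAS",     "curl -X POST https://cas.example.com/api/..."),  # placeholder
]

def describe(stages):
    """Dry-run: list each stage with the command it would execute."""
    return [f"{name}: {cmd}" for name, cmd in stages]

if __name__ == "__main__":
    for line in describe(TEMPLATE_BUILDER_STAGES):
        print(line)
```

Each stage is small on its own; the pipeline is just the chain.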
Cody moved on to another, more cloud-native pipeline that runs against a Kubernetes cluster. Here he presented the concept of endpoints, made up of several tasks – this makes it possible to nest several pipelines inside a master pipeline, establish remote calls, make a DNS reservation through a REST call, and so on; creativity is the only limit. The screenshot below shows an example of the bash code that makes up a task inside an endpoint. We can get notified via Slack, for example, about the result of a single operation or of the whole pipeline (or both).
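A notification task like the Slack one can be sketched as follows. The webhook URL is a placeholder; Slack's incoming webhooks do accept a JSON body with a "text" field, but the pipeline and task names here are my own:

```python
import json
from urllib import request

# Sketch of a task notifying Slack about a pipeline result via an
# incoming webhook. The URL and names are placeholders of mine.

def build_slack_payload(pipeline, task, status):
    """Build the JSON body for a Slack incoming-webhook message."""
    text = f"Pipeline '{pipeline}', task '{task}': {status}"
    return json.dumps({"text": text})

def notify(webhook_url, payload):
    """POST the payload to the webhook (not executed in this sketch)."""
    req = request.Request(webhook_url, data=payload.encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

if __name__ == "__main__":
    body = build_slack_payload("k8s-deploy", "create-namespace", "SUCCESS")
    print(body)
    # notify("https://hooks.slack.com/services/...", body)  # placeholder URL
```

The same pattern covers both cases the post mentions: fire it per task, at the end of the pipeline, or both.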
At this point Cody, aware that people want to interact with the outside world, decided to build a hook from scratch: creating an endpoint, configuring a trigger based on a Git webhook, specifying the pipeline to work against and, finally, hitting "Create". This action binds that Git repo to the pipeline above.
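Conceptually, the receiving side of such a trigger just maps the repo in the push event to a pipeline and starts it. A minimal sketch, assuming a payload with a "repository" object carrying a "name" field (many Git servers send something along these lines, but check your server's webhook docs); the mapping itself is illustrative:

```python
import json

# Sketch of the webhook side of a Git trigger: receive a push-event
# payload, look up the pipeline bound to that repo. The repo-to-pipeline
# mapping and payload field names are illustrative assumptions.

REPO_PIPELINES = {
    "ubuntu-template": "vSphere Template Builder",  # hypothetical binding
}

def pipeline_for_event(raw_body):
    """Return the pipeline bound to the repo in a push-event payload."""
    event = json.loads(raw_body)
    repo = event.get("repository", {}).get("name")
    return REPO_PIPELINES.get(repo)

if __name__ == "__main__":
    body = json.dumps({"repository": {"name": "ubuntu-template"}})
    print(pipeline_for_event(body))
```

Once the binding exists, every push to the repo kicks off the pipeline with no manual step in between.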
The Code Stream main page also shows pipeline stats and metrics in dashboards, either default or custom. The same dashboards are useful from the infrastructure point of view too, not only the application one: we just have to keep an eye on them.
Rollback is another great feature: should a pipeline not work as expected, or break during execution leaving garbage behind, rollback will clean up the whole namespace, allowing a fresh execution from scratch.
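The rollback idea can be sketched like this: track every resource a run creates, and if any step fails, delete them in reverse order so nothing is left behind. This is my own illustration of the pattern, not Code Stream's mechanism:

```python
# Sketch of pipeline rollback: record each resource as it is created;
# on failure, clean up in reverse creation order so the namespace is
# left empty for a fresh run. Resource names are illustrative.

def run_with_rollback(steps):
    """steps: list of (create_fn, resource_name). Roll back on failure."""
    created = []
    try:
        for create, name in steps:
            create()
            created.append(name)
        return created, []            # all created, nothing rolled back
    except Exception:
        rolled_back = list(reversed(created))
        created.clear()               # namespace drained: start clean next time
        return created, rolled_back

if __name__ == "__main__":
    ok_steps = [(lambda: None, "deployment"), (lambda: None, "service")]
    print(run_with_rollback(ok_steps))
```

Reverse order matters: resources created later often depend on earlier ones, so they should be torn down first.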
The examples shown are interesting, and there are never enough of them. But the core concept is always the same: automation via pipelines, with each pipeline composed at a high level as a chain of smaller operations that together reach the goal. Yes, the real secret is to break a big, complex, important function down into simple little chunks to execute: it makes the whole pipeline much more elastic and agile, and simpler to troubleshoot and to improve.
I think I should wander out of my comfort zone more often, especially onto the grass of code, and especially with such tools on the scene.