Red Hat OpenShift deployment inside a VMware Cloud Director organization (1).

Part 1: Architecture Overview, Persistent Storage and Backup

The use of container-based environments, specifically Kubernetes, has long ceased to be the exclusive domain of developers. Thanks to its numerous advantages, which I won’t list here for reasons of space (and because they fall outside this article’s purpose), many traditional applications are migrating to this type of technology. VMware has been moving in this direction for several years now, with a Kubernetes-based platform in Tanzu. Many users, however, require a specific platform: Red Hat’s OpenShift.

Unlike Tanzu, which integrates with VCD through the CSE extension, OpenShift must be installed and operated separately. Even though the underlying platform is vSphere, the world’s most widely used virtualization platform in on-premises and hybrid environments, with VCD as the layer above it that makes the platform multi-tenant, the deployment of OpenShift can be carried out as a bare-metal installation, assisted by Red Hat’s console.

I’ve designed the architecture of this infrastructure in several stages. The simplest and fastest is an ad hoc, self-contained installation for a single tenant within an organization. At the other end is what is called HyperShift, where I envision a cluster with shared master nodes: the management console, in multi-tenant mode, lives in a common “master” organization, from which each client/tenant deploys the worker nodes (and possibly the infrastructure nodes) needed for their workloads inside their own organization. I don’t consider it feasible to share infrastructure nodes, given how much each client can customize their installation. This type of architecture, however, is still in its infancy.

In this series of articles, I will therefore describe the architecture and deployment of the simplest mode, the self-contained one with all nodes dedicated to the tenant, starting from the basic version of OCP (as opposed to the “Plus” version, which bundles additional components I will discuss later).

A note of caution is necessary regarding persistent storage. By default, OCP does not require it; when it is needed, however, I experimented with several approaches, not all of them successful.

  • Local storage: this would have been the simplest option, albeit the most resource-hungry (each node’s storage has to be replicated exactly on the other nodes, which wastes an enormous amount of space, however redundant the result). Moreover, according to the Kubernetes documentation, the local storage provisioner does not support dynamic volume provisioning.
  • Gluster: the provisioner was native, but here too the waste of resources was considerable, since it requires at least two nodes for the cluster plus a third for Heketi, the volume management service.
  • FreeNAS: the free edition of what is now licensed as TrueNAS. Using it with the NFS provisioner, some applications failed because the system changed the ownership of certain folders.
  • NetApp ONTAP Select: a great solution with a myriad of available features, particularly suited to those who run physical NetApp appliances behind the scenes and don’t want to expose them publicly. The downside is the license cost, justifiably high given the feature set.
  • A dedicated VLAN straight to the physical storage, which however conflicts with the very concept of abstraction and virtualization, and raises a long list of security concerns.
  • Red Hat Data Foundation, which is essentially Ceph: a component of the Plus version (but it can also be installed separately).
  • A file server (in my case running Red Hat) with an NFS-exported share, consumed by the platform through the NFS provisioner. This was my choice.
    Someone might question the lack of redundancy in this choice. Not exactly: you could build a cluster of file servers, or, if the requirement is not DR/BC but more lenient RTO/RPO targets, simply back this server up, perhaps at tighter intervals.
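The NFS-based choice can be sketched roughly as follows. Note that the export path, network, server IP, and release name below are illustrative placeholders, and I am assuming the community nfs-subdir-external-provisioner Helm chart rather than any specific provisioner the setup actually used:

```shell
# On the RHEL file server: install NFS and export a share
# (path and allowed network are placeholders).
sudo dnf install -y nfs-utils
sudo mkdir -p /srv/ocp-pv
echo '/srv/ocp-pv 10.0.0.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -ra

# From a workstation with cluster-admin access to the OCP cluster:
# install the community NFS subdir external provisioner via Helm,
# which creates a StorageClass backed by the share above.
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace nfs-provisioner --create-namespace \
  --set nfs.server=10.0.0.10 \
  --set nfs.path=/srv/ocp-pv \
  --set storageClass.defaultClass=true
```

Any PVC that references the resulting StorageClass then gets its own subdirectory on the share, provisioned dynamically. On OpenShift, the provisioner pod may additionally need a suitable SCC granted to its service account.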

There’s also the possibility of adopting S3-compatible object storage, which I didn’t explore in this test.

Finally, since I mentioned backup: for data and infrastructure protection, I deemed it appropriate to work on three levels:

  • The basic one, where I back up all virtual machines in a “traditional” way.
  • The intermediate one, where the backup target is the etcd datastore, to preserve the cluster’s state and configuration.
  • The upper layer, and also the most delicate one, where I save all namespaces and their related data, in my case through Kasten K10.
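For the intermediate level, OpenShift ships a backup script on the control-plane nodes that snapshots etcd along with the static pod resources. A sketch of invoking it via a debug session (the node name and destination path are placeholders):

```shell
# Open a debug shell on one control-plane node and run the
# bundled backup script (replace master-0 with one of your masters).
oc debug node/master-0 -- chroot /host \
  /usr/local/bin/cluster-backup.sh /home/core/assets/backup

# The script writes two files into the target directory:
# a snapshot of the etcd datastore and an archive of the
# static pod resources, named along the lines of:
#   snapshot_<timestamp>.db
#   static_kuberesources_<timestamp>.tar.gz
```

The resulting files should of course be copied off the node and folded into the regular backup rotation.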

I omitted the backup of the actual data, but that isn’t really an oversight: it falls under the first point, where I back up all VMs in the organization, including the file server.
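For the upper level, Kasten K10 is typically installed from its official Helm chart; a minimal sketch, assuming cluster-admin access and Kasten’s default namespace and release name:

```shell
# Add Kasten's chart repository and install K10 in its own namespace.
helm repo add kasten https://charts.kasten.io/
helm repo update
helm install k10 kasten/k10 \
  --namespace kasten-io --create-namespace
```

Once the K10 pods are running, protecting a namespace is a matter of defining a backup policy for it, either from the K10 dashboard or declaratively via a Policy custom resource.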

With this premise, I will continue in the next article with the definition of prerequisites and step-by-step deployment of the solution within a VCD organization.
