Red Hat OpenShift deployment inside a VMware Cloud Director organization (2).

Part 2: Installation overview and high-level procedure

This post is the second in a series dedicated to deploying RHOS within a multitenant environment such as a VCD org. In the first part, I outlined the architecture of the solution and the various options for storage and backup. In this second part, I will focus on the practical side, detailing the steps to install the base, OCP, on top of which I will later add the components of the PLUS version.

The cluster is made up of VMs created within the VCD org, 7 to be precise: 3 Masters, 2 Workers, and 2 Infra nodes. From a licensing perspective, Red Hat requires only the workers to be licensed. The number and sizing of the nodes follow the official documentation: a minimum of 3 nodes for the control plane, at least 2 workers, and the same for the infra nodes.

Name/Role  | vCPU | RAM (GB) | Storage (GB) | IP
Master01   |    8 |       17 |          217 | x.x.x.11
Master02   |    8 |       17 |          217 | x.x.x.12
Master03   |    8 |       17 |          217 | x.x.x.13
Worker01   |    8 |       17 |          217 | x.x.x.21
Worker02   |    8 |       17 |          217 | x.x.x.22
Infra01    |   12 |       24 |          224 | x.x.x.31
Infra02    |   12 |       24 |          224 | x.x.x.32
Fileserver |    1 |       12 |          118 | x.x.x.101

VMs to create

The same goes for resources: a shortage could prevent some pods from starting. At the same time, there is no need to tie up valuable resources unnecessarily, so the org should be created with the PAYG or FLEX allocation model and the guaranteed allocation set to 0%. The same applies to storage, which should be thin provisioned.

Master and worker nodes have the same resources, while the infra nodes need more because they host the 4 additional components of the PLUS version.

If resources are scarce, a single-node cluster installation can be considered, strictly not for production use; in that case, however, it will not be possible to add the PLUS components.

The total (“virtual”, minimally used) resources will be 61 vCPUs, 135 GB of RAM, and 1.7 TB of storage. For simplicity, we will put everything on the same organization network, separating the nodes by role into vApps (Master, Worker, Infra). We will also need a public IP to assign to the console and the API endpoint: one is enough, since they use different ports. In this demo I will use a public domain whose DNS I control, so that I can add the relevant “A” records for the console and the API.
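
To sanity-check those records before going further, a quick script like the one below can verify that both names resolve to the public IP. It is only a convenience sketch: the public IP, the cluster name (ocp), the base domain (example.com), and the console hostname (the default console-openshift-console route under *.apps) are placeholders and assumptions to replace with your own values.

```python
#!/usr/bin/env python3
"""Pre-flight check: do the public "A" records for the API and the console
resolve to the expected public IP? All names and the IP below are placeholders."""

import socket
import sys

PUBLIC_IP = "203.0.113.10"    # public IP NATed by the NSX Edge (example value)
CLUSTER = "ocp"               # cluster name chosen in the RHOS console (assumption)
BASE_DOMAIN = "example.com"   # public domain whose DNS you control (assumption)

HOSTNAMES = [
    f"api.{CLUSTER}.{BASE_DOMAIN}",                             # API endpoint (port 6443)
    f"console-openshift-console.apps.{CLUSTER}.{BASE_DOMAIN}",  # default console route (port 443)
]

exit_code = 0
for name in HOSTNAMES:
    try:
        resolved = socket.gethostbyname(name)
    except socket.gaierror as err:
        print(f"[FAIL]     {name}: does not resolve ({err})")
        exit_code = 1
        continue
    if resolved == PUBLIC_IP:
        print(f"[OK]       {name} -> {resolved}")
    else:
        print(f"[MISMATCH] {name} -> {resolved} (expected {PUBLIC_IP})")
        exit_code = 1

sys.exit(exit_code)
```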

The file server is a RHEL 9 VM dedicated exclusively to that role. We will proceed with the following steps:

  • Setup of an organization in VCD
  • Creation of empty VMs
  • Creation of the cluster in the RHOS console
  • Setup of the NSX Edge (NAT and FW rules; see the reachability sketch after this list)
  • Creation of DNS records
  • RHOS console procedure up to downloading the ISO to be attached to the empty VMs for booting
  • Connection of the ISO and powering on the VMs
  • Completion of the cluster installation from the RHOS console
  • Installation of RHEL 9 on the file server and definition of the share to be used as persistent storage

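Once the NSX Edge NAT and firewall rules and the DNS records are in place, a similar sketch can confirm that the API (TCP 6443) and the console/ingress (TCP 443) are actually reachable from outside. The hostnames reuse the same placeholder cluster name and base domain as the DNS check above.

```python
#!/usr/bin/env python3
"""Reachability check for the NSX Edge NAT/firewall rules: attempts a TCP
connection to the API (6443) and to the console (443). Hostnames are the
same placeholders used in the DNS check sketch."""

import socket

CLUSTER = "ocp"               # placeholder cluster name
BASE_DOMAIN = "example.com"   # placeholder base domain

TARGETS = [
    (f"api.{CLUSTER}.{BASE_DOMAIN}", 6443),                            # OpenShift API endpoint
    (f"console-openshift-console.apps.{CLUSTER}.{BASE_DOMAIN}", 443),  # web console via the ingress router
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"[OK]   {host}:{port} is reachable")
    except OSError as err:
        print(f"[FAIL] {host}:{port} not reachable ({err})")
```
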
In the next post, I will walk through the activities above step by step.