vSAN Lab on Ravello – Part 1

Over the last few days I've been working on several projects, all of them based on the VMware Cloud stack.
As in the two previous years, this is my 3rd year receiving the vExpert award, which gives me free usage of Ravello Systems (1,000 vCPU hours/month). Thank you guys!

rave
In past years I had already tested products like Zerto (to name one) in some particular architectures, but this time I gave a boost to standard VMware, to try out the latest features, beta programs and so on.

My latest test is building a vSAN cluster on this virtual platform. Virtual, but let's say "half virtual", since the hypervisors run on bare-metal servers.
I'll split this post in two: the first part (this one) focuses on building the whole lab, the second on creating and configuring vSAN.

We start by creating a brand new App, let's call it VSAN_LAB (original, isn't it?), where I'll deploy 3 ESXi images – already installed, saved from a previous lab – and a vCSA, thanks to Ian Sanderson's great post.

Then I'll assign 2 VLANs to the lab: 10.10.1.0/24 for management and vMotion, and 10.10.2.0/24 for vSAN, connected to 2 different switches and routed by the same router (10.10.0.1). I'm aware this is NOT good practice, especially putting vMotion together with management, but hey, it's a lab, isn't it? And I'm trying to keep it as simple as possible. With the same goal I also use Ravello's DNS (10.10.1.2), so I don't need another VM, and I set up a DHCP server (10.10.1.3) just in case, although I won't use it in this lab.

As mentioned above, I deployed all 4 VMs on bare metal. On the 3 ESXi hosts (the minimum number for a vSAN cluster) I added 2 more disks: a 400 GB disk for the "capacity tier" and an 80 GB disk for the "caching tier". This last disk will be tagged as SSD, as explained by Vladan Seget in his post. The NIC assigned to the vSAN VLAN won't have any gateway, since it's used only for the ESXi hosts to talk to each other.
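If you want to double-check from the hosts themselves once the lab is up, the new devices are visible from the ESXi shell; this is just a quick verification step, not part of the Ravello setup:

# Each host should list the extra 400 GB and 80 GB disks; "Is SSD" is still false at this point
esxcli storage core device list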

I chose the "Performance" option and US East 5 (the screenshot shows Europe – I changed region because I already had a vApp running on bare metal there) to publish this App. Let the game start!

2018-03-22_152603.png

After a few minutes the lab is up, except for the vCSA, which takes a little longer. This is the point where I enable the SSD flag on the caching disk. Access each ESXi host over SSH (after enabling it) using PuTTY or any other terminal, and follow Vladan's instructions.
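In short, it's roughly the sequence Vladan describes: add a SATP claim rule that flags the device as SSD, then reclaim it. The device ID below is only a placeholder – use the one reported on your host:

# ID of the 80 GB caching disk on this host – replace with your own device ID
DISK=mpx.vmhba1:C0:T2:L0
# Add a claim rule that marks the device as SSD, then reclaim the device
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device $DISK --option "enable_ssd"
esxcli storage core claiming reclaim -d $DISK
# "Is SSD" should now report true
esxcli storage core device list -d $DISK | grep "Is SSD"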

And as the following image shows, the 80 GB disk has magically turned into a fantastic SSD.

2018-03-22_234800

Next step: create a Datacenter named… "Datacenter"… where we'll add the 3 nodes (note – to add the hosts by name, which is preferable, first insert their FQDNs manually in the DNS list if you haven't already).

At this point, since the 3 ESXi hosts came from the very same image, I ran into a duplicated disk UUID issue that prevented me from adding the remaining 2 hosts, so I had to unmount those datastores, then mount them again with a resignature, as explained by Mohammed Raffic in this post (though I didn't need to reboot the ESXi hosts).
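For reference, the resignature can also be done from the ESXi shell; a minimal sketch, assuming the duplicated volume shows up as an unresolved snapshot (the datastore label is a placeholder):

# List VMFS volumes detected as snapshots/replicas because of the duplicated UUID
esxcli storage vmfs snapshot list
# Mount the volume again with a new signature (use the label reported above)
esxcli storage vmfs snapshot resignature -l datastore1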

After adding all 3 hosts, DO NOT proceed with the cluster yet: first create the networking layer, based on a Distributed Switch.

In my case, for simplicity as stated above, I used a single Distributed Switch to handle the vSAN, management and VM networks. I then used the Port Group wizard to create 3 port groups: Mgt, vSAN and VM Net (the last one is not used in this lab).

Well, at this point we have our Datacenter with 3 independent hosts in maintenance mode, but connected according to the previously created topology, as shown below:
2018-03-24_001053
Next, let's add all the hosts to the DVS. This wizard is also used to assign the physical NICs, to create the missing vmkernel adapter (vSAN), and to modify the existing one so that it handles vMotion as well as management, migrating it at the same time from the standard switch to the VDS. The migration from standard to distributed switch is analyzed by the system to assess how disruptive the operation could be.

Creating the vmkernel adapter involves choosing a network and assigning an IP address.
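In my lab the wizard handles all of this against the DVS, but purely as a reference, a rough shell equivalent on a standard-switch port group would look like the following – vmk1, the "vSAN" port group name and the IP are placeholder values based on my 10.10.2.0/24 network:

# Create the vSAN vmkernel interface on a port group named "vSAN"
esxcli network ip interface add --interface-name vmk1 --portgroup-name vSAN
# Assign a static address in the vSAN subnet (example IP)
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.10.2.11 -N 255.255.255.0
# Tag the interface for vSAN traffic
esxcli vsan network ip add -i vmk1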

Finally, 6 of the 8 uplinks (2 NICs on each of the 3 hosts) are populated, as shown below:

2018-03-24_011205
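If you prefer to verify from the command line, each host should report the DVS together with the two uplinks it contributes:

# Lists the distributed switches seen by this host, including the attached uplinks
esxcli network vswitch dvs vmware list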

At last, the cluster: for now let's enable only vSAN, leaving DRS and HA for later. Every possible vSAN warning lights up! That's because vSAN doesn't have any disks assigned yet, so it exists as an entity only in theory.

2018-03-24_205346
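From any of the hosts you can confirm the (still empty) vSAN cluster membership and see which local disks would be eligible for claiming – the claiming itself comes in the next post:

# Shows the vSAN cluster UUID and this host's membership state
esxcli vsan cluster get
# Lists local disks and whether they are eligible for vSAN use
vdq -q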

The next step, in the second part, will be claiming disks to build the capacity and caching tiers, completing the cluster with DRS and HA, and creating the storage policy on which vSAN relies. If I have some more time, I'll also try to drill down into other concepts like deduplication, encryption, fault domains and stretched clusters. If you have any ideas or questions, feel free to reach me in the comments below or via Twitter.

Stay tuned!