vSAN Lab on Ravello – Part 2

In this second and last part (you can find the first one here) we’ll talk about claiming disks, from those available on every host, to compose the vSAN; then we’ll analyze a key component of the vSAN configuration, the vSAN Storage Policy, and finally a couple of interesting options.

Claim disks

vSAN is made of 2 different groups of disks: a capacity tier (usually traditional HDDs) and a caching tier (SSD only, or “flash” as displayed in vCenter). So, from the list of disks available on each host, we’ll choose which disks will do what:

And this is the result of our choice:

2018-03-25_235319

2018-03-26_001545
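
If you prefer the command line, the same claiming can be done per host from the ESXi shell. This is just a sketch of the equivalent commands; the device identifiers are examples from my lab, so check yours first with vdq -q:

```
# list local devices and their eligibility for vSAN
vdq -q
# create a disk group: one cache device (SSD) plus one capacity device
esxcli vsan storage add --ssd mpx.vmhba1:C0:T2:L0 --disks mpx.vmhba1:C0:T1:L0
# verify the resulting disk group
esxcli vsan storage list
```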

Now we can enable DRS and HA on the cluster.

vSAN Storage Policy

VM Storage Policies are crucial for a vSAN system. They are chosen during VM deployment, and they define the characteristics of the storage used by the VM’s objects. They can be found in the Home page of vCenter:

2018-03-26_110303

The default settings of a vSAN policy are shown below:

2018-03-26_110346

  • Primary level of Failures To Tolerate = 1 – number of host, disk or network failures a storage object can tolerate. With mirroring, tolerating n failures requires 2n+1 hosts and writes the data on n+1 of them; with erasure coding, tolerating 1 failure requires 4 hosts.
  • Number of disk stripes per object = 1 – number of capacity devices across which each replica of a storage object is striped. Values above 1 may improve performance at the cost of more resources used.
  • Force provisioning = No – the object is provisioned only if the datastore can satisfy the whole policy; set to Yes, it would be provisioned even when the policy cannot be met.
  • Object space reservation = 0% – percentage of the object’s size reserved (thick provisioned) at creation; the rest stays thin.
  • Flash read cache reservation = 0% – flash capacity reserved as read cache for the storage object.
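
As a side note, the host-level defaults can also be inspected from the ESXi shell; just a quick check, and the exact output depends on the vSAN version:

```
# show the default vSAN policy applied to each object class (cluster, vdisk, vmnamespace, ...)
esxcli vsan policy getdefault
```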

For any change, the system recalculates the storage space needed for an example 100 GB virtual disk (in this example we increased the Primary level of failures to tolerate to 2):

2018-03-26_110606
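
As a quick sanity check on those numbers: with mirroring, raw consumption is roughly (FTT + 1) times the object size, so the same 100 GB virtual disk grows from about 200 GB at FTT=1 to about 300 GB at FTT=2, plus a small overhead for witness components.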

These are the standard rules, but many others can be added; let’s see which ones:

2018-03-26_110524

  • Secondary level of failures to tolerate – used in a stretched cluster to define the local protection level within each site
  • Failure tolerance method (RAID – mirroring or erasure coding)
  • Data locality
  • IOPS limit per object
  • Disable object checksum

Cluster options

When we previously set up the cluster, we had (and still have) the opportunity to make some choices for our vSAN:

2018-03-26_113525
In short:

  • Manual or automatic disk addition
  • Deduplication and compression (only for all-flash groups)
  • Encryption (with possibility to erase disks before use)
  • Allow Reduced Redundancy (lets VMs keep running with a reduced protection level when the cluster temporarily cannot meet the assigned storage policy)

Stretched cluster

The first of the two advanced options for a more resilient vSAN configuration: a stretched cluster spans two sites for better data protection, and is typically viable where the distance (and latency) between the data centers is limited (we don’t have a second remote site right now…).

Fault domains

The second option is used to protect against rack or chassis failures. Grouping is driven by the physical location of the hosts, and failures are tolerated according to the associated storage policy:

2018-03-26_122537

Creating a fault domain is quite intuitive: just name the domain and select the hosts that belong to it.
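
For completeness, fault domain membership can also be checked and set per host from the ESXi shell; a sketch, where the fault domain name is just an example:

```
# show the fault domain the host currently belongs to
esxcli vsan faultdomain get
# assign the host to a fault domain (for example, one per rack)
esxcli vsan faultdomain set --fdname=Rack-A
```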

I probably forgot something in this overview, so feel free to comment, add or argue in the space below.

Operationalizing vSAN

KEVIN LEES, Chief Technologist, IT Operations Transformation – VMware recently debuted in the Leaders quadrant in the Gartner Magic Quadrant for Hyper-Converged Infrastructure (HCI). This position reflects Gartner’s recognition of the critical role software-defined technologies play in HCI solutions. As VMware’s software-defined storage solution, VMware vSAN is one of the key components and enablers of our… The post Operationalizing vSAN appeared first on VMware Professional Services and…Read More


VMware Social Media Advocacy

vSAN Lab on Ravello – Part 1

In the last few days I’ve been working on several projects, all of them based on the VMware Cloud stack.
As in the two previous years, this is my 3rd year receiving the vExpert award, so I can use Ravello Systems for free (1000 vCPU hours/month). Thank you guys!

rave
In the last years I already tested products like, e.g., Zerto in some particular architectures, but lately I’ve given a boost to the standard VMware stack to test the latest features, beta programs and so on.

My latest test is building a vSAN cluster on this virtual platform. Virtual, but let’s say “half virtual”, since the hypervisors run on Bare Metal servers.
I’ll break this post in two: the first part (this one) focused on building the whole lab, the second part focused on vSAN creation and configuration.

We start by creating a brand new App, let’s call it VSAN_LAB (original, isn’t it?), where I’ll deploy 3 ESXi images – already installed, saved from a previous lab – and a VCSA, thanks to Ian Sanderson‘s great post.

Then I’ll assign 2 VLANs to the lab: 10.10.1.0/24 for management and vMotion, and 10.10.2.0/24 for vSAN, connected to 2 different switches and routed by the same router (10.10.0.1). I’m aware that this is NOT a good practice, especially vMotion together with management, but hey, it’s a lab, right? And I’m trying to keep it as simple as possible. In this perspective I also use Ravello’s DNS (10.10.1.2) so I don’t need another VM, and I set up a DHCP server (10.10.1.3) just in case, although I won’t use it in this lab.

As told above, I deployed all 4 VMs on Bare Metal. On the 3 ESXi hosts (the minimum number for a vSAN) I added 2 more disks: a 400 GB disk for the “capacity tier” and an 80 GB disk for the “caching tier”. This last disk will be tagged as SSD, as explained by Vladan Seget in his post. The physical NIC assigned to the vSAN VLAN won’t have any gateway, since it’s used only for the ESXi hosts to communicate among themselves.

I chose the “Performance” option and US East 5 (the screenshot shows Europe – I changed it because I already had a vApp running on Bare Metal in that region) to publish this App. Let the game begin!

2018-03-22_152603.png

After a few minutes the lab is up, except for the VCSA, which takes a little longer. This is the point where I’ll enable the SSD disk: access each ESXi via SSH (after enabling it) using PuTTY or any other terminal, following Vladan’s instructions.
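
For reference, the commands from Vladan’s method look roughly like this; the device name below is just an example from my lab, so check yours first with esxcli storage core device list:

```
# add a claim rule that tags the 80 GB disk as SSD (device name is an example)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=mpx.vmhba1:C0:T2:L0 --option="enable_ssd"
# reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0
# verify: the device should now report "Is SSD: true"
esxcli storage core device list -d mpx.vmhba1:C0:T2:L0
```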

And as the following image shows, the 80 GB disk has magically turned into a fantastic SSD.

2018-03-22_234800

Next step: create a Datacenter named… “Datacenter”… where we’ll add the 3 nodes (note – to add the hosts by name, which is preferred, insert their FQDNs manually in the DNS list if you haven’t already).

At this point, since the 3 ESXi hosts came from the very same image, I encountered a duplicated disk UUID issue that didn’t allow me to add the remaining 2 hosts, so I had to unmount those datastores, remount and resignature them, as explained by Mohammed Raffic in this post (but I didn’t need to reboot the ESXi hosts).
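
If you hit the same issue, the snapshot volumes can also be listed and resignatured from the ESXi shell; a sketch, with the volume label as an example:

```
# list VMFS volumes detected as snapshots/duplicates
esxcli storage vmfs snapshot list
# resignature the duplicated volume (label is an example)
esxcli storage vmfs snapshot resignature --volume-label=datastore1
```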

After adding all 3 hosts, DO NOT proceed with the cluster yet: first create the networking layer based on a Distributed Switch.

In my case, for simplicity as stated above, I used a single DVS to handle the vSAN network, the management network and the VM network. I then used the Port Group wizard to create 3 port groups: Mgt, vSAN and VM Net (the last one is not used in this lab).

Well, at this moment we have our Datacenter with 3 independent hosts in maintenance mode, but connected according to the previously created topology, as shown below:
2018-03-24_001053
Next, let’s add all the hosts to the DVS. This wizard is also useful to assign the physical NICs, to create the missing vmkernel interface (vSAN) and to modify the existing one so that it handles vMotion in addition to management, moving it from the Standard switch to the VDS. The migration from standard to distributed switch is analyzed by the system to assess how critical the operation could be.

Creating the vmkernel interface requires choosing a network and assigning an IP address.
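
If needed, the vSAN tagging of a vmkernel interface can also be done from the shell; a sketch, assuming vmk1 is the interface created on the vSAN port group (on older builds the namespace is esxcli vsan network ipv4):

```
# tag vmk1 for vSAN traffic
esxcli vsan network ip add -i vmk1
# list the interfaces currently used by vSAN
esxcli vsan network list
```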

Finally, 6 of the 8 uplinks (2 NICs per host, 3 hosts) are in use, as shown below:

2018-03-24_011205

At last, the cluster: for the moment let’s enable only vSAN; we’ll leave DRS and HA for later. All the possible vSAN warnings light up! In fact, the vSAN has no disks assigned yet, so it exists as an entity, but only in theory.

2018-03-24_205346
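
From any of the hosts you can confirm that the node has joined the (still disk-less) vSAN cluster; a quick check from the shell:

```
# show local vSAN cluster membership, sub-cluster UUID and member count
esxcli vsan cluster get
```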

The next step, in the second part, will be claiming disks to build a capacity and a caching tier, completing the cluster with DRS and HA, and looking at the storage policies on which vSAN is based. Since I’ll have some more time, I’ll also try to drill down into other concepts like deduplication, encryption, fault domains and stretched clusters; if you have any ideas or questions, feel free to reach me below or via Twitter.

Stay tuned!

FTF 004: VMware NSX – The Upgrade Version…

VMware NSX – The Upgrade Version operation failed

Recently during updating one of NSX environment from version 6.3.3 to 6.3.5 I faced the following error: The Upgrade Version operation failed for the entity with the following message. Cluster/ResourcePool Resources needs to be prepared for network fabric to upgrade edge Edge-xxx I checked a cluster where mentioned EDGE was running (btw. properly, all services worked without issues)… Read More »


VMware Social Media Advocacy

VMware Cloud Services: Data Collectors

Before I deep dive into the different management services available as part of VMware Cloud Services, I would like to talk about data collectors. Read on to learn more! Before We Begin As you know, VMware Cloud Services are meant to manage both public and private cloud environments. For the purposes of this post, I […] The post VMware Cloud Services: Data Collectors appeared first on SFlanders.net .


VMware Social Media Advocacy

Deploy NSX 6.4 on Ravello

Yesterday I added an NSX Manager 6.4 to my new home lab (hosted by Ravello).

After following the great post by Ian Sanderson explaining how to deploy a VCSA outside a vSphere cluster, I decided to do the same with NSX.
So I began by installing it inside the cluster, then exported it with all the configuration set, and re-imported it into Ravello as an OVF.

After starting the whole infrastructure, all based on Bare Metal:

rav
the vCenter and its hosts were ready to host the OVF deployment:
2018-03-16_232857

From this point the deployment is quite straightforward, as shown in the following images:
2018-03-16_233012

choosing the Datacenter as a target, a compute resource and a final review:

After accepting the agreement, I selected the datastore (yes, I also configured a vSAN – Ravello is really enjoyable!) and then a network, which I created on a separate DVS.

Now, the most interesting part: customization, which includes several pieces of information, from passwords (both GUI and CLI) to network properties, DNS and services:

Closing with the “usual” summary. Clicking “Finish” starts the deployment.

It’s Configuration Time! Access via browser:

In this section we’ll configure the NTP server, syslog server and locale, and after the first step, the network settings, SSL if present, FTP for backups and, last but maybe most important, the connection data for SSO and vCenter Server:

This is the tricky point: power it off, export it and re-import it into Ravello, outside the vCenter where it was deployed. So, export the template, check the box “Include extra configuration”, and download:

Time to upload it to Ravello, through the import tool, validating it after the upload:

The new NSX is ready to be “re-deployed” on the canvas of my App:

2018-03-17_004046

It just needs some extra configuration for the NIC and, for easier reachability, an “elastic IP address” assigned to it.

I’ll configure HTTP and HTTPS services for DNAT, and turn everything on:

…crossing fingers…

Yes! I have my NSX up & running outside vCenter. Some extra configuration now: setting the NSX VM option “preferPhysicalHost” to “true” so it runs on Bare Metal and, from the NSX GUI, connecting the SSO and vCenter services:

A big smile on my face when all is green 🙂

2018-03-17_011804

A further step is deploying the whole virtual network layer from the vCenter GUI:

2018-03-17_012946

but, hey… time to rest, I made it work where I thought it was barely possible.

I’ll follow up with a new post about NSX configuration on vCenter, but this is another story.

I want to emphasize once more a statement: RAVELLO ROCKS!

vExpert 2018: old and new feelings

After some days of waiting, I was nervous, maybe more than in past years, staring at my monitor till late at night for the announcement of the 2018 vExperts.
The same emotion I had 3 years ago, when I was honored to receive this award for the first time.

vexpert
As told in the past, this award means very much to me. Not as something to show off to others, but with a deeper meaning: the acknowledgment that I can give my contribution in several ways, every year at least one more.
As in Spiderman, “With great power comes great responsibility” – well, I don’t feel like Spiderman, but being a vExpert I feel responsible for the development of the community, helping members by answering questions, explaining and showing any new feature I discover through VMTN, VMUG, my blog, Twitter, and so on.
These are not empty words but a sincere feeling, and I encourage my peers to apply in the upcoming application round.
I would be lying if I said I don’t appreciate the benefits that come with this award: the most valuable, in my opinion, are the free Pluralsight subscription, Ravello‘s 1000 h/month of free vCPU, and several vendors’ NFR software. Last but not least, the exclusive VMworld party 🙂
So, guys out there, come on and apply – you’ll make VMware’s community a better place for all users.