Big changes to certification pages on vmware.com!

The next time you visit a certification page, the look and feel will be different. To improve your ability to access certification content, we have refreshed the home page and all certification description and exam pages. The first thing you’ll notice is new graphics and an improved flow — all designed to communicate more clearly… The post Big changes to certification pages on vmware.com! appeared first on VMware Education Services.


VMware Social Media Advocacy


NSX SD-WAN by VeloCloud: Linchpin to the Virtual Cloud Network

Author: Sanjay Uppal, vice president and general manager, VeloCloud Business Unit, VMware. Networking and cloud were central to VeloCloud Networks as a company and as an SD-WAN pioneer – we coined the phrase “The Cloud Is The Network.” VeloCloud Networks found success in its software-defined technology that transformed and improved the way in which customers and… The post NSX SD-WAN by VeloCloud: Linchpin to the Virtual Cloud Network appeared first on VeloCloud by VMware.


VMware Social Media Advocacy

Under the Hood: What’s new in Self-Driving Operations with vRealize Operations Manager 6.7

Today’s the day many of us have been waiting for – the GA release of vRealize Operations Manager 6.7, and it is impressive in both feature and scope. You have probably read some of the blogs about this release already, but in this blog post we will cover as many of the goodies and updates… The post Under the Hood: What’s new in Self-Driving Operations with vRealize Operations Manager 6.7 appeared first on VMware Cloud Management.


VMware Social Media Advocacy

New Release: PowerCLI Preview for VMware NSX-T Fling

A new Fling has been released for PowerCLI! The PowerCLI Preview for NSX-T Fling adds 280 high-level cmdlets which operate alongside the existing NSX-T PowerCLI module. What do I mean by ‘high-level’ cmdlets? There are generally two forms of cmdlets available through PowerCLI, high-level and low-level. High-level cmdlets abstract the underlying API calls and provide […] The post New Release: PowerCLI Preview for VMware NSX-T Fling appeared first on VMware PowerCLI Blog .


VMware Social Media Advocacy

VMware Cloud Foundation Architecture Poster 2.3 – Cloud Foundation

VMware is pleased to release the latest VMware Cloud Foundation (VCF) architecture poster. VCF is a fully automated, hyper-converged software stack that includes compute, storage, networking and cloud management. Because VCF has so many tightly integrated products, it is important to understand the architecture of these components and how together they create the Software-Defined Data…


VMware Social Media Advocacy

vSAN Lab on Ravello – Part 2

In this second and final part (you can find the first one here) we’ll talk about claiming disks from each host to build the vSAN datastore, then we’ll look at a key element of the vSAN configuration, the vSAN storage policy, and finally at a couple of interesting options.

Claim disks

vSAN is made of two different groups of disks: a capacity tier (usually traditional HDDs) and a caching tier (SSD only, shown as “flash” in vCenter). So, from the list of disks available on each host, we’ll choose which disks do what:

And this is the result of our choice:

[screenshot]

[screenshot]
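If you prefer to script the claim, here is a minimal PowerCLI sketch of the same idea; the host names and canonical device names are placeholders and will differ in your lab:

```powershell
# Minimal sketch: build one vSAN disk group per host.
# Host names and canonical device names below are placeholders.
Connect-VIServer -Server vcsa.lab.local

foreach ($esx in Get-VMHost -Name esxi01*, esxi02*, esxi03*) {
    # (Optional) inspect the host's disks to spot the 80 GB flash and 400 GB capacity devices
    Get-ScsiLun -VmHost $esx -LunType disk | Select-Object CanonicalName, CapacityGB

    # The SSD-tagged disk becomes the cache device, the larger disk the capacity device
    New-VsanDiskGroup -VMHost $esx `
        -SsdCanonicalName      'mpx.vmhba1:C0:T2:L0' `
        -DataDiskCanonicalName 'mpx.vmhba1:C0:T1:L0'
}
```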

Now we can enable DRS and HA on the cluster.
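From PowerCLI this is a quick one-liner; the cluster name “VSAN” is just a placeholder for whatever you called yours:

```powershell
# Enable DRS and HA on the (hypothetically named) VSAN cluster
Get-Cluster -Name 'VSAN' | Set-Cluster -DrsEnabled:$true -HAEnabled:$true -Confirm:$false
```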

 

vSAN Storage Policy

VM storage policies are crucial for a vSAN system. A policy is chosen during VM deployment and is made up of capabilities of the underlying storage. They can be found on the Home page of vCenter:

[screenshot]

The default settings of a vSAN policy are shown below:

[screenshot]

  • Primary level of failures to tolerate = 1 – the number of host, disk or network failures a storage object can tolerate. With mirroring, tolerating n faults requires 2n+1 hosts, with data written to n+1 of them; with erasure coding, 4 hosts are needed to tolerate 1 failure.
  • Number of disk stripes per object = 1 – the number of capacity disks across which each replica of a storage object is striped. Values above 1 can improve performance at the cost of using more resources.
  • Force provisioning = No – the object is provisioned only if the datastore can satisfy the whole policy (set to Yes to deploy it anyway).
  • Object space reservation = 0% – the percentage of the object’s capacity reserved (thick-provisioned) at creation; the rest stays thin.
  • Flash read cache reservation = 0% – the flash capacity reserved as read cache for the storage object.

For every change, the system recalculates the storage space needed for a sample 100 GB virtual disk. In this example we increased the primary level of failures to tolerate to 2; with mirroring that means three full copies of the data, so roughly 300 GB of raw capacity (a PowerCLI sketch of such a policy follows the screenshot):

[screenshot]
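As a rough PowerCLI sketch (not the exact policy from the screenshot), the same kind of rule set can be built with the SPBM cmdlets; the policy name is made up:

```powershell
# Sketch: a vSAN policy with primary failures to tolerate = 2 (policy name is hypothetical)
$rules = @(
    New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.hostFailuresToTolerate') -Value 2
    New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.stripeWidth')            -Value 1
    New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.proportionalCapacity')   -Value 0
)
New-SpbmStoragePolicy -Name 'vSAN-FTT2' `
    -Description 'Tolerates 2 failures, default stripe width' `
    -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $rules)
```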

These are the standard rules, but many others can be added. Let’s see which ones (a short PowerCLI snippet to list them follows the list):

[screenshot]

  • Secondary level of failures to tolerate, used in stretched cluster configurations with a second fault domain
  • Failure tolerance method (RAID-1 mirroring or RAID-5/6 erasure coding)
  • Data locality
  • IOPS limit per object
  • Disable object checksum
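To see exactly which rule names your environment exposes, a quick way from PowerCLI:

```powershell
# List every vSAN capability (rule) advertised by the connected vCenter
Get-SpbmCapability | Where-Object { $_.Name -like 'VSAN.*' } |
    Select-Object Name, ValueType
```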

Cluster options

When we set up the cluster earlier, we had (and still have) the chance to make some choices for our vSAN:

[screenshot]
In short (a PowerCLI sketch of some of these options follows the list):

  • Manual or automatic disk claiming
  • Deduplication and compression (all-flash disk groups only)
  • Encryption (with the option to wipe disks before use)
  • Allow Reduced Redundancy (lets VMs keep running with a lower protection level when the cluster cannot fully satisfy the storage policy)
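Some of these toggles are also reachable from the vSAN PowerCLI module. A hedged sketch, assuming a cluster named “VSAN”; parameter names can vary between PowerCLI releases, and deduplication/compression needs an all-flash cluster:

```powershell
# Inspect the current vSAN configuration of the cluster
Get-VsanClusterConfiguration -Cluster (Get-Cluster 'VSAN')

# Sketch: turn on deduplication and compression (all-flash clusters only);
# parameter availability depends on the vSAN / PowerCLI version in use
Get-VsanClusterConfiguration -Cluster (Get-Cluster 'VSAN') |
    Set-VsanClusterConfiguration -SpaceEfficiencyEnabled:$true
```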

Stretched cluster

The first of the two advanced options for a more robust vSAN configuration: a stretched cluster adds a second site for better data protection, and is typically feasible only where the distance (and therefore latency) between data centers is limited (we don’t have a second remote cluster right now…).

Fault domains

The second option is used to protect against rack or chassis failures. Hosts are grouped by physical location, and the failure tolerance is the one specified by the associated storage policy:

[screenshot]

Creating a fault domain is quite intuitive: just name the domain and tick the hosts that belong to it.
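In PowerCLI terms it is roughly one line per domain; the rack names and host groupings below are made up:

```powershell
# Sketch: one fault domain per rack (names and host lists are placeholders)
New-VsanFaultDomain -Name 'Rack-A' -VMHost (Get-VMHost esxi01*)
New-VsanFaultDomain -Name 'Rack-B' -VMHost (Get-VMHost esxi02*)
New-VsanFaultDomain -Name 'Rack-C' -VMHost (Get-VMHost esxi03*)
```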

I’ve probably forgotten something in this overview, so feel free to comment, add or argue in the space below.

Operationalizing vSAN

KEVIN LEES, Chief Technologist, IT Operations Transformation. VMware recently debuted in the Leaders quadrant in the Gartner Magic Quadrant for Hyper-Converged Infrastructure (HCI). This position reflects Gartner’s recognition of the critical role software-defined technologies play in HCI solutions. As VMware’s software-defined storage solution, VMware vSAN is one of the key components and enablers of our… The post Operationalizing vSAN appeared first on VMware Professional Services.


VMware Social Media Advocacy

vSAN Lab on Ravello – Part 1

Over the last few days I’ve been working on several projects, all of them based on the VMware cloud stack.
As in the two previous years, this is my third year receiving the vExpert award, so I can use Ravello Systems for free (1,000 vCPU hours/month). Thank you guys!

In past years I had already tested products such as Zerto in some particular architectures, but lately I’ve given a boost to standard VMware, testing the latest features, beta programs and so on.

My latest test is building a vSAN cluster on this virtual platform. Virtual, but let’s say “half virtual”, since the hypervisors run on bare-metal servers.
I’ll break this post in two: the first part (this one) focuses on building the whole lab, the second on creating and configuring vSAN.

We start by creating a brand new App, let’s call it VSAN_LAB (original, isn’t it?), where I’ll deploy three ESXi images (already installed, saved from a previous lab) and a VCSA, thanks to Ian Sanderson’s great post.

Then I’ll assign two VLANs to the lab: 10.10.1.0/24 for management and vMotion, and 10.10.2.0/24 for vSAN, connected to two different switches and routed by the same router (10.10.0.1). I’m aware that this is NOT good practice, especially putting vMotion together with management, but hey, it’s a lab, right? And I’m trying to keep it as simple as possible. For the same reason I use Ravello’s DNS (10.10.1.2) so I don’t need another VM; I also set up a DHCP server (10.10.1.3) just in case, but I won’t use it in this lab.

As mentioned above, I deployed all four VMs on bare metal. On the three ESXi hosts (the minimum number for a vSAN) I added two more disks: a 400 GB disk for the capacity tier and an 80 GB disk for the caching tier. The latter will be tagged as SSD, as explained by Vladan Seget in his post. The physical NIC assigned to the vSAN VLAN has no gateway, since it is used only for the ESXi hosts to communicate with each other.

To publish this App I chose the “Performance” option and US East 5 (the screenshot shows Europe; I changed region because I already had a vApp running on bare metal there). Let the games begin!

[screenshot]

After a few minutes the lab is up, except for the VCSA, which takes a little longer. This is the point where I tag the SSD disk. Access each ESXi host over SSH (after enabling it) using PuTTY or any other terminal, following Vladan’s instructions.
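If you’d rather not open an SSH session, the same esxcli calls can be driven from PowerCLI through Get-EsxCli. This is a sketch of that route, and the device name is a placeholder for the 80 GB disk:

```powershell
# Sketch: tag a disk as SSD via an esxcli SATP claim rule (device name is a placeholder)
$esx    = Get-VMHost -Name esxi01.lab.local
$esxcli = Get-EsxCli -VMHost $esx -V2
$device = 'mpx.vmhba1:C0:T2:L0'   # the 80 GB caching-tier disk

$rule = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
$rule.satp   = 'VMW_SATP_LOCAL'
$rule.device = $device
$rule.option = 'enable_ssd'
$esxcli.storage.nmp.satp.rule.add.Invoke($rule)

# Reclaim the device so the new rule (and the SSD flag) takes effect
$esxcli.storage.core.claiming.reclaim.Invoke(@{device = $device})
```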

And as the following image shows, the 80 GB disk has magically turned into a fantastic SSD.

[screenshot]

Next step: create a datacenter named… “Datacenter”… where we’ll add the three nodes (note: to use the host names, which is preferable, manually add the FQDNs to the DNS list if you haven’t already).
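For reference, a PowerCLI sketch of the same step; host names and credentials are placeholders:

```powershell
# Sketch: create the datacenter and add the three nodes (names/credentials are placeholders)
$dc = New-Datacenter -Name 'Datacenter' -Location (Get-Folder -NoRecursion)

'esxi01.lab.local', 'esxi02.lab.local', 'esxi03.lab.local' | ForEach-Object {
    Add-VMHost -Name $_ -Location $dc -User 'root' -Password 'VMware1!' -Force
}
```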

At this point, since the three ESXi hosts came from the very same image, I hit a duplicated disk UUID issue that prevented me from adding the remaining two hosts, so I had to unmount those datastores, remount and resignature them, as explained by Mohammed Raffic in this post (although I didn’t need to reboot the ESXi hosts).
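For reference, the list/resignature part boils down to the esxcli storage vmfs snapshot namespace. Here is a rough PowerCLI sketch (not necessarily the exact procedure from Raffic’s post); the volume label is hypothetical, and the argument key names can be double-checked by inspecting the CreateArgs() output:

```powershell
# Sketch: list VMFS volumes detected as snapshots (duplicate UUID) and resignature one.
# The volume label is a placeholder; verify key names via CreateArgs() before running.
$esxcli = Get-EsxCli -VMHost (Get-VMHost esxi02.lab.local) -V2

$esxcli.storage.vmfs.snapshot.list.Invoke()

$arg = $esxcli.storage.vmfs.snapshot.resignature.CreateArgs()
$arg.volumelabel = 'datastore1'
$esxcli.storage.vmfs.snapshot.resignature.Invoke($arg)
```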

After adding all three hosts, DO NOT proceed with the cluster yet; first create the networking layer based on a Distributed Switch.

In my case, for simplicity as stated above, I used a single switch to carry the vSAN network, the management network and the VM network. I then used the Port Group wizard to create three port groups: Mgt, vSAN and VM Net (the last one is not used in this lab).
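A PowerCLI sketch of the same layout; the switch name is made up, and I leave the uplink count at its default:

```powershell
# Sketch: one distributed switch with the three port groups used in this lab
$dc  = Get-Datacenter -Name 'Datacenter'
$vds = New-VDSwitch -Name 'DSwitch-Lab' -Location $dc

'Mgt', 'vSAN', 'VM Net' | ForEach-Object {
    New-VDPortgroup -VDSwitch $vds -Name $_
}
```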

Well, at this point we have our datacenter with three independent hosts in maintenance mode, but connected according to the previously created topology, as shown below:
[screenshot]
Next, let’s add all the hosts to the VDS. The same wizard is also used to add the physical NICs, to create the missing vmkernel adapter (vSAN), and to modify the existing one so that it carries vMotion in addition to management, migrating it from the standard switch to the VDS. The migration from standard to distributed switch is analyzed by the system so you can see how disruptive the operation might be.

Creating a vmkernel adapter involves choosing a network and assigning an IP address.
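A condensed PowerCLI sketch of what the wizard does for a single host; the vmnic number and the IP follow the addressing plan from the beginning of the post but are otherwise placeholders, and the management vmkernel migration is left out for brevity:

```powershell
# Sketch: join a host to the VDS, attach an uplink and create the vSAN vmkernel adapter
$vds = Get-VDSwitch -Name 'DSwitch-Lab'
$esx = Get-VMHost -Name 'esxi01.lab.local'

Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds `
    -VMHostPhysicalNic (Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name 'vmnic1') -Confirm:$false

# New vmkernel adapter on the vSAN port group, with vSAN traffic enabled
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vds -PortGroup 'vSAN' `
    -IP '10.10.2.11' -SubnetMask '255.255.255.0' -VsanTrafficEnabled:$true
```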

Finally, 6 of the 8 uplinks (2 NICs for each of the 3 hosts) are populated, as shown below:

[screenshot]

At last, the cluster: for the moment let’s enable only vSAN, leaving DRS and HA for later. All the possible vSAN warnings light up! That’s because vSAN has no disks assigned yet, so it exists as an entity only in theory.
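A PowerCLI sketch of this step, assuming the (made-up) cluster name “VSAN”:

```powershell
# Sketch: create a vSAN-only cluster (no DRS/HA yet) and move the hosts into it
$dc      = Get-Datacenter -Name 'Datacenter'
$cluster = New-Cluster -Name 'VSAN' -Location $dc -VsanEnabled

Get-VMHost -Location $dc | Move-VMHost -Destination $cluster
```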

[screenshot]

The next step, in the second part, will be claiming disks to build the capacity and caching tiers, completing the cluster with DRS and HA, and creating the storage policy that vSAN relies on. If I have some more time, I’ll also try to drill down into other concepts like deduplication, encryption, fault domains and stretched clusters. If you have any ideas or questions, feel free to reach me below or via Twitter.

Stay tuned!