
Zerto 4.5 – Installing DR between vCenters

A few weeks ago I reviewed the previous version of Zerto.
The new release brings many new features that I'll cover in this post.
I'll start by installing it in my vCenter lab and connecting it to another vCenter. In a later post, I'll look at connecting it to my Org in the cloud (it's a vCloud Director Org).

First of all, I'll prepare a Windows Server: you can use 2008 R2 or 2012, but pay attention to the OS language – some bugs have been reported with certain localized versions. Remember: it CANNOT be the same server where vCenter is installed.

I'll download the bundle, which contains the application plus .NET 4.5. If .NET is not already installed, setup will ask for it.

2016-04-09_145908

Here is the folder containing the 2 files:

2016-04-09_150541

and this is the warning you'll receive if .NET 4.5 isn't present, offering to install it automatically:

2016-04-09_150903

In some cases (like mine) you could face the following error during .NET installation:

2016-04-09_151447

It means that you need to update Windows via WSUS. This can take a very long time, so you probably have time for a beer in the meantime 😉

After the updates, I run the installer again:

2016-04-11_122901

Since .NET was updated, the system will require a reboot:

2016-04-11_123441

and the install process will continue automatically, with the related warnings:

After accepting the usual EULA, it will check the API version:

During installation you’ll be asked to install the Zerto Storage Controller too – it’s safe, you can allow it:

And now you can open the browser at the URL: https://<zvmIPorName>:9669/zvm
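If you want to make sure the ZVM portal is actually listening before you open the browser, a quick check like the following works. This is a minimal Python 3 sketch, not part of the Zerto install: zvm01.lab.local is just a placeholder for your ZVM address, and certificate verification is skipped because the ZVM ships a self-signed certificate.

# Quick reachability check for the ZVM portal (port 9669).
import socket
import ssl
from http.client import HTTPSConnection

ZVM_HOST = "zvm01.lab.local"   # placeholder: your ZVM IP or name
ZVM_PORT = 9669

# The ZVM uses a self-signed certificate, so skip verification for this test.
ctx = ssl._create_unverified_context()

try:
    conn = HTTPSConnection(ZVM_HOST, ZVM_PORT, timeout=10, context=ctx)
    conn.request("GET", "/zvm")
    resp = conn.getresponse()
    print(f"ZVM portal answered with HTTP {resp.status}")
except (socket.error, ssl.SSLError) as err:
    print(f"ZVM portal not reachable yet: {err}")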

You may have to wait a couple of minutes for the services to start. You'll be prompted for the license number, and then you'll get the login page:

2016-04-11_124943

This is the dashboard:

2016-04-11_125145

Now it's time to install the VRAs on every ESXi host. The Virtual Replication Appliances are small VMs that perform the replication, attaching a disk for every protected VM and using those disks when a failover (test or live) is performed.

In the following images I've already installed 3 out of 4 VRAs, so only the last one is shown. In the "Setup" section you'll find the list of installed VRAs and the ESXi hosts; you'll see that the last one is missing. Clicking "New VRA" brings up a pop-up asking for some VRA details.

Progress is shown below under "Running tasks", completed successfully:

2016-04-11_122431

Now the system is ready to be paired with another vCenter, an org in vCloud Director, a Hyper-V system or, lastly, an AWS environment. In this case, we'll pair it with another vCenter. From the "Sites" tab, choose "Pair" and enter the address of the remote ZVM in the pop-up – if there's a firewall in the middle, you should allow at least 9081/TCP.
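If you want to verify beforehand that the firewall really lets the pairing traffic through, a one-shot check like this does the job. Again a minimal Python sketch; remote-zvm.lab.local is a placeholder for the remote ZVM address.

# Check that the remote ZVM's pairing port (9081/TCP) is open
# through any firewall in between.
import socket

REMOTE_ZVM = "remote-zvm.lab.local"   # placeholder: address of the remote ZVM
PAIRING_PORT = 9081

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    try:
        s.connect((REMOTE_ZVM, PAIRING_PORT))
        print(f"{REMOTE_ZVM}:{PAIRING_PORT} is reachable - pairing should work")
    except OSError as err:
        print(f"Cannot reach {REMOTE_ZVM}:{PAIRING_PORT} - check the firewall ({err})")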

Bingo! The sites are paired. From now on, you can create all the VPGs you need, and so on.

In a future post I'll show how to connect to a vCloud Director org.


VMware Horizon 7 Has Been Released – Instant Clones, Blast Extreme ++

When we reported on VMware Horizon 7 a few weeks back, we did not know when this product would finally hit GA. It has now! You can download Horizon 7 today. It's a major release which allows scaling up to 10 Horizon PODs across 4 sites with up to 500 000 desktops. So VMware Horizon 7 has been released, with Instant Clones, Blast Extreme and more new features.


VMware Social Media Advocacy

NSX 6.2 inside Autolab 2.6 – Part 1

The Ravello Systems blueprint of Autolab 2.6 is a great starting point for any vSphere-based deployment.

In this post, I'll report my experience using Autolab 2.6 for an NSX 6.2 deployment.

NSXdownload

I will skip the steps needed to deploy Autolab in Ravello and cover them in another post. This first part goes as far as deploying NSX inside Autolab's vCenter.

We have to move away from the standard setup the moment we need 2 clusters, because 3 hosts are no longer enough – 3 hosts is the Autolab default.

Luckily the guys at Labguides had a good intuition and added an "install fourth ESXi" entry to the iPXE ESXi menu. So my task was simply to deploy a brand new application from the blueprint, save one of the standard ESXi hosts, delete that application and redeploy this new ESXi into my current installation, adjusting all the network and host settings.

So, having installed the fourth host as I did with the previous ones, I planned to have 2 clusters: Management and Prod.

The Management cluster will host NSX Manager, while Prod is the resource cluster, so it will take care of the Edges.

I'll put Host1 and Host2 into the "Management" cluster, and Host3 into "Prod" – before moving it I had to put it in Maintenance Mode:

2016-02-27_163702.jpg

That's the new host, ready to be added to my Prod cluster:

2016-02-27_163730

Now proceeding with Add host:

We'll answer "yes" to the security alert and go straight through with all the defaults:

Now we can watch the import process.

In my case, I had to reboot the host to allow the HA agent to be installed on it.

Since this Add Host wasn't automated by Autolab like the previous ones, but done manually, I had to add NIC1 as a teamed uplink on the first vSwitch, create the other vSwitch and recreate all the storage connections.

2016-02-27_233243

First, I had to modify the NIC teaming of the Management network, setting vmnic0 as active and vmnic1 as standby, overriding the switch failover order:

2016-02-27_234035
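For reference, the same change can be scripted. Here's a rough pyVmomi sketch (I actually did this in the Web Client); the vCenter address, credentials and host name are placeholders for my lab.

# Sketch: override the failover order on the "Management Network" port group -
# vmnic0 active, vmnic1 standby. Assumes pyVmomi is installed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vc.lab.local", user="administrator@lab.local",
                  pwd="VMware1!", sslContext=ctx)

content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="host4.lab.local", vmSearch=False)
net_sys = host.configManager.networkSystem

# Find the existing port group spec and adjust only its teaming policy.
for pg in net_sys.networkInfo.portgroup:
    if pg.spec.name == "Management Network":
        spec = pg.spec
        if spec.policy.nicTeaming is None:
            spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
        spec.policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=["vmnic0"], standbyNic=["vmnic1"])
        net_sys.UpdatePortGroup(pgName="Management Network", portgrp=spec)
        break

Disconnect(si)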

Then I created the following port groups. IPStore2:

IPStore1:

FT:

and vMotion:

We’re going to recreate the second switch, the one dedicated to VMs:

It's time to sort out the storage connections too. We have to add the following to the new host:

2016-02-28_001354.jpg

I'll skip this part, since the purpose of the post is the NSX installation, not recreating a host from scratch to match the Autolab environment. Anyway, it's simply a "copy & paste" process from the existing hosts to the new one. For the iSCSI datastores, we'll have to set up the HBA interfaces.
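Just to give an idea of what that "copy & paste" looks like when scripted, here's a rough pyVmomi sketch that mounts NFS datastores on the new host. The NFS server address, export paths, datastore names and credentials are placeholders – copy the real values from one of the existing hosts.

# Sketch: mount the NFS datastores already used by the other Autolab hosts
# on the new host. Assumes pyVmomi is installed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vc.lab.local", user="administrator@lab.local",
                  pwd="VMware1!", sslContext=ctx)

host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName="host4.lab.local", vmSearch=False)
ds_sys = host.configManager.datastoreSystem

# (datastore name, NFS export path) pairs - placeholders for the real lab values.
nfs_datastores = [("NFS01", "/mnt/LABVOL/NFS01"), ("NFS02", "/mnt/LABVOL/NFS02")]

for name, path in nfs_datastores:
    spec = vim.host.NasVolume.Specification(
        remoteHost="nas.lab.local",   # placeholder: the Autolab NAS address
        remotePath=path,
        localPath=name,
        accessMode="readWrite")
    ds_sys.CreateNasDatastore(spec)

Disconnect(si)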

Time to deploy NSX. After downloading the OVA file, we'll use vCenter to deploy it on the Management cluster. We'll use the Web Client rather than the C# client, since the former gives us more options (if we haven't done it before, deploying through the Web Client requires downloading the Client Integration Plug-in – the link appears during deployment).

With IE11 I wasn't able to use the plug-in, nor Windows Authentication. Following advice from several forums (https://communities.vmware.com/thread/515713?start=0&tstart=0 is one of them), I downloaded and used Chrome.

By the way, I don't know if this is just my problem, but my vCenter server didn't start the vSphere Web Client service automatically, even though it was set to Automatic.

It’s important to check the box accepting the extra configuration options.

Autolab creates datastores no larger than 58GB, which is less than NSX requires for a "thick" deployment. We can use "thin" provisioning and iSCSI2, which is the largest datastore available. The storage policy will be the default one – we're not using vSAN or VVols.

At this point I encountered an annoying error: "A connection error occurred. Verify that your computer can connect to vCenter Server":

2016-02-29_012113.jpg

Since my computer IS the vCenter, this didn't make sense. Googling the error, I found a related KB – https://kb.vmware.com/kb/2053229 – stating that it depends on a bad DNS configuration. That didn't make sense either, since I connected to the Web Client using the DNS name, but I double-checked by pinging the name and realized that my server was resolving it over IPv6 – the solution was to disable IPv6 on my vCenter NIC:
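A quick way to see what the Web Client is actually resolving is to list the address records returned for the vCenter name. Plain Python, nothing vSphere-specific; vc.lab.local is a placeholder for your vCenter's DNS name.

# Show whether the vCenter name resolves to IPv6 or IPv4 addresses -
# if an IPv6 (AF_INET6) record comes back, the client may try to use it
# even when IPv6 is not actually usable on the network.
import socket

VCENTER = "vc.lab.local"

for family, _, _, _, sockaddr in socket.getaddrinfo(VCENTER, 443):
    proto = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(f"{proto}: {sockaddr[0]}")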

It works now, and I’m able to continue.

The network to map is the "Servers" one, but it's not critical: we only have one physical network, so it doesn't matter much. On the last page we'll be asked for a password for the default CLI user, a password for the privileged (enable) user and the network details. We'll assign NSX Manager the IP 192.168.199.200 and create a DNS entry for it as nsx.lab.local. The same DC server acts as the NTP server.

In our installation we won’t use IPv6 (ok… shame on me!)

And this is the summary and deployment:

I chose not to start it automatically, because I might have to modify the resources assigned to NSX: my ESXi hosts can offer 24GB of RAM and 12 CPUs – yes, I modified Autolab's default values.

IMPORTANT: you must change the vNIC from VMXNET3 to E1000, as explained in Martijn Smit's post: https://www.vmguru.com/2016/01/ravello-systems-vmware-nsx-6-2-management-service-stuck-in-starting/. DO IT BEFORE STARTING THE VM – changing it afterwards won't work, and I had to redeploy. AND you should do it by SSH'ing into the ESXi host, not by deleting and recreating the NIC from the GUI, because in that case the VM will rename it to eth1.
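For reference, this is all the change amounts to inside the .vmx file. A small sketch that assumes you've copied the .vmx off the ESXi host first (e.g. with scp) and will copy it back and reload the VM afterwards; the file name is a placeholder.

# Flip the NSX Manager's first vNIC from VMXNET3 to E1000 by editing its .vmx.
from pathlib import Path

vmx = Path("NSX_Manager.vmx")   # placeholder: local copy of the VM's .vmx file
lines = vmx.read_text().splitlines()

for i, line in enumerate(lines):
    # The adapter type of the first vNIC is stored in ethernet0.virtualDev
    if line.startswith("ethernet0.virtualDev"):
        lines[i] = 'ethernet0.virtualDev = "e1000"'

vmx.write_text("\n".join(lines) + "\n")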

At this point, the NSX Manager doesn't start:

I have to reduce the assigned RAM from 16GB to 12GB and the CPUs from 4 to 3, otherwise it won't start.
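If you prefer to script the resize instead of using the Web Client, a pyVmomi sketch like this would do it. The VM name, vCenter address and credentials are placeholders, and the VM must be powered off.

# Sketch: shrink the NSX Manager VM to 12GB RAM / 3 vCPUs before first boot.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vc.lab.local", user="administrator@lab.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the NSX Manager VM by its inventory name (placeholder).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "NSX_Manager")

# Reconfigure memory and vCPU count; this returns a vCenter task.
spec = vim.vm.ConfigSpec(memoryMB=12 * 1024, numCPUs=3)
vm.ReconfigVM_Task(spec)

Disconnect(si)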

After the first boot, though, if you shut it down you'll be able to go back to 16GB and 4 CPUs, as recommended.

And that’s the result of our efforts:

2016-02-29_105004

Logging in, this is the main window:

2016-02-29_105240

And the summary page:

2016-02-29_105355

This is the end of the first part. In the next one we'll configure the NSX Manager and deploy our Edges and VXLANs.

Thank you for following!


Zerto: DRaaS in cloud

This is my second post about Zerto, and this time I'd like to focus on the opportunity it offers to take advantage of the cloud, in particular VMware vCloud Director or even AWS. We'll talk about vCD here.

This is the case where a customer has his own infrastructure on premises but wants a parachute in the cloud. Or the opposite: production is in the cloud, but he wants to keep leveraging his investment by using the on-premises infrastructure as DR.

First of all I'd like to thank Vladan Seget, who wrote a post about the Zerto lab setup here. I'm adding the third case: recovery to/from the cloud.

The architecture in this case is slightly different: there's one more piece, the ZCM (Zerto Cloud Manager), which sits above the ZVMs and the vCloud Director instance(s). The installation is as simple as the ZVM's: a light Windows Server and a package to install, with very few options to set.

2016-02-24_162930.jpg

We'll access the GUI using the server's credentials.

2016-02-24_163156.jpg

And this is what we get. I apologize for the black boxes – there will be many of them, since it's a production environment. This page summarizes all our customers: the name (ZORG: Zerto Organization), a CRM ID for internal purposes (billing, among other things), then the number of organizations they have in vCloud Director and the number of on-premises sites they connect.

Here we should make a distinction: there's one more solution, which Zerto calls In-Cloud. The customer can have everything in the cloud: an organization for production and another one, usually in a different DC, for replication – and nothing at home. Or any combination of these. The only limit is that a single VM can't be replicated to 2 places. But the same organization can have some vApps replicated in the cloud and others on premises, for example.

When we create a new ZORG, this is the information requested:

2016-02-24_163431.jpg

Now, let’s take an existing ZORG to explain what’s inside.

2016-02-24_163923.jpg

As before: the ZORG, the ID and, most importantly, the path where to upload VMDKs to pre-seed big VMs.

2016-02-24_164012.jpg

On the permissions tab, we can choose what the customer can do, along with the credentials to access the ZSSP (Zerto Self Service Portal) that we'll see later. This portal is needed in In-Cloud cases and when the customer experiences a disaster in his own vCenter, so he can access the Zerto panel via browser.

2016-02-24_164312.jpg

The organization(s) belonging to the customer are displayed here, with the cloud site (the physical one, each with its own vCD), the name of the organization as it appears in vCD and, among other info, note the Max limits: we can limit each customer either by number of VMs or by storage GB.

To connect the on-premises infrastructure, we need a connector:

2016-02-24_164412.jpg

The ZCC, Zerto Cloud Connector – in this case the customer has 2 vCenters connected to his organizations. The connector is a simple VM, deployed automatically by the ZCM, that has 2 NICs: one connected to the customer's organization network, the other to the main management network. To better understand the role of the connector, think of it as a proxy: thanks to it Zerto is multi-tenant, and the customer can manage and see only his own VMs.

The last 2 tabs display the groups of replicated VMs (VPGs, which Vladan will cover in his next post) and the errors for that customer:

2016-02-24_164610.jpg

2016-02-24_164723.jpg

And, lastly, the ZSSP: the portal the customer uses to manage his replication. This is the login page, where we insert the credentials seen earlier:

2016-02-24_165035.jpg

after which we land here:

2016-02-24_165312.jpg

From here you can edit and delete VPGs, and see the peer site (vCenter or vCD), the status and the RPO.

All you need after a catastrophic event (or even something much less serious…)

Thank you again Vladan.