
Workshop Guide

Training Environment

First, let’s agree on one abbreviation: AVS = Azure VMware Solution.

Azure Portal Credentials

Replace “#” with your group number.

Connect to https://portal.azure.com with the following credentials:

Username | GPSUS-Group#@vmwaresales101outlook.onmicrosoft.com
Password | TO BE SUPPLIED

Environment Details

Jumpbox Details

Your first task will be creating a Jumpbox in your respective Jumpbox Resource group.

NOTE: In addition to the instructions below, you can watch this video which will explain the same steps and get you ready for deploying the Jumpbox.

The following table will help you identify the Resource Group, Virtual Network (vNET) and Subnet in which you will be deploying your Jumpbox VM.

Replace the word ‘Name’ with Partner name, and ‘X’ with your Group Number

Group | Jumpbox Resource Group | Virtual Network/Subnet
X | GPSUS-NameX-Jumpbox | GPSUS-NameX-VNet/JumpBox

Example - Partner name is ABC and Group number is 2:

Group | Jumpbox Resource Group | Virtual Network/Subnet
2 | GPSUS-ABC2-Jumpbox | GPSUS-ABC2-VNet/JumpBox

Exercise 1: Instructions for Creation of Jumpbox

Step 1: Create Azure Virtual Machine

In the Azure Portal locate the Virtual Machines blade.

  1. Click + Create.
  2. Select Azure virtual machine.

Step 2: Basic information for Azure Virtual Machine

  1. Select the Basics tab.
  2. Select the appropriate Resource group per the table in the Jumpbox Details section above.
  3. Give your Jumpbox a unique name. As a suggestion, you can use your initials followed by the word Jumpbox (e.g., for John Smith it would be JS-Jumpbox).
  4. Ensure the appropriate Region is selected. It is usually the default region, provided you have selected the right Resource Group (see the previous step).
  5. Select the type of the VM image.

Operating System: Windows 10 or Windows 11

  6. Ensure the correct Size is selected.

Size: Standard D2s v3 (2 vCPUs, 8 GiB memory)

Leave other defaults and scroll down on the Basics tab.

  7. Enter a user name for your Jumpbox (anything of your choosing).
  8. Enter and confirm a password for your Jumpbox user.
  9. Make sure to select “None” for Public inbound ports.
  10. Select the checkbox for “I confirm I have an eligible Windows 10 license”. Leave all other defaults and jump to the Networking tab.

Step 3: Azure Virtual Machine Networking Information

  1. Click on Networking tab.
  2. Select the appropriate VNet based on the table below.

NOTE: This is not the VNet that is loaded by default.

  3. If the appropriate VNet was selected, it should auto-populate the JumpBox subnet. If not, make sure to select the JumpBox Subnet.

NOTE: Please select None for Public IP. You will not need it. Instead, you’ll be using Azure Bastion, which is already deployed, to access (RDP) into the JumpBox VM.

  4. Select the “Delete public IP and NIC when VM is deleted” checkbox.
  5. Click Review + Create -> Create.
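
If you prefer to script the Jumpbox deployment instead of clicking through the portal, the Azure CLI sketch below captures the same settings. This is a minimal, hedged example: the resource group, VNet, and subnet names come from the table above, while the image URN, admin username, and password are placeholders you must substitute (verify the exact Windows 10/11 image URN with az vm image list before running).

# Placeholders: replace NameX, the image URN, and the credentials with your values
az vm create \
  --resource-group "GPSUS-NameX-Jumpbox" \
  --name "JS-Jumpbox" \
  --image "<windows-10-or-11-image-urn>" \
  --size "Standard_D2s_v3" \
  --admin-username "<jumpbox-user>" \
  --admin-password "<jumpbox-password>" \
  --vnet-name "GPSUS-NameX-VNet" \
  --subnet "JumpBox" \
  --public-ip-address "" \
  --nsg-rule NONE \
  --license-type Windows_Client

The empty --public-ip-address value deploys the VM without a public IP, matching the portal steps above.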

Step 4: Connect to your Azure Virtual Machine Jumpbox

Once your Jumpbox finishes creating, go to it and click: Connect → Bastion

This should open a new browser tab and connect you to the Jumpbox; enter the Username and Password you specified for your Jumpbox.

Make sure you allow pop-ups in your Internet browser for Azure Bastion to work properly.

VMware Environments

AVS vCenter, HCX, and NSX-T URLs

Please refer to the VMware Credentials section under the AVS blade in the Azure portal to retrieve URLs and Login information for vCenter, HCX, and NSX-T.

NOTE: Use the same vCenter credentials to access HCX portal if needed.

PLEASE DO NOT CLICK GENERATE A NEW PASSWORD BUTTON UNDER CREDENTIALS IN AZURE PORTAL

Note: In a real customer environment, the local cloudadmin@vsphere.local account should be treated as an emergency access account for “break glass” scenarios in your private cloud. It’s not for daily administrative activities or integration with other services. For more information see here

In AVS you can predict the IP addresses for vCenter (.2), NSX-T Manager (.3), and HCX (.9). For instance, if you choose 10.20.0.0/22 as your AVS Management CIDR block, the IPs will be as follows:

vCenter | NSX-T | HCX
10.20.0.2 | 10.20.0.3 | 10.20.0.9
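
Rather than working the addresses out by hand, you can also query the management endpoints directly with the Azure CLI. This sketch assumes the vmware extension is installed and that your account can read the private cloud; the endpoints property comes from the Microsoft.AVS resource schema.

az extension add --name vmware
az vmware private-cloud show \
  --resource-group "GPSUS-NameX-SDDC" \
  --name "<your-sddc-name>" \
  --query "endpoints"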

On-Premises VMware Lab Environment

If you are in a group with multiple participants you will be assigned a participant number.

Replace X with your group number and Y with your participant number.

Generic information

Group | Participant | vCenter IP | Username | Password | Web workload IP | App Workload IP
X | Y | 10.X.Y.2 | administrator@avs.lab | MSFTavs1! | 10.X.1Y.1/25 | 10.X.1Y.129/25

Example for Group number 1 with 4 participants

Group | Participant | vCenter IP | Username | Password | Web workload IP | App Workload IP
1 | 1 | 10.1.1.2 | administrator@avs.lab | MSFTavs1! | 10.1.11.1/25 | 10.1.11.129/25
1 | 2 | 10.1.2.2 | administrator@avs.lab | MSFTavs1! | 10.1.12.1/25 | 10.1.12.129/25
1 | 3 | 10.1.3.2 | administrator@avs.lab | MSFTavs1! | 10.1.13.1/25 | 10.1.13.129/25
1 | 4 | 10.1.4.2 | administrator@avs.lab | MSFTavs1! | 10.1.14.1/25 | 10.1.14.129/25

Credentials for the Workload VM/s

Username | root
Password | 1TestVM!!

1 - Module 1 Setup AVS Connectivity

Introduction Module 1

Azure VMware Solution offers a private cloud environment accessible from On-Premises and Azure-based resources. Services such as Azure ExpressRoute, VPN connections, or Azure Virtual WAN deliver the connectivity.

Scenario

The customer needs connectivity between their workloads in AVS, their existing services and workloads in Azure, and the internet.

Connectivity Options for AVS

This hands-on lab will show you how to configure the Networking components of an Azure VMware Solution for:

  • Connecting Azure VNets to AVS over an ExpressRoute circuit (Preconfigured).
  • Peering with remote environments using Global Reach (Not Applicable in this lab).
  • AVS Interconnect Options
  • Configuring NSX-T (check DNS and configure DHCP, Segments, and Gateway) to manage connectivity within AVS.

The lab environment has a preconfigured Azure VMware Solution environment with an Express Route circuit. A nested or embedded VMware environment is configured to simulate an On-Premises environment (PLEASE DO NOT TOUCH).

Both environments are accessible through the JumpBox VM that you deployed in Azure. You can RDP to the Jumpbox through a preconfigured Azure Bastion service.

After this lab is complete, you will have built out the scenario below:

  1. ExpressRoute connectivity between Azure VMware Solution and Azure Virtual Networks.
  2. NSX-T configured to establish connectivity within the AVS environment.
  3. Test VMs created and attached to your NSX-T network segments.
  4. Advanced NSX-T features explored: tagging, groups, and Distributed Firewall features.

1.1 - Module 1 Task 1

Task 1 - AVS Connectivity Options

AVS Connectivity Options

THIS IS FOR REFERENCE ONLY AS IT HAS BEEN PRECONFIGURED FOR THIS LAB.

Section Overview

In this section you will create a connection between an existing, non-AVS Virtual Network in Azure and the Azure VMware Solution environment. This allows the jumpbox virtual machine you created to manage key components in the VMware management plane such as vCenter, HCX, and NSX-T. You will also be able to access Virtual Machines deployed in AVS and allow those VMs to access resources deployed in the Hub or Spoke VNets, such as Private Endpoints and other Azure VMs or Services.


Summary: Generate a new Authorization Key in the AVS ExpressRoute settings, and then create a new Connection from the Virtual Network Gateway in the VNet to which the JumpBox is connected.

The diagram below shows the respective resource groups for your lab environment.

You will replace Name with Partner Name, for example: GPSUS-Name1-SDDC for partner XYZ would be GPSUS-XYZ1-SDDC.

Resource Groups

Option 1: Internal ExpressRoute Setup from AVS -> VNet

NOTE:

  • Since we already have a virtual network gateway, you’ll add a connection between it and your Azure VMware Solution private cloud.
  • The last step of this section is expected to fail: the Connection will be created, but it will be in a Failed state because another Connection to the same target already exists. This is expected behavior, and you can ignore the error.

Step 1: Request an ExpressRoute authorization key

ER Authorization Key

In your AVS Private Cloud:

  1. Click Connectivity.
  2. Click ExpressRoute tab.
  3. Click + Request an authorization key.

Step 2: Name Authorization Key

Request Authorization Key

  1. Give your authorization key a name: group-XY-key, where X is your group number, and Y is your participant number.
  2. Click Create. It may take about 30 seconds to create the key. Once created, the new key appears in the list of authorization keys for the private cloud. Copy the authorization key and ExpressRoute ID and keep them handy; you will need them to complete the peering. The authorization key disappears from the list after some time, so copy it as soon as it appears.
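
For reference, the same authorization key can be requested with the Azure CLI vmware extension; a minimal sketch with placeholder resource names:

az vmware authorization create \
  --resource-group "GPSUS-NameX-SDDC" \
  --private-cloud "<your-sddc-name>" \
  --name "group-XY-key"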

Step 3: Create connection in VNet Gateway

VNet Gateway Connection

  1. Navigate to the Virtual Network Gateway named GPSUS-Name#-Network where # is your group number.
  2. Click Connections.
  3. Click + Add.

Step 4: Establish VNet Gateway Setup

VNet Gateway connection setup

  1. Enter a Name for your connection. Use GROUPXY-AVS where X is your group number and Y is your participant number.

  2. For Connection type select ExpressRoute.

  3. Ensure the checkbox next to “Redeem authorization” is selected.

  4. Enter the Authorization key you copied earlier.

  5. For Peer circuit URI paste the ExpressRoute ID you copied earlier.

  6. Click OK.

    The connection between your ExpressRoute circuit and your Virtual Network is created.

    Reminder: It is expected that the connection will be in a Failed state after creation; that is because another connection to the same target already exists. Next, you will delete the connection.
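
For reference, the portal steps above roughly map to a single CLI call; a hedged sketch with placeholder values, where --express-route-circuit2 takes the ExpressRoute ID you copied and --authorization-key redeems the key:

az network vpn-connection create \
  --resource-group "<gateway-resource-group>" \
  --name "GROUPXY-AVS" \
  --vnet-gateway1 "GPSUS-Name#-Network" \
  --express-route-circuit2 "<expressroute-id>" \
  --authorization-key "<authorization-key>"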

Step 5: Delete connection

Delete connection

  1. Navigate to your Virtual Network Gateway named GPSUS-NameX-GW, where X is your group number.
  2. Click Connections.
  3. Click the ellipsis next to the connection with the status of Failed and select Delete.

Option 2: ExpressRoute Global Reach Connection from AVS -> Customer’s on-premises ExpressRoute

ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. The ExpressRoute Global Reach connection is established between the private cloud ExpressRoute circuit and an existing ExpressRoute connection to your on-premises environments. Click here for more information.

Step 1: ExpressRoute Circuits in Azure Portal

NOTE: There are no ExpressRoute circuits setup in this environment. These steps are informational only.

  1. In the Azure Portal search bar type ExpressRoute.
  2. Click ExpressRoute circuits.

Step 2: Create ExpressRoute Authorization

  1. From the ExpressRoute circuits blade, click Authorizations.
  2. Give your authorization key a Name.
  3. Click Save. Copy the Authorization Key created and keep it handy.
  4. Also copy the Resource ID for the ExpressRoute circuit and keep it handy.

Step 3: Create Global Reach Connection in AVS

  1. From your AVS Private Cloud blade, click Connectivity.
  2. Click ExpressRoute Global Reach.
  3. Click + Add.
  4. In the ExpressRoute circuit box, paste the Resource ID copied in the previous step.
  5. Paste the Authorization Key created in the previous step.
  6. Click Create.
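
If you were scripting this, the vmware CLI extension exposes a global-reach-connection command. The flag names below are best-effort assumptions; verify them with az vmware global-reach-connection create --help before use.

az vmware global-reach-connection create \
  --resource-group "<avs-resource-group>" \
  --private-cloud "<your-sddc-name>" \
  --name "<connection-name>" \
  --peer-express-route-circuit "<onprem-circuit-resource-id>" \
  --authorization-key "<authorization-key>"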

Option 3: AVS Interconnect

The AVS Interconnect feature lets you create a network connection between two or more Azure VMware Solution private clouds located in the same region. It creates a routing link between the management and workload networks of the private clouds to enable network communication between the clouds. Click here for more information.

Step 1: Establish AVS Interconnect in your AVS SDDC

  1. In your AVS Private Cloud blade, click Connectivity.
  2. Click AVS Interconnect.
  3. Click + Add.

Step 2: Add connection to another private cloud

  1. Subscription and Location are automatically populated based on the values of your Private Cloud, ensure these are correct.
  2. Select the Resource group of the other Private Cloud you would like to connect to.
  3. Select the AVS Private cloud you wish to connect.
  4. Ensure the checkbox next to “I confirm that the two private clouds to be connected don’t contain overlapping network address space” is selected.
  5. Click Create.

It takes a few minutes for the connection to complete. Once completed, the networks in both Private Clouds will be able to communicate with each other. Feel free to perform this exercise if no one in your group has done it yet, as Module 3 (Site Recovery Manager) requires a second Private Cloud to be connected.

Confirm access from Jumpbox

You can now validate access to your Azure VMware Solution components like vCenter and NSX-T from the Jumpbox you created.

Step 1: Obtain AVS Login Credentials

  1. Navigate to the Azure VMware Solution blade associated with your group: GPSUS-Name#-SDDC.
  2. Click your assigned AVS SDDC.
  3. Click Identity.
  4. You will now see the Login Credentials for both vCenter and NSX-T. You will need these credentials for the next few steps. You do not need to copy the Certificate thumbprint.

    PLEASE DO NOT GENERATE A NEW PASSWORD.

Step 2: Access AVS from Jumpbox

Click Connect and Bastion from the previously created Jumpbox blade.

Once connected to your Jumpbox, open a browser and enter the IP Address for AVS vCenter that you located in a previous step. A secure-connection warning may appear; click the Advanced button and select the option to continue. Then click LAUNCH VSPHERE CLIENT (HTML5).

If the VMware vSphere login page launches successfully, then everything is working as expected.

You’ve now confirmed that you can access AVS from a remote environment.

References

Tutorial - Configure networking for your VMware private cloud in Azure - Azure VMware Solution | Microsoft Docs

1.2 - Module 1 Task 2

Task 2: Configure NSX-T to establish connectivity within AVS

NSX-T on AVS

After deploying Azure VMware Solution, you can configure an NSX-T network segment from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manager, and vCenter.

NSX-T comes pre-provisioned by default with an NSX-T Tier-0 gateway in Active/Active mode and a default NSX-T Tier-1 gateway in Active/Standby mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity. Machines will not have IP addresses until statically or dynamically assigned from a DHCP server or DHCP relay.

In this Section, you will learn how to:

  • Create additional NSX-T Tier-1 gateways.

  • Add network segments using either NSX-T Manager or the Azure portal

  • Configure DHCP and DNS

  • Deploy Test VMs in the configured segments

  • Validate connectivity

In your Jumpbox, open a browser tab and navigate to the NSX-T URL found in the AVS Private Cloud blade in the Azure Portal. Login using the appropriate credentials noted in the Identity tab.

Exercise 1: Configure DNS Forwarder

NOTE: This task is done by default for every new AVS deployment.

AVS DNS forwarding services run in DNS zones and enable workload VMs in the zone to resolve fully qualified domain names to IP addresses. Your SDDC includes default DNS zones for the Management Gateway and Compute Gateway. Each zone includes a preconfigured DNS service. Use the DNS Services tab on the DNS Services page to view or update properties of DNS services for the default zones. To create additional DNS zones or configure additional properties of DNS services in any zone, use the DNS Zones tab.

The DNS Forwarder and DNS Zone are already configured for this training, but follow the steps to see how to configure them for new environments.

  1. Ensure the POLICY view is selected.
  2. Click Networking.
  3. Click DNS.
  4. Click DNS Services.
  5. Click the ellipsis button -> Edit to view/edit the default DNS settings.
  6. Examine the settings (do not change anything) and click CANCEL.

Exercise 2: Add DHCP Profile in AVS Private Cloud

Please make sure to replace X with your group’s assigned number and Y with your participant number. For participant 10, replace XY with 20.

AVS NSX-T Details
DHCP Server IP | 10.XY.50.1/30
Segment Name | WEB-NET-GROUP-XY
Segment Gateway | 10.XY.51.1/24
DHCP Range | 10.XY.51.4-10.XY.51.254

A DHCP profile specifies a DHCP server type and configuration. You can use the default profile or create others as needed.

A DHCP profile can be used to configure DHCP servers or DHCP relay servers anywhere in your SDDC network.

Step 1: Add DHCP Profile

  1. In the NSX-T Console, click Networking.
  2. Click DHCP.
  3. Click ADD DHCP PROFILE.

Step 2: Configure DHCP Profile

  1. Name the profile as DHCP-Profile-GROUP-XY-AVS for your respective group/participant.
  2. Ensure DHCP Server is selected.
  3. Specify the IPv4 Server IP Address as 10.XY.50.1/30 and optionally change the Lease Time or leave the default.
  4. Click SAVE.
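
The same profile can also be created against the NSX-T Policy REST API from the Jumpbox. This is a hedged sketch: the manager address and credentials are placeholders from the Identity tab, and the minimal DhcpServerConfig body assumes an NSX-T 3.x Policy API.

curl -k -u '<nsxt-user>:<nsxt-password>' \
  -X PATCH "https://<nsxt-manager>/policy/api/v1/infra/dhcp-server-configs/DHCP-Profile-GROUP-XY-AVS" \
  -H "Content-Type: application/json" \
  -d '{"resource_type": "DhcpServerConfig", "server_addresses": ["10.XY.50.1/30"]}'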

Exercise 3: Create an NSX-T T1 Logical Router

NSX-T has the concept of Logical Routers (LRs). These Logical Routers can perform both distributed and centralized functions. In AVS, NSX-T is deployed and configured with a default T0 Logical Router and a default T1 Logical Router. The T0 LR in AVS cannot be modified by AVS customers; the T1 LR, however, can be configured as the customer chooses. AVS customers also have the option to add additional T1 LRs as they see fit.

Step 1: Add Tier-1 Gateway

  1. Click Networking.
  2. Click Tier-1 Gateways.
  3. Click ADD TIER-1 GATEWAY.

Step 2: Configure Tier-1 Gateway

  1. Give your T1 Gateway a Name. Use GROUP-XY-T1.
  2. Select the default T0 Gateway, usually TNT**-T0.
  3. Click SAVE. Click NO to the question “Want to continue configuring this Tier-1 Gateway?”.
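
For reference, the equivalent Tier-1 gateway can be declared through the Policy API; a hedged sketch with a placeholder T0 path (the dhcp_config_paths field also performs the profile attachment that Exercise 4 walks through in the UI):

curl -k -u '<nsxt-user>:<nsxt-password>' \
  -X PATCH "https://<nsxt-manager>/policy/api/v1/infra/tier-1s/GROUP-XY-T1" \
  -H "Content-Type: application/json" \
  -d '{"resource_type": "Tier1", "tier0_path": "/infra/tier-0s/<tnt-t0-id>", "dhcp_config_paths": ["/infra/dhcp-server-configs/DHCP-Profile-GROUP-XY-AVS"]}'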

Exercise 4: Add the DHCP Profile to the T1 Gateway

  1. Click the ellipsis next to your newly created T1 Gateway.
  2. Click Edit.

Step 1: Set DHCP Configuration to Tier-1 Gateway

  1. Click Set DHCP Configuration.
  2. Ensure DHCP Server is selected for Type.
  3. Select the DHCP Server Profile you previously created.
  4. Click SAVE.
  5. After finishing the DHCP Configuration, click to expand Route Advertisement and make sure all the toggles are enabled.
  6. Click SAVE again to confirm changes, then click CLOSE EDITING.

Exercise 5: Create Network Segment for AVS VM workloads

Network segments are logical networks for use by workload VMs in the SDDC compute network. Azure VMware Solution supports three types of network segments: routed, extended, and disconnected.

  • A routed network segment (the default type) has connectivity to other logical networks in the SDDC and, through the SDDC firewall, to external networks.

  • An extended network segment extends an existing L2VPN tunnel, providing a single IP address space that spans the SDDC and an On-Premises network.

  • A disconnected network segment has no uplink and provides an isolated network accessible only to VMs connected to it. Disconnected segments are created when needed by HCX. You can also create them yourself and can convert them to other segment types.

Step 1: Add Network Segment

  1. Click Networking.
  2. Click Segments.
  3. Click ADD SEGMENT.

Step 2: Configure Network Segment

  1. Enter WEB-NET-GROUP-XY in the Segment Name field.
  2. Select the Tier-1 Gateway you created previously (GROUP-XY-T1) as the Connected Gateway.
  3. Select the pre-configured overlay Transport Zone (TNTxx-OVERLAY-TZ).
  4. In the Subnets column, you will enter the IP Address for the Gateway of the Subnet that you are creating, which is the first valid IP of the Address Space.
    • For Example: 10.XY.51.1/24
  5. Then click SET DHCP CONFIG.

Step 3: Set DHCP Configuration on Network Segment

  1. Ensure Gateway DHCP Server is selected for DHCP Type.
  2. In the DHCP Config click the toggle button to Enabled.
  3. Then in the DHCP Ranges field, enter the range according to the IPs assigned to your group. The range is in the same network as the Gateway defined above.
    • Use 10.XY.51.4-10.XY.51.254
  4. In the DNS Servers, enter the IP 1.1.1.1.
  5. Click Apply. Then SAVE and finally NO.

Important: The IP address range needs to be in a non-overlapping RFC 1918 address block, which ensures connectivity to the VMs on the new segment.
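
A hedged Policy API equivalent of Steps 2 and 3, with the transport zone path and field shapes assumed from NSX-T 3.x (substitute the overlay transport zone ID from your environment):

curl -k -u '<nsxt-user>:<nsxt-password>' \
  -X PATCH "https://<nsxt-manager>/policy/api/v1/infra/segments/WEB-NET-GROUP-XY" \
  -H "Content-Type: application/json" \
  -d '{
    "resource_type": "Segment",
    "connectivity_path": "/infra/tier-1s/GROUP-XY-T1",
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tntxx-overlay-tz-id>",
    "subnets": [{"gateway_address": "10.XY.51.1/24", "dhcp_ranges": ["10.XY.51.4-10.XY.51.254"]}]
  }'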


1.3 - Module 1 Task 3

Create Test VMs and connect to Segment

Create Test VMs

Now that we have our networks created, we can deploy virtual machines and ensure we can get an IP address from DHCP. Go ahead and log in to your AVS vCenter.

Exercise 1: Create a content Library

Step 1: Create vCenter Content Library

  1. From AVS vCenter, click the Menu bars.
  2. Click Content Libraries.

Click CREATE

Step 2: Give your Content Library a Name and Location

  1. Name your Library LocalLibrary-XY, where X is your group number and Y is your participant number.
  2. Click NEXT.
  3. Leave the defaults for Configure content library and for Apply security policy.

Step 3: Specify Datastore for Content Library

  1. For Add storage, select the vsanDatastore.
  2. Click NEXT, then FINISH.

Exercise 2: Import Item to Content Library

Step 1: Import OVF/OVA to Content Library

  1. Click on your newly created Library and click Templates.
  2. Click OVF & OVA Templates
  3. Click ACTIONS
  4. Click Import item

Step 2: Specify URL for OVF/OVA

Import using this URL - Download Link

https://gpsusstorage.blob.core.windows.net/ovas-isos/workshop-vm.ova

This will now download and import the VM to the library

Exercise 3: Create VMs

Step 1: Create VM from Template

Once downloaded, Right-click the VM Template > New VM from This Template.

Step 2: Select a Name and Folder for the VM

  1. Give the VM a name, e.g., VM1-AVS-XY.
  2. Select the SDDC-Datacenter
  3. Click NEXT

Step 3: Select a Compute Resource

  1. Select Cluster-1
  2. Click NEXT

Step 4: Review Details, select Datastore

  1. Review Details and click NEXT. Accept the terms and click NEXT
  2. Confirm the storage as the vsanDatastore
  3. Click NEXT

Step 5: Select network for VM

Select the segment that you created previously, WEB-NET-GROUP-XY, and click NEXT. Then review and click FINISH.

Once deployed, head back to VMs and Templates and Power On this newly created VM. This VM is a very lightweight Linux machine that will automatically pick up a DHCP address if one is available. Since we added it to the WEB-NET-GROUP-XY segment, it should get an IP address from that DHCP range. This usually takes a few seconds. Click the Refresh button on the vCenter toolbar.

If you see an IP address here, the VM is configured correctly: it has connected to the segment and will be accessible from the Jumpbox.

We can confirm this by SSHing to this IP address from the Jumpbox.

Username: root

Password: AVSR0cks!

YOU MAY BE ASKED TO CHANGE THE PASSWORD OF THE ROOT USER ON THE VM, CHANGE IT TO A PASSWORD OF YOUR CHOOSING, JUST REMEMBER WHAT THAT PASSWORD IS.

Once you SSH into the VMs, enter these 2 commands to enable ICMP traffic on the VM:

iptables -A OUTPUT -p icmp -j ACCEPT

iptables -A INPUT -p icmp -j ACCEPT
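
Once ICMP is allowed on both VMs, you can verify connectivity between them; the target address below is a placeholder for the DHCP address the other VM received from the 10.XY.51.4-10.XY.51.254 range:

# Run from one VM (or from the Jumpbox) against the other VM's address
ping -c 4 <other-vm-ip>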

PLEASE REPEAT THESE STEPS AND CREATE A SECOND VM CALLED ‘VM2-AVS-XY’

1.4 - Module 1 Task 4

Task 4: Advanced NSX-T Features within AVS

Section Overview:

You can find more information about NSX-T capabilities in VMware’s website under VMware NSX-T Data Center Documentation.

In this Section, you will learn just a few additional NSX-T Advanced Features. You will learn how to:

  • Create NSX-T tags for VMs

  • Create NSX-T groups based on tags

  • Create Distributed Firewall Rules in NSX-T

NSX-T Tags

NSX-T Tags help you label NSX-T Data Center objects so that you can quickly search or filter objects, troubleshoot and trace, and do other related tasks.

You can create tags using both the NSX-T UI available within AVS and APIs.

More information on NSX-T Tags can be found here: VMware NSX-T Data Center Documentation.

Exercise 1: Assign NSX-T Tags to VMs

Step 1: Assign Tags to your VMs

  1. From the NSX-T UI, click Inventory.
  2. Click Virtual Machines.
  3. Locate the 2 Virtual Machines you created in the previous task; notice they have no tags.

Click the ellipsis next to the first VM and click Edit.

Step 2: Name your VM’s tag

  1. Type “GXY”, where X is your group number and Y is your participant number.
  2. Click to add GXY as a tag to this VM.
  3. Click SAVE.

REPEAT THE ABOVE STEPS FOR VM2 USING THE SAME TAG.

NSX-T Groups

Groups include different objects that are added both statically and dynamically, and can be used as the source and destination of a firewall rule.

Groups can be configured to contain a combination of Virtual Machines, IP sets, MAC sets, segment ports, segments, AD user groups, and other groups. Dynamic inclusion in groups can be based on a tag, machine name, OS name, or computer name.

You can find more information on NSX-T Groups on VMware’s NSX-T Data Center docs.

Exercise 2: Create NSX-T Groups

Step 1: Create an NSX-T Group

Now that we’ve assigned tags to the VMs, we’ll create a group based on those tags.

  1. Click Inventory
  2. Click Groups
  3. Click ADD GROUP

Step 2: Name your NSX-T Group and Assign Members

  1. Name your group GROUP-XY, where X is your group number and Y is your participant number.
  2. Click Set Members.

Step 3: Set the Membership Criteria for your Group

  1. Click ADD CRITERIA
  2. Select Virtual Machine
  3. Select Tag
  4. Select Equals
  5. Select your previously created tag GXY.
  6. Click APPLY. Then click SAVE
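
For reference, the same tag-based group can be declared through the Policy API. A hedged sketch: depending on how the tag was created, the Condition value may need the "scope|tag" form instead of the bare tag name.

curl -k -u '<nsxt-user>:<nsxt-password>' \
  -X PATCH "https://<nsxt-manager>/policy/api/v1/infra/domains/default/groups/GROUP-XY" \
  -H "Content-Type: application/json" \
  -d '{
    "resource_type": "Group",
    "expression": [{
      "resource_type": "Condition",
      "member_type": "VirtualMachine",
      "key": "Tag",
      "operator": "EQUALS",
      "value": "GXY"
    }]
  }'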

NSX-T Distributed Firewall

NSX-T Distributed Firewall monitors all East-West traffic on your AVS Virtual Machines and allows you to either deny or allow traffic between these VMs, even if they exist on the same NSX-T Network Segment. This is the case for your 2 VMs: we will assume they are 2 web servers that should never have to talk to each other. More information can be found here: Distributed Firewall.

Exercise 3: Create an NSX-T Distributed Firewall Policy

Step 1: Add a Policy

  1. Click Security
  2. Click Distributed Firewall
  3. Click + ADD POLICY
  4. Give your policy a name, “Policy XY”, where X is your group number and Y is your participant number.
  5. Click the ellipsis and select Add Rule.

Step 2: Add a Rule

  1. Name your rule “Rule XY”.
  2. Click under the Sources column and select your newly created “GROUP-XY”.
  3. Click under Destinations and also select “GROUP-XY”.
  4. Leave all other defaults and, for now, leave the Action set to Allow. We will change this later to understand the behavior.
  5. Click PUBLISH to publish the newly created Distributed Firewall Rule.
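
A hedged Policy API equivalent of the policy and rule, assuming the Application category and NSX-T 3.x field names (change "action" to "REJECT" or "DROP" later to mirror Step 3):

curl -k -u '<nsxt-user>:<nsxt-password>' \
  -X PATCH "https://<nsxt-manager>/policy/api/v1/infra/domains/default/security-policies/Policy-XY" \
  -H "Content-Type: application/json" \
  -d '{
    "resource_type": "SecurityPolicy",
    "category": "Application",
    "rules": [{
      "resource_type": "Rule",
      "id": "Rule-XY",
      "source_groups": ["/infra/domains/default/groups/GROUP-XY"],
      "destination_groups": ["/infra/domains/default/groups/GROUP-XY"],
      "services": ["ANY"],
      "action": "ALLOW"
    }]
  }'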

Step 3: Reject communication in your Distributed Firewall Rule

Hopefully you still have the SSH sessions open to the 2 VMs you created earlier. If not, just SSH in again. From one of the VMs, run a continuous ping to the other VM’s IP address.

  1. Change the Action in your Rule to Reject
  2. Click PUBLISH

Notice that after publishing the change to Reject, the ping now displays “Destination Host Prohibited”. The NSX-T DFW allows the packet to travel from one VM to the other, but it is rejected once the second VM receives it, and the sender is notified. You can also change this option to Drop, in which case the packet is silently dropped at the second VM.

2 - Module 2 - HCX for VM Migration

Module 2: Deploy HCX for VM Migration

Introduction to VMware HCX

VMware HCX™ is an application mobility platform designed for simplifying application migration, workload rebalancing and business continuity across data centers and clouds. HCX supports the following types of migrations:

  • Cold Migration - Offline migration of VMs.
  • Bulk Migration - scheduled bulk VM (vSphere, KVM, Hyper-V) migrations with reboot – low downtime.
  • HCX vMotion - Zero-downtime live migration of VMs – limited scale.
  • Cloud to Cloud Migrations – direct migrations between VMware Cloud SDDCs moving workloads from region to region or between cloud providers.
  • OS Assisted Migration – bulk migration of KVM and Hyper-V workloads to vSphere (HCX Enterprise feature).
  • Replication Assisted vMotion - Bulk live migrations with zero downtime combining HCX vMotion and Bulk migration capabilities (HCX Enterprise feature).

In this module, we will go through the steps to install HCX, configure it, and migrate a test VM to Azure VMware Solution (AVS).

For more information on HCX, please visit VMware’s HCX Documentation.

HCX Setup for Azure VMware Solution (AVS)

Prerequisites

  • Ensure that Module 1 has been completed successfully as this will be required to connect HCX from AVS to the On-Premises Lab.
  • Ability to reach the vCenter portals from the Jumpbox VM:
    • AVS vCenter: Get IP from Azure Portal - AVS blade
    • On-premises vCenter: 10.X.Y.2

Remember that X is your group number and Y your participant number.

2.1 - Module 2 Task 1

Task 1: Install VMware HCX on AVS Private Cloud

Exercise 1: Enable HCX on AVS Private Cloud

In the following task, we will be installing HCX on your AVS Private Cloud. This is a simple process from the Add-ons section in the Azure Portal, or via Bicep/ARM/PowerShell.

NOTE: This task may or may not have been completed for you in your AVS environment. Only one participant per group can enable HCX in the SDDC so if you’re not the first participant in the group to enable HCX, just use these instructions for reference.

Step 1: Navigate to your SDDC

Navigate to your SDDC in Azure portal

  1. Navigate to the Azure Portal, search for Azure VMware Solution in the search bar.
  2. Click on Azure VMware Solution.

Step 2: Locate your AVS SDDC

Locate your AVS SDDC in Azure portal

  1. Select the private cloud assigned to you or your group.

Step 3: Enable HCX on your AVS Private Cloud

Enable HCX on your AVS Private Cloud

  1. From your Private Cloud blade, click on + Add-ons.
  2. Click Migration using HCX.
  3. Select the checkbox to agree with terms and conditions.
  4. Click Enable and deploy.

HCX will start getting deployed in your AVS Private Cloud and it should take about 10-20 minutes to complete.
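
Since the text above mentions Bicep/ARM/PowerShell, here is a hedged Azure CLI sketch of the same enablement; the offer name depends on your HCX tier and should be verified for your environment:

az vmware addon hcx create \
  --resource-group "<avs-resource-group>" \
  --private-cloud "<your-sddc-name>" \
  --offer "VMware-HCX-Advanced"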

2.2 - Module 2 Task 2

Task 2: Download the HCX OVA to On-Premises vCenter

Exercise 1: Download HCX OVA for Deployment of HCX on-premises

The next step is to download HCX to our On-Premises VMware environment; this will allow us to set up the connectivity to AVS and migrate workloads. The HCX appliance is provided by VMware and has to be requested from within the AVS HCX Manager.

Step 1: Locate AVS SDDC Identity Information

Locate AVS SDDC Identity Information

  1. Obtain the AVS vCenter credentials by going to your AVS Private Cloud blade in the Azure portal and selecting VMware credentials.
  2. cloudadmin@vsphere.local is the local vCenter user for AVS; keep this handy.
  3. You can copy the Admin password to your clipboard and keep it handy as well.

Step 2: Locate HCX Cloud Manager IP

Locate HCX Cloud Manager IP

  1. In your AVS Private Cloud blade, click + Add-ons.
  2. Click Migration using HCX.
  3. Copy the HCX Cloud Manager IP URL, open a new browser tab and paste it, and enter the cloudadmin credentials obtained above.

The Request Download Link button will be grayed out initially but will be live after a minute or two. Do not navigate away from this page. Once available, you will have an option to Download the OVA or Copy a Link.

This link is valid for 1 week.

2.3 - Module 2 Task 3

Task 3: Import the OVA file to the On-Premises vCenter

Import the OVA file to the On-Premises vCenter

In this step we will import the HCX appliance into the on-premises vCenter.

You can choose to do this Task in 2 different ways:

Step 1: Obtain your AVS vCenter Server credentials

Obtain your AVS vCenter Server credentials

  1. In your AVS Private Cloud blade click Identity.
  2. Locate and save both vCenter admin username cloudadmin@vsphere.local and password.

Step 2: Locate HCX Cloud Manager IP

Locate HCX Cloud Manager IP

  1. Click on + Add-ons.
  2. Copy the HCX Cloud Manager IP.

Step 3: Log in to HCX Cloud Manager IP

Log in to HCX Cloud Manager IP

Open a browser tab and paste the HCX Cloud Manager IP and enter the credentials obtained in the previous step.

  1. In the left pane click System Updates.

Request Download Link for HCX OVA

  1. Click REQUEST DOWNLOAD LINK, please keep in mind that the button might take a couple of minutes to become enabled.

HCX download links

Option 1: Download and deploy HCX OVA to on-premises vCenter

  1. Click VMWARE HCX to download the HCX OVA locally.

Option 2: Deploy HCX from a vCenter Content Library

  1. Click COPY LINK if you will install HCX with this method.

Step 1: Access Content Libraries from on-premises vCenter

Browse to the on-premises vCenter URL; see the Getting Started section for more information and login details.

Access Content Libraries from on-premises vCenter

  1. From your on-premises vCenter click Menu.
  2. Click Content Libraries.

Step 2: Create a new Content Library

Create a new Content Library

Create a new local content library if one doesn’t exist by clicking the + sign.

Step 3: Import Item to Content Library

Import Item to Content Library

  1. Click ACTIONS.
  2. Click Import Item.

Enter the HCX URL copied in a previous step

  1. Enter the HCX URL copied in a previous step.
  2. Click IMPORT.

Accept any prompts and actions and proceed. The HCX OVA will download to the library in the background.

2.4 - Module 2 Task 4

Task 4: Deploy the HCX OVA to On-Premises vCenter

Deploy HCX OVA

In this step, we will deploy the HCX VM with the configuration from the On-Premises VMware Lab Environment section.

Step 1: Deploy HCX connector VM

If Option 1: Deploy OVA from download.

Deploy OVF Template from download

  1. Right-click Cluster-1.
  2. Click Deploy OVF Template.

Select the OVA Template

  1. Click the button to point to the location of the downloaded OVA for HCX.
  2. Click NEXT.

If Option 2: Deploy HCX from Content Library

Create new VM from Template

  1. Once the import is completed from the previous task, click Templates.
  2. Right Click the imported HCX template.
  3. Click New VM from This Template.

Select a Name and Folder

  1. Give your HCX Connector a name: HCX-OnPrem-X-Y, where X is your group number and Y is your participant number.
  2. Click NEXT.

Step 2: Name the HCX Connector VM

Name the HCX Connector VM

  1. Give your HCX Connector a name: HCX-OnPrem-X-Y, where X is your group number and Y is your participant number.
  2. Click NEXT.

Step 3: Assign the network to your HCX Connector VM

Assign the network to your HCX Connector VM

Keep the defaults for:

  • Compute Resource
  • Review details
  • License agreements (Accept)
  • Storage (LabDatastore)
  1. Click to select management network.
  2. Click NEXT.

Step 4: Customize template

Customize template

Property | Value
Hostname | Suggestion: HCX-OnPrem-X-Y (Note: do not leave a space in the name, as this causes the webserver to fail)
CLI “admin” User Password/root Password | MSFTavs1!
Network 1 IPv4 Address | 10.X.Y.9
Network 1 IPv4 Prefix Length | 27
Default IPv4 Gateway | 10.X.Y.1
DNS Server list | 1.1.1.1

Step 5: Validate deployment

Once done, navigate to Menu > VMs and Templates and Power On the newly created HCX Manager VM.

The boot process may take 10-15 minutes to complete.

2.5 - Module 2 Task 5

Task 5: Obtain HCX License Key

Obtain HCX License Key

While the HCX installation runs, we will need to obtain a license key to activate HCX. This is available from the AVS blade in the Azure Portal.

Step 1: Create HCX Key from Azure Portal

Create HCX Key from Azure Portal

  1. Click + Add-ons.
  2. Click + Add.
  3. Give your HCX Key a name: HCX-OnPrem-X-Y, where X is your group number and Y your participant number.
  4. Click Yes.

Save the key; you will need it to activate HCX in your on-premises setup.
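
For reference, HCX keys are surfaced in the Azure CLI vmware extension as hcx-enterprise-site resources; a hedged sketch with placeholder names:

az vmware hcx-enterprise-site create \
  --resource-group "<avs-resource-group>" \
  --private-cloud "<your-sddc-name>" \
  --name "HCX-OnPrem-X-Y"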

2.6 - Module 2 Task 6

Task 6: Activate VMware HCX

Activate VMware HCX

In this task, we will activate the On-Premises HCX appliance that we just deployed in Task 4.

Step 1: Log in to HCX Appliance Management Interface

Log in to HCX Appliance Management Interface

  1. Browse to the On-Premises HCX Manager IP specified in Task 4 on port 9443 (make sure you use https:// in the address bar, e.g., https://10.X.Y.9:9443).
  2. Login using the HCX Credentials specified in Task 4.
    • Username: admin
    • Password: MSFTavs1! (Specified earlier in Task 4).

Step 2: Enter HCX Key

Enter HCX Key

Once logged in, follow the steps below.

  1. Don’t change the HCX Activation Server field. Please keep as is.
  2. In the HCX License Key field, enter the HCX Activation Key that you obtained from the AVS blade in the Azure Portal.
  3. Lastly, select Activate. Please keep in mind that this process can take several minutes.

Step 3: Enter Datacenter Location, System Name

Enter Datacenter Location, System Name

In Datacenter Location, provide the nearest major city to your location for installing the VMware HCX Manager On-Premises. Then select Continue. In System Name, change the name to HCX-OnPrem-X-Y and click Continue.

Note: The city location does not matter in this lab. It’s just a named location for visualization purposes.

Step 4: Continue to complete configuration

Continue to complete configuration

Click “YES, CONTINUE” to complete the next task. After a few minutes, HCX should be successfully activated.

2.7 - Module 2 Task 7

Task 7: Configure HCX and connect to vCenter

Configure On-Premises HCX

In this section, we will integrate and configure HCX Manager with the On-Premises vCenter Server.

Step 1: Connect vCenter Server

Connect your vCenter Server

  1. In Connect your vCenter, provide the FQDN or IP address of on-premises vCenter server and the appropriate credentials.
  2. Click CONTINUE.

Step 2: Configure SSO/PSC

Configure SSO/PSC

  1. In Configure SSO/PSC, provide the same vCenter IP address: https://10.X.Y.2
  2. Click CONTINUE.

Step 3: Restart HCX Appliance

Restart HCX appliance to save changes

Verify that the information entered is correct and select RESTART.

HCX configuration dashboard after restart

  1. After the services restart, you’ll see vCenter showing as Green on the screen that appears. Both vCenter and SSO must have the appropriate configuration parameters, which should be the same as the previous screen.
  2. Next, click on Configuration to complete the HCX configuration.

Edit the HCX Administrator role mapping configuration

  1. Click Configuration.
  2. Click HCX Role Mapping.
  3. Click Edit.
  4. Change User Groups value to match lab SSO configuration: avs.lab\Administrators
  5. Save changes.

Please note that by default HCX assigns the HCX administrator role to “vsphere.local\Administrators”. In real life, customers will have an SSO domain different from vsphere.local, so this mapping needs to be changed. That is the case for this lab, where it needs to be changed to avs.lab.

2.8 - Module 2 Task 8

Task 8: Create Site Pairing from On-premises HCX to AVS HCX

HCX Site Pairing

In this task, we will be creating the Site Pairing to connect the On-Premises HCX appliance to the AVS HCX appliance.

Step 1: Access On-Premises HCX

Refresh On-Premises vCenter UI to load HCX plugin

There are 2 ways to access HCX:

  1. Through the vCenter server plug-in. Click Menu -> HCX.
  2. Through the stand-alone UI. Open a browser tab and go to your local HCX Connector IP: https://10.X.Y.9. In either case, log in with your vCenter credentials:
    • Username: administrator@avs.lab
    • Password: MSFTavs1!

NOTE: If working through vCenter Server, you may see a banner item to Refresh the browser, this will load the newly installed HCX modules. If you do not see this, log out and log back into vCenter.

Step 2: Connect to Remote Site

Connect to remote site

  1. Click Site Pairing in the left pane.
  2. Click CONNECT TO REMOTE SITE.

Step 3: Enter Remote (AVS) HCX Information

Remote site configuration wizard

  1. Enter credentials for your AVS vCenter found in the Azure Portal. The Remote HCX URL is found under the Add-ons blade and it is NOT the vCenter URL.
  2. Click CONNECT.
  3. Accept the certificate warning and click IMPORT.

NOTE: Ideally the identity provided in this step should be an AD based credential with delegation instead of the cloudadmin account.

Established site pairing

Connection to the remote site will be established.

2.9 - Module 2 Task 9

Task 9: Create network profiles

HCX Network Profiles

A Network Profile is an abstraction of a Distributed Port Group, Standard Port Group, or NSX Logical Switch, and the Layer 3 properties of that network. A Network Profile is a sub-component of a complete Compute Profile.

Customers’ environments may vary and may not have separate networks.

In this Task you will create a Network Profile for each network intended to be used with HCX services. More information can be found in VMware’s Official Documentation, Creating a Network Profile.

  • Management Network - The HCX Interconnect Appliance uses this network to communicate with management systems like the HCX Manager, vCenter Server, ESXi Management, NSX Manager, DNS, NTP.
  • vMotion Network - The HCX Interconnect Appliance uses this network for the traffic exclusive to vMotion protocol operations.
  • vSphere Replication Network - The HCX Interconnect Appliance uses this network for the traffic exclusive to vSphere Replication.
  • Uplink Network - The HCX Interconnect appliance uses this network for WAN communications, like TX/RX of transport packets.

These networks have been defined for you; please see the section below.

In a real customer environment, these will have been planned and identified previously, see here for the planning phase.

Step 1: Create 4 Network Profiles

Create Network Profile

  1. Click Interconnect.
  2. Click Network Profiles.
  3. Click CREATE NETWORK PROFILE.

In this lab, these are in the Network Profile Information section.

We will create 4 separate network profiles:

Step 2: Enter information for each Network Profile

Enter information for each Network Profile

  1. Select Distributed Port Groups.
  2. Select Management Network.
  3. Enter the Management Network IP range from the table below. Remember to replace X with your group number and Y with your participant number. Repeat the same steps for the Replication, vMotion, and Uplink Network profiles.
  4. Ensure you select the appropriate checkboxes depending on the type of Network Profile you’re creating.

You should create a total of 4 Network Profiles.

Network Profile Information

Management Network Profile

Property | Value
Management Network IP | 10.X.Y.10-10.X.Y.16
Prefix Length | 27
Management Network Gateway | 10.X.Y.1

Uplink Network Profile

Property | Value
Uplink Network IP | 10.X.Y.34-10.X.Y.40
Prefix Length | 28
Uplink Network Gateway | 10.X.Y.33
DNS | 1.1.1.1

vMotion Network Profile

Property | Value
vMotion Network IP | 10.X.Y.74-10.X.Y.77
Prefix Length | 27
vMotion Network Gateway | 10.X.Y.65
DNS | 1.1.1.1

Replication Network Profile

Property | Value
Replication IP | 10.X.Y.106-10.X.Y.109
Prefix Length | 27
Replication Network Gateway | 10.X.Y.97
DNS | 1.1.1.1

2.10 - Module 2 Task 10

Task 10: Create compute profiles

HCX Compute Profile

A compute profile contains the compute, storage, and network settings that HCX uses on this site to deploy the interconnect-dedicated virtual appliances when a service mesh is added. For more information on compute profiles and their creation, please refer to the VMware documentation.

Step 1: Compute Profile Creation

Compute Profile Creation

  1. In your on-premises HCX installation, click Interconnect.
  2. Click Compute Profiles.
  3. Click CREATE COMPUTE PROFILE.

Step 2: Name Compute Profile

Name Compute Profile

  1. Give your Compute Profile a Name. Suggestion: OnPrem-CP-X-Y, where X is your group number and Y is your participant number.
  2. Click CONTINUE.

Step 3: Select Services for Compute Profile

Select Services for Compute Profile

  1. Review the selected services. By default, all of the above services are selected. In a real-world scenario, if a customer does not need Network Extension, for example, you would unselect that service here. Leave all defaults for the purpose of this workshop.
  2. Click CONTINUE.

Step 4: Select Service Resources

Select Service Resources

  1. Click the arrow next to Select Resource(s).
  2. In this on-premises simulation, you only have one Cluster, called OnPrem-SDDC-Datacenter-X-Y. In a real-world scenario, it’s likely your customer will have more than one Cluster. HCX Service Resources are the resources from which you’d like HCX to migrate or protect VMs. Select the top-level OnPrem-SDDC-Datacenter-X-Y.
  3. Click OK.
  4. Click CONTINUE.

Step 5: Select Deployment Resources

Select Deployment Resources

  1. Click the arrow next to Select Resource(s). Here you will be selecting the Deployment Resource, which is where the additional HCX appliances needing to be installed will be placed in the on-premises environment. Select OnPrem-SDDC-Cluster-X-Y.
  2. For Select Datastore click and select the LabDatastore that exists in your simulated on-premises environment. This will be the on-premises Datastore the additional HCX appliances will be placed in.
  3. (Optional) click to Select Folder in the on-premises vCenter Server where to place the HCX appliances. You can select vm for example.
  4. Under Interconnect Appliance Reservation Settings, you would set CPU/Memory Reservations for these appliances in your on-premises vCenter Server.
    • Leave the default 0% value.
  5. Click CONTINUE.

Step 6: Select Management Network Profile

Select Management Network Profile

  1. Select the Management Network Profile you created in a previous step.
  2. Click CONTINUE.

Step 7: Select Uplink Network Profile

  1. Select the Management Network Profile you created in a previous step. DO NOT select the uplink network profile, this network profile was created to simulate what an on-premises environment might look like, but the only functional uplink network for this lab is the Management Network.
  2. Click CONTINUE.

Step 8: Select vMotion Network Profile

Select vMotion Network Profile

  1. Select the vMotion Network Profile you created in a previous step.
  2. Click CONTINUE.

Step 9: Select vSphere Replication Network Profile

Select vSphere Replication Network Profile

  1. Select the vSphere Replication Network Profile you created in a previous step.
  2. Click CONTINUE.

Step 10: Select Network Containers

Select Network Containers

  1. Click the arrow next to Select Network Containers.
  2. Select the virtual distributed switch you’d like to make eligible for Network Extension.
  3. Click CLOSE.
  4. Click CONTINUE.

Step 11: Review Connection Rules

Review Connection Rules

  1. Review the connection rules.
  2. Click CONTINUE.

Step 12: Finish creation of Compute Profile

Finish creation of Compute Profile

Click FINISH to create the compute profile.

Compute Profile is created successfully

Your Compute Profile is created successfully.

2.11 - Module 2 Task 11

Task 11: Create a service mesh

HCX Service Mesh Creation

An HCX Service Mesh is the effective HCX services configuration for a source and destination site. A Service Mesh can be added to a connected Site Pair that has a valid Compute Profile created on both of the sites.

Adding a Service Mesh initiates the deployment of HCX Interconnect virtual appliances on both sites. An interconnect Service Mesh is always created at the source site.

More information can be found in VMware’s Official Documentation, Creating a Service Mesh.

Step 1: Create Service Mesh

Create Service Mesh

  1. Click Interconnect.
  2. Click Service Mesh.
  3. Click CREATE SERVICE MESH.

Step 2: Select Sites

Select Sites

  1. Select the source site (on-premises).
  2. Select the destination site (AVS).
  3. Click CONTINUE.

Step 3: Select Compute Profiles

Select Compute Profiles

  1. Click to select Source Compute Profile which you recently created, click CLOSE.
  2. Click to select Remote Compute Profile from AVS side, click CLOSE.
  3. Click CONTINUE.

Step 4: Select Services to be Activated

Select Services to be Activated

Leave the Default Services and click CONTINUE.

Step 5: Advanced Configuration - Override Uplink Network Profiles

  1. Click to select the previously created Source Management Network Profile, click CLOSE. Even though you created an Uplink Network Profile, for the purpose of this lab, the management network is used for uplink.
  2. Click to select the Destination Uplink Network Profile (usually TNTXX-HCX-UPLINK), click CLOSE.
  3. Click CONTINUE.

Step 6: Advanced Configuration: Network Extension Appliance Scale Out

Advanced Configuration: Network Extension Appliance Scale Out

In Advanced Configuration – Network Extension Appliance Scale Out, keep the defaults and then click CONTINUE.

Step 7: Advanced Configuration - Traffic Engineering

Advanced Configuration - Traffic Engineering

In Advanced Configuration – Traffic Engineering, review, leave the defaults and click CONTINUE.

Step 8: Review Topology Preview

Review Topology Preview

Review the topology preview and click CONTINUE.

Step 9: Ready to Complete

Ready to Complete

  1. Enter a name for your Service Mesh (SUGGESTION: HCX-OnPrem-X-Y, where X is your group number, Y your participant number).
  2. Click FINISH.

Note: the appliance names are derived from the service mesh name (it’s the appliance prefix, essentially).

Step 10: Confirm Successful Deployment

Confirm Successful Deployment

The Service Mesh deployment will take 5-10 minutes to complete. Once successful, you will see the services as green. Click on VIEW APPLIANCES.

Confirm Successful Deployment - finished

  1. You can also navigate by clicking Interconnect - Service Mesh.
  2. Click Appliances.
  3. Check for Tunnel Status = UP.

You’re ready to migrate and protect on-premises VMs to Azure VMware Solution using VMware HCX. Azure VMware Solution supports workload migrations with or without a network extension, so you can migrate workloads in your vSphere environment while continuing to create networks and deploy VMs onto those networks On-Premises.

For more information, see the VMware HCX Documentation.

2.12 - Module 2 Task 12

Task 12: Network Extension

HCX Network Extension

You can extend networks between an HCX-activated on-premises environment and Azure VMware Solution (AVS) with HCX Network Extension.

With VMware HCX Network Extension (HCX-NE), you can extend a VM’s network to a VMware HCX remote site like AVS. VMs that are migrated to, or created on, the extended network at the remote site behave as if they exist on the same L2 network segment as VMs in the source (on-premises) environment. With Network Extension from HCX, the default gateway for an extended network is only connected at the source site. Traffic from VMs at the remote site that must be routed to a different L3 network will flow through the source site gateway.

With VMware HCX Network Extension you can:

  • Retain the IP and MAC addresses of the VMs and honor existing network policies.
  • Extend VLAN-tagged networks from a VMware vSphere Distributed Switch.
  • Extend NSX segments.

For more information please visit VMware’s documentation for Extending Networks with VMware HCX.

Once the Service Mesh appliances have been deployed, the next important step is to extend the on-premises network(s) to AVS, so that any migrated VMs will be able to retain their existing IP addresses.

Step 1: Network Extension Creation

Start Network Extension creation wizard

  1. Click Network Extension.
  2. Click CREATE A NETWORK EXTENSION.

Step 2: Select Source Networks to Extend

Select Source Networks to Extend

  1. Select the Service Mesh - ensure you select the Service Mesh you created in an earlier step.
  2. Select OnPrem-workload-X-Y network.
  3. Click NEXT.

Step 3: Configure Network Extension

Configure Network Extension settings

  1. Destination First Hop Router
    • If applicable, ensure your own NSX-T T1 router you created earlier is selected.
    • Otherwise, select the TNT**-T1 router.
  2. Enter the Gateway IP Address / Prefix Length for the OnPrem-workload-X-Y network. You can find this information in the On-Premises Lab Environment section.
    • Example: 10.X.1Y.129/25, where X is your group number and Y is your participant number.
  3. Ensure your own Extension Appliance is selected.
  4. Confirm your own T1 is selected under Destination First Hop Router.
  5. Click SUBMIT.

It might take 5-10 minutes for the Network Extension to complete.

Step 4: Confirm Status of Network Extension

Confirm Status of Network Extension

Confirm the status of the Network Extension as Extension complete.

2.13 - Module 2 Task 13

Task 13: Migrate a VM using HCX vMotion

Migrate a VM using HCX vMotion

Now that your Service Mesh has successfully deployed the additional appliances HCX will utilize, you can migrate VMs from your on-premises environment to AVS. In this module, you will use HCX vMotion to migrate a test VM called Workload-X-Y-1 that has been pre-created for you in your simulated on-premises environment.

Exercise 1: Migrate VM to AVS

Step 1: Examine VM to be migrated

Examine VM to be migrated

  1. Click the VMs and Templates icon in your on-premises vCenter Server.
  2. You will find the VM named Workload-X-Y-1, select it.
  3. Notice the IP address assigned to the VM; this should be consistent with the network you stretched using HCX in a previous exercise.
  4. Notice the name of the Network this VM is connected to: OnPrem-workload-X-Y.
  5. (Optional) You can start a ping sequence to check the connectivity from your workstation to the VM’s IP address.

Step 2: Access HCX Interface

Access HCX Interface

  1. From the vCenter Server interface, click Menu.
  2. Click HCX.

You can also access the HCX interface by using its standalone interface (outside vCenter Server interface) by opening a browser tab to: https://10.X.Y.9, where X is your group number and Y is your participant number.

Step 3: Initiate VM Migration

Initiate VM Migration

  1. From the HCX interface click Migration in the left pane.
  2. Click MIGRATE.

Step 4: Select VMs for Migration

Select VMs for Migration

  1. Search for the location of your VM.
  2. Click the checkbox to select your VM named Workload-X-Y-1.
  3. Click ADD.

Step 5: Transfer and Placement of VM on Destination Site

Transfer and Placement of VM on Destination Site

Transfer and Placement options can be entered in 2 different ways:

  1. If you’ve selected multiple VMs to be migrated and all VMs will be placed/migrated with the same options, setting the options in the area with the green background applies them to all VMs.
  2. Alternatively, the options can be set individually per VM, and they can differ from one VM to another.
  3. Click either the GO or VALIDATE button. Clicking VALIDATE will validate that the VM can be migrated (this will not migrate the VM). Clicking GO will both validate and migrate the VM.

Use the following values for these options:

Option | Value
Compute Container | Cluster-1
Destination Folder | Discovered virtual machine
Storage | vsanDatastore
Format | Same format as source
Migration Profile | vMotion
Switchover Schedule | N/A

Step 6: Monitor VM Migration

Monitor VM Migration

As you monitor the migration of your VM, keep an eye on the following areas:

  1. Percentage status of VM migration.
  2. Sequence of events as the migration occurs.
  3. Cancel Migration button (do not use).

Step 7: Verify Completion of VM Migration

Verify Completion of VM Migration

Ensure your VM was successfully migrated. You can also check for the VM in your AVS vCenter to confirm it was migrated.

Exercise 2: Migration rollback

Step 1: Reverse Migration

Reverse Migration

VMware HCX also supports Reverse Migration, migrating from AVS back to on-premises.

  1. Click Reverse Migration checkbox.
  2. Select the Discovered virtual machine folder.
  3. Select your same virtual machine to migrate back to on-premises.
  4. Click ADD.

Use the following values for these options:

Option | Value
Compute Container | OnPrem-SDDC-Cluster-X-Y
Destination Folder | OnPrem-SDDC-Datacenter-X-Y
Storage | LabDatastore
Format | Same format as source
Migration Profile | vMotion
Switchover Schedule | N/A

The rest of the steps are similar to what you did in Step 5.

Step 2: Verify Completion of VM Migration

Verify that the VM is back running on the On-Premises vCenter.

2.14 - Module 2 Task 14

Task 14: Migrate a VM using HCX Replication Assisted vMotion

HCX Replication Assisted vMotion (RAV) uses the HCX infrastructure along with replication and vMotion technologies to provide large-scale, parallel migrations with zero downtime.

HCX RAV provides the following benefits:

  • Large-scale live mobility: Administrators can submit large sets of VMs for a live migration.
  • Switchover window: With RAV, administrators can specify a switchover window.
  • Continuous replication: Once a set of VMs is selected for migration, RAV does the initial syncing, and continues to replicate the delta changes until the switchover window is reached.
  • Concurrency: With RAV, multiple VMs are replicated simultaneously. When the replication phase reaches the switchover window, a delta vMotion cycle is initiated to do a quick, live switchover. Live switchover happens serially.
  • Resiliency: RAV migrations are resilient to latency and varied network and service conditions during the initial sync and continuous replication sync.
  • Switchover larger sets of VMs with a smaller maintenance window: Large chunks of data synchronization by way of replication allow for smaller delta vMotion cycles, paving the way for large numbers of VMs switching over in a maintenance window.

HCX RAV Documentation

Migrate a VM using HCX Replication Assisted vMotion

Now that you are more comfortable with HCX components, some steps are documented in less detail, giving you the opportunity to discover new sides of this tool by yourself.

Prerequisites

First, check that the Replication Assisted vMotion Migration feature is enabled on each of the following:

  • AVS HCX Manager Compute Profile
  • On premises Compute Profile
  • On premises Service Mesh

For example:

Enable Replication Assisted vMotion Migration on Compute Profiles and Service Mesh

If not enabled on one of the previous items, you need to:

  1. Edit the component
  2. Enable the Replication Assisted vMotion Migration capability
  3. Continue the wizard up to the Finish button (no other change is required)
  4. Click the Finish button to validate.

Note: Changes to the service mesh will require a few minutes to complete. You can look at the Tasks tab to monitor the progress.

Exercise 1: Migrate VMs to AVS

Step 1: Initiate VMs migration

  1. From the HCX interface click Migration in the left pane.
  2. Click MIGRATE.

Step 2: Select VMs for Migration

  1. Search for the location of your VM.
  2. Click the checkboxes to select your VMs named Workload-X-Y-1 and Workload-X-Y-2.
  3. Click ADD.

Step 3: Transfer and Placement of VM on Destination Site

Use the following values for these options:

Option                 Value
---------------------  -----------------------------
Compute Container      Cluster-1
Destination Folder     Discovered virtual machine
Storage                vsanDatastore
Format                 Same format as source
Migration Profile      Replication-assisted vMotion
Switchover Schedule    N/A

Click either the GO or VALIDATE button. Clicking VALIDATE only validates that the VMs can be migrated (it does not migrate them). Clicking GO both validates and migrates the VMs.

Step 4: Monitor VM Migration

As you monitor the migration of your VM, keep an eye on the following areas:

  1. Percentage status of VM migration.
  2. Sequence of events as the migration occurs.
  3. Cancel Migration button (do not use).

Step 5: Verify completion of VM Migration

Ensure your VMs were successfully migrated. You can also check for the VMs in your AVS vCenter to confirm they arrived.

Exercise 2: Migration rollback

Step 1: Reverse Migration with switchover scheduling

VMware HCX also supports Reverse Migration, migrating from AVS back to on-premises.

  1. Click Reverse Migration checkbox.
  2. Select the Discovered virtual machine folder.
  3. Select your same virtual machines to migrate back to on-premises.
  4. Click ADD.

Use the following values for these options:

Option                 Value
---------------------  --------------------------------------------------------------------------
Compute Container      OnPrem-SDDC-Cluster-X-Y
Destination Folder     OnPrem-SDDC-Datacenter-X-Y
Storage                LabDatastore
Format                 Same format as source
Migration Profile      Replication-assisted vMotion
Switchover Schedule    Specify a 1-hour maintenance window starting at least 15 minutes from now.

The rest of the steps are similar to those in the previous exercise.

Step 2: Monitor a scheduled VM migration

After a few minutes, Replication-assisted vMotion will start replicating the virtual disks of the virtual machines to the destination.

When ready, the switchover will not happen before the maintenance window provided in the migration wizard is entered. The VM is still running on the source side. In the interval, replication continues to synchronize disk changes to the target side.

Ongoing VM replication

When the switchover window is reached, the VM’s compute runtime, storage, and network attachments switch to the destination, and the migration completes with no downtime for the VM.

VM switchover

Note: The switchover may not happen as soon as the maintenance window opens; it may take a few minutes to start.

After the switchover is completed, the VM should be running at the destination.

Completed switchover

2.15 - Module 2 Task 15

Task 15: Observe the effects of extended L2 networks with and without MON

HCX L2 extended networks are virtual networks that span across different sites, allowing VMs to keep their IP addresses and network configuration when migrated or failed over.

HCX provides:

  • HCX Network Extension: This service creates an overlay tunnel between the sites and bridges the L2 domains, enabling seamless communication and mobility of VMs.
  • HCX Mobility Optimized Networking (MON): Improves network performance and reduces latency for virtual machines that have been migrated to the cloud on an extended L2 segment. MON provides these improvements by allowing more granular control of routing to and from those virtual machines in the cloud.

Prerequisites

Please migrate one of the workload VMs to the AVS side. You can select the migration method of your choice.

The VM needs to be migrated and powered on to continue with the lab task.

Assess the current routing path

From your workstation, run a traceroute to get a view of the current routing path.

From a Windows command line, run: tracert IP_OF_MIGRATED_VM

The last network hop before the VM should be the on-premises routing device: 10.X.1Y.8.

    Tracing route to 10.1.11.130 over a maximum of 30 hops
      1    23 ms    23 ms    22 ms  10.100.199.5
      2     *        *        *     Request timed out.
      3    23 ms    23 ms    23 ms  10.100.100.65
      4     *        *        *     Request timed out.
      5     *        *        *     Request timed out.
      6     *        *        *     10.1.1.8    # <------- On-premises router
      7    25 ms    24 ms    24 ms  10.1.11.130 # <------- Migrated VM

Step 1: Enable Mobility Optimized Networking on existing network extension

From the HCX console, select the Network Extension menu and expand the existing extended network.

Then activate the Mobility Optimized Networking button.

Accept the change by clicking Enable when prompted.

The change will take a few minutes to complete.

Step 2: Assess the current routing path after MON enablement

You can re-run a traceroute from the jump server, but no change to the routing path should be visible yet.

Step 3: Enable MON for the migrated VM

MON takes effect at the VM level, so it must be activated per VM (on an extended network where MON is already set up).

  1. From the Network Extension, and the expanded MON-enabled network, select the migrated VM.
  2. Select AVS side router location.
  3. Click on Submit.

Step 4: Configure MON Policy routes

By default, MON redirects all traffic from the migrated VM to on-premises if it matches the RFC 1918 subnets.

  • In the lab setup, this is also why the migrated VM is unreachable at this stage if we do not customize the Policy Routes.
  • In a real-world scenario, not configuring Policy Routes is a common cause of asymmetric traffic, as incoming and outgoing traffic to/from the migrated VM may not use the same path.

We will customize the Policy Routes to ensure that traffic for 10.0.0.0/8 will use the AVS side router location.

From the HCX console:

  1. Select the Network Extension menu, then the Advanced menu and the Policy Routes item.
  2. In the popup, remove the 10.0.0.0/8 network and validate the change.
  3. Wait a minute for the change to propagate.

Step 5: Assess the current routing path after MON enablement at VM level

You can re-run a traceroute from your workstation and analyze the result:

    Tracing route to 10.1.11.130 over a maximum of 30 hops
      1    23 ms    23 ms    22 ms  10.100.199.5
      2     *        *        *     Request timed out.
      3    23 ms    23 ms    23 ms  10.100.100.65
      4     *        *        *     Request timed out.
      5     *        *        *     Request timed out.
      6    24 ms    23 ms    23 ms  10.1.11.130 # <------- Migrated VM

The 10.X.1Y.8 hop should no longer appear as the last hop: traffic is now routed directly by NSX to the target AVS VM, with the help of a /32 static route set by HCX at the NSX-T T1 gateway level (see the sketch below).
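
If you want to inspect that route outside of a traceroute, one hedged option is to query the NSX-T Policy API for the T1 gateway’s configured routes. This is a sketch only, assuming PowerShell 7+, a placeholder NSX-T Manager FQDN, and your lab’s T1 gateway ID; the HCX-programmed /32 may or may not be listed among user-configured static routes depending on how it is injected:

    # List static routes configured on the T1 gateway (all names are placeholders)
    $cred = Get-Credential   # NSX-T credentials from the Azure portal
    Invoke-RestMethod -Method Get -Authentication Basic -Credential $cred `
      -Uri "https://<nsxt-manager>/policy/api/v1/infra/tier-1s/TNTXX-T1/static-routes" `
      -SkipCertificateCheck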

2.16 - Module 2 Task 16

Task 16: Achieving a migration project milestone by cutting over the network extension

In a migration strategy, HCX L2 extended networks can be used to facilitate the transition of workloads from one site to another without changing their IP addresses or disrupting their connectivity. This can reduce the complexity and risks associated with reconfiguring applications, firewalls, DNS, and other network-dependent components.

Once a subnet is free of other on-premises resources, it is important to consider the last phase of the migration project: the network extension cutover.

This operation provides direct AVS connectivity for the migrated workloads, relying only on NSX-T components and no longer on the combination of HCX and on-premises network components.

In a large migration project, the cutover can be done network by network, based on the rhythm of workload migrations. When a subnet is free of on-premises resources, and firewall policies are effective on the AVS side, the cutover operation can proceed.

Exercise 1: Perform a network extension cutover

Step 1: Migrate remaining workload to AVS

Ensure that all the on-premises workloads attached to the extended network are migrated to AVS, except the router VM and the HCX-OnPrem-X-Y-NE-I1 appliance(s).

If not, migrate the remaining VMs to AVS.

Note: You can keep or remove MON on the extended network for the current exercise. It should not affect the end result.

Step 2: Start the network extension removal process

From the HCX console, select the Network Extension menu. Select the extended network and click Unextend network.

In the next wizard, ensure that the Cloud Edge Gateway will be connected at the end of the removal operation, then submit.

You can proceed to the next step while the current operation is running.

Step 3: Shutdown On Premises network connectivity to the subnet

For the current lab setup, the on-premises connectivity to the migrated subnet is handled by a static route set on an NSX-T T1 gateway. We will remove this route to simulate the end of BGP route advertisement from on-premises.

In a real-world scenario, the equivalent change could be shutting down a virtual interface on a router or firewall device to stop BGP route advertisement for the subnet.

  1. Connect to the NSX-T Manager console using the credentials from the AVS SDDC Azure UI.
  2. Click the Networking tab, then the Tier-1 Gateways section.
  3. Select the TNTXX-T1 gateway and click Edit.

Edit Tier-1 Static Routes

  4. From the edit pane, click the Static Routes link to edit this section (the number shown will vary depending on the number of existing routes).
  5. Remove the route associated with your lab, based on its name and the target network.

Remove Static Route for your lab

  6. Validate the change.
  7. Monitor the progress of the network extension removal in HCX.

Step 4: Network connectivity checks

When the operation has completed, check network connectivity to one of the migrated VMs on the subnet.

You can ping the VM’s IP address from your workstation.

Connectivity should now work (a scripted check follows below).
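
For a scripted equivalent from a Windows workstation (the IP is a placeholder):

    # Basic reachability check; replace the placeholder with your VM's IP address
    Test-NetConnection -ComputerName 10.1.11.130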

Exercise 2: Rollback if needed!

Note: The following content is only there to provide guidance in case you need or choose to roll back the network setup to its previous configuration.

Step 1: Roll back changes

Running this step is not part of the Lab, except if you are facing issues.

If needed, the network extension can be recreated with the same settings (especially the network prefix), and the static route recreated, to roll back the change.

To recreate the network extension, refer to the settings of Task 12.

If you need to recreate the static route on NSX-T, specify the following settings:

Option                     Value
-------------------------  ---------------------
Name                       Nested-SDDC-Lab-X-Y
Network                    10.X.1Y.128/27
Next hop / IP address      10.X.Y.8
Next hop / Admin Distance  1
Next hop / Scope           Leave empty
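
Alternatively, the route could be recreated through the NSX-T Policy API instead of the UI. The sketch below is illustrative only, assuming PowerShell 7+ and placeholder manager FQDN, gateway ID, and route ID:

    # Recreate the T1 static route (all identifiers are placeholders)
    $body = @{
        display_name = "Nested-SDDC-Lab-X-Y"
        network      = "10.X.1Y.128/27"
        next_hops    = @(@{ ip_address = "10.X.Y.8"; admin_distance = 1 })
    } | ConvertTo-Json -Depth 4
    Invoke-RestMethod -Method Patch -Authentication Basic -Credential (Get-Credential) `
      -Uri "https://<nsxt-manager>/policy/api/v1/infra/tier-1s/TNTXX-T1/static-routes/nested-sddc-lab-x-y" `
      -ContentType "application/json" -Body $body -SkipCertificateCheck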

After the route re-creation, connectivity to your workloads on the extended segment via the on-premises routing device should be restored. You can roll back the VMs to on-premises too if needed.

3 - Module 3 - VMware Site Recovery Manager (SRM)

Module 3: Setup SRM for Disaster Recovery to AVS

Site Recovery Manager

VMware Site Recovery Manager (SRM) for Azure VMware Solution (AVS) is an add-on that customers can purchase to protect their virtual machines in the event of a disaster. SRM for AVS allows customers to automate and orchestrate the failover and failback of VMs between an on-premises environment and AVS, or between two AVS sites.

For more information on VMware Site Recovery Manager (SRM), visit VMware’s official documentation for Site Recovery Manager.

This module walks through the implementation of a disaster recovery solution for Azure VMware Solution (AVS), based on VMware Site Recovery Manager (SRM).

Click here if you’d like to see a 10-minute demo of SRM on AVS.

What you will learn

In this module, you will learn how to:

  • Install Site Recovery Manager in an AVS Private Cloud.
  • Create a site pairing between two AVS Private Clouds in different Azure regions.
  • Configure replications for AVS Virtual Machines.
  • Configure SRM protection groups and recovery plans.
  • Test and execute recovery plans.
  • Re-protect recovered Virtual Machines and execute fail back.

Prerequisite knowledge

  • AVS Private Cloud administration (Azure Portal).
  • AVS network architecture, including connectivity across private clouds in different regions based on Azure ExpressRoute Global Reach.
  • Familiarity with disaster recovery (DR) concepts such as Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
  • Basic concepts of Site Recovery Manager and vSphere Replication.

Module scenario

In this module, two AVS Private Clouds are used. VMware Site Recovery Manager will be configured at both sites to replicate VMs in the protected site to the recovery site.

Group X is your originally assigned group; Group Z is the group you will use as a Recovery site. For example, Group 1 will use Group 2’s SDDC as a Recovery site.

For Example:

Private Cloud Name     Location      Role
---------------------  ------------  ---------------
GPSUS-PARTNERX-SDDC    Brazil South  Protected Site
GPSUS-PARTNERZ-SDDC    Brazil South  Recovery Site

The two private clouds should have been already interconnected with each other in Module 1, using ExpressRoute Global Reach or AVS Interconnect. The diagram below depicts the topology of the lab environment.

VMware Site Recovery Manager Information

Recovery Types with SRM

VMware Site Recovery Manager (SRM) is a business continuity and disaster recovery solution that helps you plan, test, and run the recovery of virtual machines between a protected vCenter Server site and a recovery vCenter Server site. You can use Site Recovery Manager to implement different types of recovery from the protected site to the recovery site:

Planned Migration

  • Planned migration: The orderly evacuation of virtual machines from the protected site to the recovery site. Planned migration prevents data loss when migrating workloads in an orderly fashion. For planned migration to succeed, both sites must be running and fully functioning.

Disaster Recovery

  • Disaster recovery: Similar to planned migration except that disaster recovery does not require that both sites be up and running, for example if the protected site goes offline unexpectedly. During a disaster recovery operation, failure of operations on the protected site is reported but is otherwise ignored.

Site Recovery Manager orchestrates the recovery process with VM replication between the protected and the recovery site, to minimize data loss and system downtime. At the protected site, Site Recovery Manager shuts down virtual machines cleanly and synchronizes storage, if the protected site is still running. Site Recovery Manager powers on the replicated virtual machines at the recovery site according to a recovery plan. A recovery plan specifies the order in which virtual machines start up on the recovery site. A recovery plan specifies network parameters, such as IP addresses, and can contain user-specified scripts that Site Recovery Manager can run to perform custom recovery actions on virtual machines.

Site Recovery Manager lets you test recovery plans. You conduct tests by using a temporary copy of the replicated data in a way that does not disrupt ongoing operations at either site.

Site Recovery Manager supports both hybrid (protected site on-prem, recovery site on AVS) and cloud-to-cloud scenarios (protected and recovery sites on AVS, in different Azure regions). This lab covers the cloud-to-cloud scenario only.

Site Recovery Manager is installed by deploying the Site Recovery Manager Virtual Appliance on an ESXi host in a vSphere environment. The Site Recovery Manager Virtual Appliance is a preconfigured virtual machine that is optimized for running Site Recovery Manager and its associated services. After you deploy and configure Site Recovery Manager instances on both sites, the Site Recovery Manager plug-in appears in the vSphere Web Client or the vSphere Client. The figure below shows the high-level architecture for a SRM site pair.

vSphere Replication

SRM can work with multiple replication technologies: Array-based replication, vSphere (aka host-based) replication, vVols replication and a combination of array-based and vSphere replication (learn more).

AVS Private Clouds run on hyperconverged physical infrastructure powered by VMware’s first-party storage virtualization software, vSAN. As such, the only replication technology that can be used with SRM in AVS is vSphere replication, which does not require storage arrays. With vSphere replication, the storage source and target can be any storage device. vSphere Replication is configured on a per-VM basis, allowing you to control which VMs are replicated.

vSphere Replication requires a virtual appliance to be deployed from an Open Virtualization Format (OVF) file using the vSphere Web Client. The first virtual appliance deployed at each site is referred to as the vSphere Replication Management Server. It contains the necessary components to receive replicated data, manage authentication, maintain mappings between the source virtual machines and the replicas at the target location and provide support for Site Recovery Manager. Additional vSphere Replication appliances can be deployed to support larger-scale deployments and topologies with multiple target locations. These additional virtual appliances are referred to as vSphere Replication Servers.

The components that transmit replicated data (the vSphere Replication Agent and a vSCSI filter) are built into vSphere. They provide the plug-in interfaces for configuring and managing replication, track the changes to VMDKs, automatically schedule replication to achieve the RPO for each protected virtual machine, and transmit the changed data to one or more vSphere Replication virtual appliances. There is no need to install or configure these components, further simplifying vSphere Replication deployment.

When the target is a vCenter Server environment, data is transmitted from the source vSphere host to either a vSphere Replication management server or vSphere Replication server and is written to storage at the target location.

vSphere Replication begins the initial full synchronization of the source virtual machine to the target location, using TCP port 31031. A copy of the VMDKs to be replicated can be created and shipped to the target location and used as seeds, reducing the time and network bandwidth consumed by the initial full synchronization. Changes to the protected virtual machine are tracked and replicated on a regular basis. The transmissions of these changes are referred to as lightweight delta syncs. Their frequency is determined by the RPO that was configured for the virtual machine. A lower RPO requires more frequent replication.

The replication stream can be encrypted. As data is being replicated, the changes are first written to a file called a redo log, which is separate from the base disk. After all changes for the current replication cycle have been received and written to the redo log, the data in the redo log is consolidated into the base disk. This process helps ensure the consistency of each base disk so virtual machines can be recovered at any time, even if replication is in progress or network connectivity is lost during transmission.

Site Recovery Manager Concepts

Inventory Mappings

  • Inventory Mappings. For array-based protection and vSphere Replication protection, Site Recovery Manager applies inventory mappings to all virtual machines in a protection group when you create that group. Inventory mappings provide default objects in the inventory on the recovery site for the recovered virtual machines to use when you run recovery. Site Recovery Manager cannot protect a virtual machine unless it has valid inventory mappings. However, configuring site-wide inventory mappings is not mandatory for array-based replication protection groups and vSphere Replication protection groups. If you create vSphere Replication protection group without having defined site-wide inventory mappings, you can configure each virtual machine in the group individually. You can override site-wide inventory mappings by configuring the protection of the virtual machines in a protection group. You can also create site-wide inventory mappings after you create a protection group, and then apply those site-wide mappings to that protection group.

Protection Groups

  • Protection Groups. A protection group is a collection of virtual machines that Site Recovery Manager protects together. After you create a vSphere Replication protection group, Site Recovery Manager creates placeholder virtual machines on the recovery site and applies the inventory mappings to each virtual machine in the group. If Site Recovery Manager cannot map a virtual machine to a folder, network, or resource pool on the recovery site, Site Recovery Manager sets the virtual machine to the Mapping Missing status, and does not create a placeholder for it.

Recovery Plan

  • Recovery Plan. A recovery plan is like an automated run book. It controls every step of the recovery process, including the order in which Site Recovery Manager powers on and powers off virtual machines, the network addresses that recovered virtual machines use, and so on. Recovery plans are flexible and customizable. A recovery plan includes one or more protection groups. You can include a protection group in more than one recovery plan. For example, you can create one recovery plan to handle a planned migration of services from the protected site to the recovery site for the whole organization, and another set of plans per individual departments. You can run only one recovery plan at a time to recover a particular protection group.

Reprotection

  • Reprotection. After a recovery, the recovery site becomes the primary site, but the virtual machines are not protected yet. If the original protected site is operational, you can reverse the direction of protection to use the original protected site as a new recovery site to protect the new protected site. Manually re-establishing protection in the opposite direction by recreating all protection groups and recovery plans is time consuming and prone to errors. Site Recovery Manager provides the reprotect function, which is an automated way to reverse protection. Reprotect uses the protection information that you established before a recovery to reverse the direction of protection. You can initiate the reprotect process only after recovery finishes without any errors. You can conduct tests after a reprotect operation completes, to confirm that the new configuration of the protected and recovery sites is valid.

3.1 - Module 3 Task 1

Task 1: Configure the Protected Site (GPSUS-PARTNERX-SDDC)

Configure Protected Site

IMPORTANT - Some of these exercises can only be done by one person in the group. If you find that someone in your group has already performed some of the Tasks/Exercises, please only use them as reference.

Exercise 1: Enable SRM in your AVS Private Cloud

Step 1: Private Cloud SRM Installation

  1. In your AVS Private Cloud blade, click + Add-ons.
  2. Click Get Started under Disaster Recovery.

Step 2: Deploy SRM Appliance

  1. Select VMware Site Recovery Manager (SRM) from the drop-down box.
  2. Select I don’t have a license key. I will use evaluation version.
  3. Click the checkbox next to I agree with terms and conditions.
  4. Click Install.

It may take 10-20 minutes for the installation to complete.

Monitor the progress of your deployment. When deployment completes click on Go to resource.

Step 3: Setup vSphere Replication

  1. Click + Add-ons.
  2. Click Disaster recovery.
  3. Select vSphere Replication from the drop down box.
  4. Move the slider for vSphere servers to 1.
  5. Click Install.

It may take 10-20 minutes for the installation to complete.

IMPORTANT - These steps may need to also be completed on your Recovery site (The AVS Private Cloud you’ll be paired with). Either ask someone on that group to go through these steps or perform them yourself in the other AVS Private Cloud.
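
If you prefer the command line, both add-ons can also be installed with the Azure CLI vmware extension. A sketch, assuming placeholder resource group and private cloud names (omitting --license-key is assumed to select the evaluation version):

    # Install the SRM add-on on the private cloud
    az vmware addon srm create --resource-group <rg> --private-cloud GPSUS-PARTNERX-SDDC
    # Install the vSphere Replication add-on with one vSphere Replication Server
    az vmware addon vr create --resource-group <rg> --private-cloud GPSUS-PARTNERX-SDDC --vrs-count 1

Repeat the same commands against the recovery private cloud (GPSUS-PARTNERZ-SDDC) if the add-ons are not installed there yet.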

Exercise 2: Create NSX-T Segment in Protected Site

Remember X is your group number, Y is your participant number, Z is the SDDC you’ve been paired with.

In this exercise you will create a network segment in the production site and deploy a test VM to be protected with VMware Site Recovery Manager (SRM).

This task requires a DHCP profile to be available in the private cloud. DHCP profiles have been configured in Module 1 for both GPSUS-PARTNERX-SDDC and GPSUS-PARTNERZ-SDDC (The other group should have configured this). If you did not complete the corresponding steps in Module 1, please go back to it and configure DHCP profiles before proceeding. Add DHCP Profile in AVS Private Cloud.

Step 1: Create NSX-T Network Segment

  1. Log in to your AVS Private Cloud NSX-T interface, click on Networking.
  2. Click Segments.
  3. Click ADD SEGMENT.

Step 2: Configure NSX-T Network Segment

  1. Give your network segment a name: SRM-SEGMENT-XY, where X is your group number and Y is your participant number.
  2. Connected gateway: Select your T1 gateway you created in a previous module.
  3. Transport Zone: Select your private cloud’s default transport zone, should read TNT**-OVERLAY-TZ.
  4. Gateway CIDR IPv4: Enter 10.XY.60.1/24.
  • For Participant 10, use 21 for group 1, 22 for group 2, 23 for group 3, etc. in lieu of XY.
  5. Click on SET DHCP CONFIG.

Step 3: Set DHCP Config on NSX-T Network Segment

  1. Under DHCP Type, ensure Gateway DHCP Server is selected.
  2. Ensure the DHCP Config toggle is set to Enabled.
  3. For DHCP Ranges enter: 10.XY.60.100-10.XY.60.120.
  4. For DNS Servers enter: 10.1.0.192.
  5. Click APPLY.

Step 4: Save your NSX-T Network Segment

Scroll down and click SAVE to save your NSX-T Network Segment. Click NO to close the configuration window. Confirm the segment is successfully configured by checking that it appears in the segments list.
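
As an alternative to the NSX-T UI, the segment can be created with the Azure CLI vmware extension; a sketch with placeholder resource group and gateway names (DNS server settings would still be applied in NSX-T):

    # Create the segment with gateway CIDR and DHCP range (placeholders throughout)
    az vmware workload-network segment create `
      --resource-group <rg> --private-cloud GPSUS-PARTNERX-SDDC `
      --segment SRM-SEGMENT-XY --connected-gateway <T1-gateway-id> `
      --gateway-address 10.XY.60.1/24 `
      --dhcp-ranges 10.XY.60.100 10.XY.60.120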

3.2 - Module 3 Task 2

Task 2: Create a VM in the protected site.

Create a VM in Protected Site

Remember X is your group number, Y is your participant number, Z is the SDDC you’ve been paired with.

In this task you will create a test VM in the protected site.

This task requires a VM template file to be available in the private cloud. A template has been added to the private cloud’s Local Library in Module 1. If you did not complete the corresponding steps in Module 1, please go back to it and add a template to your protected site’s Local Library.

Exercise 1: Create VM from Content Library

Step 1: Access Content Library

  1. Log into the AVS vCenter Server for the protected site GPSUS-PARTNERX-SDDC. Click the Menu bar.
  2. Select Content Libraries.

Step 2: Create VM from Template

  1. Select the Content Library you created earlier: LocalLibrary-XY.
  2. Click Templates.
  3. Click OVF & OVA Templates.
  4. Right-click the workshop-vm template which was added to this library earlier.
  5. Select New VM from This Template.

Step 3: Select Name and Folder for your VM

  1. Set your VM’s name to G-XY-SRM-VM1, where X is your group number and Y is your participant number.
  2. Select SDDC-Datacenter as the location.
  3. Click NEXT.

Step 4: Select a compute resource for your VM

  1. Select Cluster-1.
  2. Click NEXT.

Step 5: Select Storage for your VM

Click NEXT on Review details and agree to License agreements.

  1. Select vsanDatastore.
  2. Click NEXT.

Step 6: Select Network for your VM

  1. For Destination Network select your previously created SRM-SEGMENT-XY network.
  2. Click NEXT.

In the Ready to Complete page, click FINISH.

Step 7: Power-on VM

  1. Click on the Menu Bar.
  2. Select Inventory.

  1. Select your newly created VM G-XY-SRM-VM1.
  2. Click the play button to power your VM on.

Step 8: Ensure IP Address has been assigned to VM
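
In the VM’s Summary tab, confirm that an address from the DHCP range you configured (10.XY.60.100-10.XY.60.120) has been assigned. If you have a PowerCLI session open to the protected site’s vCenter, a minimal sketch to read the guest IP reported by VMware Tools:

    # Read the guest IP addresses of the new VM (requires VMware Tools in the guest)
    (Get-VM -Name "G-XY-SRM-VM1").Guest.IPAddress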

3.3 - Module 3 Task 3

Task 3: Create an NSX-T segment in the recovery site

Recovery Site

Remember X is your group number, Y is your participant number, Z is the SDDC you’ve been paired with.

In this task you will configure the recovery site GPSUS-PARTNERZ-SDDC with a network segment for the VMs moved by SRM from the primary site. In this lab, we focus on a basic scenario where the VMs protected by SRM do not need to retain their IP address when moved to the recovery site. A DHCP service is used both in the protected and in the recovery site to assign IP addresses to VMs when they boot.

This task requires a DHCP profile to be available in the recovery private cloud. DHCP profiles have been configured in Module 1 for both GPSUS-PARTNERX-SDDC and GPSUS-PARTNERZ-SDDC. If you did not complete the corresponding steps in Module 1, please go back to it and configure DHCP profiles before proceeding.

Log into NSX-T for the recovery site GPSUS-PARTNERZ-SDDC. Please note that, because of the AVS Interconnect connectivity that has been configured in Module 1 between the protected and the recovery private clouds, you can access vCenter and NSX-T for both from the same jump-box.

Exercise 1: Add Network Segment in Recovery Site

Step 1: Add Segment

  1. In the Recovery Site NSX-T interface click Networking.
  2. Click Segments.
  3. Click ADD SEGMENT.
  4. Give the segment a name: SRM-RECOVERY-XY.
  5. Select the appropriate T1 Connected Gateway. Use the default TNT**-T1 gateway in the recovery site.
  6. Select the appropriate Transport Zone overlay - TNT**-OVERLAY-TZ.
  7. For Subnets add 10.XY.160.1/24.
  8. Click SET DHCP CONFIG.

Step 2: Set DHCP Configuration

  1. Ensure the DHCP Type is set to Gateway DHCP Server.
  2. Ensure the DHCP Config toggle button is set to Enabled.
  3. For DHCP Ranges enter: 10.XY.160.100-10.XY.160.120.
  4. For DNS Servers enter 10.1.0.192.
  5. Click APPLY, then SAVE the network segment configuration, and click NO for the next question.

3.4 - Module 3 Task 4

Task 4: Configure a Site Pairing in Site Recovery Manager

SRM Site Pairing

Remember X is your group number, Y is your participant number, Z is the SDDC you’ve been paired with.

In this task you will pair the protected site GPSUS-PARTNERX-SDDC and the recovery site GPSUS-PARTNERZ-SDDC.

Exercise 1: Site Pairing

Site pairing can be configured from vCenter on either the primary or the recovery private cloud. You will work on the primary site’s vCenter.

Step 1: Access Site Recovery Manager from vCenter Server

  1. Log into vCenter Server in the primary AVS private cloud GPSUS-PARTNERX-SDDC and click the menu bar.
  2. Select Site Recovery from the main menu.

Click OPEN Site Recovery.

Step 2: Create New Site Pair

Click NEW SITE PAIR.

Step 3: Select local vCenter Server

  1. Ensure your local vCenter Server is selected.
  2. Ensure Pair with a peer vCenter Server located in a different SSO domain is selected.
  3. Click NEXT.

Step 4: Peer vCenter Server

  1. Enter the vCenter Server information for the Recovery site. This should be GPSUS-PARTNERZ-SDDC.
  2. Click FIND VCENTER SERVER INSTANCES. If a warning shows up click CONNECT.
  3. Select your peer vCenter Server.
  4. Click NEXT.

Step 5: Select services identified

  1. Select the top checkbox to select all services.
  2. Click NEXT. Then click FINISH.

Step 6: Confirm Site Pairing Completes

When the configuration process completes, the SRM main page displays the new site pairing.

3.5 - Module 3 Task 5

Task 5: Configure Inventory Mappings

SRM Inventory Mappings

Remember X is your group number, Y is your participant number, Z is the SDDC you’ve been paired with.

In this task you will configure inventory mappings, which define the resources (networks, folders, compute resources, storage policies) that VMs must use when moved to the recovery site. It is also possible to define reverse mappings, which control resource allocation for failback processes.

Exercise 1: Network mappings

Step 1: Log in to Site Recovery Manager

In the Site Recovery interface, click the VIEW DETAILS button on the paired sites.

Step 2: Authenticate to Recovery Site

You may need to re-authenticate to the recovery site. Enter those credentials and click LOG IN.

Step 3: Create New Network Mapping

  1. Click Network Mappings in the left pane.
  2. Click NEW.

Step 4: Network Mappings Creation Mode

  1. Click to select Prepare mappings manually.
  2. Click NEXT.

Step 5: Configure Recovery Network Mappings

  1. Select first the SRM Protected Segment created earlier called SRM-SEGMENT-XY.
  2. On the right side select SRM-RECOVERY-XY.
  3. Click ADD MAPPINGS button.
  4. Click NEXT.

Step 6: Configure Reverse Mappings

  1. Select the checkbox to set the reverse mappings for the network.
  2. Click NEXT.

Step 7: Test Networks

  1. Define a Test Network. SRM allows you to specify a test network that your recovered VMs will connect to when performing a DR test, or you can let SRM auto-create the test networks. For the purposes of this workshop, leave the default of auto-created.
  2. Click NEXT.

Step 8: Complete Network Mappings

To complete the network mappings setup click FINISH.

Exercise 2: Folder Mappings

Step 1: New Folder Mappings

  1. Click Folder Mappings.
  2. Click NEW.

Step 2: Folder Mappings Creation Mode

  1. Ensure Automatically prepare mappings for folders with matching names is selected.
  2. Click NEXT.

Step 3: Configure Recovery Folders

  1. Select SDDC-Datacenter on both sides.
  2. Click ADD MAPPINGS.
  3. Click NEXT.

Step 4: Folder Reverse Mappings

  1. Select all checkboxes to create the folder reverse mappings.
  2. Click NEXT.

Step 5: Complete Folder Mappings

Click FINISH button to complete Folder Mappings.

Exercise 3: Resource Mappings

Step 1: Create New Resource Mapping

  1. Click Resource Mappings.
  2. Click NEW.

Step 2: Configure Recovery Resource Mappings

  1. Expand SDDC-Datacenter on both sides and select Cluster-1.
  2. Click ADD MAPPINGS.
  3. Click NEXT.

Step 3: Reverse Mappings

  1. Select all checkboxes to create the resource reverse mappings.
  2. Click NEXT.

Step 4: Complete Resource Mappings

Click FINISH button to complete Resource Mappings.

Exercise 4: Storage Policy Mappings

Step 1: Create New Storage Policy Mappings

  1. Click Storage Policy Mappings.
  2. Click NEW.

Step 2: Storage Policy Creation Mode

  1. Ensure Automatically prepare mappings for storage policies with matching names is selected.
  2. Click NEXT.

Step 3: Configure Recovery Storage Policy Mappings

  1. Click and select vSAN Default Storage Policy on both sides.
  2. Click ADD MAPPINGS.
  3. Click NEXT.

Step 4: Reverse Mappings

  1. Select all checkboxes to create the Storage Policy Reverse Mappings.
  2. Click NEXT.

Step 5: Complete Storage Policy Mappings

Click FINISH button to complete the Storage Policy Mappings.

Placeholder Datastores

For this exercise there’s no need to create a Placeholder Datastore. If there is no Placeholder Datastore, you are free to create one; just select the vsanDatastore.

3.6 - Module 3 Task 6

Task 6: Configure Protection Groups, vSphere Replication and Recovery Plan

SRM Protection Groups

In this task you will configure vSphere replication for the test VM created in Task 2 as well as a Protection Group for this VM and a recovery plan to protect it. This task is performed from the primary site’s vCenter Server.

Exercise 1: Create Protection Group

Step 1: Create New Protection Group

  1. Click Protection Groups.
  2. Click NEW.

Step 2: Name and Direction for Protection Group

  1. Give your Protection Group a name: PG-XY, where X is your group number and Y is your participant number.
  2. Select the direction for your Protection Group (leave the default).
  3. Click NEXT.

Step 3: Select the Type of Protection Group

  1. Select Individual VMs (vSphere Replication).
  2. Click NEXT.

Step 4: Virtual Machines

Click NEXT and do not include any Virtual Machines in the protection group yet.

Step 5: Recovery Plan

  1. Select Do not add to recovery plan now.
  2. Click NEXT.

Step 6: Complete Protection Group

Click FINISH to complete the creation of your Protection Group.

Exercise 2: Protect Virtual Machine with SRM

Step 1: Configure Replication

Make sure to disable the pop-up blocker in your browser for this step.

  1. From your Protected vCenter Server locate the VM you created earlier, right-click.
  2. Select All Site Recovery actions.
  3. Click Configure Replication.

Step 2: Configure Target Site

  1. Select the target site to replicate the VM to.
  2. Ensure Auto-assign vSphere Replication Server is selected.
  3. Click NEXT.

Step 3: VM Validation

  1. Ensure the status of the VM validation is OK.
  2. Click NEXT.

Step 4: Target Datastore

  1. Select vsanDatastore as the location for the replicated files. Leave all other defaults.
  2. Click NEXT.

Step 5: VM Replication Settings

  1. Leave all defaults, Recovery point objective (RPO) should be set to 1 hour.
  2. Click NEXT.

Step 6: Assign to Protection Group

  1. Ensure Add to existing protection group is selected.
  2. Select the PG-XY Protection Group you recently created.
  3. Click NEXT.

Step 7: Complete Configuring Replication

Click FINISH to complete the configuration of the replication for the VM.

Exercise 3: Recovery Plans

Step 1: Name and Direction for Recovery Plan

  1. Give your Recovery Plan a name: RP-XY, where X is your group number and Y is your participant number.
  2. Ensure the Direction of the Recovery Plan is correct.
  3. Click NEXT.

Step 2: Add Protection Group to Recovery Plan

  1. Ensure Protection groups for individual VMs or datastore groups is selected.
  2. Select your PG-XY Protection Group you created earlier.
  3. Click NEXT.

Step 3: Test Networks

Leave the defaults for Test Networks and click NEXT.

Step 4: Complete Creation of Recovery Plan

Click FINISH to complete the creation of your Recovery Plan.

Confirm Placeholder VM in Recovery Site

Log in to your Recovery Site vCenter Server and locate the Placeholder VM created by SRM.

3.7 - Module 3 Task 7

Task 7: Test Recovery Plan

Recovery Plan Testing

In this task you will test the recovery plan created in the previous step.

Exercise 1: Test Recovery Plan

Step 1: Initiate Test

  1. In the protected site’s SRM console, click Recovery Plans.
  2. Click the Recovery Plan you created earlier.

Click TEST to initiate the test of your Recovery Plan.

Step 2: Confirmation Options

  1. Review the confirmation options, specifically whether you’d like to Replicate recent changes to recovery site.
  2. Click NEXT. Then click FINISH.

Step 3: Monitor Plan Status

Monitor the plan status until it reads Test complete.

Step 4: Confirm Recovery of VM in Recovery vCenter Server

Log in to the Recovery vCenter Server and confirm that the VM you created and protected earlier has been successfully powered on at the Recovery Site.

Since this was a recovery plan test:

  • The VM in the protected site has NOT been shut down.
  • The VM in the recovery site has been attached to an isolated network segment, as per the configuration you created in a previous task.

Step 5: Cleanup

You can now complete your recovery plan testing process by cleaning up the recovery site. In the SRM console, click on the CLEANUP button.

Under Confirmation options click NEXT, then click FINISH.

This process cleans up (powers off) the VM and returns everything to the previous state for protection.

3.8 - Module 3 Task 8

Task 8: Run Recovery Plan

Run Failover Recovery Plan

In this task you will execute the recovery plan you configured in the previous tasks. For planned migrations, a recovery plan can be run from either the protected or the recovery site. In case of an actual disaster at the protected site, it must be triggered from the recovery site (the only one that is still online). The steps to run a recovery plan are the same in both cases. In this task, we will run a recovery plan from the recovery site to simulate a disaster recovery scenario.

Exercise 1: Run Recovery Plan from Recovery Site

Step 1: Access Site Recovery in Recovery Site

  1. Log into the recovery site’s vCenter Server, click the menu bar.
  2. Select Site Recovery from the main menu.
  3. Click on the OPEN SITE RECOVERY button.

In the SRM console, open the already configured site pair by clicking on the VIEW DETAILS button.

Step 2: Run Recovery Plan

When prompted for the credentials to log into the protected site, click on the CANCEL button – we are assuming that the protected site is no longer online, because of a disaster.

  1. Click on Recovery Plans.
  2. Select the Recovery Plan you previously created.
  3. Click RUN.

Step 3: Confirm Options for Recovery Plan

  1. Click the checkbox that reads I understand that this process will permanently alter the virtual machines and infrastructure of both the protected and recovery datacenters.
  2. Select Disaster recovery.
  3. Click NEXT, then click FINISH.

Step 4: Monitor Recovery until Completion

Monitor progress in the SRM console until it shows Recovery complete. Also note the Reprotect needed label.

Step 5: Confirm Recovery

When the recovery process is marked complete, go to the recovery site’s vCenter Server and verify that the test VM you created earlier is powered on (1) and attached to the network segment also created earlier (2).

3.9 - Module 3 Task 9

Task 9: Reprotect the Migrated VM

Reprotection of Recovered VM

In this task, we assume that the primary site has been brought back online. Reprotection is the SRM feature that allows migrated VMs in the recovery site to be synchronized back to the protected site.

Exercise 1: Reprotect VM

Step 1: Execute Recovery Plan Reprotection

  1. Remember that your primary (Protected) site was assumed to be offline. You will need to log in to it now that it’s back up, so click the LOGIN button and enter the credentials for your protected site.
  2. Go to Recovery Plans.
  3. Select your recovery plan.
  4. Click the 3 dots.
  5. Click Reprotect.

Step 2: Reprotect Confirmation Options

  1. Ensure the checkbox is checked for I understand that this operation cannot be undone.
  2. Click NEXT, then click FINISH.

Step 3: Confirm Successful Reprotection

Go to the protected site’s vCenter Server and confirm that a placeholder VM has been created. Because your VM in the Recovery site is ahead of the original VM on premises, roles have been reversed and the VM in the Recovery site is being replicated to the Protected (primary) site.

3.10 - Module 3 Task 10

Task 10: Run Failback Recovery Plan

Failback Recovery Plan

In this task, you will move the test VM back to the original protected site. This task can be performed either from the protected site’s or the recovery site’s SRM console. The steps are identical in both cases.

Exercise 1: Run Recovery Plan

Step 1: Run Failback Recovery Plan

  1. In either the recovery or protected site’s SRM interface, click Recovery Plans. You may need to re-authenticate to the other site.
  2. Select your Recovery Plan.
  3. Click RUN.

Step 2: Failback Recovery Confirmation Options

  1. Ensure the checkbox is selected for I understand that this process will permanently alter the virtual machines and infrastructure of both the protected and recovery datacenters.
  2. Since you’re performing a planned failback to the original protected site, select Planned migration as the recovery type.
  3. Click NEXT, then click FINISH.

Monitor progress. When the recovery plan run completes, go to the primary site’s vCenter Server and confirm that the test VM is back online and attached to its original network segment.

Now that the test VM is running in the protected site, you need to also restore replication towards the recovery site. This is done by reprotecting the VM again. Repeat the steps you followed in the previous task for Reprotecting a VM.

This completes the lab for VMware SRM Disaster Recovery scenario.

4 - Module 4 - Secure Hub

Module 4: Create and configure a Secure Hub to route traffic to the internet

This module is being updated. Some of the instructions and screenshots may not be applicable anymore. However, the concept is still valid and can be applied.

Introduction

Now that the Tier-0 and Tier-1 routers are configured, it’s time to see if workloads can access the internet. The key takeaway here is to set up a Secured vWAN Hub to allow internet egress and ingress (if necessary) for the VMs on AVS.

In this section you will learn how to:

  • Create a secure VWAN hub

  • Configure Azure Firewall with a public IP

  • Configure Azure Firewall

Before we start the steps, let’s validate whether the AVS VMs can access the internet. In the previous section, you accessed VM1 from the vCenter portal. Verify from VM1 that you cannot:

  • Access www.google.com by name. On the server, type:
    wget www.google.com

  • Access www.google.com by IP. On the server, type:

    wget https://142.250.9.101

You may also use utilities such as ping or nslookup to validate.
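
If your test VM runs Windows, Test-NetConnection gives a quick pass/fail for both checks (otherwise the wget commands above suffice; the IP is the same example address used above):

    # Name resolution plus TCP 443 reachability (expected to fail until egress is configured)
    Test-NetConnection www.google.com -Port 443
    # Direct IP reachability on TCP 443
    Test-NetConnection 142.250.9.101 -Port 443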

4.1 - Module 4 Task 1

Task 1: Deploy Virtual WAN

Public IP for vWAN

Exercise 1: Configure vWAN in AVS Private Cloud

Step 1: Configure Public IP for vWAN

  1. In the Azure portal, in your AVS Private Cloud blade, click Connectivity.
  2. Click Public IP for vWAN.
  3. Click Configure.

Step 2: Create Public IP Connection

  1. Virtual wide area network resource group is auto-populated and cannot be modified in the portal.
  2. Virtual wide area network name is also auto-populated.
  3. Virtual hub address block - Use the following value: 10.XY.4.0/24, where X is your group number and Y is your participant number.

It takes about an hour to complete the deployment of all components. This deployment only needs to occur once to support all future public IPs for this Azure VMware Solution environment.

Step 3: Confirm Successful Deployment

Ensure your deployment succeeds.

4.2 - Module 4 Task 2

Task 2: Propagate Default Route

Propagate Default Route

Exercise 1: Default Route Propagation to Virtual WAN

Step 1: Access Virtual WAN

  1. In your AVS Private Cloud blade click Connectivity.
  2. Click Public IP for vWAN.
  3. Click your newly created Virtual wide area network.

Step 2: Access Hub in Virtual WAN

  1. Click Hubs.
  2. Click the name of your newly created hub.

Step 3: Edit ExpressRoute Connection

  1. Click ExpressRoute.
  2. Click on the ellipsis, then click Edit connection.

Step 4: Enable Propagate Default Route

  1. Ensure Enable is selected for Propagate Default Route.
  2. Click Confirm.

4.3 - Module 4 Task 3

Task 3: Configure Azure Firewall policies

Azure Firewall Policies

Exercise 1: Azure Firewall Policies

Step 1: Navigate to Azure Firewall

  1. In the Azure Portal search bar type Firewalls.
  2. Click Firewalls.

Step 2: Select your Virtual WAN Firewall

Select your Virtual WAN firewall, which should have been automatically created during the previous task.

Step 3: Azure Firewall Manager

  1. Click Firewall Manager.
  2. Click to Visit Azure Firewall Manager to configure and manage this firewall.

Step 4: Access Azure Firewall Policies

  1. Click Azure Firewall Policies.
  2. Click + Create Azure Firewall Policy.

Step 5: Firewall Policies Basics Tab

  1. Ensure you’re in the Basics tab.
  2. Leave the defaults for Subscription and Resource group.
  3. Give your policy a name: InternetEnabledXY, where X is your group number and Y is your participant number.
  4. Ensure to select your appropriate Region.
  5. For Policy tier select Standard.
  6. For Parent policy select None.

Step 6: Firewall Policies DNS Settings Tab

  1. Click DNS Settings tab.
  2. Select Enabled for DNS settings.
  3. For DNS Servers ensure Default (Azure provided) is selected.
  4. For DNS Proxy select Enabled.

Step 7: Firewall Policies Rules Tab

  1. Select Rules tab.
  2. Click + Add a rule collection.

Step 8: Add a Rule Collection

  1. Give the rule collection a name: InternetOutboundEnabled-XY, where X is your group number and Y is your participant number.
  2. For Rule collection type select Network.
  3. Give the rule collection a Priority; it should be a numeric value between 100 and 65000.
  4. For Rule collection action select Allow.
  5. Leave the default for Rule collection group.
  6. Use the following values for your Rule.
Name       Source type   Source   Protocol   Destination Ports   Destination Type   Destination
---------  ------------  -------  ---------  ------------------  -----------------  -----------
Rule1-XY   IP Address    *        TCP        80,443              IP Address         *

Click Review + Create and then the Create button.
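
The same policy, rule collection, and rule can also be created with the Azure CLI (azure-firewall extension); a sketch with placeholder resource group and priorities, and with the DNS settings still configured separately:

    # Create the firewall policy
    az network firewall policy create --name InternetEnabledXY --resource-group <rg>
    # Create a rule collection group to hold the collection
    az network firewall policy rule-collection-group create `
      --name NetworkRules --policy-name InternetEnabledXY `
      --resource-group <rg> --priority 200
    # Add the Allow collection with the outbound TCP 80/443 rule
    az network firewall policy rule-collection-group collection add-filter-collection `
      --resource-group <rg> --policy-name InternetEnabledXY `
      --rule-collection-group-name NetworkRules `
      --name InternetOutboundEnabled-XY --collection-priority 100 --action Allow `
      --rule-name Rule1-XY --rule-type NetworkRule --ip-protocols TCP `
      --source-addresses "*" --destination-addresses "*" --destination-ports 80 443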

5 - Additional Resources

Please visit our main Additional Resources section for a comprehensive list of AVS content and resources.