VMware HCX Deployment Considerations and Best Practices
Technical White Paper - July 2019
Table of Contents
Introduction
    About HCX Terminology
    HCX Services Overview
        HCX Network Extension
        HCX Virtual Machine Mobility
        HCX WAN Optimization
        HCX Disaster Recovery
    Component Summary
    HCX Installation Workflows
Considerations for Deployments to HCX Enabled Public Clouds
Component Considerations for an HCX Source Site
Considerations for Multi-Site HCX Deployments
Considerations for Deploying the HCX Manager OVA
Considerations for Deploying the HCX Manager in Environments with HTTPS Proxy Servers
Considerations for HCX Deployments with vCenter Servers in Linked Mode
Considerations for Integrating HCX with an Active Directory Domain
Considerations for Creating HCX Compute Profiles
Considerations for Selecting the Service Clusters in a Compute Profile
Considerations for Selecting the Deployment Resources in a Compute Profile
Considerations for Creating HCX Network Profiles
Considerations for Creating an HCX Service Mesh
Considerations for HCX WAN Optimization Service Deployments
Considerations for Deploying the HCX Network Extension Service
Considerations for Maximum Transmission Unit (MTU)
Considerations for Deploying HCX for OS-Assisted Migrations (OSAM)
About the Author
Introduction
This paper describes VMware HCX implementation considerations and best practices. It draws on experience from hundreds of HCX implementations across a variety of production architectures and deployment scenarios. Although considerable effort went into collating this information, some deployment scenarios may not be covered, and this paper is not intended as a comprehensive guide for implementing VMware HCX in every scenario. Please send any questions or feedback regarding this document to your VMware account team.
About HCX Terminology
We may use Source, On-Premises, Legacy, or HCX Enterprise interchangeably to refer to the source vSphere installation in an HCX deployment.
Similarly, we may use Destination, Target Cloud, or HCX Cloud interchangeably to refer to the destination vSphere or vCloud Director (VCD) based installation in an HCX deployment.
In less common single vCenter Server deployments, Source and Destination refer to the source and destination clusters.
HCX Services Overview
HCX Network Extension
HCX Network Extension connects vDS, NSX, or Nexus 1000v networks at the source site to an NSX Logical Switch at the destination site. This service can expedite the use of the destination environment's resources by allowing virtual machines to be migrated into the extended networks without re-IP or complicated VM transformations, while continuing to leverage the routing and security policies at the source site.
HCX Network Extension with Proximity Routing leverages tight integration with NSX-v Dynamic Routing to achieve local ingress/egress for virtual machines as they are migrated. The HCX Proximity Routing for Layer 3 Aware VM Mobility white paper explores this feature in more detail.
HCX Virtual Machine Mobility
• HCX Bulk Migration uses the vSphere Replication protocol to transfer multiple virtual machines in parallel. Virtual machines are "rebooted" into the target site and can be upgraded to the latest VM Hardware and VMware Tools versions available. With the Bulk Migration option, virtual machines can have their vNIC IP addresses updated as part of the migration operation.
• HCX vMotion uses the VMware vMotion protocol to transfer individual virtual machines. It is used with HCX Network Extension for zero-downtime migrations of applications that are sensitive to downtime.
• HCX Cold Migration uses the VMware NFC protocol. This migration type is automatically selected when transferring powered-off virtual machines.
• HCX vMotion with vSphere Replication combines Bulk Migration and vMotion to deliver zero-downtime failover for virtual machines prepared in parallel (in preview with VMware Cloud on AWS).
Component Summary
The HCX technologies are delivered as three distinct service-level virtual appliances, deployed as peers at the source and destination environments.
The HCX-enabled sites are paired, and service components are then deployed simultaneously at the source and destination sites whenever services are enabled for the selected site pair.
• HCX Manager
The HCX Manager component is deployed from an OVA, integrates with the vSphere environment, and enables it to be connected with other HCX-enabled environments to deliver HCX services. HCX Manager is typically deployed one-to-one with each vCenter Server.
HCX Manager is deployed at the source site. Currently this manager is labeled "Enterprise", but this label will be deprecated in the future.
HCX Manager is also deployed at the target site. The HCX manager at the
destination automates the deployment of peer appliances when a service mesh is
created at the source site.
An HCX Site Pair is created when the source HCX Manager is connected to a
destination site’s HCX Manager.
HCX Installation Workflows
Complete the HCX Installation Checklist after reviewing and understanding the deployment considerations and best practices presented in this document.
NOTE: The remainder of this document assumes a basic understanding of HCX services and the source-to-destination nature of the technologies.
Considerations for Deployments to HCX Enabled Public Clouds
• The public cloud provider's automation manages the deployment of the HCX Cloud Manager component. This is not the case with private destination targets.
• The public cloud infrastructure will be running modernized SDDC software (vSphere 6 and above, NSX 6.2 and above).
• There will be a Public Access URL that an HCX source site can use for the site
pairing operation.
https://hcx-cloud-public-ip-or-fqdn
o The public access URL for HCX must be resolvable and reachable over TCP-443 (HTTPS) from the source site's HCX Enterprise Manager.
• In a private to public deployment, the activation keys for the HCX Enterprise
Manager system on-premises will come from the public cloud provider.
• For various reasons, VMware HCX features may be available on some public cloud
providers, but not on others. Consider what is available in the target HCX enabled
public cloud, and what is required for a successful HCX deployment.
This paper avoids listing public cloud provider-specific considerations. Public cloud providers publish information for enabling the HCX service in their clouds, and the provider's published instructions supersede any conflicting information that may have been included erroneously in this paper.
Component Considerations for an HCX Source Site
• Using the HCX User Interface (the vCenter Web Client HCX plugin) for any of the operations above requires vCenter Server 5.5 U2 or later in the on-premises vSphere environment. For older environments, the HCX Standalone UI can be used (by opening the HCX Manager HTTPS URL in a browser).
• HCX can coexist with vSphere Replication 8.1 or later. Older versions of vSphere Replication will disrupt HCX Bulk Migration and other replication-based HCX operations.
Considerations for Multi-Site HCX Deployments
• HCX Manager for Enterprise is deployed one-to-one with the vCenter Server at the source site(s).
• HCX Manager for Cloud is generally deployed one-to-one with the vCenter Server at the destination site.
o Under specific conditions, a single HCX Manager may be able to register with multiple vCenter/NSX sets (secondary NSX Managers are not supported).
• A source HCX Enterprise site can connect to many destination HCX Cloud sites.
• Multiple source HCX Enterprise sites can connect to a single destination HCX Cloud site.
• HCX service appliances are instantiated for each site pair. For example, consider Source Site #1 paired with destination Sites #3 and #4: a single HCX/vCenter pair at the source establishes both connections, and service appliances are deployed for each site pair.
• Activation keys for multi-site deployments are based on the deployment type of the destination sites. In this example, destination Site #3 is HCX on a public cloud and Site #4 is a private vSphere installation. Activation keys for Site #3 come from the HCX public cloud provider, while Site #4 activation comes from NSX Data Center Enterprise Plus licensing. The source Site #1 can be activated with keys from Site #3 or Site #4.
• Multi-Site architectures have been tested for up to 10 site pairs per source or
destination HCX Manager system.
Considerations for Deploying the HCX Manager OVA
• The HCX Enterprise and Cloud Managers perform management and control functions for the HCX service. Mobility, replication, and extension data flows do not traverse the HCX Managers.
• Deploy the OVA using the distributed port group, datastores, and compute resources designated for management virtual machines.
o Once the HCX Cloud Manager is installed, use the Download HCX Enterprise Client link to download the HCX Enterprise Manager OVA for the source site installation.
• During the OVA deployment, provide a functional DNS server that can resolve both external targets, like connect.hcx.vmware.com, and internal targets, like the vCenter Server and ESXi host FQDNs.
• During the OVA deployment, provide live, reachable NTP servers. Confirm the HCX Manager system is using synchronized time (a validation sketch follows the firewall requirements below).
• The HCX Manager IP address should be able to route to both internal and external targets.
• At the source site, the perimeter firewall should allow outbound HTTPS/TCP-443
connections from the HCX Enterprise Manager to:
https://connect.hcx.vmware.com
https://hybridity-depot.vmware.com
https://hcx-cloud-mgr-ip-or-fqdn
• At the destination site, the perimeter firewall should allow outbound HTTPS/TCP-443 connections from the HCX Cloud Manager to:
https://connect.hcx.vmware.com
https://hybridity-depot.vmware.com
It should also allow inbound HTTPS/TCP-443 connections from the source site's HCX Enterprise Manager.
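The checks above can be scripted before the OVA deployment. The following is a minimal validation sketch in Python (not an HCX tool); connect.hcx.vmware.com and hybridity-depot.vmware.com come from the requirements above, while vcsa01.corp.example.com, hcx-cloud.example.com, and ntp1.corp.example.com are hypothetical placeholders for the local vCenter Server, the destination HCX Cloud Manager, and an internal NTP server.

# Minimal pre-deployment validation sketch (not an official HCX tool).
# Run it from the network segment where the HCX Manager will be placed.
# Host names marked "example" are hypothetical; substitute your own.
import socket

DNS_TARGETS = ["connect.hcx.vmware.com", "vcsa01.corp.example.com"]
HTTPS_TARGETS = ["connect.hcx.vmware.com", "hybridity-depot.vmware.com",
                 "hcx-cloud.example.com"]
NTP_SERVERS = ["ntp1.corp.example.com"]

def resolves(name):
    """Return True if the name resolves through the configured DNS server."""
    try:
        socket.getaddrinfo(name, 443)
        return True
    except socket.gaierror:
        return False

def tcp443_reachable(name, timeout=5):
    """Return True if a TCP connection to port 443 succeeds."""
    try:
        with socket.create_connection((name, 443), timeout=timeout):
            return True
    except OSError:
        return False

def ntp_responds(server, timeout=5):
    """Send a minimal SNTP (RFC 4330) client request and wait for any reply."""
    packet = b"\x1b" + 47 * b"\0"
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(48)
            return len(data) >= 48
    except OSError:
        return False

if __name__ == "__main__":
    for name in DNS_TARGETS:
        print(f"DNS {name}: {'ok' if resolves(name) else 'FAILED'}")
    for name in HTTPS_TARGETS:
        print(f"TCP-443 {name}: {'ok' if tcp443_reachable(name) else 'FAILED'}")
    for server in NTP_SERVERS:
        print(f"NTP {server}: {'ok' if ntp_responds(server) else 'FAILED'}")

Every line should report ok before proceeding with the HCX Manager installation.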
Considerations for Deploying the HCX Manager in Environments with HTTPS Proxy Servers
HCX supports service operations with an HTTPS proxy server in the path.
• If the environment uses a proxy server for outbound HTTPS connections, the proxy should be defined in the HCX Manager's appliance management interface.
o Once a proxy server is defined, all HTTPS connections are sent to the proxy server. Because HCX Manager also makes internal HTTPS connections to the vCenter Server and the HCX Interconnect appliance, those internal destinations should be added to the proxy exclusion list so they are reached directly.
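The sketch below illustrates the exclusion behavior conceptually: destinations matching the exclusion list are reached directly, everything else goes to the proxy. It is only an illustration of the logic; the actual settings are entered in the HCX appliance management interface, and the proxy address and host names shown are hypothetical.

# Conceptual sketch of proxy vs. direct connection selection.
# The exclusion list mirrors what would be entered in the HCX appliance
# management interface; names and addresses here are hypothetical examples.
from fnmatch import fnmatch

PROXY = "http://proxy.corp.example.com:3128"
# Internal destinations that HCX Manager must reach directly.
PROXY_EXCLUSIONS = ["vcsa01.corp.example.com", "*.corp.example.com", "192.168.10.*"]

def connection_path(host):
    """Return 'direct' for excluded (internal) hosts, else the proxy path."""
    if any(fnmatch(host, pattern) for pattern in PROXY_EXCLUSIONS):
        return "direct"
    return f"proxy via {PROXY}"

for host in ["connect.hcx.vmware.com",      # external target: goes to the proxy
             "vcsa01.corp.example.com",     # vCenter Server: must stay direct
             "192.168.10.31"]:              # HCX Interconnect appliance: direct
    print(f"{host:30} -> {connection_path(host)}")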
Considerations for HCX Deployments with vCenter Servers in Linked Mode
VMware HCX supports Linked Mode for single-pane operation at the source site.
• An HCX Enterprise Manager is deployed and paired with each vCenter Server registered to the common Platform Services Controller.
• The resulting behavior is that HCX operations for the combined inventory can be initiated from any of the vCenter Servers that are in Linked Mode.
Considerations for Integrating HCX with an Active Directory Domain
VMware HCX supports Active Directory logins through integration with vCenter Single Sign-On.
• HCX Enterprise at the source site and HCX Cloud at the destination site can be integrated with distinct Single Sign-On domains; there is no requirement for the sites to share identity domains.
• The HCX appliance management interface should contain a valid SSO URL, matching
the vCenter Server’s SSO URL.
• The HCX appliance management interface has a role mapping configuration screen
where one can define the SSO Active Directory or vSphere.local domain groups that
can perform HCX operations.
o Create a new hcx-admins local group in the vCenter Server SSO Users and Groups screen.
o Add the Active Directory groups that will operate HCX services to the hcx-admins@vsphere.local group.
o Right-click the vCenter Server object and select Add Permission. Assign the Administrator role to the new hcx-admins@vsphere.local group (a scripted equivalent of this step is sketched below).
o Open the HCX appliance management Role Mapping interface and add the hcx-admins@vsphere.local group to the appropriate roles.
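The Add Permission step above can also be scripted. The following sketch assumes pyVmomi and placeholder vCenter credentials; it grants the built-in Administrator role to hcx-admins@vsphere.local at the vCenter root object. Creating the SSO group and the HCX Role Mapping entry remain manual steps in their respective interfaces.

# Sketch: grant the Administrator role to hcx-admins@vsphere.local at the
# vCenter root, the scripted equivalent of the "Add Permission" step above.
# pyVmomi is assumed; the vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcsa01.corp.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())  # lab use only
try:
    content = si.RetrieveContent()
    auth = content.authorizationManager

    # Look up the built-in Administrator role by its internal name.
    admin_role = next(r for r in auth.roleList if r.name == "Admin")

    perm = vim.AuthorizationManager.Permission(
        principal="VSPHERE.LOCAL\\hcx-admins",  # the SSO group created earlier
        group=True,
        roleId=admin_role.roleId,
        propagate=True,
    )
    # Apply the permission at the root folder so it covers the inventory.
    auth.SetEntityPermissions(entity=content.rootFolder, permission=[perm])
finally:
    Disconnect(si)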
Considerations for Creating HCX Compute Profiles
A Compute Profile defines the HCX services that are allowed to run in the configuration, the compute and network boundaries, and the compute, storage, and network settings that HCX uses to deploy the Multi-Site Service Mesh virtual appliances.
• Compute Profile creation is identical at the source and destination HCX systems.
• Creating a Service Mesh requires at minimum one Compute Profile in HCX at the
source, and one Compute Profile in HCX at the destination.
• A single Compute Profile can be used for all clusters designated as Service Clusters when those clusters share common vMotion, Replication, and vSphere Management networks.
• Multiple Compute Profiles are required when the clusters that will be designated as Service Clusters have different vMotion, Replication, or vSphere Management networks (a planning sketch follows at the end of this section).
• A single Compute Profile can be used with multiple Service Mesh configurations.
• The service selection step during Compute Profile creation can be used to explicitly prevent an unselected HCX service from being
selected during a service mesh deployment.
• In most deployments all services can be left selected when a Compute Profile is
created.
• When a Service Mesh is being configured, if a service was unselected in the source
or destination Compute Profile (or both), it will be grayed out in the Service Mesh
interface.
• When a Service Mesh is being configured, steps related to the unselected service will
be excluded.
For example: If Network Extension is unselected, the Service Mesh creation will
exclude the Network Extension Appliance Scale Out step.
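As a planning aid, the sketch below groups candidate Service Clusters by their Management/vMotion/Replication network combination; following the guidance above, each distinct combination implies its own Compute Profile. The cluster and network names are hypothetical.

# Planning sketch: clusters that share the same Management/vMotion/Replication
# networks can share one Compute Profile; each distinct combination needs its own.
# Cluster and port group names below are hypothetical examples.
from collections import defaultdict

clusters = {
    "Prod-01":    {"mgmt": "pg-mgmt-a", "vmotion": "pg-vmo-a", "replication": "pg-repl-a"},
    "Prod-02":    {"mgmt": "pg-mgmt-a", "vmotion": "pg-vmo-a", "replication": "pg-repl-a"},
    "NonProd-01": {"mgmt": "pg-mgmt-b", "vmotion": "pg-vmo-b", "replication": "pg-repl-b"},
}

profiles = defaultdict(list)
for name, nets in clusters.items():
    key = (nets["mgmt"], nets["vmotion"], nets["replication"])
    profiles[key].append(name)

for i, (nets, members) in enumerate(profiles.items(), start=1):
    print(f"Compute Profile {i}: clusters={members} networks={nets}")
# Prod-01 and Prod-02 land in one profile; NonProd-01 requires a second one.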
Considerations for Selecting the Service Clusters in a Compute Profile
Select the Service Clusters for which HCX services should be enabled.
• The Service Cluster(s) must exist in the single vCenter Server that was registered to
the HCX Enterprise or Cloud Manager.
• Virtual Machines in clusters that are designated as Service Clusters in the Compute
Profile will be valid objects for HCX migrations and DR operations.
• The Service Clusters in a Compute Profile must have common cluster networks.
• When the Service Clusters have different cluster networks (for example, when each cluster has a dedicated vMotion subnet), they should be separated into different Compute Profiles.
Once the Service Mesh appliances are deployed and HCX operations become available, HCX migration and extension operations on objects outside of the selected Service Clusters (in this example, only the Non-Production clusters were selected) will fail.
o Alternatively, if all of the Production and Non-Production clusters in this example have common cluster networks, the production and non-production Service Clusters can be added to a single Compute Profile.
Considerations for Selecting the Deployment Resources in a Compute Profile
• The Deployment Clusters, Resource Pools, and datastores primarily define where the HCX migration, optimization, and extension virtual appliances will be created when the Service Mesh is created.
• The Deployment Cluster must exist in the registered vCenter Server that contains the Service Clusters.
This means a Deployment Cluster can be different from the Service Clusters, but it cannot be in a different vCenter Server.
• The Deployment Cluster selected for the Compute Profile will be used for all
appliance types in the service mesh.
• The Deployment Cluster should be able to satisfy the system requirements for the HCX service virtual appliances; refer to the VMware HCX User Guide for the current values (a rough sizing sketch follows this list).
• A Resource Pool can be selected as the compute container for deployments (instead
of a cluster).
• The Deployment Cluster selected will determine which Deployment Datastores can
be selected.
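A rough way to sanity check the Deployment Cluster is to sum the appliance requirements against the free capacity, as in the sketch below. The per-appliance values shown are placeholders only; substitute the figures published in the VMware HCX User Guide for the release being deployed.

# Back-of-the-envelope sizing sketch for the Deployment Cluster.
# The per-appliance (vCPU, memory GB, disk GB) values are placeholders;
# replace them with the figures from the VMware HCX User Guide.
APPLIANCE_REQUIREMENTS = {
    "HCX-WAN-IX":  (8, 3, 2),      # placeholder values
    "HCX-WAN-OPT": (8, 14, 100),   # fixed form factor noted in the WAN Optimization section
    "HCX-NET-EXT": (8, 3, 2),      # placeholder values
}

def fits(selected, free_vcpu, free_mem_gb, free_disk_gb):
    """Return True if the selected appliances fit in the stated free capacity."""
    need_cpu = sum(APPLIANCE_REQUIREMENTS[a][0] for a in selected)
    need_mem = sum(APPLIANCE_REQUIREMENTS[a][1] for a in selected)
    need_disk = sum(APPLIANCE_REQUIREMENTS[a][2] for a in selected)
    return need_cpu <= free_vcpu and need_mem <= free_mem_gb and need_disk <= free_disk_gb

print(fits(["HCX-WAN-IX", "HCX-WAN-OPT", "HCX-NET-EXT"],
           free_vcpu=24, free_mem_gb=64, free_disk_gb=500))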
Considerations for Creating HCX Network Profiles
• When configured using the Network Profiles interface, the network profile configuration does not specify the HCX function assigned to the network profile; it is purely a network resource allocated for HCX.
• During Compute Profile creation, you will configure how the HCX service appliances perform the following HCX functions: Management, Uplink, vMotion, and Replication.
• The functions above are assigned to Network Profiles during Compute Profile creation:
o When multiple HCX functions are assigned to a single Network Profile (for example, Uplink and Management using the same Network Profile), the HCX IX/EXT appliances will use only one vNIC with one IP address.
• Jumbo MTU can be configured when the underlying network infrastructure supports
it end to end.
• The IP ranges in the Network Profiles should include only IP addresses reserved for HCX (the HCX Manager IP address should not be included in the ranges); see the sanity-check sketch at the end of this section.
• Network Profiles can be used by multiple Compute Profiles and in multiple Service Mesh configurations; they should be sized to accommodate the potential scale.
• Network Profiles can be expanded while in use; the Service Mesh may need to be resynchronized afterward.
• The DNS setting is only required on the Network Profile that will be used for the Management function.
• The Gateway IP is not required in the Network Profile created for non-routed
vMotion or Replication networks.
• The Uplink Network Profile is shared by the migration and extension services.
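The sketch below is a quick sanity check for a proposed Network Profile IP range: it confirms the range excludes the HCX Manager address and contains enough addresses for the expected number of appliances. The addresses and the required-address count are hypothetical examples.

# Sanity-check sketch for a Network Profile IP range (addresses are examples).
import ipaddress

manager_ip = ipaddress.ip_address("192.168.10.9")     # HCX Manager (must NOT be in the range)
range_start = ipaddress.ip_address("192.168.10.20")
range_end = ipaddress.ip_address("192.168.10.29")
required = 6   # appliances expected to draw from this profile across all Service Meshes

available = int(range_end) - int(range_start) + 1
manager_in_range = int(range_start) <= int(manager_ip) <= int(range_end)

print(f"Addresses available: {available} (required: {required})")
print("OK" if available >= required and not manager_in_range
      else "Adjust the range: it overlaps the Manager IP or is too small")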
Considerations for Creating an HCX Service Mesh
• Creating a Service Mesh configuration requires the source and destination HCX sites to have valid Compute Profiles.
• A Service Mesh configuration uses only one Compute Profile at the source HCX site and one Compute Profile at the destination HCX site.
• When the Service Mesh is created, HCX interconnect, WAN Optimization and
Network Extension appliances are deployed (if selected).
• An existing Service Mesh between two HCX sites cannot be reused for a newly created site pair.
o A Service Mesh for Source Site A and Destination Site B cannot be used with Destination Site C; a new A-to-C Service Mesh must be created.
o Only HCX services that are enabled in the selected Compute Profiles will be selectable. HCX services that are not selected in the underlying Compute Profile will be grayed out.
Considerations for HCX WAN Optimization Service Deployments
• In 10 Gbit, low-latency deployments, using WAN Optimization may not yield improved migration performance.
• The WAN Optimization service is always deployed in conjunction with the HCX-WAN-IX appliance. It cannot be deployed to service the HCX Network Extension appliances.
• This component uses a simplified deployment model that leverages the compute, storage, and network options selected for the HCX-WAN-IX component.
The WAN Optimization appliance is deployed in a fixed form factor that requires 8 vCPU, 14 GB of memory, and two disks: a 30 GB OS disk and a 70 GB deduplication backing disk. These resource values are not configurable.
When deploying the WAN Optimization service, select SSD datastores capable of 2500 IOPS when configuring the HCX-WAN-IX component.
The WAN Optimization component performs its function in-line (in the path of HCX-WAN-IX flows). Once enabled, the HCX-WAN-IX uses internal policy routing to send HCX service packets to its adjacent WAN-OPT component before egressing to the peer site.
• WAN Optimization appliance requirements and capability:
CPU requirements: 8 vCPU
Memory requirements: 14 GB
Storage IOPS: 1000-2500
Storage Throughput: 250 Mbps
Guaranteed WAN Throughput Capability: 1000 Mbps
Considerations for Deploying the HCX Network Extension Service
• The Network Extension appliance is deployed in pairs. HCX will always deploy an Initiator and a Receiver.
• The ESXi hypervisor hosting the HCX-NET-EXT appliance must be connected to the
distributed switches of the Virtual Machine networks being extended.
• The HCX-NET-EXT should be configured to use the default 1500 MTU in most
deployments. The default MTU setting should not be changed when the internet is
traversed to reach the peer HCX-NET-EXT appliance.
• Jumbo MTU can be configured to improve performance when the Jumbo MTU is
supported end to end between the source and destination HCX sites.
• Use HCX to extend virtual machine networks (VLAN port groups and VXLAN/GENEVE logical switches).
o Never extend the same VLAN to the same target more than once.
Considerations for Maximum Transmission Unit (MTU)
• Jumbo MTU settings can be applied per HCX interface using the Network Profile configuration.
• Assign jumbo MTU only when the increased MTU size is valid end to end (from the HCX Initiator to the HCX Receiver appliances).
The HCX Network Extension pipeline adds a 150-byte header. When all interfaces in the path support jumbo MTU, you can configure the MTU to prevent fragmentation during L2 forwarding (see the calculation sketch at the end of this section). A valid jumbo MTU scenario for HCX:
o The destination site DVS typically has an MTU of 9000 (it must be 1650 or higher).
o The ESXi cluster may not have a dedicated Replication vmkernel network. Having one is a good practice, but it is less common than having a dedicated vMotion network. If there is no dedicated Replication network, the management network is used for Replication traffic, and in that case it may or may not be possible to use jumbo MTU.
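To make the 150-byte overhead concrete, the sketch below computes the minimum MTU the transport path between the Network Extension appliances must carry for a given extended network MTU (the values are examples).

# The Network Extension data path adds roughly 150 bytes of encapsulation,
# so the transport path between the NE appliances must carry the extended
# network's MTU plus that overhead to avoid fragmentation.
HCX_NE_OVERHEAD = 150

def required_path_mtu(extended_network_mtu: int) -> int:
    return extended_network_mtu + HCX_NE_OVERHEAD

for guest_mtu in (1500, 8850):
    print(f"Extended network MTU {guest_mtu} -> path/DVS MTU >= {required_path_mtu(guest_mtu)}")
# 1500 -> 1650 (matches the "must be 1650 or higher" note above)
# 8850 -> 9000 (fits a 9000-byte jumbo transport end to end)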
Considerations for Deploying HCX for OS-Assisted Migrations (OSAM)
• OS-Assisted Migration (OSAM) is a new HCX migration option that enables the migration of non-vSphere virtual machines to an HCX-enabled vSphere environment.
o With the initial release, the service will be certified for KVM to vSphere migrations.
• This service will only be available through a premium SKU called "HCX Enterprise". Replication Assisted vMotion (RAV), OSAM, and SRM integration will also be available through the premium SKU.
• About OSAM: Conceptually, OSAM is similar to Bulk Migration; the source virtual machine remains online during replication.
The source VM is quiesced for a final sync before the migration, and HCX performs a software stack adaptation (fixup).
The source VM is then powered off and the migrated VM is powered on at the target site, for a low-downtime switchover.
• OSAM migrations are only supported for certified operating systems; refer to the VMware HCX User Guide for the operating systems supported in the initial release.
• The virtual machines will communicate with the HCX Sentinel Gateway. The HCX Compute Profile will include a network assignment for the virtual machine agents to achieve this connectivity.
• The architecture for OSAM-based HCX deployments will include two new appliances:
o The HCX Sentinel Gateway (SGW) at the source, and the HCX Sentinel Data
Receiver (SDR) at the target site.
o The HCX agents will send replication data to the SGW. The SGW will use the
HCX-WAN-IX path to replicate data to the remote site.
• To extend KVM networks to the target site, a minimal interim vSphere cluster with a DVS is required. Virtual machine VLANs should be extended from the non-VMware cluster/switches to this minimal cluster; HCX will then provide the extension capability between the KVM environment and the destination vSphere environment.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright © 2017 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by
one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. and its subsidiaries in the United States and
other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.