Description

User-Manageable Clusters are an optional feature available with Private Cloud and Hosted Private Cloud data center locations. This article provides a complete overview of the capabilities and functions associated with this feature.

Content / Solution:

Overview

A Cloud Server runs on a single physical server (a VMware ESXi "host") at any point in time. In a given data center location, multiple ESXi hosts run in parallel, and the system groups these physical hosts into "clusters". Based on system load and other variables, VMware's Distributed Resource Scheduler (DRS) monitors the distribution and usage of CPU and memory resources for all hosts and Cloud Servers in the cluster and balances load accordingly, migrating Cloud Servers between physical ESXi hosts as needed. When you first power on a Cloud Server, DRS maintains proper load balancing by placing the virtual machine on an appropriate host within the cluster. If an ESXi host fails, the system attempts to recover the Cloud Server onto a different host as described in What Happens When the Physical Server Hosting my Cloud Server Crashes.
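
To make the placement idea concrete, below is a minimal Python sketch of initial placement: pick a host in the cluster with enough free capacity, preferring the one with the most headroom. This is an illustration only, not VMware's actual DRS algorithm; the host names, capacity figures, and "headroom" metric are assumptions.

    # Minimal sketch of initial placement - NOT VMware's DRS algorithm.
    # Host names, capacities, and the headroom metric are illustrative.
    def place_server(hosts, cpu_needed, ram_needed):
        """Pick a host that fits the new server, preferring the most headroom."""
        candidates = [h for h in hosts
                      if h["cpu_free"] >= cpu_needed and h["ram_free"] >= ram_needed]
        if not candidates:
            raise RuntimeError("no host in the cluster can fit this server")
        return max(candidates, key=lambda h: (h["cpu_free"], h["ram_free"]))

    hosts = [{"name": "esxi-01", "cpu_free": 8, "ram_free": 64},
             {"name": "esxi-02", "cpu_free": 2, "ram_free": 16}]
    print(place_server(hosts, cpu_needed=4, ram_needed=32)["name"])  # esxi-01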

The only additional rule governing this behavior is Anti-Affinity. When you provision two Cloud Servers on the same cluster in an anti-affinity relationship as described in How to Manage Server Anti-Affinity, the system ensures those two Cloud Servers never run on the same ESXi host. For more information on Anti-Affinity, see Introduction to Server Anti-Affinity.

In MCP 2.0 Public Cloud locations, all Cloud Servers run on the same cluster. In MCP 2.0 Private and Hosted Private Cloud locations, clients can choose to set up their environment with multiple physical clusters. This introduction describes that feature, which is called "user-manageable" because the user has full control over which cluster hosts specific Cloud Servers and Client Images. The feature has advantages in a variety of scenarios, including:

  1. Reducing license costs for software applications licensed on physical hosts - Some Enterprise applications are licensed based on characteristics of the physical server hardware, such as the number of CPU cores. In these scenarios, licensees often must pay for all ESXi hosts on which a Cloud Server might run. Establishing a separate cluster for Cloud Servers running such an application may reduce the licensing cost by limiting the number of physical hosts that must be licensed.
  2. Physical isolation of different classes of workloads - If you desire to isolate different workloads on different physical servers (e.g. keep production and development servers physically isolated), setting up multiple clusters allows you to achieve that without having to set up separate data center locations.
  3. Customized hardware supporting different Cloud Server characteristics - If you need to run a number of very high RAM Cloud Servers, you can set up a cluster whose physical ESXi hosts have enough RAM to support those characteristics, while leaving another cluster with "standard" hardware to handle your other loads. This configuration reduces cost - instead of upgrading all ESXi hosts to handle a load that only some Cloud Servers require, you can create a cluster specific to the needs of those very high RAM Cloud Servers.

Storage Considerations

In all multiple cluster scenarios, Cloud Servers running on different clusters will be:

  1. Located on different sets of physical ESXi hosts, meaning the CPU/RAM resources associated with the Cloud Servers running on each cluster are always physically isolated from each other
  2. Set up to use separate VMFS datastores on the storage infrastructure, meaning that although the same physical storage infrastructure might be used to support the different clusters, the VMFS LUNs are isolated from each other. This means VMDKs ("disks") from servers on different clusters never share the same logical disk volumes (see the sketch after this list).
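
The isolation model can be pictured with a small Python sketch; all cluster, host, and datastore names below are made up for illustration:

    # Illustrative data model of the isolation described above; every name
    # is hypothetical. Each cluster has its own hosts and its own datastores.
    clusters = {
        "Cluster-A": {"hosts": {"esxi-01", "esxi-02"}, "datastores": {"vmfs-a1", "vmfs-a2"}},
        "Cluster-B": {"hosts": {"esxi-03", "esxi-04"}, "datastores": {"vmfs-b1"}},
    }

    # No host and no datastore is shared between clusters, so VMDKs from
    # servers on different clusters never land on the same logical volume.
    assert clusters["Cluster-A"]["hosts"].isdisjoint(clusters["Cluster-B"]["hosts"])
    assert clusters["Cluster-A"]["datastores"].isdisjoint(clusters["Cluster-B"]["datastores"])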

The use of separate datastores provides isolation, but one disadvantage relates to server deployment times. If the Source Image is located on a different cluster than the cluster chosen for a Cloud Server deployment, deployment times are adversely affected because the copy portion of the process must traverse the networking layer between the clusters rather than staying within the faster disk infrastructure layer. As a general estimate, cloning between clusters takes approximately four times longer than cloning within the same cluster - for example, a clone that completes in 10 minutes within a cluster could take roughly 40 minutes across clusters.

For this reason, the system lets users choose which cluster a Client Image is placed on, both when importing an image and when cloning a server to create one. In addition, users can move a Client Image between clusters as described in How to Move a Client Image between User-Manageable Clusters.
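
For example, an image-creation request might carry the target cluster as an explicit parameter. The sketch below is hypothetical - the endpoint path, payload fields, and IDs are assumptions for illustration, not the documented CloudControl API; see the linked articles for the actual functions:

    # Hypothetical REST call placing a new Client Image on a chosen cluster.
    # Endpoint, payload fields, and IDs are illustrative assumptions only.
    import requests

    ORG_ID = "my-org-id"  # assumption
    url = f"https://api.example.com/caas/2.x/{ORG_ID}/image/cloneServer"
    payload = {
        "serverId": "11111111-2222-3333-4444-555555555555",  # server to clone
        "imageName": "golden-web-image",
        "clusterId": "Cluster-A",  # keep the image on the cluster that will use it
    }
    response = requests.post(url, json=payload, auth=("user", "password"))
    response.raise_for_status()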

All Clusters must have Standard disk speed available since this is required for Images and other CloudControl functions. However, you can choose whether or not other disk speeds are available on a given cluster.

  • For example, if you have three clusters, Cluster A could have Economy, Standard, and High Performance disk speeds; Cluster B only Economy and Standard; and Cluster C Standard and High Performance. All must have Standard, but the availability of other speeds is optional. Note that you cannot move a Cloud Server between clusters unless every disk speed currently associated with the Cloud Server is available on the target cluster (a sketch of this check follows below).
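
A minimal Python sketch of that compatibility check, using the three example clusters above (the function and data layout are illustrative, not a CloudControl interface):

    # Disk-speed availability per cluster, from the example above.
    CLUSTER_SPEEDS = {
        "A": {"Economy", "Standard", "High Performance"},
        "B": {"Economy", "Standard"},
        "C": {"Standard", "High Performance"},
    }

    def can_move(server_disk_speeds, target_cluster):
        """A server can move only if the target offers every speed it uses."""
        return set(server_disk_speeds) <= CLUSTER_SPEEDS[target_cluster]

    print(can_move({"Standard", "High Performance"}, "B"))  # False - B lacks High Performance
    print(can_move({"Standard", "High Performance"}, "C"))  # True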

Availability

User-Manageable Clusters can be added to any MCP 2.0 Private Cloud or Hosted Private Cloud location. Because this functionality requires changes to the physical hardware, it must be enabled by the Operations team, either as part of the order for the Private Cloud/Hosted Private Cloud location or through a subsequent upgrade change request. In addition, some scenarios (such as customized hardware) may require pricing and/or contractual changes.
You can identify whether User-Manageable Clusters is currently enabled in a given location as described in How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location.

Functionality

Once User-Manageable Clusters is enabled in a location, users can select the cluster when creating servers and images, and can move existing servers and images between clusters. Each data center location has a "default" cluster - if a cluster is not specified, the system places the server or image on that default cluster (a sketch of this behavior follows the lists below). Links to the articles describing the functions include:

Cluster Specific Functions - Available only in locations with User-Manageable Clusters

Key Cluster Aware Functions - Functions where the choice of cluster is available and the system will apply the "default" cluster if a selection is not made
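
A one-line Python sketch of the defaulting behavior described above (the cluster IDs are illustrative):

    # If no cluster is specified, the location's default cluster is applied.
    def resolve_cluster(requested, default="Cluster-Default"):
        return requested or default

    print(resolve_cluster(None))         # Cluster-Default
    print(resolve_cluster("Cluster-B"))  # Cluster-B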

Anti-Affinity Impact

The anti-affinity functions work within a given cluster to prevent two Cloud Servers from sharing the same physical ESXi host. Since Cloud Servers on separate clusters never share the same host (that's the point of using clusters!), there are two impacts on the anti-affinity functions, illustrated in the sketch after this list:

  1. You cannot create an Anti-Affinity rule unless the Cloud Servers are located on the same cluster
  2. If you move a Cloud Server that has an Anti-Affinity rule associated with it onto a different cluster, the rule will be automatically deleted in conjunction with the move.
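
The following Python sketch illustrates both impacts; the rule store and server records are assumptions for illustration, not how CloudControl implements them:

    # Illustrative model of the two impacts above; data layout is hypothetical.
    server_cluster = {"server-1": "Cluster-A", "server-2": "Cluster-A"}
    rules = [{"servers": ("server-1", "server-2")}]

    def create_rule(s1, s2):
        if server_cluster[s1] != server_cluster[s2]:
            raise ValueError("servers must be on the same cluster")  # impact 1
        rules.append({"servers": (s1, s2)})

    def move_server(server, target_cluster):
        server_cluster[server] = target_cluster
        # Impact 2: rules involving the moved server are deleted with the move.
        rules[:] = [r for r in rules if server not in r["servers"]]

    move_server("server-1", "Cluster-B")
    print(rules)  # [] - the rule was removed along with the move
    # create_rule("server-1", "server-2") would now raise ValueError (impact 1)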

For more information on Anti-Affinity functionality, see Introduction to Server Anti-Affinity.

Advanced Virtualization Settings

Advanced Virtualization Settings are a set of specialized vSphere configuration settings supported by CloudControl, primarily for use in SAP private cloud environments. In vSphere, these settings are mainly set as extraConfig name-value pairs in a VMX file. If a Server has Advanced Virtualization Settings set to anything other than the default values, the Cluster must be enabled to support those settings. For more information on Advanced Virtualization Settings, see Introduction to Advanced Virtualization Settings. You can verify whether a Cluster supports Advanced Virtualization Settings by checking the Cluster Hardware Specifications; see How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location.
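
As a hedged illustration, extraConfig entries in a VMX file look like the excerpt below. The two names shown are examples of vSphere advanced options (tools.guestlib.enableHostInfo is commonly referenced in SAP monitoring contexts); which settings are actually supported, and with what values, depends on the cluster and is covered in the linked articles.

    tools.guestlib.enableHostInfo = "TRUE"
    sched.mem.pshare.enable = "FALSE"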



