
Description

User-Manageable Clusters are an optional feature available with Private Cloud and Hosted Private Cloud data center locations. This article provides a complete overview of the capabilities and functions associated with this feature.

Content / Solution:

Overview

A Cloud Server runs on a single physical server (a VMware ESXi "host") at any point in time. In a given data center location, multiple ESXi hosts run in parallel, and the system groups these physical hosts into "clusters". Based on system load and other variables, VMware's Distributed Resource Scheduler (DRS) monitors the distribution and usage of CPU and memory resources across all hosts and Cloud Servers in the cluster and migrates Cloud Servers between physical ESXi hosts as needed to maintain ideal resource utilization. When you first power on a Cloud Server, DRS maintains proper load balancing by placing the virtual machine on an appropriate host within the cluster. If an ESXi host fails, the system attempts to recover the Cloud Server onto a different host, as described in What Happens When the Physical Server Hosting my Cloud Server Crashes.

The only additional rule governing this behavior is Anti-Affinity. When you provision two Cloud Servers on the same cluster in an anti-affinity relationship as described in How to Manage Server Anti-Affinity, the system ensures those two Cloud Servers never run on the same ESXi host. For more information on Anti-Affinity, see Introduction to Server Anti-Affinity.

In MCP 2.0 Public Cloud locations, all Cloud Servers run on the same cluster. In MCP 2.0 Private and Hosted Private Cloud locations, clients can choose to set up their environment with multiple physical clusters. This introduction describes that feature, which is called "user-manageable" because the user has full control over which cluster hosts specific Cloud Servers and Client Images. The feature has advantages for a variety of scenarios, including:

  1. Reducing license costs for software applications licensed on physical hosts - Some Enterprise applications are licensed based on characteristics of the physical server hardware, such as the number of CPU cores. In these scenarios, licensees often must pay for all ESXi hosts on which a Cloud Server might run. Establishing a separate cluster for Cloud Servers running that software may reduce the licensing cost by limiting the number of physical hosts that must be licensed.
  2. Physical isolation of different classes of workloads - If you desire to isolate different workloads on different physical servers (e.g. keep production and development servers physically isolated), setting up multiple clusters allows you to achieve that without having to set up separate data center locations.
  3. Customized hardware supporting different Cloud Server characteristics - If you need to run a number of very high RAM Cloud Servers, you can set up a cluster whose physical ESXi hosts have enough RAM to support those servers, while leaving another cluster with "standard" hardware to handle your other workloads. This configuration reduces cost - instead of upgrading all ESXi hosts to handle a load that only some Cloud Servers require, you can create a cluster specific to the needs of those very high RAM Cloud Servers.

"Shared" and "Dedicated" Storage Variations

In all multiple cluster scenarios, Cloud Servers running on different clusters are located on different sets of physical ESXi hosts, meaning the CPU/RAM associated with the Cloud Servers running on each cluster is always physically isolated. However, in terms of the local disk storage, there are two hardware configurations available for User-Manageable Clusters. These options are described in detail in this section, which assumes you are familiar with the CloudControl concept of local disk and disk speeds. For more information on this subject, see Introduction to Cloud Server Local Storage ("Disks") and Disk Speeds.

The decision of which storage variation to use for User-Manageable Clusters must be made before placing your order. A summary of the two options and their advantages/disadvantages is as follows:

  1. Dedicated ("Separate") Datastores - With this option, each ESXi cluster is supported by its own physical disk layer that is isolated from the other clusters. This means that Cloud Servers running on different clusters will have both CPU/RAM and local storage associated with the cluster physically isolated from each other. Cloud Servers on different clusters will never share the same disk LUNs. Note this configuration can be supported with either separate disk arrays for each cluster or a single physical array configured so that the shelves of disk are separate per cluster.
    1. Advantages of Dedicated Datastores
      1. Isolation Between Clusters - If the goal of having multiple clusters is to completely isolate individual or small groups of Cloud Servers on separate clusters, then this is the best option. As noted above, by using separate physical disk arrays, you can completely physically isolate each cluster from the other. Even if you choose to use the same array, configuring the cluster to use its own physical disk shelves will both physically and logically isolate the local storage used by Cloud Servers on each cluster from each other.
    2. Disadvantages of Dedicated Datastores
      1. Deployment Times Are Affected for Cross-Cluster Server Deployments - One disadvantage of separate datastores is that if the Source Image is located on a different cluster than the one chosen for a Cloud Server deployment, deployment times are adversely affected, as the copy portion of the process must traverse the networking layer between the clusters rather than staying within the faster disk infrastructure layer. A general estimate is that a clone between different clusters in a Dedicated Datastore environment will take approximately four times longer than a clone within the same cluster (see the sketch following this list). For this reason, the system lets users choose the cluster on which to place Client Images, both when importing and when cloning to create an Image. In addition, users can move a Client Image between clusters as described in How to Move a Client Image between User-Manageable Clusters.
      2. More Disk Capacity Required (Higher Cost) - Since each cluster will need its own "pool" of disk storage for each requested disk speed, you will usually have a higher initial deployment of storage with this model than with the Shared Datastore model described below. In addition, since you need to leave free capacity for each available disk speed for each cluster, there is usually more "unused" capacity involved with this model. Both of these factors tend to drive higher costs.
  2. Shared Datastores - With this option, the underlying disk storage supporting the separate physical clusters is shared between the ESXi hosts, meaning that although the CPU/RAM associated with the Cloud Servers running on each cluster is physically isolated, the local storage ("local disk") is not. Cloud Servers on different clusters using the same disk speed may end up sharing the same disk LUNs, since there is a single "pool" of each disk speed available for Cloud Servers and Images regardless of cluster. The advantages and disadvantages are essentially the opposite of those stated above.
    1. Advantages of Shared Datastores
      1. Lower Entry Capacity and Spare Capacity (Lower Costs) - Since a single pool of each disk speed is shared among the clusters, this model usually involves a smaller initial deployment and less "waste" in terms of unused capacity, ultimately leading to lower costs.
      2. Deployment Times Are Not Affected by Cross-Cluster Server Deployments - All disk speeds use a shared pool, so all clone requests can be serviced within the storage layer, which means you will not see differences in deployment times when copying between a Client Image in one cluster and a target server deployment in another. You are still able to migrate Client Images between clusters, but doing so has no performance benefit.
    2. Disadvantages of Shared Datastores
      1. Less Isolation Between Clusters - Since the "pool" of disk for a given disk speed is shared between the clusters, Cloud Servers may not be isolated from each other at the local disk level. Depending on your intent, this may or may not matter.
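As a rough planning aid for the Dedicated Datastore trade-off described above, the following Python sketch applies the "approximately four times longer" estimate for cross-cluster clones and suggests when moving a Client Image first is worthwhile. The function names, cluster names, and timings are illustrative assumptions only and are not CloudControl API calls.

    # Illustrative only: rough planning helper based on the "roughly 4x slower
    # cross-cluster clone" estimate given above for Dedicated Datastore locations.
    CROSS_CLUSTER_FACTOR = 4  # general estimate cited in this article

    def estimated_clone_minutes(same_cluster_minutes, image_cluster, target_cluster):
        """Estimate the clone portion of a deployment when datastores are dedicated per cluster."""
        if image_cluster == target_cluster:
            return same_cluster_minutes
        return same_cluster_minutes * CROSS_CLUSTER_FACTOR

    def should_move_image_first(image_cluster, target_cluster, planned_deployments):
        """Moving the Client Image is a one-time cost, while every cross-cluster clone
        pays the penalty, so moving pays off when several deployments are planned."""
        return image_cluster != target_cluster and planned_deployments > 1

    # Example: a clone that takes 15 minutes within a cluster takes roughly an hour across clusters.
    print(estimated_clone_minutes(15, "Cluster-A", "Cluster-B"))   # 60
    print(should_move_image_first("Cluster-A", "Cluster-B", 5))    # True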

In either case, two common rules apply to both storage variations:

  1. All Clusters must have Standard disk speed available since this is required for Images and other CloudControl functions. However, you can choose whether or not Economy, High Performance, or other disk speeds are available on a given cluster.
    1. For example, if you have three clusters, Cluster A could have Economy, Standard, and High Performance disk speeds; Cluster B could have Economy and Standard disk speeds; and Cluster C could have Standard and High Performance speeds. All must have Standard, but the availability of other speeds is optional. Note that you cannot move Cloud Servers between clusters unless the disk speed currently associated with the Cloud Server is available on the target cluster (see the sketch following this list).
  2. All disk storage on all clusters in a given location must use either Dedicated Datastores or Shared Datastores; you cannot mix the two approaches within the same data center location. That said, even in Shared Datastore scenarios, only Standard speed must be available on every cluster - the availability of other speeds can still vary by cluster.
    1. For example:
      1. You cannot choose to have three clusters in a data center location where the disk speed is shared between Cluster A and Cluster B, but Cluster C has its own dedicated datastores.
      2. You can choose to have two clusters in a location where Cluster A has Standard disk speed and Cluster B has Standard and High Performance disk speeds. In this case, when Standard speed is used, the datastores are shared. However, when using High Performance storage on Cluster B, that speed of storage is effectively dedicated, as it isn't shared with any other cluster.
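The following Python sketch illustrates the disk speed rules above, using cluster names and speed offerings that mirror the three-cluster example. The check is applied to every disk speed the server currently uses. The function name and data structure are illustrative assumptions only and are not part of CloudControl.

    # Illustrative only: hypothetical per-cluster disk speed offerings.
    # Every cluster offers Standard; other speeds are optional per cluster.
    CLUSTER_DISK_SPEEDS = {
        "Cluster-A": {"ECONOMY", "STANDARD", "HIGHPERFORMANCE"},
        "Cluster-B": {"ECONOMY", "STANDARD"},
        "Cluster-C": {"STANDARD", "HIGHPERFORMANCE"},
    }

    def can_move_server(server_disk_speeds, target_cluster):
        """A move is allowed only if the target cluster offers every disk speed the server's disks use."""
        return set(server_disk_speeds) <= CLUSTER_DISK_SPEEDS[target_cluster]

    # A server with one Standard disk and one High Performance disk:
    server_speeds = ["STANDARD", "HIGHPERFORMANCE"]
    print(can_move_server(server_speeds, "Cluster-C"))  # True
    print(can_move_server(server_speeds, "Cluster-B"))  # False - Cluster B lacks High Performance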

Availability

User-Manageable Clusters can be added to any MCP 2.0 Private Cloud or Hosted Private Cloud location. Because this functionality requires changes to the physical hardware, it must be enabled by the Operations team, either as part of the order for the Private Cloud/Hosted Private Cloud location or through an upgrade change request submitted later. In addition, some scenarios (such as customized hardware) may require pricing and/or contractual changes.
You can identify whether User-Manageable Clusters is currently enabled in a given location as described in How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location.
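As an illustration of checking a location programmatically, the minimal sketch below queries a hypothetical REST endpoint and infers that User-Manageable Clusters is in use when more than one cluster is reported. The base URL, endpoint path, credentials, location ID, and response fields are placeholders, not the documented CloudControl API; refer to the article above for the supported method.

    # Illustrative sketch only: endpoint, parameters, and response fields are placeholders.
    import requests

    API_BASE = "https://api.example-cloud.test"   # placeholder base URL
    AUTH = ("api-username", "api-password")       # placeholder credentials

    def location_clusters(location_id):
        """Fetch the cluster list reported for a data center location (hypothetical endpoint)."""
        resp = requests.get(
            f"{API_BASE}/infrastructure/datacenter",
            params={"id": location_id},
            auth=AUTH,
            timeout=30,
        )
        resp.raise_for_status()
        datacenter = resp.json()["datacenter"][0]
        return datacenter.get("cluster", [])

    clusters = location_clusters("EXAMPLE01")
    # More than one cluster reported implies User-Manageable Clusters is in use at the location.
    print(len(clusters) > 1, [c.get("id") for c in clusters])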

Functionality

Once User-Manageable Clusters is enabled in a location, users can select the cluster when creating servers and images, and can move existing servers and images between clusters. Each data center location has a "default" cluster - if a cluster is not specified, the system will place the server or image on this cluster. Links to the articles describing the functions include:
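The following minimal Python sketch illustrates the "default" cluster behavior described above: an explicit cluster selection is honored, and the location's default cluster is applied otherwise. The location ID, mapping, and function name are illustrative assumptions, not CloudControl API calls.

    # Illustrative only: hypothetical location-to-default-cluster mapping.
    DEFAULT_CLUSTER_BY_LOCATION = {"EXAMPLE01": "Cluster-A"}

    def resolve_cluster(location, requested_cluster=None):
        """Return the requested cluster, or fall back to the location's default cluster."""
        if requested_cluster is not None:
            return requested_cluster
        return DEFAULT_CLUSTER_BY_LOCATION[location]

    print(resolve_cluster("EXAMPLE01"))               # Cluster-A (default applied)
    print(resolve_cluster("EXAMPLE01", "Cluster-B"))  # Cluster-B (explicit selection honored)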

Cluster Specific Functions - Available only in locations with User-Manageable Clusters

Key Cluster Aware Functions - Functions where the choice of cluster is available and the system will apply the "default" cluster if a selection is not made

Anti-Affinity Impact

The anti-affinity functions work within a given cluster to prevent two Cloud Servers from sharing the same physical ESXi host. Since Cloud Servers on separate clusters never share the same host (that is inherent in running them on separate clusters), there are two impacts on the anti-affinity functions:

  1. You cannot create an Anti-Affinity rule unless the Cloud Servers are located on the same cluster.
  2. If you move a Cloud Server that has an Anti-Affinity rule associated with it onto a different cluster, the rule will be automatically deleted as part of the move, as illustrated in the sketch below.
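The minimal Python sketch below models the two impacts described above: a rule can only be created when both Cloud Servers are on the same cluster, and a cross-cluster move removes any rule attached to the moved server. The server names, data structures, and function names are illustrative assumptions, not CloudControl API calls.

    # Illustrative only: hypothetical server-to-cluster assignments and rule list.
    servers = {"web-01": "Cluster-A", "web-02": "Cluster-A"}
    anti_affinity_rules = []   # list of (server, server) pairs

    def create_rule(a, b):
        """Rule creation is only valid when both Cloud Servers are on the same cluster."""
        if servers[a] != servers[b]:
            raise ValueError("Anti-Affinity rules require both Cloud Servers on the same cluster")
        anti_affinity_rules.append((a, b))

    def move_server(name, target_cluster):
        """A cross-cluster move automatically deletes any rule attached to the moved server."""
        if servers[name] != target_cluster:
            anti_affinity_rules[:] = [r for r in anti_affinity_rules if name not in r]
        servers[name] = target_cluster

    create_rule("web-01", "web-02")
    move_server("web-02", "Cluster-B")
    print(anti_affinity_rules)   # [] - the rule was removed along with the move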

For more information on Anti-Affinity functionality, see Introduction to Server Anti-Affinity.

Advanced Virtualization Settings

Advanced Virtualization Settings are a series of specialized vSphere configuration settings supported by CloudControl, primarily for use in SAP private cloud environments. In vSphere, these settings are mainly stored as name-value pairs in the extraConfig section of a VMX file. If a Server has Advanced Virtualization Settings set to something other than the default values, the Cluster must be enabled to support those settings. For more information on Advanced Virtualization Settings, see Introduction to Advanced Virtualization Settings. You can verify whether a Cluster supports Advanced Virtualization Settings by checking the Cluster Hardware Specifications; see How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location.
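As an illustration only, the Python sketch below represents Advanced Virtualization Settings as extraConfig-style name-value pairs and shows the rule that non-default settings require a cluster enabled to support them. The specific keys shown (numa.vcpu.preferHT, sched.cpu.latencySensitivity) are examples of vSphere extraConfig options and may not reflect the settings CloudControl actually exposes; treat the keys, values, and function name as assumptions.

    # Hypothetical examples of the kind of name-value pairs stored in a VMX file's
    # extraConfig section; whether a specific key is supported is covered in the
    # Advanced Virtualization Settings article.
    requested_extra_config = {
        "numa.vcpu.preferHT": "TRUE",
        "sched.cpu.latencySensitivity": "high",
    }

    def cluster_accepts(settings, cluster_supports_advanced):
        """Non-default Advanced Virtualization Settings require a cluster enabled to support them."""
        return not settings or cluster_supports_advanced

    print(cluster_accepts(requested_extra_config, cluster_supports_advanced=False))  # False
    print(cluster_accepts({}, cluster_supports_advanced=False))                      # True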



