Provides a complete overview of how local storage attached to Cloud Servers operates, including an explanation of the Disk Speed feature which provides users with the ability to apply different levels of disk performance to local Cloud Disks.
When provisioning and managing assets in CloudControl, a "Gigabyte" (GB) is actually based on Base 2 (binary) methodology, meaning that 1 GB = 1024^3 bytes ("gibibyte"). For example, if you provision a "100 GB" local disk, the system provisions a local disk of 100 x 1024^3 bytes = 107374182400 bytes.
Usage reporting follows the same methodology. For more details, see Introduction to Usage Reporting.
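To make the binary convention concrete, here is a minimal Python sketch of the conversion described above:

```python
# CloudControl "GB" values are binary gibibytes: 1 GB = 1024^3 bytes.
BYTES_PER_GB = 1024 ** 3  # 1,073,741,824 bytes

def provisioned_bytes(gb):
    """Return the number of bytes actually provisioned for a disk of `gb` GB."""
    return gb * BYTES_PER_GB

# A "100 GB" local disk, as in the example above:
print(provisioned_bytes(100))  # 107374182400
```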
CloudControl supports local storage via storage volumes (i.e. "disks") that are attached to a variety of virtual controllers which emulate a specific hardware type. Most Images and Servers use SCSI controllers for this purpose, but the system also supports IDE and SATA controllers as well as read-only files on CD-ROM and Floppy devices. Being a virtualized environment, each storage volume ("disk") is delivered via a VMDK file that's deployed to a Datastore on the underlying storage infrastructure. Each disk is assigned a "disk speed" which determines the performance characteristics of the underlying storage that will be used for that specific local storage volume. Local disks using the same disk speed may or may not be deployed to different Datastores on the storage infrastructure. ISO and FLP files do not have disk speeds and are always on the Standard disk speed.
From the Operating System, each local disk appears to be an external hard disk attached to a specific spot on the controller. When a Cloud Server is deployed, it will inherit the same virtual controllers, disk sizes, and disk locations as the Source Image. However, users can choose to modify the speed of a disk on a deployed Server to be different than that which was used when the Image was created. For details, see the instructions on how to deploy Servers at How to Deploy a Cloud Server from a Guest OS Customization Image and How to Deploy a Cloud Server from a Non-Guest OS Customization Image.
Note: Client Images always use Standard Storage and are always billed as Standard Storage. Client Images have "disk speed" information associated with each disk, but these speeds are simply metadata that the system defaults to when you deploy a server from that Image. The Images themselves are actually stored on Standard storage. Since the system "defaults" to the disk speed of the Source Image, users can also modify the default disk speeds associated with a Client Image as described in How to Manage Client and Long-Term Retention Images.
Note: Most Server BIOS are programmed to boot the Operating System from SCSI Controller 0, Position 0, but users can modify the BIOS through the Console to change this behavior.
The system supports up to four virtual SCSI controllers per Cloud Server. Each SCSI controller uses a specific "adapter" that defines how the SCSI controller is perceived by the guest Operating System. There are four adapters supported by vSphere:
Note that although four adapters are supported, VMware does not support all of them on every Operating System. When adding a Controller, the system will make available only the adapters approved for the Server's Operating System. The Supported Operating Systems dashboard will help you identify which adapters are "available" and which one is "recommended" by VMware for a given operating system, as described in Navigating the Supported Operating Systems Dashboard.
Each SCSI controller provides 15 "positions" in which a local storage "disk" can be attached. Positions are numbered 0 through 15, with position 7 reserved as it is used by the virtual SCSI adapter. This means that with the full complement of four SCSI controllers, there are 60 potential positions where local disks can be placed.
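As a quick sanity check of that arithmetic, a short sketch enumerating the usable positions (assuming only position 7 is reserved, per the text above):

```python
# Positions on a virtual SCSI controller are numbered 0-15;
# position 7 is reserved for the virtual SCSI adapter itself.
RESERVED_SCSI_POSITION = 7

def scsi_positions():
    """Return the positions on one controller where a disk can be attached."""
    return [p for p in range(16) if p != RESERVED_SCSI_POSITION]

per_controller = len(scsi_positions())  # 15 usable positions
total = 4 * per_controller              # 60 across four controllers
print(per_controller, total)            # 15 60
```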
Users can add or remove SCSI controllers for a Cloud Server as described in:
The system supports up to four virtual SATA controllers per Cloud Server. All SATA controllers emulate a standard AHCI 1.0 Serial ATA Controller (Broadcom HT 1000) - there are no separate adapters as there are with SCSI controllers. As described in VMware SCSI Controller Options, use of SATA controllers is not recommended for high I/O environments.
Each SATA controller provides 30 "positions" in which either a local storage "disk" or a CD-ROM device can be attached. These positions are numbered from 0 through 29. This means that with the full complement of four SATA controllers, there are 120 potential positions where either local disks or CD-ROM devices can be placed.
SATA controllers are supported only if already present on Images or Servers; you cannot add or remove SATA controllers on an existing Cloud Server. Therefore, if you wish to use SATA, you will need to import a Client Image with the desired number of SATA controllers already present. SATA is not supported on all Operating Systems; check the VMware Compatibility Guide for more details.
The system supports up to two virtual IDE controllers per Cloud Server. All IDE controllers emulate a standard Intel 82371AB/EB PCI Bus Master IDE controller - there are no separate adapters as there are with SCSI controllers.
Each IDE controller provides only two "positions" in which either a local storage "disk" or a CD-ROM device can be attached - slots 0 and 1. IDE works under a Master/Slave configuration, so the "1" position can be used only if either a "disk" or CD-ROM device is in the "0" position of the same controller. Therefore, the system will prevent the deletion of a local disk in position 0 if there is a local disk in position 1.
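The Master/Slave deletion rule above can be sketched as a small validation function. This is illustrative only; the function name is hypothetical and not a CloudControl API:

```python
# IDE Master/Slave rule: position 1 ("slave") may only be occupied while
# position 0 ("master") of the same controller is occupied, so the system
# blocks deleting the device in position 0 while position 1 is in use.
def can_delete_ide_device(occupied, position):
    """occupied: set of positions currently holding a disk/CD-ROM device."""
    if position == 0 and 1 in occupied:
        return False  # deleting the master would orphan the slave device
    return position in occupied

print(can_delete_ide_device({0, 1}, 0))  # False: slave present in position 1
print(can_delete_ide_device({0}, 0))     # True: position 1 is empty
```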
VMware's import process inserts IDE controllers on all images imported through the OVF process, so the controllers are present on almost all Images and Cloud Servers. The system does not support the addition or removal of IDE controllers.
NOTE: IDE Controllers do not support "expanding" a disk and require the server to be powered off in order to add a disk to the controller.
CD-ROM devices may be present on IDE or SATA controllers only if already present on Images or Servers; you cannot add or remove CD-ROM devices from a controller. Therefore, if you want to use a CD-ROM device as described below, you will need to import a Client Image with the desired CD-ROM devices and/or ISO files already present.
CD-ROM devices may have an ISO file attached, in which case they provide read-only access to the ISO file through the virtual CD-ROM device. Otherwise, the CD-ROM device serves no function. All such ISO files are placed on the Standard Disk Speed (see below for details) and billed based on their file size as if they were a local disk on the Standard disk speed. A server or image can have as many CD-ROM devices and ISO files as the controllers support, but the total combined size of all ISO and FLP files (see below) on a Cloud Server/Image cannot exceed the Maximum Disk Size (GB) for the data center location.
Currently, the system allows ISO files only if they are already present on the Image or Server, so they need to be included on an Imported Image. ISO files may be permanently removed from a Cloud Server, but they cannot be added, modified, or replaced; you will have to import a new Image if you want to make changes.
The system supports up to two Floppy Controllers, each of which can have an FLP file attached, in which case it provides read-only access to the FLP file through the virtual floppy device in the same manner as ISO support. The system allows Floppy devices and associated FLP files only if they are already present on the Image or Server, so they need to be included on an Imported Image. FLP files may be permanently removed from a Cloud Server, but they cannot be added, modified, or replaced; you will have to import a new Image if you want to make changes.
Beyond that, there are four sets of limits associated with local storage on a Cloud Server, each of which varies by User-Manageable Cluster or Data Center location and can be identified as described in How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location:
The following 'default' limits are set in most (but not all) Public Cloud locations. We provide this list for convenience, but we recommend reviewing the specifics of the location as described in How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location for a more accurate answer.
| Property | MCP 2.0 Default Setting | MCP 1.0 Default Setting |
| --- | --- | --- |
| Maximum Total Storage for Cloud Server (GB) | 30,000 | 11,000 |
| Maximum Total Storage for Client Image (GB) | 3,000 | 2,600 |
| Minimum Disk Size (GB) | 1 | 10 |
| Maximum Disk Size (GB) | 1,000 | 1,000 |
| Minimum Disk Count | 0 | 0 |
| Maximum Disk Count | 60 | 15 |
When disks are added to a Cloud Server, they will appear as unformatted drives that need to be formatted for use. A User can choose a specific controller and position on which to install the drive or have the system insert the drive in the "next available" SCSI position. For additional details on how to add a disk, see:
These disks can also be removed from the Cloud Server as described in:
Any individual disk attached to a deployed Cloud Server can be increased in size should greater capacity be required – though note that like a newly-added disk, the additional storage is delivered as an unformatted increase in the size of the volume and therefore needs to be formatted by the OS for use. For more details, see:
Local storage volumes ("disks") each have a specific "disk speed" that defines the performance characteristics of the underlying storage. CloudControl lets you decide which performance characteristics a given Cloud Server disk should have by choosing its disk speed. Each local storage volume is treated independently in terms of its performance, allowing you to "mix and match" performance on different disks in the same Cloud Server. Disk speeds let the user select a level of performance suited to the intended function of the disk: for example, log file storage might require a lower level of performance, while database files may require a faster one. Because disk speed is set at the individual disk level, a given Cloud Server can have disks with different speeds depending on how each disk will be used.
The "disk speed" associated with each Server is visible in the Manage Server dialog of the Admin UI in a row that is shown underneath the associated Controller with the position and size of the individual disk:
The disk speeds available vary by data center location and can vary by hypervisor cluster within a location. For more details, see:
Each "disk speed" is charged at its own rate. Refer to your Cloud Provider's rate card for more information on the specific rates for each disk speed in a given data center location.
There are two types of available disk speeds:
More details on these types are described in the sections below.
Real-world storage performance is governed by a multitude of factors, including application and OS variables and storage latency. However, at a high level, one of the key factors is Disk Throughput, where:
In the context of CloudControl, when the system provisions disk speed performance based on IOPS, it assumes a block size and sets a corresponding Disk Throughput limit based on the IOPS and block size. These limits are enforced in the hypervisor on each individual disk, meaning that disks are governed by both a maximum IOPS limit and a separate Disk Throughput limit where the Throughput limit is equal to IOPS x block size.
For disks added or modified starting on November 22, 2019, the system will calculate throughput based on a 32 KB block size, effectively doubling Throughput. Any change (disk speed change, IOPS change, or size change) to an existing disk on or after this date will result in a new Throughput value based on the new 32 KB block size.
Prior to November 22, 2019, the system calculated all Throughput values based on a 16 KB block size, so Throughput = IOPS x 16 KB.
The net effect is that only one limiter is likely to establish the maximum performance at any given time. If a user's actual block size is less than the 16/32 KB size, IOPS will be the limiting factor and corresponding Disk Throughput will be less than the maximum Disk Throughput allowed. If the actual block size is greater than the 16/32 KB size, Throughput will be the limiting factor and the corresponding IOPS will be less than the maximum IOPS allowed.
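As an illustrative model (not CloudControl code), the interplay of the two limiters can be sketched as follows, assuming the 32 KB provisioning block size used for disks modified on or after November 22, 2019:

```python
# A disk is capped by both an IOPS limit and a Throughput limit, where
# Throughput limit = IOPS limit x provisioning block size.
PROVISIONING_BLOCK_KB = 32  # 16 for disks last modified before 22-Nov-2019

def effective_limits(iops_limit, workload_block_kb):
    """Return (achievable IOPS, achievable KB/s) for a given workload block size."""
    throughput_limit_kbps = iops_limit * PROVISIONING_BLOCK_KB
    # IOPS needed to saturate the throughput cap at this workload block size
    iops_at_throughput_cap = throughput_limit_kbps / workload_block_kb
    achievable_iops = min(iops_limit, iops_at_throughput_cap)
    return achievable_iops, achievable_iops * workload_block_kb

# 4 KB workload blocks: IOPS is the limiter, throughput stays below its cap.
print(effective_limits(1000, 4))   # (1000, 4000)
# 64 KB workload blocks: Throughput is the limiter, IOPS stays below its cap.
print(effective_limits(1000, 64))  # (500.0, 32000.0)
```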
In real-world applications, block sizes often vary, and other OS and application variables will affect actual performance. Therefore, the IOPS and Throughput settings represent maximum IOPS and Throughput performance in locations and on disk speeds where these values are enforced. In the case of Provisioned IOPS, where the values are committed, users can expect consistent performance based on the IOPS value assigned to a given disk. In the case of Standard/Economy/High Performance disk speeds, these IOPS values are the maximum "burstable" limit that can be achieved at any given time.
Provisioned IOPS is designed to provide a specific user-defined IOPS and Throughput performance value to a given disk at all times. The IOPS and Throughput limits are enforced in the hypervisor, and the underlying storage infrastructure is committed to delivering those values at all times, as there is no oversubscription on the underlying Datastores.
The user-defined committed IOPS value must conform to a set of rules based on the size of the disk. The rules for the Provisioned IOPS disk speed are the same in all Public Cloud locations where the disk speed is available but may vary in Private Cloud locations. The Public Cloud location values are listed below in parentheses, but you can identify the specific characteristics of any location as described in How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location.
When setting IOPS values for a Provisioned IOPS disk, the UI will present a slider with the available IOPS values based on the location's settings and the current disk size:
Once a disk is assigned a Provisioned IOPS disk speed, you can modify the IOPS as described in How to Manage a Local Storage Disk on a Cloud Server. However, note that you cannot expand the size of the disk and modify the IOPS value at the same time. In scenarios where a user is trying to drastically expand the size of a small disk, users may need to expand the disk size and modify the IOPS value in steps to stay within the Provisioned IOPS Min/Max IOPS/GB rules.
Standard, High Performance, and Economy disk speeds differ from Provisioned IOPS in that the storage infrastructure does not commit a specific performance level to the disk at all times. However, they do provide differing performance levels based on the disk speed chosen and the size of the disk. There are currently two types of storage infrastructure supporting these disk speeds, each of which uses a separate methodology to differentiate performance. The Standard/High Performance/Economy Disk Speed Architecture Matrix below identifies the methodology for each data center location.
In this legacy architecture, performance characteristics are primarily based on the underlying disk storage infrastructure supporting the disk speed. To deliver each disk speed, the following infrastructure is used:
Users of different disk speeds will see differing performance characteristics based on the underlying storage infrastructure, but there are no specific IOPS or Throughput limitations enforced by the hypervisor. Actual performance will be based on a combination of factors, including how "busy" the infrastructure is servicing other disks on the same storage Datastores and/or storage infrastructure. To increase IOPS and Throughput performance for a given volume, users can upgrade to a higher-performing disk speed to change the underlying infrastructure, but no specific maximum performance level is defined.
Over time, we are migrating locations off this infrastructure onto the new "Burstable IOPS" architecture in order to provide improved and more predictable performance. The Standard/High Performance/Economy Disk Speed Architecture Matrix below outlines the implementation dates for this migration.
A few Cloud locations offer an "SSD" disk speed that predates the Provisioned IOPS disk speed described above. The characteristics of this speed vary by location; contact Support if you have questions about the IOPS and Throughput configuration in a given location.
The Burstable IOPS disk speed architecture is designed to provide a clearly defined maximum performance level based on the IOPS and Throughput settings enforced within the hypervisor. However, the underlying storage has limited oversubscription, so the maximum performance level represents a "burstable" maximum rather than a committed value. In addition to the disk speed, the architecture is designed to provide greater performance to larger disks, meaning the maximum performance is defined by both the disk speed and the disk size. This means that to increase IOPS and Throughput performance for a given volume, users can either increase the size of the volume or upgrade to a higher-performing disk speed, as either action will increase the maximum IOPS and Throughput performance according to the table below.
In locations where the "Burstable" architecture is used, the system will apply IOPS and Throughput limits to each local disk based on the GREATER of:
The speeds are enforced in Public Cloud locations according to the following table. Private Cloud locations may use different values. (See the Private Disk Speed Architecture section below.)
| Disk Speed | Minimum IOPS and Throughput | Size-Calculated IOPS and Throughput |
| --- | --- | --- |
| Standard | 500 IOPS. Before 22-Nov-19: 8,000 KB/second*. On or after 22-Nov-19: 16,000 KB/second* | 3 IOPS per GB. Before 22-Nov-19: 48 KB/second per GB*. On or after 22-Nov-19: 96 KB/second per GB* |
| High Performance | 800 IOPS. Before 22-Nov-19: 12,800 KB/second*. On or after 22-Nov-19: 25,600 KB/second* | 6 IOPS per GB. Before 22-Nov-19: 96 KB/second per GB*. On or after 22-Nov-19: 192 KB/second per GB* |
| Economy | 100 IOPS. Before 22-Nov-19: 1,600 KB/second*. On or after 22-Nov-19: 3,200 KB/second* | 0.5 IOPS per GB. Before 22-Nov-19: 8 KB/second per GB*. On or after 22-Nov-19: 16 KB/second per GB* |
*As discussed in the Understanding IOPS and Throughput Performance section above, the Throughput value is calculated from the block size, so the Disk Throughput limit will always be equal to the IOPS value x 32 KB (16 KB for disks last modified prior to November 22, 2019).
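A sketch of the GREATER-of rule using the post-22-Nov-2019 Public Cloud figures. Note the per-speed IOPS values used here (500/800/100 minimum; 3/6/0.5 per GB) are derived by dividing the pre-2019 Throughput figures above by the 16 KB block size, as described in the footnote:

```python
# Burstable IOPS limits (Public Cloud, disks modified on/after 22-Nov-2019):
# the enforced IOPS limit is the GREATER of the minimum and size x per-GB rate.
# Throughput limit = IOPS limit x 32 KB.
BURSTABLE = {
    "STANDARD":        {"min_iops": 500, "iops_per_gb": 3},
    "HIGHPERFORMANCE": {"min_iops": 800, "iops_per_gb": 6},
    "ECONOMY":         {"min_iops": 100, "iops_per_gb": 0.5},
}

def burstable_limits(speed, size_gb):
    """Return (IOPS limit, Throughput limit in KB/s) for a disk."""
    s = BURSTABLE[speed]
    iops = max(s["min_iops"], s["iops_per_gb"] * size_gb)
    return iops, iops * 32

print(burstable_limits("STANDARD", 100))  # (500, 16000): minimum wins
print(burstable_limits("STANDARD", 500))  # (1500, 48000): size-calculated wins
```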
The matrix below identifies which locations are currently using which architecture. In the event of a change from Traditional to Burstable IOPS, a maintenance announcement will be issued to notify users of the change and effective implementation date.
MCP 2.0 Location Matrix
|MCP 2.0 Location||Burstable IOPS Architecture||Traditional Architecture||Transition Plan|
All new or modified disks use Burstable Architecture effective November 1, 2018
Existing disks receive IOPS/Throughput limits effective November 15, 2018, and will be transitioned to Burstable architecture over time.
All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is March 4, 2019
All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is April 1, 2019
|NA12 (Santa Clara)||X||All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is May 1, 2019|
All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is May 15, 2019
|NA9 (US East)||X||All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is May 29, 2019|
All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is June 10, 2019
All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is June 24, 2019
|EU7 (Amsterdam)||X||All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is August 1, 2019|
|AU11 (New Zealand)||X||All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is August 15, 2019|
|AP13 (Malaysia)||X||Built under the new model|
|Private Cloud Locations AU13, AP14||X||Built under the new model|
|CA2 (Canada)||X||All new or modified disks use Burstable Architecture and existing disks will receive IOPS/Throughput limits. Effective date is April 6, 2020|
|AP2 (Hong Kong)||X||Transition plan is TBD|
|Other Private Cloud Locations||X||Transition plan is TBD|
MCP 1.0 Location Matrix
|MCP 1.0 Location||Burstable Architecture||Traditional Architecture||Transition Plan|
|All MCP 1.0 Locations||X||Transition plan is TBD|
Because the Nutanix infrastructure used in Enterprise Private Cloud shares storage across all nodes, the architecture does not provide performance differentiation similar to what can be done in the Burstable IOPS or Traditional architectures. However, it is important that individual disks do not cause "noisy neighbor" problems that would interfere with the performance of other disks. Therefore, such locations offer a single PRIVATE disk speed with fairly high IOPS and Throughput limits that exist solely to prevent excessive I/O usage by a given disk.
These limits are enforced in Private Cloud locations according to the following table. The limits work in the same manner as the Burstable IOPS limits described above; however, note that the PRIVATE disk speed uses a 16 KB block size for calculating Throughput.
| Disk Speed | Minimum IOPS and Throughput Per Disk | Size-Calculated IOPS and Throughput Per Disk |
| --- | --- | --- |
| PRIVATE (used in Enterprise Private Cloud) | 12,800 KB/second (800 IOPS x 16 KB) | 320 KB/second per GB (GB x 20 IOPS x 16 KB) |
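A minimal sketch of how the PRIVATE limits combine, using the 800 IOPS minimum and 20 IOPS per GB values above with the 16 KB block size:

```python
# PRIVATE disk speed limits (Enterprise Private Cloud):
# IOPS limit is the greater of the 800 IOPS minimum and 20 IOPS per GB;
# Throughput limit = IOPS limit x 16 KB.
MIN_IOPS, IOPS_PER_GB, BLOCK_KB = 800, 20, 16

def private_limits(size_gb):
    """Return (IOPS limit, Throughput limit in KB/s) for a PRIVATE disk."""
    iops = max(MIN_IOPS, IOPS_PER_GB * size_gb)
    return iops, iops * BLOCK_KB

print(private_limits(20))   # (800, 12800): minimum wins
print(private_limits(100))  # (2000, 32000): size-calculated wins
```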
Users can add disks and modify existing disks as described in:
The rules regarding adding or modifying disks vary depending on the storage controller and the disk speed associated with the disk, as well as the server's running state. In most cases, the system will allow disk changes on a running server. If the disk speed is either Provisioned IOPS or a disk speed in a location using the Burstable IOPS Architecture, the system must update the disk's associated Throughput setting. When performed on a running server, this requires the system to relocate the Cloud Server between physical ESXi hosts in order to implement the new Throughput settings, which adds some unique impacts to changes on a running server.
If the system does not have enough excess ESXi host capacity to ensure such a relocation can be accomplished, the system will block changes to a Provisioned IOPS or Burstable IOPS Architecture disk speed on a running server. Users can either choose to try the change at a later time or stop the server and initiate the change. Should this situation occur, the error will look like:
In the unlikely case that the relocation between ESXi hosts fails and the correct Throughput setting is not applied, the Cloud Server will be flagged with a status of "Requires Restart". Users will not be able to take any action against this Server until the server is either restarted via CloudControl or shut down.
When Restarting a Server to clear a "Requires Restart" state, users must use the "Restart" function in CloudControl. If the Server is restarted from within the Guest OS, the System will not be able to detect that and the Server will remain in "Requires Restart" status. For details on how to Restart a Server, refer to the Restart/Reset Server section in How to Manage a Cloud Server.
However, shutting down a server within the Guest OS will clear the "Requires Restart" state, as that change will be detected by the system.
The chart below summarizes the rules:
| Function | Rules Regarding Changes on Running Servers |
| --- | --- |
| Change Disk Speed | SCSI or SATA controllers: Cloud Server can be running, but servers using Provisioned IOPS or Burstable IOPS disk speeds will migrate ESXi hosts to implement Throughput changes. IDE controller: Cloud Server must be stopped. |
| Add Disk | SCSI or SATA controllers: Cloud Server can be running, but servers using Provisioned IOPS or Burstable IOPS disk speeds will migrate ESXi hosts to implement Throughput changes. IDE controller: Cloud Server must be stopped. |
| Expand Disk | SCSI controller: allowed while the Cloud Server is running, but servers using Provisioned IOPS or Burstable IOPS disk speeds will migrate ESXi hosts to implement Throughput changes. SATA controller: Cloud Server must be stopped. IDE controller: never allowed. |
| Change IOPS of a Disk with Provisioned IOPS Disk Speed | Cloud Server can be running, but servers using Provisioned IOPS or Burstable IOPS disk speeds will migrate ESXi hosts to implement Throughput changes. |