Deploy resources
Concept
Deployment tracking for new resources is handled in the 'Orders' menu, accessible from the green sidebar on the left side of the screen.
It allows you to view Cloud resources that have been ordered, are currently being deployed, and any potential errors within a Tenant of your Organization.
Note: At this stage, a global, organization-wide view of all resources deployed across the various tenants is not yet available. This will be addressed later through a dedicated portal for the sponsor (i.e., the signing party) and the management of their organization.
Resource deployment or deletion is performed within each product via the 'IaaS' and 'Network' menus located in the green sidebar on the left side of the screen.
You can also view deliveries directly through the Cloud Temple console notifications:
From the Orders page, you can monitor the progress of a delivery and, if needed, communicate with the team by adding comments or clarifications:
Note: It is not possible to initiate multiple orders of the same resource type simultaneously. You must wait for the current order to be processed and completed before placing a new one. This ensures efficient and orderly resource management within your environment.
Order a New Availability Zone
It is possible to add a new availability zone by accessing the "Order" menu. This option allows you to expand your resources and enhance the availability and resilience of your applications with just a few clicks:
Begin by selecting your desired location, first choosing the geographic region, then selecting the corresponding availability zone (AZ) from the available options. This step enables you to tailor your resource deployment according to your infrastructure's location and requirements:
Next, select the desired type of hypervisor cluster, choosing the one that best meets the performance and management needs of your cloud infrastructure:
Then, select the number of hypervisors and the desired amount of memory to adapt resources to your workload and specific cloud environment requirements:
Next, select the number of datastores to provision within the cluster, along with their types. Note that the maximum number of allowed datastores is 10, with a minimum of 2 datastores required. Each different datastore type will result in the creation of an additional datastoreCluster. For example, selecting 2 "live" type datastores and 1 "mass" type datastore will create 2 distinct datastoreClusters:
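The grouping rule above can be sketched as a quick calculation (a minimal illustration; the helper function is hypothetical, not a platform API):

```python
from collections import Counter

def datastore_clusters(datastore_types):
    """Count requested datastores per type; each distinct type yields
    one additional datastoreCluster (illustrative helper, not a platform API)."""
    return dict(Counter(datastore_types))

# Example from the text: 2 "live" datastores + 1 "mass" datastore
selection = ["live", "live", "mass"]
assert 2 <= len(selection) <= 10           # platform limits: 2 to 10 datastores
print(len(datastore_clusters(selection)))  # 2 distinct datastoreClusters
print(datastore_clusters(selection))       # {'live': 2, 'mass': 1}
```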
Define the required storage size for backup, ensuring capacity equivalent to your production storage. Consider an average compression ratio of 2 to optimize backup space and ensure effective data protection:
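As an illustration, the sizing guidance above can be expressed as a rough estimate (the helper name is illustrative, and the default ratio of 2 follows the text; actual compression varies with your data):

```python
def backup_space_estimate(production_tib, compression_ratio=2.0):
    """Rough backup footprint: production data divided by the
    average compression ratio (assumed to be 2, per the guidance above)."""
    return production_tib / compression_ratio

# 10 TiB of production data compresses to roughly 5 TiB of backup space
print(backup_space_estimate(10))  # 5.0
```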
Select the networks to propagate based on your needs. You also have the option to enable "Internet Access" if required, specifying the desired number of IP addresses, with a range between 1 and a maximum of 8:
You will then receive a summary of your selected options before validating your order:
Requesting Additional Storage Resources
The block storage allocation logic on compute clusters is based on IBM SVC (SAN Volume Controller) and IBM FlashSystem technologies. Storage is organized into LUNs of at least 500 GiB, presented according to the technology used:
- For VMware: as datastores grouped into SDRS clusters (Storage Distributed Resource Scheduler)
- For Bare Metal: as volumes
- For Open IaaS: as Storage Repositories (SRs)
Each datastore inherits a performance class defined in IOPS/TiB (ranging from 500 to 15,000 IOPS/TiB for FLASH, with no guarantee for MASS STORAGE). IOPS limits are enforced at the datastore level (not per VM), meaning all virtual machines sharing the same datastore share its allocated IOPS quota.
Key points to remember:
- Minimum size: 500 GiB per LUN
- Performance: Proportional to allocated volume (e.g., 2 TiB in Standard class = maximum 3,000 IOPS)
- Organization: Datastores of the same type are automatically grouped into datastore clusters
- Availability: 99.99% measured monthly, including maintenance windows
- Required space: Always reserve 10% free space for backup snapshots and an amount equivalent to the total RAM of VMs for .VSWP files
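The proportional-performance rule and the free-space reservations above can be sketched in a short calculation (the 1,500 IOPS/TiB default is inferred from the "2 TiB in Standard class = 3,000 IOPS" example; helper names are illustrative):

```python
def iops_quota(size_tib, iops_per_tib=1500):
    """Datastore IOPS ceiling, proportional to provisioned size.
    1,500 IOPS/TiB is assumed from the Standard-class example above."""
    return size_tib * iops_per_tib

def usable_capacity_gib(datastore_gib, total_vm_ram_gib):
    """Capacity left for VM disks after reserving 10% free space for
    backup snapshots plus the sum of VM RAM for .VSWP swap files."""
    return datastore_gib * 0.90 - total_vm_ram_gib

print(iops_quota(2))                  # 3000 IOPS for a 2 TiB Standard datastore
print(usable_capacity_gib(1000, 64))  # 836.0 GiB usable on a 1,000 GiB datastore
```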
Deploy a new compute cluster
Proceed with ordering a hypervisor cluster by selecting options tailored to your virtualization requirements. Define key characteristics such as the number of hypervisors, cluster type, amount of memory, and required computing resources:
Select the availability zone:
Choose the compute blade type:
You can then choose to select existing networks and propagate them, or create new networks directly at this stage, depending on your infrastructure needs. Note that the total number of configurable networks is limited to a maximum of 20:
You will then receive a summary of your selected options before confirming your order, after which you can view your ongoing order:
Deploy a new storage cluster
In the 'Order' menu, proceed with ordering a new storage cluster for your environment by selecting options that match your requirements in terms of capacity, performance, and redundancy. Choose the location:
Define the number of datastores to provision within the cluster, as well as their type, adhering to the following limits: a minimum of 2 datastores and a maximum of 10 can be configured. Select the datastore types that best meet your needs regarding performance, capacity, and usage, in order to optimize storage in your environment:
Select the desired storage type from the available options:
You will then access a complete summary of the options you have selected, allowing you to review all settings before definitively confirming your order:
Deploy a new datastore within a VMware SDRS cluster
In this example, we will add block storage for a VMware infrastructure.
To add an additional datastore to your SDRS cluster, navigate to the 'Infrastructure' submenu, then select 'VMWare'.
Choose the appropriate VMware stack and availability zone, then go to the 'Storage' submenu.
Select the SDRS cluster that matches your desired performance characteristics, and click the 'Add a datastore' button located in the table listing the existing datastores.
Note:
- The smallest activatable LUN size on a cluster is 500 GiB.
- Datastore performance ranges on average from 500 IOPS/TiB up to 15,000 IOPS/TiB. This is a software-based limit enforced at the storage controller level.
- The disk volume consumption billed to your organization is the sum of all LUNs across the availability zones used.
- The 'order' and 'compute' permissions are required on the account to perform this action.
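The billing rule noted above (disk consumption as the sum of all LUNs across availability zones) can be sketched as follows (a hypothetical example; the helper is illustrative, not a billing API):

```python
def billed_volume_gib(luns_by_az):
    """Billed block-storage consumption: the sum of every LUN size
    across all availability zones in use (per the note above)."""
    return sum(size for az_luns in luns_by_az.values() for size in az_luns)

# Hypothetical tenant with LUNs in two AZs (500 GiB minimum per LUN)
luns = {"az1": [500, 2048], "az2": [500]}
print(billed_volume_gib(luns))  # 3048 GiB billed
```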
Ordering New Networks
Networks on the Cloud Temple platform are Layer 2 networks (VLANs) based on VPLS (Virtual Private LAN Service) technology, which connects them seamlessly across your availability zones within a region. Networks can also be shared between your tenants and terminated in hosting zones. In essence, you can think of a Cloud Temple network as an 802.1q VLAN available at any point within your tenant.
The platform provides guaranteed network performance:
- Intra-AZ latency: < 3 ms
- Inter-AZ latency: < 5 ms
Network Flexibility:
- A network can be shared across multiple clusters within the same availability zone
- A network can be propagated across multiple availability zones within the same region
- A network can be shared between different tenants within your organization
- A network can be terminated in a hosting zone for your physical equipment
- Limit: Maximum of 20 networks per order. You can place multiple successive orders to extend this number according to your needs
Ordering a new network and defining sharing policies between your tenants is done in the 'Network' menu of the green sidebar on the left side of the screen. Networks are first created, then a separate order is generated to propagate them. You can track the progress of ongoing orders by accessing the "Orders" tab in the menu, or by clicking on the information labels that redirect you to active or processing orders.
It is also possible to propagate existing networks, or to separate the two steps—first creating the network, then propagating it later as needed. The propagation option is available in the settings of the selected network:
Click on the "Propagate" option for an existing network, then select your desired propagation target. This step allows you to define the location or resources where the network should be propagated:
Disabling a network
A network can also be disabled if necessary. This option allows you to temporarily pause access to or usage of the network without permanently deleting it, providing flexibility in managing your infrastructure according to your needs.
The disable option is located within the settings of the selected network.
Add additional hypervisors to a compute cluster
A compute cluster is a grouping of hypervisors that must comply with the following rules:
For VMware ESXi Clusters
Homogeneity Rules:
- All hosts within a cluster must be of the same server type (ECO, STANDARD, ADVANCE, PERFORMANCE, etc.)
- All hosts must belong to the same tenant and availability zone
- Limit: Maximum of 32 hypervisors per cluster
Memory Allocation:
- Each server is delivered with all physical memory activated from the start
- Example: A cluster of 3 STANDARD v3 servers (each with 384 GB of physical memory) = 3 × 384 GB = 1,152 GB available
- Recommendation: Do not exceed 85% memory utilization per server to avoid VMware's compression and ballooning mechanisms
High Availability:
- Recommended minimum: 2 hypervisors per cluster to benefit from the 99.99% SLA
- Enable the VMware HA (High Availability) feature to automatically restart VMs in case of host failure
Adding hypervisors to a compute cluster is done in the 'IaaS' menu in the green sidebar on the left side of the screen.
In the following example, we will add compute capacity to a VMware-based hypervisor cluster.
Go to the 'Infrastructure' submenu, then 'VMWare'. Select the VMware stack, the availability zone, and the compute cluster.
In this example, it is 'clu001-ucs12'. Click the 'Add Host' button located in the top-right corner of the table listing the hosts.
Note:
- Cluster configuration must be homogeneous: mixing different hypervisor types within a cluster is not allowed. All servers must be of the same type.
- The 'order' and 'compute' permissions are required for the account to perform this action.
For Open IaaS clusters
Open IaaS clusters follow similar rules regarding homogeneity and high availability. Compute resource management is also performed via the 'OpenIaaS' menu, with the same access rights prerequisites.
Add Additional Memory Resources to a Compute Cluster
Memory allocation on compute clusters works as follows:
Memory Allocation Principle:
- All compute blades are delivered with the maximum physical memory installed
- A software-level limitation is applied at the VMware cluster level to match the billed RAM
- Each blade has access to the full amount of activated physical memory within the cluster
Cluster-Scale Memory Sizing:
- Minimum: number of hosts × 128 GB of memory
- Maximum: number of hosts × physical memory capacity per blade
Example: For a three-host cluster using STANDARD v3 hosts (384 GB physical memory per blade)
- Total available memory: 3 × 384 GB = 1152 GB
Important Recommendations:
- Do not exceed 85% average memory utilization per blade to avoid VMware ballooning and compression
- Ensure sufficient disk space is available for swap files (.VSWP) created at each VM startup (size = VM memory allocation)
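The sizing bounds and the 85% recommendation above can be checked numerically (using the STANDARD v3 figures from the example; helper names are illustrative):

```python
def cluster_memory_bounds(hosts, blade_physical_gb):
    """Orderable RAM range for a cluster: a software limit between
    hosts x 128 GB and hosts x physical memory per blade."""
    return hosts * 128, hosts * blade_physical_gb

def recommended_ceiling_gb(activated_gb, max_utilization=0.85):
    """Stay under ~85% average use per blade to avoid VMware
    ballooning and memory compression."""
    return activated_gb * max_utilization

low, high = cluster_memory_bounds(3, 384)      # STANDARD v3 example from the text
print(low, high)                               # 384 1152
print(round(recommended_ceiling_gb(high), 1))  # ~979.2 GB practical target
```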
To add additional RAM to a cluster, navigate to the cluster configuration (as previously described for adding a compute host) and click on 'Modify Memory'.
Note:
- Machines are delivered with the full physical memory installed. Unlocking additional memory is purely a software-level activation at the cluster level.
- It is not possible to modify the physical memory capacity of a blade type. Always consider the maximum memory capacity of a blade when creating a cluster.
- The 'order' and 'compute' permissions are required on the account to perform this action.