
# Terraform Cloud Temple Tutorials

This page gathers practical tutorials for using the Terraform Cloud Temple provider with various services.
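All of the examples below assume the provider is already declared. A minimal sketch (the source address comes from the Terraform Registry; authentication is configured separately, typically through environment variables — see the provider documentation):

```hcl
terraform {
  required_providers {
    cloudtemple = {
      source = "Cloud-Temple/cloudtemple"
    }
  }
}

provider "cloudtemple" {
  # Credentials are intentionally omitted here; supply them as
  # described in the provider documentation.
}
```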


## IaaS VMware

### Create an empty VM

**Objective**: Create a basic VMware virtual machine without an operating system.

**Prerequisites**:

- Access to a Cloud Temple datacenter
- API credentials configured
- Required permissions:
  - `compute_iaas_vmware_read`
  - `compute_iaas_vmware_management`
  - `compute_iaas_vmware_virtual_machine_power`
  - `compute_iaas_vmware_infrastructure_read`
  - `backup_iaas_vmware_read`
  - `backup_iaas_vmware_write`
  - `activity_read`
  - `tag_read`
  - `tag_write`

**Code**:

```hcl
# Retrieving required resources
data "cloudtemple_compute_virtual_datacenter" "dc" {
  name = "DC-EQX6"
}

data "cloudtemple_compute_host_cluster" "cluster" {
  name = "clu001-ucs01"
}

data "cloudtemple_compute_datastore_cluster" "datastore" {
  name = "sdrs001-LIVE"
}

# Creating an empty VM
resource "cloudtemple_compute_virtual_machine" "empty_vm" {
  name = "vm-empty-01"

  # Hardware configuration
  memory               = 4 * 1024 * 1024 * 1024 # 4 GB
  cpu                  = 2
  num_cores_per_socket = 1

  # Hot-add enabled
  cpu_hot_add_enabled    = true
  memory_hot_add_enabled = true

  # Location
  datacenter_id        = data.cloudtemple_compute_virtual_datacenter.dc.id
  host_cluster_id      = data.cloudtemple_compute_host_cluster.cluster.id
  datastore_cluster_id = data.cloudtemple_compute_datastore_cluster.datastore.id

  # Guest operating system
  guest_operating_system_moref = "ubuntu64Guest"

  tags = {
    environment = "demo"
    created_by  = "terraform"
  }
}
```

**Explanations**:
- `guest_operating_system_moref`: Defines the OS type for VMware Tools drivers
- The VM is created without disk or network (to be added separately)
- Hot-add options allow adding CPU/RAM on-the-fly
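Memory and disk sizes in these examples are expressed in bytes, hence the `4 * 1024 * 1024 * 1024` arithmetic. A small `locals` helper keeps this readable (pure Terraform, nothing provider-specific; the resource shown is abbreviated):

```hcl
locals {
  gib = 1024 * 1024 * 1024
}

resource "cloudtemple_compute_virtual_machine" "sized_vm" {
  name   = "vm-sized-01"
  memory = 8 * local.gib # 8 GB
  cpu    = 2

  # ... location and other attributes as in the example above ...
}
```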

### Create a VM from the Marketplace

**Objective**: Deploy a VM from a Cloud Temple Marketplace image.

**Code**:

```hcl
# Retrieving an item from the Marketplace
data "cloudtemple_marketplace_item" "ubuntu_2404" {
  name = "Ubuntu 24.04 LTS"
}

data "cloudtemple_compute_virtual_datacenter" "dc" {
  name = "DC-EQX6"
}

data "cloudtemple_compute_host_cluster" "cluster" {
  name = "clu001-ucs01"
}

data "cloudtemple_compute_datastore" "ds" {
  name = "ds001-data01"
}

data "cloudtemple_backup_sla_policy" "daily" {
  name = "sla001-daily-par7s"
}

# Deployment from the Marketplace
resource "cloudtemple_compute_virtual_machine" "marketplace_vm" {
  name = "ubuntu-marketplace-01"

  # Marketplace source
  marketplace_item_id = data.cloudtemple_marketplace_item.ubuntu_2404.id

  # Configuration
  memory               = 8 * 1024 * 1024 * 1024 # 8 GB
  cpu                  = 4
  num_cores_per_socket = 2

  datacenter_id   = data.cloudtemple_compute_virtual_datacenter.dc.id
  host_cluster_id = data.cloudtemple_compute_host_cluster.cluster.id
  datastore_id    = data.cloudtemple_compute_datastore.ds.id

  power_state = "on"

  backup_sla_policies = [
    data.cloudtemple_backup_sla_policy.daily.id
  ]

  tags = {
    source = "marketplace"
  }
}
```

**Explanations**:

- `marketplace_item_id`: References a ready-to-use image
- `datastore_id`: A specific datastore is required for Marketplace deployment
- The image already includes a pre-configured operating system

### Create a VM from the Content Library

**Objective**: Deploy a VM from a template in the VMware Content Library.

**Code**:

```hcl
# Retrieving the Content Library
data "cloudtemple_compute_content_library" "public" {
  name = "PUBLIC"
}

# Retrieving a specific item
data "cloudtemple_compute_content_library_item" "centos" {
  content_library_id = data.cloudtemple_compute_content_library.public.id
  name               = "centos-8-template"
}

data "cloudtemple_compute_virtual_datacenter" "dc" {
  name = "DC-EQX6"
}

data "cloudtemple_compute_host_cluster" "cluster" {
  name = "clu001-ucs01"
}

data "cloudtemple_compute_datastore_cluster" "sdrs" {
  name = "sdrs001-LIVE"
}

data "cloudtemple_compute_datastore" "ds" {
  name = "ds001-data01"
}

data "cloudtemple_compute_network" "vlan" {
  name = "VLAN_201"
}

# Deployment from the Content Library
resource "cloudtemple_compute_virtual_machine" "content_library_vm" {
  name = "centos-from-cl-01"

  # Content Library source
  content_library_id      = data.cloudtemple_compute_content_library.public.id
  content_library_item_id = data.cloudtemple_compute_content_library_item.centos.id

  datacenter_id        = data.cloudtemple_compute_virtual_datacenter.dc.id
  host_cluster_id      = data.cloudtemple_compute_host_cluster.cluster.id
  datastore_cluster_id = data.cloudtemple_compute_datastore_cluster.sdrs.id
  datastore_id         = data.cloudtemple_compute_datastore.ds.id

  # OS disk configuration
  os_disk {
    capacity = 50 * 1024 * 1024 * 1024 # 50 GB
  }

  # OS network adapter configuration
  os_network_adapter {
    network_id = data.cloudtemple_compute_network.vlan.id
  }

  tags = {
    source = "content-library"
  }
}
```

**Explanations**:

- The `os_disk` and `os_network_adapter` blocks configure the resources deployed by the template
- These blocks can only be used at creation time (see the dedicated section)

### Configure Cloud-Init (VMware)

**Objective**: Automate VM configuration at first boot using Cloud-Init.

**Prerequisites**: Use a Cloud-Init compatible image (e.g., an Ubuntu Cloud Image in OVF format).

**Cloud-Init files**:

Create `cloud-init/user-data.yml`:

```yaml
#cloud-config
hostname: my-server
fqdn: my-server.example.com

users:
  - name: admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2E... your-key-here

packages:
  - nginx
  - git
  - curl

runcmd:
  - systemctl enable nginx
  - systemctl start nginx
```

Create `cloud-init/network-config.yml`:

```yaml
version: 2
ethernets:
  eth0:
    dhcp4: false
    addresses:
      - 192.168.1.10/24
    gateway4: 192.168.1.1
    nameservers:
      addresses:
        - 8.8.8.8
        - 8.8.4.4
```

**Terraform Code**:

```hcl
data "cloudtemple_compute_content_library" "local" {
  name = "local-content-library"
}

data "cloudtemple_compute_content_library_item" "ubuntu_cloudimg" {
  content_library_id = data.cloudtemple_compute_content_library.local.id
  name               = "ubuntu-jammy-22.04-cloudimg"
}

resource "cloudtemple_compute_virtual_machine" "cloudinit_vm" {
  name = "ubuntu-cloudinit-01"

  memory               = 8 * 1024 * 1024 * 1024
  cpu                  = 4
  num_cores_per_socket = 2

  datacenter_id   = data.cloudtemple_compute_virtual_datacenter.dc.id
  host_cluster_id = data.cloudtemple_compute_host_cluster.cluster.id
  datastore_id    = data.cloudtemple_compute_datastore.ds.id

  content_library_id      = data.cloudtemple_compute_content_library.local.id
  content_library_item_id = data.cloudtemple_compute_content_library_item.ubuntu_cloudimg.id

  power_state = "on"

  # Cloud-Init configuration (VMware OVF datasource)
  cloud_init = {
    user-data      = filebase64("./cloud-init/user-data.yml")
    network-config = filebase64("./cloud-init/network-config.yml")
    hostname       = "my-server"
    password       = "RANDOM"
  }
}
```

**Supported Cloud-Init keys (VMware)**:

- `user-data`: Main configuration (base64-encoded)
- `network-config`: Network configuration (base64-encoded)
- `public-keys`: Public SSH keys
- `hostname`: Hostname
- `password`: Password (or `RANDOM`)
- `instance-id`: Unique instance identifier
- `seedfrom`: URL source for the configuration
:::info[Limitation]
Cloud-Init runs only during the first boot of the VM.
:::
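The other keys follow the same map pattern. A sketch using `public-keys` and `instance-id` (the key names come from the list above; the VM name and values are illustrative):

```hcl
resource "cloudtemple_compute_virtual_machine" "cloudinit_keys_vm" {
  name = "ubuntu-cloudinit-02"

  # ... same configuration as the previous example ...

  cloud_init = {
    public-keys = file("~/.ssh/id_ed25519.pub")
    hostname    = "my-second-server"
    instance-id = "iid-my-second-server"
  }
}
```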

### Create a virtual disk and attach it to a VM

**Objective**: Add additional storage to an existing virtual machine.

**Code**:

```hcl
# Reference to an existing VM
data "cloudtemple_compute_virtual_machine" "existing_vm" {
  name = "my-existing-vm"
}

# Creating a virtual disk
resource "cloudtemple_compute_virtual_disk" "data_disk" {
  name = "data-disk-01"

  # Attachment to the VM
  virtual_machine_id = data.cloudtemple_compute_virtual_machine.existing_vm.id

  # Disk size
  capacity = 100 * 1024 * 1024 * 1024 # 100 GB

  # Disk mode
  disk_mode = "persistent"

  # Provisioning type
  provisioning_type = "dynamic"
}
```

**Available disk modes**:

- `persistent`: Changes are immediately and permanently written to the virtual disk.
- `independent_nonpersistent`: Changes are recorded in a rollback journal and discarded on shutdown.
- `independent_persistent`: Changes are immediately and permanently written to the virtual disk; unaffected by snapshots.

**Provisioning types**:

- `dynamic`: Allocates space dynamically as needed, saving storage; creation is fast.
- `staticImmediate`: Allocates all disk space at creation time; blocks are zeroed on first write.
- `staticDiffered`: Allocates and zeroes all disk space at creation time.
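Since these values form a fixed vocabulary, a variable with a `validation` block can catch typos at plan time (pure Terraform; nothing here is provider-specific):

```hcl
variable "provisioning_type" {
  type        = string
  default     = "dynamic"
  description = "Provisioning type for additional disks."

  validation {
    condition     = contains(["dynamic", "staticImmediate", "staticDiffered"], var.provisioning_type)
    error_message = "provisioning_type must be one of: dynamic, staticImmediate, staticDiffered."
  }
}
```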

### Create a network interface and attach it to a VM

**Objective**: Add a network card to a virtual machine.

**Code**:

```hcl
# Network retrieval
data "cloudtemple_compute_network" "production_vlan" {
  name = "PROD-VLAN-100"
}

# Reference to the VM
data "cloudtemple_compute_virtual_machine" "vm" {
  name = "my-vm"
}

# Creating a network adapter
resource "cloudtemple_compute_network_adapter" "eth1" {
  name = "Network adapter 2"

  # Target VM
  virtual_machine_id = data.cloudtemple_compute_virtual_machine.vm.id

  # Network
  network_id = data.cloudtemple_compute_network.production_vlan.id

  # Adapter type
  type = "VMXNET3"

  # Connect automatically on power-on
  connect_on_power_on = true

  # MAC address (optional, generated automatically if omitted)
  # mac_address = "00:50:56:xx:xx:xx"
}
```
:::info[Supported network adapter types]
The supported adapter types depend on the operating system running on the virtual machine as well as the VMware version.
:::

### Create a virtual controller and attach it to a VM

**Objective**: Add a disk controller to a virtual machine.

**Code**:

```hcl
# Reference to the VM
data "cloudtemple_compute_virtual_machine" "vm" {
  name = "my-vm"
}

# Creating a SCSI controller
resource "cloudtemple_compute_virtual_controller" "scsi_controller" {
  name = "SCSI controller 1"

  # Target VM
  virtual_machine_id = data.cloudtemple_compute_virtual_machine.vm.id

  # Controller type
  type = "SCSI"
}
```

**Controller types**:

- `USB2`
- `USB3`
- `SCSI`
- `CD/DVD`
- `NVME`
- `PCI`
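The other controller types use the same resource shape; for example, an NVMe controller would presumably look like this (a sketch — only the SCSI variant is shown above, so verify the `type` value against the resource schema):

```hcl
resource "cloudtemple_compute_virtual_controller" "nvme_controller" {
  name               = "NVME controller 0"
  virtual_machine_id = data.cloudtemple_compute_virtual_machine.vm.id
  type               = "NVME"
}
```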

## IaaS Open Source

### Create a VM from a template

**Objective**: Deploy a virtual machine from a template in the catalog.

**Prerequisites**:

- Access to the OpenSource Cloud Temple infrastructure
- Required permissions:
  - `compute_iaas_opensource_read`
  - `compute_iaas_opensource_management`
  - `compute_iaas_opensource_virtual_machine_power`
  - `compute_iaas_opensource_infrastructure_read`
  - `backup_iaas_opensource_read`
  - `backup_iaas_opensource_write`
  - `activity_read`
  - `tag_read`
  - `tag_write`

**Code**:

```hcl
# Retrieving a template
data "cloudtemple_compute_iaas_opensource_template" "almalinux" {
  name = "AlmaLinux 8"
}

# Host retrieval
data "cloudtemple_compute_iaas_opensource_host" "host" {
  name = "host-01"
}

# Retrieving the storage repository
data "cloudtemple_compute_iaas_opensource_storage_repository" "sr" {
  name = "sr001-local-storage"
}

# Network retrieval
data "cloudtemple_compute_iaas_opensource_network" "network" {
  name = "VLAN-100"
}

# Retrieving the backup policy
data "cloudtemple_backup_iaas_opensource_policy" "daily" {
  name = "daily-backup"
}

# VM creation
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "openstack_vm" {
  name        = "almalinux-vm-01"
  power_state = "on"

  # Source
  template_id = data.cloudtemple_compute_iaas_opensource_template.almalinux.id
  host_id     = data.cloudtemple_compute_iaas_opensource_host.host.id

  # Hardware configuration
  memory               = 8 * 1024 * 1024 * 1024 # 8 GB
  cpu                  = 4
  num_cores_per_socket = 2

  # Options
  boot_firmware     = "uefi"
  secure_boot       = false
  auto_power_on     = true
  high_availability = "best-effort"

  # OS disk (must match the template)
  os_disk {
    name                  = "os-disk"
    connected             = true
    size                  = 20 * 1024 * 1024 * 1024 # 20 GB
    storage_repository_id = data.cloudtemple_compute_iaas_opensource_storage_repository.sr.id
  }

  # OS network adapter
  os_network_adapter {
    network_id      = data.cloudtemple_compute_iaas_opensource_network.network.id
    tx_checksumming = true
    attached        = true
  }

  # Backup
  backup_sla_policies = [
    data.cloudtemple_backup_iaas_opensource_policy.daily.id
  ]

  # Boot order
  boot_order = [
    "Hard-Drive",
    "DVD-Drive",
  ]

  tags = {
    environment = "production"
    os          = "almalinux"
  }
}
```

**Explanations**:

- `high_availability`: Available options are `disabled`, `restart`, and `best-effort` (see the documentation on High Availability)
- `boot_firmware`: `bios` or `uefi`
- `secure_boot`: Only available with UEFI

### Create a VM from the Marketplace (OpenSource)

**Objective**: Deploy a VM from the Cloud Temple Marketplace on the OpenSource IaaS.

**Code**:

```hcl
# Retrieving a Marketplace item
data "cloudtemple_marketplace_item" "ubuntu_2404" {
  name = "Ubuntu 24.04 LTS"
}

data "cloudtemple_compute_iaas_opensource_storage_repository" "sr" {
  name = "sr001-shared-storage"
}

data "cloudtemple_compute_iaas_opensource_network" "network" {
  name = "PROD-NETWORK"
}

data "cloudtemple_backup_iaas_opensource_policy" "nobackup" {
  name = "nobackup"
}

# Deployment from the Marketplace
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "marketplace_vm" {
  name        = "ubuntu-marketplace-01"
  power_state = "on"

  # Marketplace source
  marketplace_item_id   = data.cloudtemple_marketplace_item.ubuntu_2404.id
  storage_repository_id = data.cloudtemple_compute_iaas_opensource_storage_repository.sr.id

  memory               = 6 * 1024 * 1024 * 1024
  cpu                  = 4
  num_cores_per_socket = 4
  boot_firmware        = "uefi"
  secure_boot          = false

  auto_power_on     = true
  high_availability = "best-effort"

  os_network_adapter {
    network_id      = data.cloudtemple_compute_iaas_opensource_network.network.id
    tx_checksumming = true
    attached        = true
  }

  os_disk {
    connected             = true
    storage_repository_id = data.cloudtemple_compute_iaas_opensource_storage_repository.sr.id
  }

  backup_sla_policies = [
    data.cloudtemple_backup_iaas_opensource_policy.nobackup.id
  ]

  boot_order = [
    "Hard-Drive",
    "DVD-Drive",
  ]

  tags = {
    source = "marketplace"
  }
}
```

### Configure Replication

**Objective**: Set up a replication policy for a VM.

**Code**:

```hcl
# Retrieving the target storage repository
data "cloudtemple_compute_iaas_opensource_storage_repository" "replication_target" {
  name               = "target_storage_repository_name"
  machine_manager_id = "availability_zone_id"
}

# Creating a replication policy
resource "cloudtemple_compute_iaas_opensource_replication_policy" "policy_hourly" {
  name                  = "replication-policy-6h"
  storage_repository_id = data.cloudtemple_compute_iaas_opensource_storage_repository.replication_target.id

  interval {
    hours = 1
  }
}

# Association with a VM
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "replicated_vm" {
  name = "replicated-vm-01"

  # ... standard configuration ...

  # Assignment of the replication policy
  replication_policy_id = cloudtemple_compute_iaas_opensource_replication_policy.policy_hourly.id
}
```

**Explanations**:

- `interval`: Replication interval; can be specified in minutes or hours.
- `storage_repository_id`: The storage repository to which the VM's disks will be replicated. It must be located in a different Availability Zone (AZ) than the original VM.
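For a shorter cadence, the interval can also be expressed in minutes, as stated above. A sketch (the `minutes` attribute name is inferred from that description; check the resource schema):

```hcl
resource "cloudtemple_compute_iaas_opensource_replication_policy" "policy_30m" {
  name                  = "replication-policy-30m"
  storage_repository_id = data.cloudtemple_compute_iaas_opensource_storage_repository.replication_target.id

  interval {
    minutes = 30
  }
}
```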

### Configure Backup

**Objective**: Apply a backup policy to a VM.

**Code**:

```hcl
# Retrieving backup policies
data "cloudtemple_backup_iaas_opensource_policy" "daily" {
  name = "daily-backup"
}

data "cloudtemple_backup_iaas_opensource_policy" "weekly" {
  name = "weekly-backup"
}

# VM with multiple backup policies
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "backup_vm" {
  name = "important-vm-01"

  # ... standard configuration ...

  # Multiple policies can be applied
  backup_sla_policies = [
    data.cloudtemple_backup_iaas_opensource_policy.daily.id,
    data.cloudtemple_backup_iaas_opensource_policy.weekly.id,
  ]
}
```
:::info[Mandatory Backup]
In a SecNumCloud environment, at least one backup policy must be defined in order to start the VM.
:::

### Configure High Availability

**Objective**: Set up the HA behavior for a virtual machine.

**Code**:

```hcl
# VM with HA disabled
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "no_ha" {
  name              = "dev-vm-01"
  high_availability = "disabled"
  # ...
}

# VM with priority restart
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "priority_ha" {
  name              = "prod-vm-01"
  high_availability = "restart"
  # ...
}

# VM with best-effort restart
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "besteff_ha" {
  name              = "test-vm-01"
  high_availability = "best-effort"
  # ...
}
```

**Available HA modes** (see the documentation on High Availability in the OpenSource infrastructure):

| Mode | Description | Usage |
| --- | --- | --- |
| `disabled` | No HA | Development environments |
| `restart` | High-priority restart | Critical production |
| `best-effort` | Restart if resources are available | Standard production |
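One way to keep these modes consistent across environments is a lookup map (pure Terraform; the VM block is abbreviated):

```hcl
locals {
  ha_by_environment = {
    dev  = "disabled"
    test = "best-effort"
    prod = "restart"
  }
}

resource "cloudtemple_compute_iaas_opensource_virtual_machine" "env_vm" {
  name              = "app-vm-01"
  high_availability = local.ha_by_environment["prod"]

  # ... standard configuration ...
}
```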

### Configure OpenSource Cloud-Init

**Objective**: Automate configuration using Cloud-Init (NoCloud datasource).

**Prerequisites**: A Cloud-Init NoCloud-compatible image.

**Cloud-Init files**:

Create `cloud-init/cloud-config.yml`:

```yaml
#cloud-config
hostname: openiaas-server

users:
  - name: cloudadmin
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, docker
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2E... your-key

packages:
  - docker.io
  - docker-compose
  - htop

runcmd:
  - systemctl enable docker
  - systemctl start docker
  - usermod -aG docker cloudadmin
```

Create `cloud-init/network-config.yml`:

```yaml
version: 2
ethernets:
  ens160:
    dhcp4: false
    addresses:
      - 0.0.0.0/24
    routes:
      - to: default
        via: 0.0.0.0
    nameservers:
      addresses:
        - 0.0.0.0
```
:::note
Adapt the Cloud-Init configuration to your needs and to the Cloud-Init version installed on your machine; the format and syntax may vary between versions.
:::

**Terraform Code**:

```hcl
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "cloudinit_vm" {
  name        = "ubuntu-cloudinit-01"
  power_state = "on"

  template_id = data.cloudtemple_compute_iaas_opensource_template.ubuntu_cloud.id
  host_id     = data.cloudtemple_compute_iaas_opensource_host.host.id

  memory               = 4 * 1024 * 1024 * 1024
  cpu                  = 2
  num_cores_per_socket = 2

  auto_power_on     = true
  high_availability = "best-effort"

  os_disk {
    connected             = true
    size                  = 30 * 1024 * 1024 * 1024
    storage_repository_id = data.cloudtemple_compute_iaas_opensource_storage_repository.sr.id
  }

  os_network_adapter {
    network_id = data.cloudtemple_compute_iaas_opensource_network.network.id
    attached   = true
  }

  # Cloud-Init configuration (NoCloud datasource)
  cloud_init = {
    cloud_config   = file("./cloud-init/cloud-config.yml")
    network_config = file("./cloud-init/network-config.yml")
  }

  backup_sla_policies = [
    data.cloudtemple_backup_iaas_opensource_policy.daily.id
  ]

  boot_order = ["Hard-Drive"]
}
```

**Differences from VMware**:

- OpenSource uses the NoCloud datasource
- Supported keys: `cloud_config` and `network_config`
- No need for `filebase64()`; use `file()` directly
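Because the NoCloud keys take plain strings, the standard `templatefile()` function can parameterize the cloud-config. A sketch assuming a hypothetical `cloud-init/cloud-config.yml.tpl` template containing a `${hostname}` placeholder:

```hcl
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "templated_vm" {
  name = "templated-cloudinit-01"

  # ... standard configuration ...

  cloud_init = {
    cloud_config = templatefile("./cloud-init/cloud-config.yml.tpl", {
      hostname = "templated-cloudinit-01"
    })
    network_config = file("./cloud-init/network-config.yml")
  }
}
```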

## Understanding `os_disk` and `os_network_adapter`

The `os_disk` and `os_network_adapter` blocks are special blocks that can only be used during the creation of a virtual machine from:

- a Content Library
- a template
- a Marketplace cloud template
- a clone of an existing VM

:::info
They reference the virtual disks and network adapters deployed by the template, allowing their parameters to be modified later without importing them manually. These blocks do not create any new resources.
:::

**Important characteristics**:

1. **Creation only**: These blocks can only be defined during the initial `terraform apply`
2. **Alternative**: Use the `terraform import` command to import the resources manually

### Using `os_disk`

**VMware IaaS**:

```hcl
resource "cloudtemple_compute_virtual_machine" "vm_with_os_disk" {
  name = "vm-content-library"

  content_library_id      = data.cloudtemple_compute_content_library.cl.id
  content_library_item_id = data.cloudtemple_compute_content_library_item.item.id

  datacenter_id   = data.cloudtemple_compute_virtual_datacenter.dc.id
  host_cluster_id = data.cloudtemple_compute_host_cluster.cluster.id
  datastore_id    = data.cloudtemple_compute_datastore.ds.id

  # Configuration of the OS disk that exists in the template
  os_disk {
    capacity  = 100 * 1024 * 1024 * 1024 # Resize to 100 GB
    disk_mode = "persistent"
  }
}
```

**OpenSource IaaS**:

```hcl
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "vm_with_os_disk" {
  name = "openiaas-vm"

  template_id = data.cloudtemple_compute_iaas_opensource_template.template.id
  host_id     = data.cloudtemple_compute_iaas_opensource_host.host.id

  memory               = 8 * 1024 * 1024 * 1024
  cpu                  = 4
  num_cores_per_socket = 2
  power_state          = "on"

  # OS disk configuration
  os_disk {
    name                  = "os-disk"
    connected             = true
    size                  = 50 * 1024 * 1024 * 1024 # 50 GB
    storage_repository_id = data.cloudtemple_compute_iaas_opensource_storage_repository.sr.id
  }

  # ... other configuration ...
}
```

### Using `os_network_adapter`

**VMware IaaS**:

```hcl
resource "cloudtemple_compute_virtual_machine" "vm_with_network" {
  name = "vm-with-network"

  content_library_id      = data.cloudtemple_compute_content_library.cl.id
  content_library_item_id = data.cloudtemple_compute_content_library_item.item.id

  datacenter_id   = data.cloudtemple_compute_virtual_datacenter.dc.id
  host_cluster_id = data.cloudtemple_compute_host_cluster.cluster.id
  datastore_id    = data.cloudtemple_compute_datastore.ds.id

  # Configuration of the network adapter from the template
  os_network_adapter {
    network_id   = data.cloudtemple_compute_network.vlan.id
    auto_connect = true
    connected    = true
    mac_address  = "00:50:56:12:34:56" # Optional
  }
}
```

**OpenSource IaaS**:

```hcl
resource "cloudtemple_compute_iaas_opensource_virtual_machine" "vm_with_network" {
  name = "openiaas-vm-network"

  template_id = data.cloudtemple_compute_iaas_opensource_template.template.id
  host_id     = data.cloudtemple_compute_iaas_opensource_host.host.id

  memory               = 4 * 1024 * 1024 * 1024
  cpu                  = 2
  num_cores_per_socket = 2
  power_state          = "on"

  # Network adapter configuration
  os_network_adapter {
    network_id      = data.cloudtemple_compute_iaas_opensource_network.network.id
    mac_address     = "c2:db:4f:15:41:3e" # Optional
    tx_checksumming = true
    attached        = true
  }

  # ... other configuration ...
}
```
:::note
You can combine both approaches: reference the disks and/or network adapters deployed with the VM via these blocks, and add extra ones with the dedicated virtual disk and network adapter resources (`cloudtemple_compute_virtual_disk` and `cloudtemple_compute_network_adapter` on VMware, and their `cloudtemple_compute_iaas_opensource_*` counterparts on OpenSource).
:::


**Best practices**:

1. Use `os_disk` and `os_network_adapter` for the initial template configuration
2. Use the dedicated resources to add extra disks and adapters

## Object Storage

### Create a bucket

**Objective**: Create an S3-compatible object storage bucket.

**Prerequisites**: the `object-storage_write` permission

**Code**:

```hcl
# Private bucket
resource "cloudtemple_object_storage_bucket" "private_bucket" {
  name        = "my-private-bucket"
  access_type = "private"
}

# Public bucket
resource "cloudtemple_object_storage_bucket" "public_bucket" {
  name        = "my-public-bucket"
  access_type = "public"
}

# Bucket with custom access (IP whitelist)
resource "cloudtemple_object_storage_bucket" "custom_bucket" {
  name        = "my-custom-bucket"
  access_type = "custom"

  # IP/CIDR whitelist
  whitelist = [
    "10.0.0.0/8",
    "192.168.1.0/24",
    "203.0.113.42/32"
  ]
}

# Bucket with versioning enabled
resource "cloudtemple_object_storage_bucket" "versioned_bucket" {
  name        = "my-versioned-bucket"
  access_type = "private"
  versioning  = "Enabled"
}

# Useful outputs
output "bucket_endpoint" {
  value = cloudtemple_object_storage_bucket.private_bucket.endpoint
}

output "bucket_namespace" {
  value = cloudtemple_object_storage_bucket.private_bucket.namespace
}
```

**Access types**:

- `private`: Access restricted to the tenant's IP addresses
- `public`: Public read access
- `custom`: Access limited to the whitelisted IPs

**Versioning**:

- `Enabled`: Enables object versioning
- `Suspended`: Suspends versioning (existing versions are preserved)

### Create a storage account

**Objective**: Create a storage account with S3 credentials.

**Code**:

```hcl
# Creating a storage account
resource "cloudtemple_object_storage_storage_account" "app_account" {
  name = "application-storage-account"
}

# Outputs to use the credentials
output "s3_access_key" {
  value = cloudtemple_object_storage_storage_account.app_account.access_key_id
}

output "s3_secret_key" {
  value     = cloudtemple_object_storage_storage_account.app_account.access_secret_key
  sensitive = true
}

output "s3_endpoint" {
  value = "https://${cloudtemple_object_storage_bucket.my_bucket.namespace}.s3.fr1.cloud-temple.com"
}
```
:::warning[Sensitive information]
The credentials are displayed only once. Store them securely (e.g., in HashiCorp Vault or AWS Secrets Manager).
:::
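As one concrete option for the secure storage mentioned above, the credentials can be pushed to HashiCorp Vault from the same configuration. A sketch assuming the official `vault` provider is configured and a KV v2 engine is mounted at `secret/`:

```hcl
resource "vault_kv_secret_v2" "s3_credentials" {
  mount = "secret"
  name  = "cloudtemple/object-storage/app-account"

  data_json = jsonencode({
    access_key_id     = cloudtemple_object_storage_storage_account.app_account.access_key_id
    access_secret_key = cloudtemple_object_storage_storage_account.app_account.access_secret_key
  })
}
```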

### Create ACLs via a dedicated resource

**Objective**: Manage access permissions to buckets using ACLs.

**Code**:

```hcl
# Retrieving the available roles
data "cloudtemple_object_storage_role" "read_only" {
  name = "read_only"
}

data "cloudtemple_object_storage_role" "maintainer" {
  name = "maintainer"
}

data "cloudtemple_object_storage_role" "admin" {
  name = "admin"
}

# Retrieving existing storage accounts
data "cloudtemple_object_storage_storage_account" "dev_account" {
  name = "dev-team-account"
}

data "cloudtemple_object_storage_storage_account" "ops_account" {
  name = "ops-team-account"
}

# Bucket
resource "cloudtemple_object_storage_bucket" "shared_bucket" {
  name        = "shared-bucket"
  access_type = "private"
}

# ACL for the dev team (read-only)
resource "cloudtemple_object_storage_acl_entry" "dev_acl" {
  bucket          = cloudtemple_object_storage_bucket.shared_bucket.name
  storage_account = data.cloudtemple_object_storage_storage_account.dev_account.name
  role            = data.cloudtemple_object_storage_role.read_only.name
}

# ACL for the ops team (maintainer)
resource "cloudtemple_object_storage_acl_entry" "ops_acl" {
  bucket          = cloudtemple_object_storage_bucket.shared_bucket.name
  storage_account = data.cloudtemple_object_storage_storage_account.ops_account.name
  role            = data.cloudtemple_object_storage_role.maintainer.name
}
```

**Available roles**:
- `read_write`: Read and write
- `write_only`: Write only
- `read_only`: Read only
- `maintainer`: Full access
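When several accounts need access to the same bucket, the entries above can be driven by a map and `for_each` (a sketch reusing the bucket from this section; it assumes account and role names can be passed as plain strings, since the resource takes names rather than IDs):

```hcl
locals {
  bucket_acls = {
    "dev-team-account" = "read_only"
    "ops-team-account" = "maintainer"
  }
}

resource "cloudtemple_object_storage_acl_entry" "team_acls" {
  for_each = local.bucket_acls

  bucket          = cloudtemple_object_storage_bucket.shared_bucket.name
  storage_account = each.key
  role            = each.value
}
```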

### Configure ACLs directly in the bucket

**Objective**: Set ACLs when creating the bucket.

**Code**:

```hcl
# Retrieving resources
data "cloudtemple_object_storage_storage_account" "account1" {
  name = "storage-account-1"
}

data "cloudtemple_object_storage_storage_account" "account2" {
  name = "storage-account-2"
}

data "cloudtemple_object_storage_role" "read_only" {
  name = "read_only"
}

data "cloudtemple_object_storage_role" "maintainer" {
  name = "maintainer"
}

# Bucket with inline ACLs
resource "cloudtemple_object_storage_bucket" "bucket_with_acl" {
  name        = "bucket-with-inline-acl"
  access_type = "private"

  # Define ACLs directly within the bucket
  acl_entry {
    storage_account = data.cloudtemple_object_storage_storage_account.account1.name
    role            = data.cloudtemple_object_storage_role.read_only.name
  }

  acl_entry {
    storage_account = data.cloudtemple_object_storage_storage_account.account2.name
    role            = data.cloudtemple_object_storage_role.maintainer.name
  }
}
```

**Difference from dedicated ACL resources**:

- Inline: ACLs are defined directly inside the bucket (simpler for static configurations)
- Dedicated resource: ACLs are managed separately (more flexible; allows independent modifications)

### Using data sources

**Objective**: Query bucket metadata and list files.

**Code**:

```hcl
# Data source to list the files in a bucket
data "cloudtemple_object_storage_bucket_files" "my_bucket_files" {
  bucket_name = cloudtemple_object_storage_bucket.my_bucket.name
}

# Display all files
output "all_files" {
  value = data.cloudtemple_object_storage_bucket_files.my_bucket_files.files
}

# Filter a specific file
output "specific_file" {
  value = [
    for file in data.cloudtemple_object_storage_bucket_files.my_bucket_files.files :
    file if file.key == "config.json"
  ]
}

# Retrieving an existing storage account
data "cloudtemple_object_storage_storage_account" "existing_account" {
  name = "production-account"
}

output "account_access_key" {
  value     = data.cloudtemple_object_storage_storage_account.existing_account.access_key_id
  sensitive = true
}
```

### S3 Integration with the AWS Provider

**Objective**: Use the AWS provider to upload files to the Cloud Temple object storage.

**Code**:

```hcl
# Creating the account and bucket
data "cloudtemple_object_storage_role" "maintainer" {
  name = "maintainer"
}

resource "cloudtemple_object_storage_storage_account" "upload_account" {
  name = "upload-storage-account"
}

resource "cloudtemple_object_storage_bucket" "upload_bucket" {
  name        = "upload-bucket"
  access_type = "private"

  acl_entry {
    storage_account = cloudtemple_object_storage_storage_account.upload_account.name
    role            = data.cloudtemple_object_storage_role.maintainer.name
  }
}

# AWS provider configuration for Cloud Temple S3
provider "aws" {
  alias  = "cloudtemple_s3"
  region = "eu-west-3"

  # Use the Cloud Temple credentials
  access_key = cloudtemple_object_storage_storage_account.upload_account.access_key_id
  secret_key = cloudtemple_object_storage_storage_account.upload_account.access_secret_key

  # Cloud Temple endpoint
  endpoints {
    s3 = "https://${cloudtemple_object_storage_bucket.upload_bucket.namespace}.s3.fr1.cloud-temple.com"
  }

  # Skip AWS-specific validation
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
}

# Single file upload
resource "aws_s3_object" "config_file" {
  provider = aws.cloudtemple_s3

  bucket = cloudtemple_object_storage_bucket.upload_bucket.name
  key    = "config/app-config.json"
  source = "./files/app-config.json"
  etag   = filemd5("./files/app-config.json")
}

# Multiple file upload
resource "aws_s3_object" "static_files" {
  provider = aws.cloudtemple_s3

  for_each = fileset("./static/", "**/*")

  bucket = cloudtemple_object_storage_bucket.upload_bucket.name
  key    = each.value
  source = "./static/${each.value}"
  etag   = filemd5("./static/${each.value}")
}

# Verifying the uploaded files
data "cloudtemple_object_storage_bucket_files" "uploaded_files" {
  depends_on  = [aws_s3_object.config_file]
  bucket_name = cloudtemple_object_storage_bucket.upload_bucket.name
}

output "uploaded_files_list" {
  value = data.cloudtemple_object_storage_bucket_files.uploaded_files.files
}
```
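For a quick sanity check, the built-in `length()` function gives a file count instead of the full listing:

```hcl
output "uploaded_files_count" {
  value = length(data.cloudtemple_object_storage_bucket_files.uploaded_files.files)
}
```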

## Conclusion

This documentation covers the main use cases of the Terraform Cloud Temple provider. To go further:

- Refer to the [official provider documentation](https://registry.terraform.io/providers/Cloud-Temple/cloudtemple/latest/docs)
- Explore the [examples on GitHub](https://github.com/Cloud-Temple/terraform-provider-cloudtemple/tree/main/examples)
- Use the [Cloud Temple Console](https://shiva.cloud-temple.com) to discover available resources

:::info[Need help?]
For any questions or issues, check the [Issues section on GitHub](https://github.com/Cloud-Temple/terraform-provider-cloudtemple/issues) or contact Cloud Temple support.
:::