Installing a Cluster with Agent-based Installer Using Terraform

Learn how to provision the OCI infrastructure for Red Hat OpenShift Container Platform by using a Terraform script with the Agent-based Installer.

Use the Red Hat OpenShift plugin to automatically provision infrastructure with Terraform. The latest version of the Terraform stack is available on the Stack information page within the Red Hat OpenShift plugin.

You can access earlier versions of the Terraform stack from the OpenShift on OCI Releases page on GitHub. Download the required version of the create-cluster.zip file from the Assets section.

Important

  • Run the create-resource-attribution-tags stack before running the create-cluster stack to avoid installation failure. See the Prerequisites topic for more information.
  • The create-resource-attribution-tags stack only needs to be run once. If the tag namespace and defined-tags already exist, you can skip this step for future installations.
  • OpenShift requires manage permissions to perform operations on instances, volumes, and networking resources. Deploy OpenShift in a dedicated compartment to avoid conflicts with other applications that might be running in the same compartment.

See Terraform Defined Resources for OpenShift for a list of the resources created by the stack.

  1. Open the navigation menu and select Developer Services, then select Red Hat OpenShift.
  2. On the Stack information page, enter an optional Name and Description.
    • The latest version of the Terraform stack is automatically uploaded for you.
    • The Create in compartment and Terraform version fields are prepopulated.
  3. On the Configure variables page, review and configure the required variables for the infrastructure resources that the stack creates when you run the Apply job for the first time.
  4. In the OpenShift Cluster Configuration section:
    • Change the Installation Method to Agent-based.
    • Configure the compute and control-plane counts. These values are used when generating the install-config.yaml file for the Agent-based Installer.
    • Clear the Create OpenShift Image and Instances checkbox.

    OpenShift Cluster Configuration

    Field Description
    Tenancy OCID

    The OCID of the current tenancy.

    Default value: Current tenancy

    Compartment

    The OCID of the compartment where you want to create the OpenShift cluster.

    Default value: Current compartment

    Cluster Name

    The OpenShift cluster name.

    Note: Use the same value for cluster name that you specified in your agent-config.yaml and install-config.yaml files and ensure that it's DNS compatible.

    Installation Method

    The installation method you want to use for installing the cluster.

    Note: For Agent-based installation, select Agent-based.

    Default value: Assisted Installer

    Create OpenShift Image and Instances

    Enables the creation of the OpenShift image and instances.

    Default value: True

    OpenShift Image Source URI

    The pre-authenticated Request URL created in the previous task, Uploading Red Hat ISO Image to Object Storage.

    Note: This field is visible only when the Create OpenShift Image and Instances checkbox is selected, which happens later in this procedure.

  5. In the OpenShift Resource Attribution Tags section, review and configure the following fields:

    OpenShift Resource Attribution Tags
    Field Description
    Tag Namespace Compartment For OpenShift Resource Attribution Tags

    The tag namespace compartment for OpenShift resource attribution tags.

    Note: Ensure that the specified tag exists before applying the Terraform stack. The compartment where the tag namespace for resource tagging is created defaults to the current compartment. For OpenShift attribution on OCI resources, the tag namespace and defined tag are set to openshift-tags and openshift-resource. If the openshift-tags namespace already exists, ensure it's correctly defined and applied. For example: "defined-tags": {"openshift-tags": {"openshift-resource": "openshift-resource-infra"}}
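
    If you want to confirm that the tag namespace and defined tag already exist before applying the stack, the following is a minimal sketch using the OCI CLI, assuming the CLI is configured for your tenancy; the OCIDs are hypothetical placeholders:
    # List tag namespaces in the compartment and check for openshift-tags (placeholder OCID).
    oci iam tag-namespace list --compartment-id <compartment_ocid>
    # List the defined tags in that namespace and check for openshift-resource.
    oci iam tag list --tag-namespace-id <tag_namespace_ocid>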

  6. In the Agent-based Installation Advanced Configurations section, review and configure the following fields:
    Note

    This section appears only when the Installation Method is set to Agent-based.

    Agent-based Installation Advanced Configurations
    Field Description
    Rendezvous IP

    The IP address used to bootstrap the cluster with the Agent-based Installer. It must be in the private_ocp subnet.

    Is Disconnected Agent-based Installation

    Indicates whether the cluster is being installed in a disconnected (air-gapped) environment. Selecting this checkbox also enables the creation of a webserver, which can be used to host the agent.x86_64-rootfs.img file and facilitate the Agent-based installation.

    Default value: False

    Set OpenShift Installer Version

    Indicates whether to set a specific OpenShift installer version. Select this option if you want to use a specific supported version instead of the latest version, for example, 4.19.1.

    Default value: False

    OpenShift Installer Version

    The version of OpenShift installer. You can find a list of published versions on the OpenShift Client Downloads page.

    Default value: Latest

    Public SSH Key

    The public SSH key for the webserver to use.

    Note: This key is also added to the install-config.yaml file and is required to provide access to the OpenShift instances. (If you need to generate a key pair, see the sketch after this table.)

    Red Hat Pull Secret

    The pull secret for authentication when downloading container images for OpenShift Container Platform components and services from sources such as Quay.io. For more information, see Install OpenShift Container Platform 4.

    Object Storage Namespace The OCI Object Storage namespace for the tenancy.

    You can find this value on the Bucket Details page for your bucket. For more information, see Object Storage Namespaces.

    Object Storage Bucket The OCI Object Storage bucket where the OpenShift installation files are stored.
    Webserver Private IP The private IP of the server where you want to upload the rootfs image. This parameter is required only for disconnected environments. This IP must be included in the bootArtifactsBaseURL value in the agent-config.yaml file.
    Webserver Shape The Compute instance shape of the webserver instance. For more information, see Supported Shapes.
    Webserver Image

    The source_id of the image to use for the webserver instance. If an image isn't preselected, select from a list of available Oracle Linux images.

    Webserver OCPU

    The number of OCPUs available for the webserver shape instance.

    Default value: 2

    Webserver Memory

    The amount of memory available, in gigabytes, for the webserver shape instance.

    Default value: 8

    Config Proxy Settings Provide additional information about the proxy if hosts are behind a firewall that requires the use of a proxy.
    HTTP Proxy URL Specify the HTTP proxy URL. For example, http://my.proxy.server.
    HTTPS Proxy URL Specify the HTTPS proxy URL. For example, https://my.proxy.server.
    No Proxy Domains Specify the No Proxy Domains. For example, localhost,127.0.0.1,{basedomain}.
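
    If you don't already have an SSH key pair to supply for the Public SSH Key field, the following is a minimal sketch for generating one; the file name id_rsa matches the key used in the SSH and SCP commands later in this procedure:
    # Generate an RSA key pair; the public key (~/.ssh/id_rsa.pub) is the value to paste into the stack.
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
    # Print the public key so that you can copy it.
    cat ~/.ssh/id_rsa.pub
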
  7. In the Network Configuration section, review and configure the following fields:

    Network Configuration
    Field Description
    Create Public DNS

    Create a public DNS zone based on the Base domain specified in the Zone DNS field.

    Note: If you don't want to create a public DNS zone, create a private DNS zone unless you're using your own DNS solution. To resolve cluster hostnames without DNS, add entries to /etc/hosts that map the cluster hostnames to the IP address of the api_apps Load Balancer by using the etc_hosts_entry output (see the example after this table).

    Default value: True

    Create Private DNS

    Creates a private DNS zone based on the Base domain, specified in the Zone DNS field, to support hostname resolution within the VCN.

    Note: The private DNS zone contains the same records as the public DNS zone. If you're using an unregistered Base domain, we recommend creating a private DNS zone; otherwise, you might need alternative methods to resolve cluster hostnames. For more information, see Private DNS.

    Default Value: False

    Enable Public API Load Balancer

    Creates a Load Balancer for the OpenShift API endpoint in the public subnet with a public IP address for external access.

    Note: If unselected, the Load Balancer is created in a private subnet with limited access within the VCN or connected private networks. In on-premises environments (for example, C3/PCA), public IPs might be internal (RFC 1918). Public access is helpful for remote management, automation, and CI/CD. Contact your network administrator if needed.

    Default Value: False

    Enable Public Apps Load Balancer

    Creates a Load Balancer for OpenShift applications in the public subnet with a public IP address for external access to workloads. For example, console-openshift-console.apps.mycluster.mydomain.com or console-openshift-console.apps.devcluster.openshift.com

    Note: If unselected, the Load Balancer is created in a private subnet, limiting access to within the VCN or over a VPN/private network. Public access is useful for internet-facing apps, customer services, or multitenant workloads. In on-premises setups (for example, C3/PCA), public IPs might be internal (RFC 1918). Contact your network team to ensure proper exposure.

    Default value: True

    Zone DNS The base domain for your cluster's DNS records (for example, devcluster.openshift.com), which is used to create public or private (or both) DNS Zones. This value must match what is specified for baseDomain in the install-config.yaml file.
    VCN DNS Label

    A DNS label for the VCN, used with the VNIC's hostname and subnet's DNS label to form a Fully Qualified Domain Name (FQDN) for each VNIC within the subnet (for example, bminstance1.subnet123.vcn1.oraclevcn.com).

    Default value: openshiftvcn

    Use Existing Networking Infrastructure Specify whether you want to use existing networking infrastructure.

    Default Value: False

    Networking Compartment Select the compartment where the existing networking resources are located. This compartment can be the same as or different from the main compartment where the OpenShift resources are created.

    Default Value: Current compartment

    Existing VCN The OCID of the existing VCN to use when use_existing_network is true.
    Existing Private Subnet for OCP The OCID of the existing private subnet for OCP to use when use_existing_network is true.
    Existing Private Subnet for Bare Metal The OCID of the existing private subnet for bare metal to use when use_existing_network is true. This must be different from the private_ocp subnet or else you might experience issues.
    VCN CIDR

    The IPv4 CIDR blocks for the VCN of your OpenShift Cluster.

    Default value: 10.0.0.0/16

    Public Subnet CIDR

    The IPv4 CIDR blocks for the public subnet of your OpenShift Cluster.

    Default value: 10.0.0.0/20

    Private Subnet CIDR for OCP

    The IPv4 CIDR blocks for the private subnet of your OpenShift Cluster.

    Default value: 10.0.16.0/20

    Reserved Private Subnet CIDR for Bare Metal

    The IPv4 CIDR blocks for the private subnet of OpenShift Bare metal Clusters.

    Default value: 10.0.32.0/20

    Load Balancer Maximum Bandwidth

    The bandwidth, in Mbps, that determines the maximum bandwidth (ingress plus egress) that the Load Balancer can achieve. The value must be between minimumBandwidthInMbps and 8000.

    Default value: 500

    Load Balancer Minimum Bandwidth

    The bandwidth, in Mbps, that determines the total pre-provisioned bandwidth (ingress plus egress). The value must be between 10 and maximumBandwidthInMbps.

    Default value: 10
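
    As noted for the Create Public DNS field, if you don't create DNS zones you can resolve cluster hostnames through /etc/hosts. The following is a hypothetical sketch only; the IP address and hostnames are placeholders, and the exact entries to use come from the etc_hosts_entry job output:
    # Append placeholder entries that map cluster hostnames to the api_apps Load Balancer IP.
    echo '203.0.113.10 api.mycluster.devcluster.openshift.com' | sudo tee -a /etc/hosts
    echo '203.0.113.10 console-openshift-console.apps.mycluster.devcluster.openshift.com' | sudo tee -a /etc/hosts
    echo '203.0.113.10 oauth-openshift.apps.mycluster.devcluster.openshift.com' | sudo tee -a /etc/hosts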

  8. (Optional) In the Advanced Configuration section, review and configure the following fields:

    Advanced Configurations
    Field Description
    OCI CCM and CSI Driver Version

    The OCI CCM and CSI driver version to deploy. For more information, see the list of driver versions.

    Default value: v1.32.0

    Use Existing Instance Role Tags

    Indicates whether to reuse an existing instance role tag namespace and defined tags when tagging OCI resources. By default, a new set of instance role tagging resources is created and destroyed with the rest of the cluster resources.

    If required, you can create reusable instance role tag resources separately by using the Terraform stack from the create-instance-role-tags page in the oci-openshift GitHub repo. Existing instance role tagging resources aren't destroyed when the cluster is deleted.

    Default Value: False

    (Optional) Instance Role Tag Namespace Name

    The name of the instance role tag namespace to reuse.

    Note: This field is visible only when the Use Existing Instance Role Tags checkbox is selected.

    Default value: openshift-{cluster_name}

    Instance Role Tag Namespace Compartment OCID

    The compartment containing the existing instance role tag namespace.

    Note: This field is visible only when the Use Existing Instance Role Tags checkbox is selected.

    Default value: Current compartment

  9. Select Next.
  10. On the Review page, review the stack information and variables.
  11. Select Create to create the stack. The Console redirects to the Resource Manager stack details page for the new stack.
  12. On the stack details page, select Apply to create an apply job and provision the infrastructure for the cluster. After running an apply job, get the job's details to check its status. Succeeded (SUCCEEDED) indicates that the job has completed. When it completes, all the OpenShift resources except the ISO image and the Compute instances are created.
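    If you prefer to check the job from a terminal instead of the Console, the following is a minimal sketch using the OCI CLI; the job OCID is a hypothetical placeholder:
    # Query the lifecycle state of the apply job (SUCCEEDED indicates completion).
    oci resource-manager job get --job-id <job_ocid> --query 'data."lifecycle-state"'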
  13. On the Stacks page in Resource Manager, select the name of the stack to see its details. If the list view of jobs isn't displayed, select Jobs under the Resources section to see the list of jobs.
  14. Select the job for the stack creation. The Job details page is displayed in the Console.
  15. Select Outputs under the Resources section to see the list of outputs for the job.
  16. (Optional) Select show to view and verify the contents of the relevant Agent-based outputs: agent_config, install_config, dynamic_custom_manifest.
  17. Select copy to copy the contents of Agent-based outputs.
    Note

    • We recommend that you don't manually select and copy the text, as this can cause problems with indentation when you paste this output in the following step.
    • For disconnected installations with a webserver, the output files are already available on the webserver ready to use. They're also available in your Object Storage bucket.
  18. Using a text or code editor, save the copied outputs to the following files: agent-config.yaml, install-config.yaml, and dynamic-custom-manifest.yaml. Alternatively, you can download them from the Object Storage bucket.
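    If you choose to download them from the bucket instead, the following is a minimal sketch using the OCI CLI; the bucket name and object names are assumptions and might differ in your environment:
    # Download the generated configuration files (bucket and object names are placeholders).
    oci os object get --bucket-name <bucket_name> --name agent-config.yaml --file agent-config.yaml
    oci os object get --bucket-name <bucket_name> --name install-config.yaml --file install-config.yaml
    oci os object get --bucket-name <bucket_name> --name dynamic-custom-manifest.yaml --file dynamic-custom-manifest.yaml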
  19. Create configuration files and a bootable ISO image for installing your cluster. See Creating configuration files for installing a cluster on OCI (Red Hat documentation) for instructions on:
    • Preparing and customizing your installation files.
    • Downloading an Agent-based Installer.
    • Generating the ISO image.
    • (For disconnected environments) Moving rootfs image to /var/www/html on your webserver.
    • Saving the kubeconfig and kubeadmin-password from the auth folder.
    Return to this documentation after you create the configuration files and ISO image, then continue the installation.
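    The following is a condensed sketch of that flow, assuming a new installation directory named ocp-oci and the file names from the previous step; placing the custom manifest in the openshift/ subdirectory is an assumption about where the Agent-based Installer picks up extra manifests, so follow the linked Red Hat documentation for the authoritative layout:
    # Stage the configuration files in a dedicated installation directory.
    mkdir ocp-oci
    cp install-config.yaml agent-config.yaml ocp-oci/
    mkdir -p ocp-oci/openshift
    cp dynamic-custom-manifest.yaml ocp-oci/openshift/
    # Generate the bootable agent ISO from the staged configuration.
    openshift-install agent create image --dir ocp-oci --log-level info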
  20. Upload the discovery ISO image file to a bucket in OCI Object Storage. See Putting Data into Object Storage if you need instructions.
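    A minimal sketch using the OCI CLI, assuming the ISO produced in the previous step and a placeholder bucket name; you still need a pre-authenticated request URL for the uploaded object, which you can create from the bucket's details page in the Console:
    # Upload the generated agent ISO to the Object Storage bucket (names are placeholders).
    oci os object put --bucket-name <bucket_name> --file ocp-oci/agent.x86_64.iso --name agent.x86_64.iso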
  21. Navigate to the Resource Manager service and access the stack details page for the stack you created to install OpenShift.
  22. Under Resources, select Variables.
  23. Select Edit variables.
  24. In the OpenShift Cluster Configuration section:
    • Select the Create OpenShift Image and Instances checkbox.
    • Paste the pre-authenticated request string into the OpenShift Image Source URI field.
  25. Review the values in the Control Plane Node Configuration and Compute Node Configuration sections.
    Note

    Ensure that the Control Plane Node Count and Compute Node Count values match the values in the install-config.yaml file when the image was created (see the check after these tables).

    Control Plane Node Configuration

    Note: This section is visible only when the Create OpenShift Image and Instances checkbox is selected.

    Field Description
    Control Plane Shape

    The compute instance shape of the control plane nodes. For more information, see Compute Shapes.

    Default value: VM.Standard.E5.Flex

    Control Plane Node Count

    The number of control plane nodes in the cluster.

    Default value: 3

    Control Plane Node OCPU

    The number of OCPUs available for each control plane node shape.

    Default value: 4

    Control Plane Node Memory

    The amount of memory available, in gigabytes (GB), for each control plane node shape.

    Default value: 24

    Control Plane Node Boot Volume

    The boot volume size (GB) of each control plane node.

    Default value: 1024

    Control Plane Node VPU

    The number of Volume Performance Units (VPUs) applied to the volume, per 1 GB, of each control plane node.

    Default value: 100

    Distribute Control Plane Instances Across ADs

    Enables automatic distribution of control-plane instances across Availability domains (ADs) in a round-robin sequence, starting from the selected AD. If unselected, all nodes are created in the selected starting AD.

    Default value: True

    (Optional) Starting AD

    Availability domain (AD) used for initial node placement. Additional nodes are automatically distributed across ADs in a round-robin sequence, starting from the selected AD.

    Distribute Control Plane Instances Across FDs

    Distributes control plane instances across fault domains in a round-robin sequence. If not selected, the OCI Compute service distributes them based on shape availability and other criteria.

    Default value: True

    Compute Node Configuration

    Note: This section is visible only when the Create OpenShift Image and Instances checkbox is selected.

    Field Description
    Compute Shape

    The instance shape of the compute nodes. For more information, see Compute Shapes.

    Default value: VM.Standard.E5.Flex

    Compute Node Count

    The number of compute nodes in the cluster.

    Default value: 3

    Compute Node OCPU

    The number of OCPUs available for each compute node shape.

    Default value: 6

    Compute Node Memory

    The amount of memory available, in gigabytes (GB), for each compute node shape.

    Default value: 16

    Compute Node Boot Volume

    The boot volume size (GB) of each compute node.

    Default value: 100

    Compute Node VPU

    The number of Volume Performance Units (VPUs) applied to the volume, per 1 GB, of each compute node.

    Default value: 30

    Distribute Compute Instances Across ADs

    Enables automatic distribution of compute instances across Availability domains (ADs) in a round-robin sequence, starting from the selected AD. If unselected, all nodes are created in the selected starting AD.

    Default value: True

    (Optional) Starting AD

    Availability domain (AD) used for initial node placement. Additional nodes are automatically distributed across ADs in a round-robin sequence, starting from the selected AD.

    Distribute Compute Instances Across FDs

    Distributes compute instances across Fault Domains in a round-robin sequence. If not selected, the OCI Compute service distributes them based on shape availability and other criteria.

    Default value: True
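
    To confirm that the counts match, as mentioned in the note above, the following is a minimal sketch that prints the replica counts from the install-config.yaml file you saved earlier:
    # The controlPlane replicas value should match Control Plane Node Count,
    # and the compute replicas value should match Compute Node Count.
    grep -n 'replicas' install-config.yaml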

  26. In the Agent-based Installation Advanced Configuration section, note the value in the Rendezvous IP field. This IP address is assigned to one of the control plane instances, which then acts as the bootstrap node for the cluster. You can access the bootstrap node from the webserver to monitor the installation process.

    Before running the SSH command, upload your private SSH key id_rsa to the webserver:
    scp ~/.ssh/id_rsa opc@<webserver_public_ip>:/home/opc

    Then connect to the bootstrap node from the webserver:
    ssh -i id_rsa core@<rendezvous_ip>

    Once connected, run the following command to monitor the logs on the bootstrap node:
    journalctl -f
    Note

    • The value in the Rendezvous IP field must match the rendezvousIP value in the agent-config.yaml file.
    • Don't change any configurations in the Agent-based Installation Advanced Configuration section during this step.
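
    A quick way to confirm the match, assuming the agent-config.yaml file you saved earlier is in your working directory:
    # Print the rendezvous IP recorded in the agent configuration for comparison with the stack variable.
    grep rendezvousIP agent-config.yaml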
  27. Select Next to review and save the changes.
  28. On the stack details page, select Apply to run another apply job for the stack. The job creates the custom software image used by the Compute instances in the cluster, and it provisions the Compute instances. After the compute instances are provisioned by the stack, the cluster installation begins automatically.

    What's Next? Follow the instructions in Verifying that your Agent-based cluster installation runs on OCI (Red Hat documentation) to verify that your cluster is running. This step is performed by using the OpenShift Container Platform CLI.
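
    The following is a minimal sketch of the kinds of commands described there, assuming the ocp-oci installation directory used earlier in this procedure:
    # Wait for bootstrapping to finish, and then for the installation to complete.
    openshift-install agent wait-for bootstrap-complete --dir ocp-oci --log-level info
    openshift-install agent wait-for install-complete --dir ocp-oci --log-level info
    # Use the generated kubeconfig to confirm that the nodes and cluster Operators are available.
    export KUBECONFIG=ocp-oci/auth/kubeconfig
    oc get nodes
    oc get clusteroperators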