Configure the OSC operator

Now that the OSC operator is installed, we need to set it up.

Prerequisites

To complete this section, you must have at hand all the ARO credentials defined in the introduction, because they will be inserted into the various resources we are going to create.

In order to create the peer-pods ConfigMap, Trustee must already be installed and configured.

Enable Confidential Containers feature gate

Create and apply the cc-fg.yaml ConfigMap:

cat > cc-fg.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: osc-feature-gates
  namespace: openshift-sandboxed-containers-operator
data:
  confidential: "true"
EOF

cat cc-fg.yaml
oc apply -f cc-fg.yaml
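
To double-check that the feature gate landed as expected, you can read the value back (an optional sanity check, not required by the workshop):

oc get configmap osc-feature-gates -n openshift-sandboxed-containers-operator -o jsonpath='{.data.confidential}{"\n"}'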

Create the peer-pods configmap

  1. Get the necessary credentials. If you did not set it at the beginning, run AZURE_RESOURCE_GROUP=insert_your_rg_here now.

    Follow the comments in the code if this workshop is being run on an Azure self-managed cluster.

    echo ""
    
    # Get the ARO created RG
    ARO_RESOURCE_GROUP=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.azure.resourceGroupName}')
    
    # If the cluster is Azure self managed, run
    # AZURE_RESOURCE_GROUP=$ARO_RESOURCE_GROUP
    
    # Get the ARO region
    ARO_REGION=$(oc get secret -n kube-system azure-credentials -o jsonpath="{.data.azure_region}" | base64 -d)
    
    # Get VNET name used by ARO. This exists in the admin created RG.
    # In this ARO infrastructure, there are 2 VNETs: pick the one starting with "aro-".
    # The other is used internally by this workshop
    # If the cluster is Azure self managed, change
    # contains(Name, 'aro')
    # with
    # contains(Name, '')
    ARO_VNET_NAME=$(az network vnet list --resource-group $AZURE_RESOURCE_GROUP --query "[].{Name:name} | [? contains(Name, 'aro')]" --output tsv)
    
    # Get the Openshift worker subnet ip address cidr. This exists in the admin created RG
    ARO_WORKER_SUBNET_ID=$(az network vnet subnet list --resource-group $AZURE_RESOURCE_GROUP --vnet-name $ARO_VNET_NAME --query "[].{Id:id} | [? contains(Id, 'worker')]" --output tsv)
    
    ARO_NSG_ID=$(az network nsg list --resource-group $ARO_RESOURCE_GROUP --query "[].{Id:id}" --output tsv)
    
    # Necessary, otherwise the CoCo pods won't be able to connect to the OCP cluster (OSC and Trustee)
    PEERPOD_NAT_GW=peerpod-nat-gw
    PEERPOD_NAT_GW_IP=peerpod-nat-gw-ip
    
    az network public-ip create -g "${AZURE_RESOURCE_GROUP}" \
        -n "${PEERPOD_NAT_GW_IP}" -l "${ARO_REGION}" --sku Standard
    
    az network nat gateway create -g "${AZURE_RESOURCE_GROUP}" \
        -l "${ARO_REGION}" --public-ip-addresses "${PEERPOD_NAT_GW_IP}" \
        -n "${PEERPOD_NAT_GW}"
    
    az network vnet subnet update --nat-gateway "${PEERPOD_NAT_GW}" \
        --ids "${ARO_WORKER_SUBNET_ID}"
    
    ARO_NAT_ID=$(az network vnet subnet show --ids "${ARO_WORKER_SUBNET_ID}" \
        --query "natGateway.id" -o tsv)
    
    echo "ARO_REGION: \"$ARO_REGION\""
    echo "ARO_RESOURCE_GROUP: \"$ARO_RESOURCE_GROUP\""
    echo "ARO_SUBNET_ID: \"$ARO_WORKER_SUBNET_ID\""
    echo "ARO_NSG_ID: \"$ARO_NSG_ID\""
    echo "ARO_NAT_ID: \"$ARO_NAT_ID\""
    echo ""
  2. Create and apply the pp-cm.yaml ConfigMap. Note that at this point you must have already configured initdata and obtained ${INITDATA}.

    cat > pp-cm.yaml <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: peer-pods-cm
      namespace: openshift-sandboxed-containers-operator
    data:
      CLOUD_PROVIDER: "azure"
      VXLAN_PORT: "9000"
      AZURE_INSTANCE_SIZES: "Standard_DC8as_v5,Standard_DC8ads_v5,Standard_DC8es_v5,Standard_DC8eds_v5"
      AZURE_INSTANCE_SIZE: "Standard_DC8es_v5"
      AZURE_RESOURCE_GROUP: "${ARO_RESOURCE_GROUP}"
      AZURE_REGION: "${ARO_REGION}"
      AZURE_SUBNET_ID: "${ARO_WORKER_SUBNET_ID}"
      AZURE_NSG_ID: "${ARO_NSG_ID}"
      PROXY_TIMEOUT: "5m"
      DISABLECVM: "false"
      INITDATA: "${INITDATA}"
      PEERPODS_LIMIT_PER_NODE: "10"
      TAGS: "key1=value1,key2=value2"
      ROOT_VOLUME_SIZE: "6"
      AZURE_IMAGE_ID: ""
    EOF
    
    cat pp-cm.yaml

    Settings that accept multiple, comma-separated parameters, such as AZURE_INSTANCE_SIZES and TAGS, must not contain spaces between the parameters. For example, use TAGS: "key1=value1,key2=value2" and not TAGS: "key1=value1, key2=value2".

    Explanation:

    1. AZURE_INSTANCE_SIZE: the default instance size, used when it is not specified in the io.katacontainers.config.hypervisor.machine_type pod annotation (see the example pod manifest below).

      Note the Azure terminology for instances: the pattern is usually Standard_X{C}{num_cpus}{a or e}xx_vx, where C stands for Confidential, a for CPUs using AMD SEV-SNP technology, and e for CPUs using Intel TDX technology. Therefore Standard_DC8es_v5 is a confidential instance with 8 Intel CPUs using TDX to provide data-in-use confidentiality.

      For this example, we are going to default the instance deployment to AMD CPUs, because they are available in all regions. If you want to deploy a TDX instance, check the catalog region availability (usually northeurope or westeurope are good choices) and deploy a new workshop in that region.

    2. AZURE_INSTANCE_SIZES: this is an "allowlist" that restricts the instance types a pod can actually run. It is especially useful when the OSC setup and the pod deployment are done by two different actors, to prevent extremely expensive instances from being misused.

      Azure instance types are explained and listed here.

      Because of the quota limitations of this workshop, it is unlikely that it will be possible to deploy instances bigger than Standard_DC8*.

    3. PEERPODS_LIMIT_PER_NODE: Specify the maximum number of peer pods that can be created per node. The default value is 10.

    4. TAGS: You can configure custom tags as key:value pairs for pod VM instances to track peer pod costs or to identify peer pods in different clusters.

    5. ROOT_VOLUME_SIZE: Optional: Specify the root volume size in gigabytes for the pod VM. The default and minimum size is 6 GB. Increase this value for pods with larger container images. This volume is encrypted and created at boot time. It will contain all data that the CoCo workload will store on disk.

    6. AZURE_IMAGE_ID: this is purposefully left empty. It will be filled in automatically by a Job created by the operator later. Red Hat builds and ships a default image which already contains all the necessary CoCo components, plus the following security features:

      • Disk integrity protection: the root disk is integrity protected, meaning nobody is able to mount it offline and inject files inside.

      • Data volume encryption: all data written by the CoCo workload is stored in a separate, encrypted container.

      • Sealed secrets support: A sealed secret is an encapsulated secret available only within a TEE after verifying the integrity of the TEE environment.

oc apply -f pp-cm.yaml
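
For reference (relating to item 1 of the explanation above), this is roughly how a workload would request a specific instance size through the pod annotation once the kata-remote RuntimeClass exists, which happens later in the KataConfig step (a minimal sketch; the pod name and image are placeholders, and the requested size must appear in AZURE_INSTANCE_SIZES):

apiVersion: v1
kind: Pod
metadata:
  name: sized-coco-pod        # placeholder name
  annotations:
    io.katacontainers.config.hypervisor.machine_type: Standard_DC8ads_v5
spec:
  runtimeClassName: kata-remote
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi:latest   # placeholder image
    command: ["sleep", "infinity"]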

If you later update the peer-pods ConfigMap, you must restart the osc-caa-ds DaemonSet to apply the changes. After you update the ConfigMap, apply the manifest again, then restart the cloud-api-adaptor pods by running the following command:

oc set env ds/osc-caa-ds -n openshift-sandboxed-containers-operator REBOOT="$(date)"

Keep in mind that restarting the DaemonSet recreates the peer pods; it does not update the existing pods.
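
If you want to confirm that the restart went through, you can follow the rollout (an optional check):

oc rollout status ds/osc-caa-ds -n openshift-sandboxed-containers-operator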

Create the peer-pods SSH key

When CoCo mode is disabled (that is, the pod VM is not confidential), this key is also useful to enter the pod VM, inspect it, and debug. In CoCo mode, SSH access to the VM is disabled by default. We need to create the key anyway because an SSH key is required to create Azure VMs, but, as we will see, it will be discarded immediately.

  1. Create an ssh key:

    ssh-keygen -f ./id_rsa -N ""
  2. Upload id_rsa.pub as a Secret into OpenShift.

    oc create secret generic ssh-key-secret -n openshift-sandboxed-containers-operator --from-file=id_rsa.pub=./id_rsa.pub
  3. Once the public key is uploaded, delete both the private and the public key from the local setup.

    shred --remove id_rsa.pub id_rsa
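
To confirm the key made it into the cluster, you can list the Secret (an optional check):

oc get secret ssh-key-secret -n openshift-sandboxed-containers-operator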

Create the peer-pods KataConfig

You must create a KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes. This is a core operation that enables the worker nodes to create VMs.

Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime.

OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster, not as the primary runtime.

Creating the KataConfig CR automatically reboots the worker nodes. According to the documentation, the reboot can take from 10 to more than 60 minutes. In this ARO workshop, it should take around 15 minutes. Factors that can increase the reboot time are as follows:

  • A larger OpenShift Container Platform deployment with a greater number of worker nodes.

  • Activation of the BIOS and Diagnostics utility.

  • Deployment on a hard disk drive rather than an SSD.

  • Deployment on physical nodes such as bare metal, rather than on virtual nodes.

  • A slow CPU and network.

  1. Create a KataConfig CR and apply it. By default, all worker nodes will be configured to run CoCo workloads. If you want to restrict this to specific worker nodes, add a label to those nodes and update the kataConfigPoolSelector accordingly. For this workshop, no label is needed.

    cat > kataconfig.yaml <<EOF
    apiVersion: kataconfiguration.openshift.io/v1
    kind: KataConfig
    metadata:
      name: example-kataconfig
    spec:
      enablePeerPods: true
    #  kataConfigPoolSelector:
    #    matchLabels:
    #      <label_key>: '<label_value>'  # Fill with your node labels
    EOF
    
    cat kataconfig.yaml
    oc apply -f kataconfig.yaml
  2. Wait for the kata-oc MachineConfigPool (MCP) to be in the UPDATED state, that is, until UPDATEDMACHINECOUNT equals MACHINECOUNT. In this ARO setup with 3 worker nodes, it should take around 15 minutes.

    watch oc get mcp/kata-oc

    Expected output after all nodes have been updated:

    NAME      CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    kata-oc   rendered-kata-oc-894630a1c9cdf3ebef8bd98c72e26608   True      False      False      3              3                   3                     0                      13m
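
    If you prefer a blocking command over watch, the following should also work (an optional alternative, assuming a generous timeout):

    oc wait mcp/kata-oc --for=condition=Updated --timeout=60m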

Verification

  1. Make sure that the AZURE_IMAGE_ID in the ConfigMap is populated. If it isn’t, make sure there is a job running called osc-podvm-image-creation-<random-letters>.

    oc get configmap peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml

    If AZURE_IMAGE_ID under data: is still empty, check the job:

    watch oc get pods -n openshift-sandboxed-containers-operator

    Wait until the job STATUS changes to Completed. In this ARO setup, it should take around 15 minutes. An optional blocking alternative is shown after this list.

  2. Make sure that the required daemonset is created.

    oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds

    Expected output:

    NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                      AGE
    osc-caa-ds                      3         3         3       3            3           node-role.kubernetes.io/kata-oc=   22m
  3. Make sure the RuntimeClasses are created.

    oc get runtimeclass

    Expected output:

    NAME             HANDLER          AGE
    kata             kata             152m
    kata-remote      kata-remote      152m
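
Instead of repeatedly checking the pods in step 1, you can also block until the image-creation Job finishes (an optional alternative; at this point it should be the only Job in the namespace):

oc wait job --all -n openshift-sandboxed-containers-operator --for=condition=complete --timeout=30m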

This is the expected output when looking at the OSC pods (note that the random suffixes will differ):

oc get pods -n openshift-sandboxed-containers-operator
NAME                                           READY   STATUS      RESTARTS   AGE
controller-manager-5dd87698b7-9cqbn            2/2     Running     0          17m
openshift-sandboxed-containers-monitor-m9ffw   1/1     Running     0          30m
openshift-sandboxed-containers-monitor-sdlz4   1/1     Running     0          30m
openshift-sandboxed-containers-monitor-z8zh5   1/1     Running     0          30m
operator-metrics-server-857fb654c4-z24f4       1/1     Running     0          20m
osc-podvm-image-creation-fltm8                 0/1     Completed   0          17m
peer-pods-webhook-65cffdd499-2nh9q             1/1     Running     0          2m59s
peer-pods-webhook-65cffdd499-8x684             1/1     Running     0          2m59s
osc-caa-ds-hl7fb                               1/1     Running     0          2m59s
osc-caa-ds-s6xkk                               1/1     Running     0          2m59s
osc-caa-ds-vkfm5                               1/1     Running     0          2m59s

This is it! The cluster is now ready to run workloads with the kata-remote RuntimeClass!
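
As a final, optional smoke test (a minimal sketch; the pod name and image are placeholders, and the default instance size is used rather than the annotation shown earlier), you can deploy a trivial pod with the kata-remote RuntimeClass, watch it reach Running, and then delete it:

cat > smoke-test-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: coco-smoke-test       # placeholder name
spec:
  runtimeClassName: kata-remote
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi:latest   # placeholder image
    command: ["sleep", "3600"]
EOF

oc apply -f smoke-test-pod.yaml
oc get pod coco-smoke-test -w
oc delete -f smoke-test-pod.yaml

Keep in mind that the first peer pod can take a few minutes to start, because a new confidential VM has to be created in Azure for it.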