Configure the OSC operator

PERSONA: Untrusted cluster admin

This whole section can be automated with the following commands:

curl -L https://raw.githubusercontent.com/confidential-devhub/workshop-on-ARO-showroom/refs/heads/main/helpers/configure-osc.sh -o configure-osc.sh

chmod +x configure-osc.sh

./configure-osc.sh

Note that the CoCo CVM root disk size is set to 20 GB. Refer here (ROOT_VOLUME_SIZE) to change it; the minimum is 6 GB. Remember that after the change you need to restart the OSC daemonset.
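As a sketch of such a change (assuming the default resource names used later in this guide), you can patch the peer-pods ConfigMap and restart the daemonset:

```shell
# Bump the CoCo CVM root disk to 30 GB (the minimum is 6 GB);
# peer-pods-cm is the ConfigMap created later in this guide
oc patch configmap peer-pods-cm \
    -n openshift-sandboxed-containers-operator \
    --type merge -p '{"data":{"ROOT_VOLUME_SIZE":"30"}}'

# Restart the OSC daemonset so new peer pods pick up the change
oc set env ds/osc-caa-ds -n openshift-sandboxed-containers-operator \
    REBOOT="$(date)"
```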

If you are following this guide in an Azure/ARO environment deployed outside the RH demo environment and your cluster version is older than 4.18.30, you must upgrade to 4.18.30 or later before installing OSC.

Now that the OSC operator is installed, we need to set it up.
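Before configuring anything, you can optionally confirm that the operator installation succeeded by listing the ClusterServiceVersions in its namespace:

```shell
# The sandboxed-containers CSV should report PHASE: Succeeded
oc get csv -n openshift-sandboxed-containers-operator
```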

Enable Confidential Containers feature gate

Create and apply cc-fg.yaml ConfigMap

mkdir -p ~/osc
cd ~/osc

cat > cc-fg.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: osc-feature-gates
  namespace: openshift-sandboxed-containers-operator
data:
  confidential: "true"
EOF

echo ""
cat cc-fg.yaml
echo ""

oc apply -f cc-fg.yaml
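Optionally, read the value back to confirm the feature gate is set:

```shell
# Should print: true
oc get configmap osc-feature-gates \
    -n openshift-sandboxed-containers-operator \
    -o jsonpath='{.data.confidential}{"\n"}'
```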

Create the peer-pods configmap

  1. Gather the necessary information.

    CLOUD_CONF=$(oc get configmap cloud-conf \
      -n openshift-cloud-controller-manager \
      -o jsonpath='{.data.cloud\.conf}')
    
    # Parse required fields
    SUBSCRIPTION_ID=$(echo "$CLOUD_CONF" | jq -r '.subscriptionId')
    LOCATION=$(echo "$CLOUD_CONF" | jq -r '.location')
    USER_RESOURCE_GROUP=$(echo "$CLOUD_CONF" | jq -r '.vnetResourceGroup')
    VNET_NAME=$(echo "$CLOUD_CONF" | jq -r '.vnetName')
    SUBNET_NAME=$(echo "$CLOUD_CONF" | jq -r '.subnetName')
    CLUSTER_RESOURCE_GROUP=$(echo "$CLOUD_CONF" | jq -r '.resourceGroup')
    SECURITY_GROUP_NAME=$(echo "$CLOUD_CONF" | jq -r '.securityGroupName')
    
    # Construct resource IDs
    AZURE_REGION="$LOCATION"
    AZURE_SUBNET_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${USER_RESOURCE_GROUP}/providers/Microsoft.Network/virtualNetworks/${VNET_NAME}/subnets/${SUBNET_NAME}"
    AZURE_NSG_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.Network/networkSecurityGroups/${SECURITY_GROUP_NAME}"
    
    echo ""
    echo "AZURE_REGION: \"$AZURE_REGION\""
    echo "CLUSTER_RESOURCE_GROUP: \"$CLUSTER_RESOURCE_GROUP\""
    echo "USER_RESOURCE_GROUP: \"$USER_RESOURCE_GROUP\""
    echo "AZURE_SUBNET_ID: \"$AZURE_SUBNET_ID\""
    echo "AZURE_NSG_ID: \"$AZURE_NSG_ID\""
    echo ""
  2. Create the necessary public IP and NAT gateway to enable connectivity between the CoCo VM and the worker node.

    PEERPOD_NAT_GW=peerpod-nat-gw
    PEERPOD_NAT_GW_IP=peerpod-nat-gw-ip
    
    az network public-ip create -g "${USER_RESOURCE_GROUP}" \
        -n "${PEERPOD_NAT_GW_IP}" -l "${AZURE_REGION}" --sku Standard
    
    az network nat gateway create -g "${USER_RESOURCE_GROUP}" \
        -l "${AZURE_REGION}" --public-ip-addresses "${PEERPOD_NAT_GW_IP}" \
        -n "${PEERPOD_NAT_GW}"
    
    az network vnet subnet update --nat-gateway "${PEERPOD_NAT_GW}" \
        --ids "${AZURE_SUBNET_ID}"
    
    AZURE_NAT_ID=$(az network vnet subnet show --ids "${AZURE_SUBNET_ID}" \
        --query "natGateway.id" -o tsv)
    
    echo "AZURE_NAT_ID: \"$AZURE_NAT_ID\""
  3. Create and apply the pp-cm.yaml ConfigMap. Note that at this point you must already have configured initdata and exported $INITDATA.

    cat > pp-cm.yaml <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: peer-pods-cm
      namespace: openshift-sandboxed-containers-operator
    data:
      CLOUD_PROVIDER: "azure"
      VXLAN_PORT: "9000"
      AZURE_INSTANCE_SIZES: "Standard_DC2as_v5,Standard_DC2es_v5,Standard_DC4as_v5,Standard_DC4es_v5,Standard_DC8es_v5,Standard_DC8as_v5"
      AZURE_INSTANCE_SIZE: "Standard_DC4as_v5"
      AZURE_RESOURCE_GROUP: "${CLUSTER_RESOURCE_GROUP}"
      AZURE_REGION: "${AZURE_REGION}"
      AZURE_SUBNET_ID: "${AZURE_SUBNET_ID}"
      AZURE_NSG_ID: "${AZURE_NSG_ID}"
      PROXY_TIMEOUT: "5m"
      INITDATA: "${INITDATA}"
      PEERPODS_LIMIT_PER_NODE: "10"
      TAGS: "key1=value1,key2=value2"
      ROOT_VOLUME_SIZE: "20"
      AZURE_IMAGE_ID: ""
    EOF
    
    echo ""
    cat pp-cm.yaml
    echo ""
    
    oc apply -f pp-cm.yaml

    Settings that accept multiple comma-separated parameters, such as AZURE_INSTANCE_SIZES and TAGS, must not contain spaces between the parameters. For example, use TAGS: "key1=value1,key2=value2" and not TAGS: "key1=value1, key2=value2".

    Explanation:

    1. AZURE_INSTANCE_SIZE: the default instance size, used if none is specified in the io.katacontainers.config.hypervisor.machine_type pod annotation.

      Note the Azure terminology for instances: the pattern is usually Standard_X[C][num_cpus][a or e]xx_vx, where C stands for Confidential, a for CPUs using AMD SEV-SNP technology, and e for CPUs using Intel TDX technology. Therefore Standard_DC4es_v5 is a confidential instance with 4 Intel CPUs using TDX to provide data-in-use confidentiality.

      For this example, we are going to default instance deployment to AMD CPUs, because they are available in all regions. If you want to deploy a TDX instance, check the region availability in the catalog (usually northeurope or westeurope is a good choice) and deploy a new workshop in that region.

    2. AZURE_INSTANCE_SIZES: an allowlist restricting the instance types that a pod can actually request. It is especially useful when OSC setup and pod deployment are done by two different actors, to prevent extremely expensive instances from being misused.

      Azure instance types are explained and listed here.

      Because of the quota limitations of this workshop, it is unlikely that you will be able to deploy instances larger than Standard_DC4*.

    3. PEERPODS_LIMIT_PER_NODE: Specify the maximum number of peer pods that can be created per node. The default value is 10.

    4. TAGS: You can configure custom tags as key:value pairs for pod VM instances to track peer pod costs or to identify peer pods in different clusters.

    5. ROOT_VOLUME_SIZE: Optional: Specify the root volume size in gigabytes for the pod VM. The default and minimum size is 6 GB. Increase this value for pods with larger container images. This volume is encrypted and created at boot time. It will contain all data that the CoCo workload will store on disk.

    6. AZURE_IMAGE_ID: this is purposefully left empty. It will be filled in automatically by a Job created by the operator later. Red Hat builds and ships a default image which already contains all the necessary CoCo components, plus the following security features:

      • Disk integrity protection: the root disk is integrity protected, meaning nobody is able to mount it offline and inject files inside.

      • Data volume encryption: all data stored by the CoCo workload is stored in a separate, encrypted container.

      • Sealed secrets support: A sealed secret is an encapsulated secret available only within a TEE after verifying the integrity of the TEE environment.
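To illustrate how AZURE_INSTANCE_SIZE and AZURE_INSTANCE_SIZES interact, here is a hypothetical pod (name and image are placeholders) that requests a specific instance type through the machine_type annotation:

```shell
cat > coco-demo.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: coco-demo
  annotations:
    # Must be one of the types listed in AZURE_INSTANCE_SIZES;
    # if omitted, AZURE_INSTANCE_SIZE (Standard_DC4as_v5) is used
    io.katacontainers.config.hypervisor.machine_type: Standard_DC2as_v5
spec:
  runtimeClassName: kata-remote
  containers:
  - name: demo
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
EOF
```

Apply it with oc apply -f coco-demo.yaml once the kata-remote RuntimeClass exists (created by the KataConfig below).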

If you later update the peer pods config map, you must also restart the osc-caa-ds daemonset to apply the changes:

oc set env ds/osc-caa-ds -n openshift-sandboxed-containers-operator REBOOT="$(date)"
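Optionally, wait for the restart to finish before creating new peer pods:

```shell
# Blocks until the restarted daemonset pods are Ready again
oc rollout status ds/osc-caa-ds \
    -n openshift-sandboxed-containers-operator --timeout=5m
```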

Create the peer-pods KataConfig

You must create a KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes. This is a core operation that enables the worker nodes to create VMs.

Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime.

OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster, not as the primary runtime.

Creating the KataConfig CR automatically reboots the worker nodes. In the ARO workshop, it should take around 15 minutes to install everything (job included).

  1. Create a KataConfig CR and apply it. By default, all worker nodes will be configured to run CoCo workloads. If you want to restrict this to specific worker nodes, add a label to those nodes and update the kataConfigPoolSelector. For this workshop, we will apply the label workerType=kataWorker to a single worker node and install the kata binaries only there.

    oc label node $(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}') workerType=kataWorker
    
    cat > kataconfig.yaml <<EOF
    apiVersion: kataconfiguration.openshift.io/v1
    kind: KataConfig
    metadata:
      name: example-kataconfig
    spec:
      enablePeerPods: true
      kataConfigPoolSelector:
        matchLabels:
          workerType: 'kataWorker'
    EOF
    
    echo ""
    cat kataconfig.yaml
    echo ""
    
    
    oc apply -f kataconfig.yaml
  2. [Skip this step on Azure SNO deployments] Wait for the kata-oc MachineConfigPool (MCP) to reach the UPDATED state (when UPDATEDMACHINECOUNT equals MACHINECOUNT). In the standard ARO setup with 3 worker nodes, this should take around 15 minutes.

    watch oc get mcp/kata-oc

    Expected output after all nodes have been updated:

    NAME      CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    kata-oc   rendered-kata-oc-894630a1c9cdf3ebef8bd98c72e26608   True      False      False      1              1                   1                     0                      13m
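You can also follow progress on the KataConfig resource itself; recent OSC releases expose an InProgress condition in its status (the exact status fields may vary between versions):

```shell
# Prints False once the kata-remote installation has finished
oc get kataconfig example-kataconfig \
    -o jsonpath='{.status.conditions[?(@.type=="InProgress")].status}{"\n"}'
```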

Verification

  1. Make sure that the AZURE_IMAGE_ID in the ConfigMap is populated. If it isn’t, make sure there is a job running called osc-podvm-image-creation-<random-letters>.

    oc get configmap peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml | grep -m 1 AZURE_IMAGE_ID

    If AZURE_IMAGE_ID: is still empty, check the job:

    watch oc get jobs -n openshift-sandboxed-containers-operator

    Wait until the job's COMPLETIONS column changes to 1/1. In this ARO setup, it should take around 20 minutes.

  2. Make sure that the required daemonset is created.

    oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds

    Expected output:

    NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                      AGE
    osc-caa-ds   1         1         1       1            1           node-role.kubernetes.io/kata-oc=   22m
  3. Make sure the RuntimeClasses are created.

    oc get runtimeclass

    Expected output:

    NAME             HANDLER          AGE
    kata             kata             152m
    kata-remote      kata-remote      152m
  4. This is the expected output when looking at the OSC pods (note that the random suffixes will differ):

    oc get pods -n openshift-sandboxed-containers-operator
    NAME                                           READY   STATUS      RESTARTS   AGE
    controller-manager-5dd87698b7-9cqbn            2/2     Running     0          17m
    openshift-sandboxed-containers-monitor-m9ffw   1/1     Running     0          30m
    operator-metrics-server-857fb654c4-z24f4       1/1     Running     0          20m
    osc-podvm-image-creation-fltm8                 0/1     Completed   0          17m
    peer-pods-webhook-65cffdd499-2nh9q             1/1     Running     0          2m59s
    peer-pods-webhook-65cffdd499-8x684             1/1     Running     0          2m59s
    osc-caa-ds-vkfm5                               1/1     Running     0          2m59s
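If the image-creation job never completes, its pod logs usually contain the underlying Azure error. A sketch that picks the pod by the name prefix shown above:

```shell
# Show the logs of the osc-podvm-image-creation-* pod
oc logs -n openshift-sandboxed-containers-operator \
    "$(oc get pods -n openshift-sandboxed-containers-operator \
        -o name | grep osc-podvm-image-creation)"
```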

This is it! The cluster is now ready to run workloads with the kata-remote RuntimeClass!
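As a final smoke test (the pod name and image are illustrative), you can create a minimal workload that uses it:

```shell
cat > hello-coco.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-coco
spec:
  runtimeClassName: kata-remote
  containers:
  - name: hello
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
EOF
```

Apply it with oc apply -f hello-coco.yaml and watch it with oc get pod hello-coco -w; the first start takes a few minutes because a confidential VM is created in Azure before the container image is pulled.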