Configure the OSC operator
Now that the OSC operator is installed, we need to set it up.
Prerequisites
To complete this section, you must have at hand all the ARO credentials defined in the introduction, because they will have to be inserted into the various resources that we are going to create.
To create the peer-pods ConfigMap, Trustee must already be installed and configured.
Enable Confidential Containers feature gate
Create and apply the cc-fg.yaml ConfigMap
mkdir -p ~/osc
cd ~/osc
cat > cc-fg.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: osc-feature-gates
  namespace: openshift-sandboxed-containers-operator
data:
  confidential: "true"
EOF
clear
cat cc-fg.yaml
oc apply -f cc-fg.yaml
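As an optional check (not required by the workshop), you can read the value back to confirm the feature gate is set to "true":

# Optional: confirm the feature gate ConfigMap is present and enabled
oc get configmap osc-feature-gates \
  -n openshift-sandboxed-containers-operator \
  -o jsonpath='{.data.confidential}{"\n"}'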
Create the peer-pods configmap
- Get the necessary credentials. In case you didn't do it at the beginning, run:

AZURE_RESOURCE_GROUP=insert_your_rg_here

Follow the comments in the code if this workshop is being run on an Azure self-managed cluster.
# Get the ARO created RG
ARO_RESOURCE_GROUP=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.azure.resourceGroupName}')

# If the cluster is Azure self managed, run
# AZURE_RESOURCE_GROUP=$ARO_RESOURCE_GROUP

# Get the ARO region
ARO_REGION=$(oc get secret -n kube-system azure-credentials -o jsonpath="{.data.azure_region}" | base64 -d)

# Get the VNET name used by ARO. This exists in the admin created RG.
# In this ARO infrastructure, there are 2 VNETs: pick the one starting with "aro-".
# The other is used internally by this workshop.
# If the cluster is Azure self managed, change
#   contains(Name, 'aro')
# with
#   contains(Name, '')
ARO_VNET_NAME=$(az network vnet list --resource-group $AZURE_RESOURCE_GROUP --query "[].{Name:name} | [? contains(Name, 'aro')]" --output tsv)

# Get the OpenShift worker subnet ID. This exists in the admin created RG
ARO_WORKER_SUBNET_ID=$(az network vnet subnet list --resource-group $AZURE_RESOURCE_GROUP --vnet-name $ARO_VNET_NAME --query "[].{Id:id} | [? contains(Id, 'worker')]" --output tsv)

ARO_NSG_ID=$(az network nsg list --resource-group $ARO_RESOURCE_GROUP --query "[].{Id:id}" --output tsv)

# Necessary otherwise the CoCo pods won't be able to connect with the OCP cluster (OSC and Trustee)
PEERPOD_NAT_GW=peerpod-nat-gw
PEERPOD_NAT_GW_IP=peerpod-nat-gw-ip

az network public-ip create -g "${AZURE_RESOURCE_GROUP}" \
  -n "${PEERPOD_NAT_GW_IP}" -l "${ARO_REGION}" --sku Standard

az network nat gateway create -g "${AZURE_RESOURCE_GROUP}" \
  -l "${ARO_REGION}" --public-ip-addresses "${PEERPOD_NAT_GW_IP}" \
  -n "${PEERPOD_NAT_GW}"

az network vnet subnet update --nat-gateway "${PEERPOD_NAT_GW}" \
  --ids "${ARO_WORKER_SUBNET_ID}"

ARO_NAT_ID=$(az network vnet subnet show --ids "${ARO_WORKER_SUBNET_ID}" \
  --query "natGateway.id" -o tsv)

clear
echo "ARO_REGION: \"$ARO_REGION\""
echo "ARO_RESOURCE_GROUP: \"$ARO_RESOURCE_GROUP\""
echo "ARO_SUBNET_ID: \"$ARO_WORKER_SUBNET_ID\""
echo "ARO_NSG_ID: \"$ARO_NSG_ID\""
echo "ARO_NAT_ID: \"$ARO_NAT_ID\""
- Create and apply the peer-pods ConfigMap (pp-cm.yaml). Note that at this point you must have already configured initdata and obtained $INITDATA.

cat > pp-cm.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "azure"
  VXLAN_PORT: "9000"
  AZURE_INSTANCE_SIZES: "Standard_DC4as_v5,Standard_DC4es_v5"
  AZURE_INSTANCE_SIZE: "Standard_DC4es_v5"
  AZURE_RESOURCE_GROUP: "${ARO_RESOURCE_GROUP}"
  AZURE_REGION: "${ARO_REGION}"
  AZURE_SUBNET_ID: "${ARO_WORKER_SUBNET_ID}"
  AZURE_NSG_ID: "${ARO_NSG_ID}"
  PROXY_TIMEOUT: "5m"
  DISABLECVM: "false"
  INITDATA: "${INITDATA}"
  PEERPODS_LIMIT_PER_NODE: "10"
  TAGS: "key1=value1,key2=value2"
  ROOT_VOLUME_SIZE: "10"
  AZURE_IMAGE_ID: ""
EOF

clear
cat pp-cm.yaml

Settings that accept multiple, comma-separated parameters, such as AZURE_INSTANCE_SIZES and TAGS, must not have spaces between the parameters. For example, use TAGS: "key1=value1,key2=value2" and not TAGS: "key1=value1, key2=value2".

Explanation:
- AZURE_INSTANCE_SIZE: the default instance size, used if the io.katacontainers.config.hypervisor.machine_type pod annotation is not specified. Note the Azure terminology for instances: the pattern is usually Standard_X[C][num_cpus][a or e]xx_vx, where C stands for Confidential, a for CPUs using AMD SEV-SNP technology, and e for CPUs using Intel TDX technology. Therefore Standard_DC4es_v5 is a confidential instance with 4 Intel CPUs using TDX to provide data-in-use confidentiality. For this example, we are going to default the instance deployment to AMD CPUs, because they are available in all regions. If you want to deploy a TDX instance, check the catalog region availability (usually northeurope or westeurope are a good choice) and deploy a new workshop in that region. A pod can override the default through the annotation, as shown in the example after this list.
- AZURE_INSTANCE_SIZES: this is an "allowlist" that restricts the instance types a pod can actually request. It is especially useful when the OSC setup and the pod deployment are done by two different actors, to prevent extremely expensive instances from being misused. Azure instance types are explained and listed here. Because of the quota limitations of this workshop, it is unlikely that it will be possible to deploy instances bigger than Standard_DC4*.
- PEERPODS_LIMIT_PER_NODE: specifies the maximum number of peer pods that can be created per node. The default value is 10.
- TAGS: you can configure custom tags as key:value pairs for pod VM instances to track peer pod costs or to identify peer pods in different clusters.
- ROOT_VOLUME_SIZE: optional: specify the root volume size in gigabytes for the pod VM. The default and minimum size is 6 GB. Increase this value for pods with larger container images. This volume is encrypted and created at boot time. It will contain all data that the CoCo workload will store on disk.
- AZURE_IMAGE_ID: this is purposefully left empty. It will be filled in automatically by a Job created by the operator later. Red Hat builds and ships a default image which already contains all the necessary CoCo components, plus the following security features:
  - Disk integrity protection: the root disk is integrity protected, meaning nobody is able to mount it offline and inject files into it.
  - Data volume encryption: all data stored by the CoCo workload is kept in a separate, encrypted container.
  - Sealed secrets support: a sealed secret is an encapsulated secret available only within a TEE after verifying the integrity of the TEE environment.

- Apply the ConfigMap:

oc apply -f pp-cm.yaml
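For illustration, here is a minimal pod sketch that selects the kata-remote runtime and overrides the default instance size through the io.katacontainers.config.hypervisor.machine_type annotation. The pod name and container image are placeholders, not part of this workshop, and the requested size should be one of the values allowed by AZURE_INSTANCE_SIZES:

# Hypothetical example: pod name and image are illustrative only
apiVersion: v1
kind: Pod
metadata:
  name: sizing-example
  annotations:
    # Must be one of the AZURE_INSTANCE_SIZES values defined in the peer-pods ConfigMap
    io.katacontainers.config.hypervisor.machine_type: Standard_DC4as_v5
spec:
  runtimeClassName: kata-remote
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]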
Note: if you later update the peer-pods ConfigMap, you must also restart the osc-caa-ds DaemonSet for the changes to take effect.
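One way to do that, assuming the DaemonSet name osc-caa-ds shown in the verification section below (the DaemonSet only exists once the KataConfig has been created):

# Restart the CAA DaemonSet so it picks up the updated peer-pods ConfigMap
oc rollout restart daemonset/osc-caa-ds \
  -n openshift-sandboxed-containers-operator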
Create the peer-pods SSH key
When CoCo mode is disabled (so the VM is not confidential), this key is also useful to enter the pod VM, inspect it, and debug. In CoCo mode, SSH into the VM is disabled by default. We need to create the key anyway because an SSH key is required to create Azure VMs but, as we will see, it will be discarded immediately.
# Create key
ssh-keygen -f ./id_rsa -N ""
# Upload it into openshift as secret
oc create secret generic ssh-key-secret -n openshift-sandboxed-containers-operator --from-file=id_rsa.pub=./id_rsa.pub
# Destroy the key, it's not needed
shred --remove id_rsa.pub id_rsa
Create the peer-pods KataConfig
You must create a KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes. This is a core operation that enables the worker nodes to create VMs.
Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster, not as the primary runtime.
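Once the KataConfig below has been applied and the RuntimeClass exists, an optional way to look at it, including the declared overhead, is:

# Optional: inspect the kata-remote RuntimeClass and its declared pod overhead
oc get runtimeclass kata-remote -o yaml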
Note: creating the KataConfig CR automatically reboots the worker nodes. In this ARO workshop, it should take around 15 minutes to install everything (job included).
- Create a KataConfig CR and apply it. By default all worker nodes will be configured to run CoCo workloads. If you want to restrict this to specific worker nodes, add a label to those worker nodes and update the kataConfigPoolSelector. For this workshop, we will apply a label workerType=kataWorker to a single worker node and install the kata binaries only there.

oc label node $(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}') workerType=kataWorker

cat > kataconfig.yaml <<EOF
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  kataConfigPoolSelector:
    matchLabels:
      workerType: 'kataWorker'
EOF

clear
cat kataconfig.yaml

oc apply -f kataconfig.yaml
- Wait for the kata-oc MachineConfigPool (MCP) to be in the UPDATED state (once UPDATEDMACHINECOUNT equals MACHINECOUNT). In this ARO setup with 3 worker nodes, it should take around 15 minutes.

watch oc get mcp/kata-oc

Expected output after all nodes have been updated:

NAME      CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
kata-oc   rendered-kata-oc-894630a1c9cdf3ebef8bd98c72e26608   True      False      False      1              1                   1                     0                      13m
Verification
- Make sure that the AZURE_IMAGE_ID in the ConfigMap is populated. If it isn't, make sure there is a job running called osc-podvm-image-creation-<random-letters>.

oc get configmap peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml | grep -m 1 AZURE_IMAGE_ID

If AZURE_IMAGE_ID: is still empty, check the job:

watch oc get jobs -n openshift-sandboxed-containers-operator

Wait until the job COMPLETIONS changes to 1/1. In this ARO setup, it should take around 20 minutes. If the job appears stuck, see the log-checking sketch after this verification list.
- Make sure that the required DaemonSet is created.

oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds

Expected output:

NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                      AGE
osc-caa-ds   1         1         1       1            1           node-role.kubernetes.io/kata-oc=   22m
- Make sure the RuntimeClasses are created.

oc get runtimeclass

Expected output:

NAME          HANDLER       AGE
kata          kata          152m
kata-remote   kata-remote   152m
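If the image creation job does not complete, a simple way to inspect it is to look at the pod it spawned. This is a generic troubleshooting sketch; the exact pod name carries a random suffix, so substitute the name printed by the first command:

# Find the pod created by the image creation job
oc get pods -n openshift-sandboxed-containers-operator | grep osc-podvm-image-creation
# Then read its logs, using the pod name printed above
oc logs -n openshift-sandboxed-containers-operator <osc-podvm-image-creation-pod-name>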
This is the expected output when looking at the OSC pods (note that the random suffixes will differ):
oc get pods -n openshift-sandboxed-containers-operator
NAME READY STATUS RESTARTS AGE
controller-manager-5dd87698b7-9cqbn 2/2 Running 0 17m
openshift-sandboxed-containers-monitor-m9ffw 1/1 Running 0 30m
operator-metrics-server-857fb654c4-z24f4 1/1 Running 0 20m
osc-podvm-image-creation-fltm8 0/1 Completed 0 17m
peer-pods-webhook-65cffdd499-2nh9q 1/1 Running 0 2m59s
peer-pods-webhook-65cffdd499-8x684 1/1 Running 0 2m59s
osc-caa-ds-vkfm5 1/1 Running 0 2m59s
This is it! The cluster is now ready to run workloads with the kata-remote RuntimeClass!
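As a quick smoke test (not part of the workshop itself), here is a sketch of a throwaway pod that runs on the kata-remote runtime; the pod name and image are placeholders:

# Hypothetical smoke test: pod name and image are illustrative only
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: coco-smoke-test
spec:
  runtimeClassName: kata-remote
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "3600"]
EOF

# Wait for the pod to reach Running (the peer-pod VM is created behind the scenes)
oc get pod coco-smoke-test -w

# Clean up; this should also remove the peer-pod VM
oc delete pod coco-smoke-test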