Deploy a sample pod
Now that everything is ready, we can run a sample workload. Let’s first see what we can and must add to the pod yaml to make it run in a VM.
Because of the CPU quota limitations of this ARO subscription, and since each CoCo pod runs in a Confidential VM, you are only allowed to run a single CoCo pod at a time. Trying to deploy a second one will result in failure, with the container stuck in the ContainerCreating state.
In this section, we will show two simple examples: how to enable CoCo in a traditional pod, and how to perform attestation. Please refer to the conclusion to learn about more complex examples with AI, GPUs, and so on.
Available options
Mandatory options
In order to run a pod in a VM, it is mandatory to specify the runtimeClassName field in the pod spec. For peer-pods, the runtime class is called kata-remote.
apiVersion: v1
kind: Pod
# ...
spec:
  runtimeClassName: kata-remote
# ...
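The kata-remote runtime class should have been created as part of the earlier setup. If you want to confirm it is actually available on the cluster before referencing it, a quick check along these lines should work (a sketch using a standard oc query):

oc get runtimeclass kata-remote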
Optional settings
- Change the image to be used for the given pod manifest:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    io.katacontainers.config.hypervisor.image: /your/custom/image/path/here
# ...

This overrides AZURE_IMAGE_ID in the peer-pods ConfigMap, and it is simply a path to the Azure image gallery/definition/version containing the custom image. Note that the image has to be accessible by the OpenShift cluster resource group, otherwise it won’t be able to pull it.
- Change the instance size to be used for the given pod manifest:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    io.katacontainers.config.hypervisor.machine_type: Standard_DC8as_v5
# ...

Note that the machine_type must be one of the sizes specified in AZURE_INSTANCE_SIZES in the OSC ConfigMap. If the defined instance size is allowed by AZURE_INSTANCE_SIZES, it will override AZURE_INSTANCE_SIZE defined in the peer-pods ConfigMap. Otherwise, it will default to AZURE_INSTANCE_SIZE.
- Change the initdata policy to be used for the given pod manifest. As explained in the initdata section, it is also possible to change the initdata policy by adding the base64-encoded policy as io.katacontainers.config.runtime.cc_init_data under metadata:annotations in the pod spec (see the sketch after this list). You will also need to update the PCR8 value in the Trustee reference values.
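Putting these optional annotations together, a pod manifest that combines a custom pod VM image, a different instance size, and an inline initdata policy might look like the following sketch. The annotation keys are the ones documented above; the pod name, the container name, the image path, and the base64 placeholder are purely illustrative (the real value would come from something like base64 -w0 your-initdata-file):

apiVersion: v1
kind: Pod
metadata:
  name: my-coco-pod
  annotations:
    io.katacontainers.config.hypervisor.image: /your/custom/image/path/here
    io.katacontainers.config.hypervisor.machine_type: Standard_DC8as_v5
    io.katacontainers.config.runtime.cc_init_data: "<base64-encoded initdata policy>"
spec:
  runtimeClassName: kata-remote
  containers:
    - name: my-container
      image: quay.io/confidential-devhub/signed-hello-openshift:latest

Remember that, as noted above, changing the initdata policy also requires updating the PCR8 value in the Trustee reference values, and the instance size must be listed in AZURE_INSTANCE_SIZES.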
Hello world example
In this example we will show how easy it is to modify an existing pod to make it run in CoCo, i.e. by specifying runtimeClassName in the pod spec. No other action is necessary with respect to the pod itself, and the confidential VM is completely transparent to it.
This is a sample yaml that runs a hello-openshift pod in the default namespace. The pod application was not developed by the CoCo team, nor was it purposefully modified for this example. It was built from here, signed, and pushed into the quay.io/confidential-devhub/signed-hello-openshift repo. This pod creates a server and outputs "Hello Openshift!" every time it is reached. The only difference between this pod deployed as a Confidential Container and a traditional pod is that the former has spec.runtimeClassName: kata-remote defined in the pod spec.
In order to use the Sealed Secret support feature, we will also attach a volume that will be used to load the required key if attestation is successful.
- Switch to the default namespace if not done already:

oc project default
- Create and apply the yaml file.

cat > sample-openshift.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift
  namespace: default
  labels:
    app: hello-openshift
spec:
  runtimeClassName: kata-remote
  containers:
    - name: hello-openshift
      image: quay.io/confidential-devhub/signed-hello-openshift:latest
      ports:
        - containerPort: 8888
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1001
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
      volumeMounts:
        - name: sealed-secret-volume
          mountPath: "/sealed/secret-value"
  volumes:
    - name: sealed-secret-volume
      secret:
        secretName: sealed-secret
---
kind: Service
apiVersion: v1
metadata:
  name: hello-openshift-service
  namespace: default
  labels:
    app: hello-openshift
spec:
  selector:
    app: hello-openshift
  ports:
    - port: 8888
EOF
clear
cat sample-openshift.yaml
oc apply -f sample-openshift.yaml
- Wait for the pod to be created.

watch oc get pods/hello-openshift

The pod is ready when the STATUS is Running.
- Now expose the pod to make it reachable:

oc expose service hello-openshift-service -l app=hello-openshift
APP_URL=$(oc get routes/hello-openshift-service -o jsonpath='{.spec.host}')
- And try to connect to the pod. It should print Hello Openshift!.

curl ${APP_URL}
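As noted earlier, the only CoCo-specific change in the manifest is the runtime class. If you want to double-check that the running pod was indeed admitted with it, a quick query along these lines should work (a sketch using standard oc JSONPath output; it should print kata-remote):

oc get pod hello-openshift -n default -o jsonpath='{.spec.runtimeClassName}' && echo ""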
Verify that the pod is running in a VM
How can we be sure that everything we did so far is actually running in a VM? There are several ways to check this.
Let’s check it via the command line using az.
az vm list --query "[].{Name:name, VMSize:hardwareProfile.vmSize}" --output table
Example output:
Name VMSize
-------------------------------------------- ----------------
aro-cluster-q5hqf-xs7zb-master-0 Standard_D8s_v3
aro-cluster-q5hqf-xs7zb-master-1 Standard_D8s_v3
aro-cluster-q5hqf-xs7zb-master-2 Standard_D8s_v3
aro-cluster-q5hqf-xs7zb-worker-eastus1-6rlsl Standard_D4s_v3
aro-cluster-q5hqf-xs7zb-worker-eastus2-vt87j Standard_D4s_v3
aro-cluster-q5hqf-xs7zb-worker-eastus3-6dzt4 Standard_D4s_v3
podvm-hello-openshift-c0311387 Standard_DC4as_v5
bastion-q5hqf Standard_DS1_v2
Look at the various VMs. You will see there are:
- 3 master VMs (called aro-cluster-insert-your-guid-here-<random chars>-master-0/1/2)
- 3 worker VMs (called aro-cluster-insert-your-guid-here-<random chars>-worker-<region>-<random chars>)
- 1 bastion-insert-your-guid-here VM, used internally by the workshop infrastructure. The console on the right is actually connected to this VM, and all commands are being performed from here.
- 1 podvm-hello-openshift-<random chars> VM. This is where the hello-openshift pod is actually running! Note also how the instance type under the VMSize column is not the same as for the other VMs. It is indeed Standard_DC4as_v5, as specified in the OSC ConfigMap.
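If you just want to confirm that the peer-pod VM exists without reading through the whole list, a filtered query along these lines should also work (a sketch using a JMESPath filter on the podvm- name prefix shown above):

az vm list --query "[?starts_with(name, 'podvm-')].{Name:name, VMSize:hardwareProfile.vmSize}" --output table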
Retrieve a secret from Trustee (attestation)
This hello-openshift test pod runs under the previously configured OSC initdata policy. Therefore, if you followed the workshop initdata section, logs are enabled and it will be possible to use exec to retrieve key1.
- Check that logs are enabled:

oc logs pods/hello-openshift

And notice how the workload log (serving on 8888) is printed.
- Check that pod exec is disabled:
oc exec -it pods/hello-openshift -- bash
And notice how an error is returned:
error: Internal error occurred: error executing command in container: cannot enter container 8c0001fb69f7b8e728a3ccc8ad51d362f284f17450765f895db91dce7fc00413, with err rpc error: code = PermissionDenied desc = "ExecProcessRequest is blocked by policy: ": unknown
- Since this is one of the only commands allowed, use exec to get the Trustee key1 secret from within the pod. This key was added in Trustee when configuring it. If you followed the exact instructions, key1 was configured to contain Confidential_Secret!.

oc exec -it pods/hello-openshift -- curl -s http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1 && echo ""
And as expected, the secret is returned successfully.
[azure@bastion ~]# oc exec -it pods/hello-openshift -- curl -s http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1 && echo ""
Confidential_Secret!
Notice how the curl call is connecting to http://127.0.0.1. This is done on purpose, because the CoCo technology is designed to avoid hardcoding any special logic into the pod application. This means that a Confidential Container doesn’t have to know where the Trustee lives, what its IP is, or even care about the attestation report. This is provided in the OSC INITDATA given in the peer-pods ConfigMap or via the annotation. The URL is then automatically forwarded to the local Trustee agent running inside the CoCo Confidential VM, so all the CoCo pod application has to do is communicate locally (therefore http is enough) with the local Trustee agent and ask for the path representing the secret it would like to get, in this case kbsres1/key1 (see the short sketch after these steps). The Trustee agent will then take care of collecting hardware and software attestation proofs, creating an attestation report, establishing an https connection with the remote attester (the Trustee operator), and then performing the attestation process.
Let’s also check if
key2
is automatically loaded into the sealed secret.oc exec -it pods/hello-openshift -- cat /sealed/secret-value/key2 && echo ""
The output will be the actual content of the
key2
.[azure@bastion ~]# oc exec -it pods/hello-openshift -- /sealed/secret-value/key2 && echo "" This is my super secret key!
- Trying any other command with exec will fail.

[azure@bastion ~]# oc exec -it pods/hello-openshift -- bash
error: Internal error occurred: error executing command in container: cannot enter container d60d9d18412d0e4d9bb2e29975b420e4535bac9d966452bc58775ba847cb940c, with err rpc error: code = PermissionDenied desc = "ExecProcessRequest is blocked by policy: ": unknown
- It is also possible to inspect the Trustee logs to understand how the process worked.

POD_NAME=$(oc get pods -l app=kbs -o jsonpath='{.items[0].metadata.name}' -n trustee-operator-system)
clear
oc logs -n trustee-operator-system $POD_NAME

Expected output (filtering the important logs only):

...
[INFO api_server::http::attest] Attest API called.
[INFO attestation_service] AzSnpVtpm Verifier/endorsement check passed.
[INFO attestation_service] Policy check passed.
...
[INFO api_server::http::resource] Get resource from kbs:///default/kbsres1/key1
[INFO api_server::http::resource] Resource access request passes policy check.
[INFO actix_web::middleware::logger] 10.131.0.9 "GET /kbs/v0/resource/default/kbsres1/key1 HTTP/1.1" 200 514 "-" "attestation-agent-kbs-client/0.1.0" 0.001004
In this redacted log, we can see how the AzSnpVtpm Verifier check passed, how the policy and resource checks passed, and how the key was eventually sent to the CoCo pod.
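To make this locality explicit, here is a parameterized form of the same curl call used above. RESOURCE_PATH is just an illustrative shell variable, and default/kbsres1/key1 is exactly the path that shows up in the Trustee log as kbs:///default/kbsres1/key1:

# The resource path registered in Trustee; the pod only names the path,
# never the Trustee address itself.
RESOURCE_PATH="default/kbsres1/key1"
oc exec -it pods/hello-openshift -- curl -s http://127.0.0.1:8006/cdh/resource/${RESOURCE_PATH} && echo ""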
Destroy the example pods
The pods created in this example section are no different from any other pod, therefore they can be destroyed just like any other pod (via the command line, the web UI, etc.). Behind the scenes, the operator will make sure that the created VM is also completely deallocated.
oc delete pods/hello-openshift -n default
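If you prefer to also remove the service and the route created by oc expose in this example, a cleanup along these lines should work instead (a sketch reusing the resource names from the yaml and commands above):

oc delete -f sample-openshift.yaml
oc delete routes/hello-openshift-service -n default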