treefmt: add vale #281

Merged · 4 commits · Mar 28, 2024

Changes from all commits
16 changes: 16 additions & 0 deletions .vale.ini
@@ -0,0 +1,16 @@
StylesPath = docs/styles
Vocab = edgeless

[*.md]
BasedOnStyles = Vale, Microsoft, Google

# Disable rules
Vale.Terms = NO

# decrease to suggestion
Microsoft.HeadingAcronyms = suggestion # doesn't consider well-known ones
Microsoft.GeneralURL = suggestion # ok for technical users

# increase to warning
Microsoft.OxfordComma = warning
Microsoft.SentenceLength = warning
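
With this configuration in place, the checks can also be run by hand. A minimal sketch, assuming the Microsoft and Google styles are already available under `docs/styles` and that this PR wires vale into the treefmt pipeline (suggested by the title, but not visible in this diff):

```sh
# Lint Markdown files against the .vale.ini picked up from the repository root.
vale README.md CONTRIBUTING.md

# Assumed: run the whole treefmt pipeline (including vale) via the flake formatter.
nix fmt
```
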
16 changes: 8 additions & 8 deletions CONTRIBUTING.md
@@ -5,8 +5,8 @@
1. [Install Nix](https://zero-to-nix.com/concepts/nix-installer)

2. (Optional) configure Nix to allow use of extra substituters, and profit from our
cachix remote cache. To allow using additional substituters from the flake.nix,
add yourself (or the wheel group) as trusted-user in your nix config.
cachix remote cache. To allow using additional substituters from the `flake.nix`,
add your user name (or the wheel group) as trusted-user in your nix config.

On NixOS (in your config):
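
The actual snippet is collapsed in this view; as an illustrative sketch, the relevant NixOS option looks roughly like this (the user name is a placeholder):

```nix
# Illustrative only; substitute your own user name.
nix.settings.trusted-users = [ "yourname" "@wheel" ];
```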

@@ -30,10 +30,10 @@
nix develop .#
```

Or activate [direnv](https://direnv.net/) to automatically enter the nix shell.
It is recommended to use [nix-direnv](https://github.com/nix-community/nix-direnv).
If your system ships outdated bash, [install direnv](https://direnv.net/docs/installation.html) via package manager.
Additionally, you may want to add the [vscode extension](https://github.com/direnv/direnv-vscode).
Or activate [`direnv`](https://direnv.net/) to automatically enter the nix shell.
It's recommended to use [`nix-direnv`](https://github.com/nix-community/nix-direnv).
If your system ships outdated bash, [install `direnv`](https://direnv.net/docs/installation.html) via package manager.
Additionally, you may want to add the [VSCode extension](https://github.com/direnv/direnv-vscode).

```sh
direnv allow
@@ -60,7 +60,7 @@

### Deploy

The ususally dev flow is available as a single target to execute:
The usual developer flow is available as a single target to execute:

```sh
just [default <deployment-name>]
@@ -71,7 +71,7 @@ Ensure the pushed container images are accessible to your cluster.
The manifest will then be generated (`contrast generate`).

Further the flow will deploy the selected deployment and wait for components to come up.
The manifest will automatically be set (`contrast set`) and the Coordinator will will be verified
The manifest will automatically be set (`contrast set`) and the Coordinator will be verified
(`contrast verify`). The flow will also wait for the workload to get ready.

This target is idempotent and will delete an existing deployment before re-deploying.
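
For orientation, a rough sketch of the manual steps that this target bundles; flags, ports, and paths are assumptions rather than values taken from this diff:

```sh
# Generate runtime policies and the manifest from the deployment files (path assumed).
contrast generate resources/

# Deploy, then set the manifest at the Coordinator and verify it (address and port assumed).
kubectl apply -f resources/
contrast set -c "${coordinator}:1313" resources/
contrast verify -c "${coordinator}:1313"
```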
24 changes: 12 additions & 12 deletions README.md
@@ -12,7 +12,7 @@ Contrast currently targets the [CoCo preview on AKS](https://learn.microsoft.com

## Goal

Contrast is designed to keep all data always encrypted and to prevent access from the infrastructure layer, i.e., remove the infrastructure from the trusted computing base (TCB). This includes access from datacenter employees, privileged cloud admins, own cluster administrators, and attackers coming through the infrastructure, e.g., malicious co-tenants escalating their privileges.
Contrast is designed to keep all data always encrypted and to prevent access from the infrastructure layer. It removes the infrastructure provider from the trusted computing base (TCB). This includes access from datacenter employees, privileged cloud admins, own cluster administrators, and attackers coming through the infrastructure, for example, malicious co-tenants escalating their privileges.

Contrast integrates fluently with the existing Kubernetes workflows. It's compatible with managed Kubernetes, can be installed as a day-2 operation and imposes only minimal changes to your deployment flow.

@@ -71,20 +71,20 @@ A third party can use this to verify the integrity of your distributed app, maki
### The Manifest

The manifest is the configuration file for the Coordinator, defining your confidential deployment.
It is automatically generated from your deployment by the Contrast CLI.
It's automatically generated from your deployment by the Contrast CLI.
It currently consists of the following parts:

* *Policies*: The identities of your Pods, represented by the hashes of their respective runtime policies.
* *Reference Values*: The remote attestation reference values for the Kata confidential micro-VM that is the runtime environment of your Pods.
* *Reference Values*: The remote attestation reference values for the Kata confidential micro-VM that's the runtime environment of your Pods.
* *WorkloadOwnerKeyDigest*: The workload owner's public key digest. Used for authenticating subsequent manifest updates.

### Runtime Policies

Runtime Policies are a mechanism to enable the use of the (untrusted) Kubernetes API for orchestration while ensuring the confidentiality and integrity of your confidential containers.
Runtime Policies are a mechanism to enable the use of the untrusted Kubernetes API for orchestration while ensuring the confidentiality and integrity of your confidential containers.
They allow us to enforce the integrity of your containers' runtime environment as defined in your deployment files.
The runtime policy mechanism is based on the Open Policy Agent (OPA) and translates the Kubernetes deployment YAMLs into OPA's Rego policy language.
The runtime policy mechanism is based on the Open Policy Agent (OPA) and translates the Kubernetes deployment YAML into the Rego policy language of OPA.
The Kata Agent inside the confidential micro-VM then enforces the policy by only acting on permitted requests.
The Contrast CLI provides the tooling for automatically translating Kubernetes deployment YAMLs into OPA's Rego policy language.
The Contrast CLI provides the tooling for automatically translating Kubernetes deployment YAML into the Rego policy language of OPA.
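
As a heavily simplified illustration of that translation (this isn't actual genpolicy output; the input fields are made-up stand-ins):

```rego
# Deny by default, allow container creation only when the request matches
# what the deployment YAML declared. Field names below are illustrative.
package agent_policy

default CreateContainerRequest := false

CreateContainerRequest {
    input.image == "registry.example.com/app@sha256:abc123"
    input.command == ["/bin/app-server"]
}
```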

The trust chain goes as follows:

@@ -97,7 +97,7 @@ The trust chain goes as follows:
7. The CLI sets a manifest in the Contrast Coordinator, including a list of permitted policies.
8. The Contrast Coordinator verifies that the started pod has a permitted policy hash in its `HOSTDATA` field.

After the last step, we know that the policy has not been tampered with and, thus, that the workload is as intended.
After the last step, we know that the policy hasn't been tampered with and, thus, that the workload is as intended.

### The Contrast Initializer

@@ -196,7 +196,7 @@ helm template release-name chart-name > resources/all.yml
To specify that a workload (pod, deployment, etc.) should be deployed as confidential containers,
add `runtimeClassName: kata-cc-isolation` to the pod spec (pod definition or template).
In addition, add the Contrast Initializer as `initContainers` to these workloads and configure the
workload to use the certificates written to the `tls-certs` volumeMount.
workload to use the certificates written to a `volumeMount` named `tls-certs`.

```yaml
spec: # v1.PodSpec
@@ -243,8 +243,8 @@ coordinator=$(kubectl get svc coordinator -o=jsonpath='{.status.loadBalancer.ing
```

> [!NOTE]
> `kubectl port-forward` uses a CRI method that is not supported by the Kata shim. If you
> cannot use a public load balancer, you can deploy a [deployments/simple/portforwarder.yml] and
> `kubectl port-forward` uses a CRI method that isn't supported by the Kata shim. If you
> can't use a public load balancer, you can deploy a [deployments/simple/portforwarder.yml] and
> expose that with `kubectl port-forward` instead.
>
> Tracking issue: <https://github.com/kata-containers/kata-containers/issues/1693>.
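
A hedged sketch of that workaround; the forwarder's pod name and port are assumptions, check `deployments/simple/portforwarder.yml` for the real values:

```sh
kubectl apply -f deployments/simple/portforwarder.yml
# Pod name and port are assumed for illustration.
kubectl port-forward pod/port-forwarder-coordinator 1313:1313
```
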
@@ -284,7 +284,7 @@ lbip=$(kubectl get svc ${MY_SERVICE} -o=jsonpath='{.status.loadBalancer.ingress[
echo $lbip
```

Note: All workload certificates are created with a wildcard DNS entry. Since we are accessing the load balancer via IP, the SAN checks the certificate for IP entries in the SAN field. Since the certificate doesn't contain any IP entries as SAN, the validation fails.
Note: All workload certificates are created with a wildcard DNS entry. Since we're accessing the load balancer via IP, the SAN checks the certificate for IP entries in the SAN field. Since the certificate doesn't contain any IP entries as SAN, the validation fails.
Hence, with curl you need to skip the validation:

```sh
@@ -308,7 +308,7 @@ As a result, there are currently certain limitations from which we try to docume
- Persistent volumes are currently not supported in CoCo
- While workload policies are functional in general, they're [not covering all edge cases](https://github.com/microsoft/kata-containers/releases/tag/genpolicy-0.6.2-5)
- Port-forwarding isn't supported by Kata Containers yet
- CLI is only available for Linux (mostly because upstream dependencies are not available for other platforms)
- CLI is only available for Linux (mostly because upstream dependencies aren't available for other platforms)

## Upcoming Contrast features

24 changes: 12 additions & 12 deletions dev-docs/aks/nested-virt-internals.md
@@ -6,7 +6,7 @@ The instance types used are from the [DCas_cc_v5 and DCads_cc_v5-series](https:/

## Using nested CVM-in-VM outside of AKS

The instance types and images can be used for regular VMs (and scalesets):
The instance types and images can be used for regular VMs (and scale sets):

```sh
LOCATION=westeurope
@@ -21,13 +21,13 @@ az vm create \
az ssh vm --resource-group "${RG}" --vm-name nested-virt-test
```

The VM has access to hyperv as a paravisor via `/dev/mshv` (and *not* `/dev/kvm`, which would be expected for nested virt on Linux). The userspace component for spawning nested VMs on mshv is part of [rust-vmm](https://github.com/rust-vmm/mshv).
This feature is enabled by kernel patches which are not yet upstream. At the time of writing, the VM image uses a kernel with a uname of `5.15.126.mshv9-2.cm2` and this additional kernel cmdline parameter: `hyperv_resvd_new=0x1000!0x933000,0x17400000!0x104e00000,0x1000!0x1000,0x4e00000!0x100000000`.
The VM has access to hyperv as a paravisor via `/dev/mshv` (and *not* `/dev/kvm`, which would be expected for nested virtualization on Linux). The user space component for spawning nested VMs on `mshv` is part of [rust-vmm](https://github.com/rust-vmm/mshv).
This feature is enabled by kernel patches which aren't yet upstream. At the time of writing, the VM image uses a kernel with a `uname` of `5.15.126.mshv9-2.cm2` and this additional kernel cmdline parameter: `hyperv_resvd_new=0x1000!0x933000,0x17400000!0x104e00000,0x1000!0x1000,0x4e00000!0x100000000`.

The Kernel is built from a CBL Mariner [spec file](https://github.com/microsoft/CBL-Mariner/blob/2.0/SPECS/kernel-mshv/kernel-mshv.spec) and the Kernel source is [publicly accessible](https://cblmarinerstorage.blob.core.windows.net/sources/core/kernel-mshv-5.15.126.mshv9.tar.gz).
MicroVM guests use a newer Kernel ([spec](https://github.com/microsoft/CBL-Mariner/tree/2.0/SPECS/kernel-uvm-cvm)).

The image also ships parts of the confidential-containers tools, the kata-runtime, cloud-hypervisor and related tools (kata-runtime, cloud-hypervisor, [containerd](https://github.com/microsoft/confidential-containers-containerd)), some of which contain patches that are not yet upstream, as well as a custom guest image under `/opt`:
The image also ships parts of the confidential-containers tools, the kata-runtime, cloud-hypervisor and related tools (kata-runtime, cloud-hypervisor, [containerd](https://github.com/microsoft/confidential-containers-containerd)), some of which contain patches that aren't yet upstream, as well as a custom guest image under `/opt`:

<details>
<summary><code>$ find /opt/confidential-containers/</code></summary>
@@ -56,7 +56,7 @@ The image also ships parts of the confidential-containers tools, the kata-runtim
```
</details>

With those components, it is possible to use the `containerd-shim-kata-cc-v2` runtime for containerd (or add it as a runtime class in k8s).
With those components, it's possible to use the `containerd-shim-kata-cc-v2` runtime for containerd (or add it as a runtime class in k8s).
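
For the Kubernetes side, a runtime class referencing that shim might look roughly like this; both names are assumptions based on the runtime entry and the `runtimeClassName` used elsewhere in this repo:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-cc-isolation  # assumed to match the runtimeClassName used in the deployments
handler: kata-cc           # assumed from the io.containerd.kata-cc.v2 runtime name
```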

<details>
<summary><code>/etc/containerd/config.toml</code></summary>
@@ -116,7 +116,7 @@ oom_score = 0
</details>

<details>
<summary>Runtime options extraced by <code>crictl inspect CONTAINER_ID | jq .info</code> (truncated)</summary>
<summary>Runtime options extracted by <code>crictl inspect CONTAINER_ID | jq .info</code> (truncated)</summary>

```json
{
@@ -131,7 +131,7 @@ oom_score = 0
</details>

With the containerd configuration from above, the following commands can be used to start the [tardev-snapshotter](https://github.com/kata-containers/tardev-snapshotter) and spawn a container using the `kata-cc` runtime class.
Please note that while this example almost works, it does not currently result in a working container outside of AKS.
Please note that while this example almost works, it doesn't currently result in a working container outside of AKS.
This is probably due to a mismatch of exact arguments or configuration files.
Maybe a policy annotation is required for booting.

@@ -171,15 +171,15 @@

### Resource Management

There's [AKS documentation for resource managment] which explains the basics of how CPU and
There's [AKS documentation for resource management] which explains the basics of how CPU and
memory are allocated for a Kata VM.
The default memory allocation is quite high at 2GiB, which fills up the node pretty quickly.
It's unclear why this deafult is chosen, given that the container limit is added on top of this
The default memory allocation is quite high at 2GiB, which fills up the node fast.
It's unclear why this default is chosen, given that the container limit's added on top of this
value. Forcing a size with the pod annotation
`io.katacontainers.config.hypervisor.default_memory` would be possible, but the annotation would
need to be allowlisted in the config setting `enable_annotations`.
need to be allow-listed in the config setting `enable_annotations`.

[AKS documentation for resource managment]: https://learn.microsoft.com/en-us/azure/aks/confidential-containers-overview#resource-allocation-overview
[AKS documentation for resource management]: https://learn.microsoft.com/en-us/azure/aks/confidential-containers-overview#resource-allocation-overview
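
For illustration, forcing a smaller VM size could look like the snippet below; the value and its unit (MiB) are assumptions, and the annotation only takes effect once allow-listed as described above:

```yaml
metadata:
  annotations:
    # Assumed value; Kata's default_memory is interpreted in MiB.
    io.katacontainers.config.hypervisor.default_memory: "512"
```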

<details>
<summary>Relevant config snippet</summary>
12 changes: 6 additions & 6 deletions dev-docs/aks/serial-console.md
@@ -1,38 +1,38 @@
# Obtain a serial console inside the podvm

Get a shell on the AKS node. If in doubt, use [nsenter-node.sh](https://github.com/alexei-led/nsenter/blob/master/nsenter-node.sh).
Now run the following commands to use a debug [igvm](https://docs.google.com/presentation/d/1uWeyqtYV53Vtxd3ayYWWTLbxasYNr35a/) and enable debugging for the kata runtime.
Now run the following commands to use a debug [IGVM](https://docs.google.com/presentation/d/1uWeyqtYV53Vtxd3ayYWWTLbxasYNr35a/) and enable debugging for the Kata runtime.

```sh
sed -i -e 's#^igvm = "\(.*\)"#igvm = "/opt/confidential-containers/share/kata-containers/kata-containers-igvm-debug.img"#g' /opt/confidential-containers/share/defaults/kata-containers/configuration-clh-snp.toml
sed -i -e 's/^#enable_debug = true/enable_debug = true/g' /opt/confidential-containers/share/defaults/kata-containers/configuration-clh-snp.toml
systemctl restart containerd
```

Now you need to reconnect to the host. Use the following commands to print the sandbox ids of kata VMs.
Now you need to reconnect to the host. Use the following commands to print the sandbox ids of Kata VMs.
Please note that only the pause container of every pod has the `clh.sock`. Other containers are part of the same VM.

```shell-session
$ ctr --namespace k8s.io container ls "runtime.name==io.containerd.kata-cc.v2"
$ sandbox_id=ENTER_SANDBOX_ID_HERE
```

And attach to the serial console using socat. You need to type `CONNECT 1026<ENTER>` to get a shell.
And attach to the serial console using `socat`. You need to type `CONNECT 1026<ENTER>` to get a shell.

```shell-session
$ cd /var/run/vc/vm/${sandbox_id}/ && socat stdin unix-connect:clh.sock
CONNECT 1026
```

Alternatively, you can use `kata-runtime exec`, which gives you proper tty behavior including tab completion.
First ensure the kata-monitor is running, then connect.
Alternatively, you can use `kata-runtime exec`, which gives you proper TTY behavior including tab completion.
First ensure the `kata-monitor` is running, then connect.

```
$ kata-monitor &
$ kata-runtime exec ${sandbox_id}
```

If you are done, use the following commands to go back to a release igvm.
If you are done, use the following commands to go back to a release IGVM.

```sh
sed -i -e 's#^igvm = "\(.*\)"#igvm = "/opt/confidential-containers/share/kata-containers/kata-containers-igvm.img"#g' /opt/confidential-containers/share/defaults/kata-containers/configuration-clh-snp.toml