
Proxy configuration

The continuum-proxy is a service that must be deployed by users of Continuum. Once started, the proxy will serve as your API endpoint, handling all the heavy lifting to guarantee end-to-end encryption for you.

The proxy does two things:

  1. Verifies the Continuum deployment at https://api.ai.confidential.cloud/. This is where your encrypted prompts are processed by the GenAI. The verification process is described in the attestation section.
  2. Transparently encrypts user prompts and decrypts responses from the Continuum API.

The proxy is published as a container image in the GitHub Container Registry (ghcr.io).

Running the container

The following command starts the proxy and exposes it on host port 8080:

docker run -p 8080:8080 ghcr.io/edgelesssys/continuum/continuum-proxy:latest
info

Supply chain security best practices recommend pinning containers by their hash. This means specifying the exact cryptographic digest (hash) of the container image, rather than relying on tags like latest or version labels. By doing so, you ensure that the exact, verified version of the container is used, which helps prevent issues like unexpected updates or potential compromise.
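Pinning by digest can be sketched as follows. The digest below is a placeholder; substitute the digest you verified, for example the one reported by `docker inspect --format '{{index .RepoDigests 0}}' ghcr.io/edgelesssys/continuum/continuum-proxy:latest` after pulling the image:

```shell
image="ghcr.io/edgelesssys/continuum/continuum-proxy"
# Placeholder digest -- replace with the digest you verified.
digest="sha256:0000000000000000000000000000000000000000000000000000000000000000"
# Running by digest instead of a mutable tag like "latest":
echo "docker run -p 8080:8080 ${image}@${digest}"
```

Unlike tags, a digest is immutable: the registry can never serve different content for the same digest, so the container you run is exactly the one you audited.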

CLI flags

To see all available CLI flags, run:

docker run ghcr.io/edgelesssys/continuum/continuum-proxy:latest --help

Options

      --apiEndpoint string    The endpoint for the Continuum API (default "api.ai.confidential.cloud:443")
      --apiKey string         The API key for the Continuum API. If no key is set, the verifier will not authenticate with the API.
      --asEndpoint string     The endpoint of the attestation service. (default "attestation.ai.confidential.cloud:3000")
      --disableUpdate         By default the manifest is auto-updated when the Continuum manifest changes. The update behavior can be disabled with this flag.
  -h, --help                  help for continuum-proxy
  -l, --log-level string      set logging level (debug, info, warn, error, or a number) (default "info")
      --manifestPath string   The path for the manifest file. If not provided, the manifest will be read from the remote source.
      --port string           The port on which the verifier listens for incoming API requests. (default "8080")
      --tlsCertPath string    The path to the TLS certificate. If not provided, the server will start without TLS.
      --tlsKeyPath string     The path to the TLS key. If not provided, the server will start without TLS.
      --workspace string      The path into which the binary writes files. This includes the manifest log data in the 'manifests' subdirectory. (default ".")
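A typical invocation combines several of these flags. The values below are illustrative placeholders; the mounted certificate paths assume you keep your TLS files in a local certs directory:

```shell
docker run -p 8443:8443 \
  -v "$(pwd)/certs:/certs:ro" \
  ghcr.io/edgelesssys/continuum/continuum-proxy:latest \
  --apiKey "your-api-key" \
  --port 8443 \
  --tlsCertPath /certs/tls.crt \
  --tlsKeyPath /certs/tls.key
```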

Extract a static binary

If you want to run the proxy as a standalone binary, you can extract it from the container image. Replace <arch> below with your architecture (amd64 or arm64) to obtain a static Linux binary:

containerID=$(docker create --platform linux/<arch> ghcr.io/edgelesssys/continuum/continuum-proxy:latest)
docker cp -L "${containerID}":/bin/continuum-proxy ./continuum-proxy
docker rm "${containerID}"
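The <arch> placeholder can also be derived from the host automatically. A minimal sketch, mapping the output of uname -m to the two platforms the image is published for:

```shell
# Map the kernel's architecture name to the image platform name.
case "$(uname -m)" in
  x86_64)        arch=amd64 ;;
  aarch64|arm64) arch=arm64 ;;
  *) echo "unsupported architecture" >&2; exit 1 ;;
esac
echo "$arch"
```

You can then substitute "$arch" for <arch> in the docker create command above.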

Outbound network traffic

When running the proxy in an environment with restricted firewall settings, you might need to allow outbound traffic to the following domains:

  • weu.service.attest.azure.net: for attestation verification
  • kdsintf.amd.com: for verifying the certificate chain of the signed CPU attestation report
  • attestation.ai.confidential.cloud: for communication and verification of the attestation service
  • cdn.confidential.cloud: for fetching the latest manifest
  • api.ai.confidential.cloud: for sending encrypted LLM requests
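To keep firewall configuration in sync with this list, you can collect the domains in a single file and feed it into your firewall tooling. The file name and plain-list format below are illustrative, not a specific firewall syntax:

```shell
# One domain per line, suitable for scripting an allowlist.
cat > continuum-allowlist.txt <<'EOF'
weu.service.attest.azure.net
kdsintf.amd.com
attestation.ai.confidential.cloud
cdn.confidential.cloud
api.ai.confidential.cloud
EOF
wc -l < continuum-allowlist.txt
```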

Setting up TLS

To ensure secured communication between your application and the proxy, configuring TLS is highly recommended.

Use the flags --tlsCertPath and --tlsKeyPath to provide a valid TLS certificate and private key, respectively.

  • --tlsCertPath should point to the path of the TLS certificate file (e.g., .crt or .pem) used to identify the server.
  • --tlsKeyPath should point to the private key file that corresponds to the certificate.

By providing these, continuum-proxy will serve traffic to and from your application client via HTTPS, ensuring secure communication with the proxy. If these flags aren't set, the proxy falls back to serving traffic over plain HTTP.
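For local testing, you can generate a self-signed certificate with openssl and point the flags at the resulting files. This is a sketch for development only; production deployments should use a CA-issued certificate, and the file names are arbitrary:

```shell
# Self-signed certificate, valid for 365 days, for local testing only.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout proxy.key -out proxy.crt

# Then start the proxy with:
#   --tlsCertPath proxy.crt --tlsKeyPath proxy.key
```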

Authorization token

Since you aren't directly communicating with the GenAI backend but are instead using the proxy to establish an end-to-end encrypted connection, you should use the --apiKey flag to provide a valid API key to the continuum-proxy. The proxy will handle authentication with the GenAI backend on your behalf, making integration easier.

Think of the proxy as your backend—it manages the secure communication and authentication for you, so you can focus on making requests without needing to worry about the underlying details.

Proxy upgrades

It’s possible that an upgrade to the Continuum API introduces a new manifest (essentially the expected configuration and state of the endpoints your proxy communicates with) which is incompatible with your current version of continuum-proxy. In such cases, you may encounter issues where the updated manifest can't be properly read or processed by the proxy—this is known as an "unmarshalling" error.

When this happens, you will need to update your proxy (the Docker container) to a compatible version.

In the future, we will provide documentation on how to implement automatic updates, which will help mitigate these types of issues.

Manifest management

Whenever the continuum-proxy verifies the Continuum deployment, it relies on a manifest to determine whether the services should be trusted. The manifest contains fingerprints of expected configurations and states of trusted endpoints. If they differ from the actual configurations and states, the services aren't to be trusted. You can find more details on this process in the Manifest section.

By default, the manifest is managed automatically. Manual control is possible, but it's not recommended in production since updates to the Continuum API are rolled out continuously.

Automatically

By default, the proxy fetches a manifest from a CDN controlled by Edgeless Systems. Whenever validation of the cluster fails, the proxy will try to fetch a new manifest from the CDN and retry validation. This allows the proxy to continue working without manual intervention, even if the deployment changes.

To ensure auditability of the enforced manifests over time, changes to the manifest are logged to the local file system. These logs serve as a transparency log, recording which manifest was used at what point in time to verify the Continuum deployment.

The proxy writes a file called log.txt. For each manifest that's enforced by the proxy, log.txt contains a new line with the timestamp at which enforcement began and the filename of the manifest that was enforced. log.txt and the corresponding manifests are stored in a folder continuum-manifest. The CLI flag --workspace can be used to control where the folder continuum-manifest is stored.

You should mount the workspace to the Docker host to ensure this transparency log isn't lost when the container is removed:

docker run -p 8080:8080 -v proxy-logs:/app/continuum-proxy ghcr.io/edgelesssys/continuum/continuum-proxy:latest --workspace /app/continuum-proxy

Manually

You can write a manifest manually, save it to disk and provide the file path to the continuum-proxy via its --manifestPath CLI flag.
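When running in Docker, the manifest file must be mounted into the container so the flag can reference it. The container-side path below is an arbitrary choice for illustration:

```shell
docker run -p 8080:8080 \
  -v "$(pwd)/manifest.toml:/etc/continuum/manifest.toml:ro" \
  ghcr.io/edgelesssys/continuum/continuum-proxy:latest \
  --manifestPath /etc/continuum/manifest.toml
```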

warning

This approach isn’t recommended for production because updates to the Continuum API are continuously rolled out. Each update includes a new manifest, which invalidates the current manifest and prevents successful validation through the proxy. As a result, the manifest needs to be manually updated with each Continuum API update.

Helm chart

You can use the continuum-proxy Helm chart for easy deployment to Kubernetes.

Prerequisites

  • Kubernetes 1.16+
  • Helm 3+
  • (Optional) Persistent Volume for workspace
  • (Optional) ConfigMap for manifest file
  • (Optional) TLS secret for certificates

Installation

helm repo add edgeless https://helm.edgeless.systems/stable
helm repo update

helm install continuum-proxy edgeless/continuum-proxy

Configuration

API Key

The API key should be stored in a Kubernetes secret. You can create it using:

kubectl create secret generic continuum-api-key --from-literal=apiKey=your-api-key

Persistent volumes

To ensure that the application’s data is persisted beyond the lifetime of the current deployment, you need to configure a Persistent Volume. This is particularly important when relying on automatic manifest updates as described in the manifest management section.

Where the Docker setup simply mounted the container data to the host, Kubernetes requires a Persistent Volume. Without one, the transparency log is lost when the pod is deleted or restarted, hindering the ability to track and verify historical deployment data.

Create the PersistentVolumeClaim:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: continuum-proxy-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

Then, configure these values for your chart:

config:
  workspace:
    enabled: true
    volumeClaimName: "continuum-proxy-pvc"

TLS Configuration

To enable TLS for communication between your application and the Continuum proxy, provide the TLS certificate and key through a Kubernetes secret. You can create the secret manually or use cert-manager to manage it:

kubectl create secret tls continuum-proxy-tls \
--cert=<path-to-cert> --key=<path-to-key>

Then, configure these values for your chart:

config:
  tls:
    enabled: true
    secretName: "continuum-proxy-tls"

Manifest File Configuration

While manually managing manifests isn't recommended (see Manifest management), you can pass in the manifest via a ConfigMap.

Create the ConfigMap from your manifest file:

kubectl create configmap continuum-proxy-config --from-file=manifest.toml=/path/to/your/manifest.toml

Then, configure these values for your chart:

config:
  manifest:
    enabled: true
    configMapName: "continuum-proxy-config"
    fileName: "manifest.toml"
    mountPath: "/etc/config/manifest.toml"

Accessing the proxy

Once the deployment is complete, you can configure your application to access the API through the continuum-proxy service’s domain.

By default, the Continuum proxy can be accessed at the following URL:

http://continuum-proxy-continuum-proxy.default.svc.cluster.local:8080/v1

This URL is constructed as follows:

http://{helm-release}-continuum-proxy.{namespace}.svc.cluster.local:{port}/v1
  • {helm-release}: The name of your Helm release.
  • {namespace}: The Kubernetes namespace where the continuum-proxy is deployed.
  • {port}: The port configured for the continuum-proxy service (default is 8080).
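Putting the parts together yields the default URL shown above; a sketch, using the default release name, namespace, and port:

```shell
helm_release="continuum-proxy"
namespace="default"
port="8080"
# Assemble the in-cluster service URL from its parts.
echo "http://${helm_release}-continuum-proxy.${namespace}.svc.cluster.local:${port}/v1"
```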

If you have configured a custom DNS entry in your Kubernetes cluster, you will need to adjust the URL accordingly. Replace the default service domain with your custom domain, ensuring that your application can correctly resolve and communicate with the continuum-proxy service.

Uninstallation

To uninstall the chart:

helm uninstall continuum-proxy