
Quickstart

Continuum is an end-to-end encrypted, privacy-preserving GenAI service that's consumed via an API. Setting it up is a breeze and shouldn't take more than 10 minutes.

1. Install Docker

Follow these instructions to install Docker.

2. Run the proxy

Continuum comes with its own proxy. The proxy takes care of client-side encryption and verifies the integrity and identity of the entire Continuum service using remote attestation. Use the following command to run the proxy:

docker run -p 8080:8080 ghcr.io/edgelesssys/continuum/continuum-proxy:latest --apiKey <your-api-token>

This exposes an endpoint on localhost, port 8080, to which you can then send prompts. You can also run the proxy wherever you like and configure TLS.
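To quickly confirm the proxy is reachable before sending prompts, you can do a plain TCP check. The sketch below is generic Python and assumes no Continuum-specific health endpoint:

import socket

# Confirm the proxy from the command above is listening.
# Adjust host/port if you mapped the container differently.
with socket.create_connection(("localhost", 8080), timeout=5):
    print("Proxy is reachable on localhost:8080")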

info

To get a free API key for testing, just fill out this form.

3. Send prompts

Now you're all set to use the API. The proxy handles all the security (and confidential computing) intricacies for you. Let's start by sending our first prompt:

Example request

curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4",
    "messages": [
      {
        "role": "system",
        "content": "Hello Continuum!"
      }
    ]
  }'
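If you prefer to send the request from Python instead of curl, here's a minimal sketch using the requests package. It assumes the proxy from step 2 is running on localhost:8080; no auth header is needed client-side, since the proxy holds the API key.

import requests

# The proxy endpoint from step 2; adjust host/port if you run it elsewhere.
url = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4",
    "messages": [
        {"role": "system", "content": "Hello Continuum!"},
    ],
}

# The proxy adds the API key itself (--apiKey), so no Authorization header is set.
response = requests.post(url, json=payload, timeout=60)
response.raise_for_status()
print(response.json())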

Example response

{
  "id": "chat-c87bdd75d1394dcc886556de3db5d0c9",
  "object": "chat.completion",
  "created": 1727429032,
  "model": "hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello. I'm here to help you in any way I can.",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": null
    }
  ],
  "usage": {
    "prompt_tokens": 34,
    "total_tokens": 49,
    "completion_tokens": 15
  },
  "prompt_logprobs": null
}
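The response follows the OpenAI chat-completion schema. Continuing the Python sketch from step 3, here's one way to pull out the assistant's reply and the token usage:

# Continuing from the requests example above.
data = response.json()

# The assistant's reply lives in the first choice's message.
print(data["choices"][0]["message"]["content"])

# Token accounting, matching the "usage" object in the example response.
usage = data["usage"]
print(f"{usage['prompt_tokens']} prompt + {usage['completion_tokens']} completion "
      f"= {usage['total_tokens']} tokens")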

The curl command performs three simple steps:

  1. Construct a prompt request following the OpenAI Chat API specification.
  2. Send the request to the Continuum proxy. The proxy handles end-to-end encryption and verifies the integrity of the Continuum backend serving the endpoint.
  3. Receive and print the response.

info

We don't use any OpenAI services. We only adhere to the same interface definitions to provide a great development experience and ensure easy code portability.
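In practice, this means existing OpenAI client libraries can talk to Continuum by pointing their base URL at the proxy. A minimal sketch with the official openai Python package follows; the base_url and the placeholder api_key are assumptions here, since the proxy already holds the real API key:

from openai import OpenAI

# Point the standard OpenAI client at the local Continuum proxy.
# The api_key is a placeholder: the proxy injects the real key (--apiKey),
# but the client library requires the field to be set.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

completion = client.chat.completions.create(
    model="hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4",
    messages=[{"role": "system", "content": "Hello Continuum!"}],
)
print(completion.choices[0].message.content)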