Quickstart
Continuum is an end-to-end encrypted, privacy-preserving GenAI service that you consume via an API. Setting it up is a breeze and shouldn't take more than 10 minutes.
1. Install Docker
Follow these instructions to install Docker.
2. Run the proxy
Continuum comes with its own proxy. The proxy takes care of client-side encryption and verifies the integrity and identity of the entire Continuum service using remote attestation. Use the following command to run the proxy:
docker run -p 8080:8080 ghcr.io/edgelesssys/continuum/continuum-proxy:latest --apiKey <your-api-token>
As an alternative to Docker, you can also run the native binary on Linux (amd64/arm64) as described here.
This exposes an endpoint on localhost port 8080, to which you can then send your prompts. Of course, you can also run the proxy wherever you want and configure TLS.
To get a free API key for testing, just fill out this form.
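If you want to confirm the proxy is up before sending prompts, it's enough to check that the port is accepting connections. The following is a minimal sketch in Python, assuming the default localhost:8080 setup from the command above:

import socket

# Quick reachability check for the proxy started above (assumes localhost:8080).
with socket.create_connection(("localhost", 8080), timeout=3):
    print("Continuum proxy is listening on localhost:8080")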
3. Send prompts
Now you're all set to use the API. The proxy handles all the security (and confidential computing) intricacies for you. Let's start by sending our first prompt:
The examples below show the same request in Bash, Python, and JavaScript.
Example request (Bash)
curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
    "messages": [
      {
        "role": "system",
        "content": "Hello Continuum!"
      }
    ]
  }'
Example response
"id": "chat-c87bdd75d1394dcc886556de3db5d0c9",
"object": "chat.completion",
"created": 1727429032,
"model": "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello. I'm here to help you in any way I can.",
"tool_calls": []
},
"logprobs": null,
"finish_reason": "stop",
"stop_reason": null
}
],
"usage": {
"prompt_tokens": 34,
"total_tokens": 49,
"completion_tokens": 15
},
"prompt_logprobs": null
}
Example request (Python)
import requests
import json

# A wrapper class for the API for convenience
class Continuum:
    def __init__(self, url, port):
        self.endpoint = f"http://{url}:{port}/v1/chat/completions"  # Adjust path if necessary
        self.model = "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4"

    def run(self, prompt):
        # JSON payload with the necessary parameters
        payload = {
            "model": self.model,
            "messages": [{
                "role": "system",
                "content": prompt
            }],
        }
        headers = {"Content-Type": "application/json"}
        # Send the request to your proxy
        response = requests.post(self.endpoint, headers=headers, data=json.dumps(payload))
        # Error handling in case of a bad response
        if response.status_code != 200:
            raise Exception(f"Error {response.status_code}: {response.text}")
        # Return the response JSON
        return response.json()

# Example usage
if __name__ == "__main__":
    # Initialize the wrapper with the URL and port of the proxy
    model = Continuum(url="localhost", port=8080)
    # Run a prompt through the API
    try:
        response = model.run("Hello Continuum")
        print(response.get('choices')[0].get('message').get('content'))  # Print the API response
    except Exception as e:
        print(f"An error occurred: {e}")
Example response
It's nice to meet you. Is there something I can help you with or would you like to chat?
Example request (JavaScript)
import fetch from "node-fetch";

// A wrapper class for the API for convenience
class Continuum {
  constructor(url, port) {
    this.endpoint = `http://${url}:${port}/v1/chat/completions`; // Adjust path if necessary
    this.model = "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4";
  }

  async run(prompt) {
    // JSON payload with the necessary parameters
    const payload = {
      model: this.model,
      messages: [
        {
          role: "system",
          content: prompt,
        },
      ],
    };
    const headers = {
      "Content-Type": "application/json",
    };
    try {
      // Send the request to your proxy
      const response = await fetch(this.endpoint, {
        method: "POST",
        headers: headers,
        body: JSON.stringify(payload),
      });
      // Error handling in case of a bad response
      if (!response.ok) {
        throw new Error(`Error ${response.status}: ${response.statusText}`);
      }
      // Return the response JSON
      const data = await response.json();
      return data;
    } catch (error) {
      // Handle errors
      throw new Error(`Request failed: ${error.message}`);
    }
  }
}

// Example usage
(async () => {
  // Initialize the wrapper with the URL and port of the proxy
  const model = new Continuum("localhost", 8080);
  // Run a prompt through the API
  try {
    const response = await model.run("Hello Continuum");
    console.log(response.choices[0].message.content); // Print the API response
  } catch (error) {
    console.log(`An error occurred: ${error.message}`);
  }
})();
Example response
It's nice to meet you. Is there something I can help you with or would you like to chat?
The code performs the following three simple steps:
- Construct a prompt request following the OpenAI Chat API specifications (a typical payload structure is sketched after this list).
- Send the prompt request to the Continuum proxy. The proxy handles end-to-end encryption and verifies the integrity of the Continuum backend that serves the endpoint.
- Receive and print the response.
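For reference, a typical chat payload pairs a system instruction with a user prompt. The following is a minimal sketch of that structure in Python, using the OpenAI Chat Completions message format and the same model name as the examples above:

# A typical chat payload: a system instruction plus a user prompt.
# It follows the OpenAI Chat Completions message format used throughout this quickstart.
payload = {
    "model": "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello Continuum!"},
    ],
}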
We don't use any OpenAI services. We only adhere to the same interface definitions to provide a great development experience and ensure easy code portability.
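Because the interface matches, OpenAI-compatible client libraries can typically be pointed at the proxy as well. The snippet below is a minimal sketch, assuming you have the official openai Python package installed; the api_key value is just a placeholder, since the proxy injects your real Continuum API token.

from openai import OpenAI

# Point the OpenAI client at the local Continuum proxy.
# The api_key is a placeholder; the proxy adds the real Continuum API token.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

completion = client.chat.completions.create(
    model="ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4",
    messages=[{"role": "user", "content": "Hello Continuum!"}],
)
print(completion.choices[0].message.content)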