Version: 1.2

Encrypted OpenAI API

Continuum offers a subset of the OpenAI Chat API, enabling secure, encrypted interactions with AI models. Always verify the deployment before sending encrypted prompts to the API. Alternatively, you can rely on continuum-proxy to perform this verification for you.

Below you will find details on how to encrypt your data before sending it to the Continuum API.

Chat

Completions

POST /v1/chat/completions

This endpoint generates a response to an encrypted prompt.

Request body

  • model string: One of the offered LLMs, as listed here.
  • messages string: The encrypted prompt for which a response is generated.
  • Additional parameters: These mirror the OpenAI API and are supported based on the model server's capabilities. However, options requiring internet access, such as image_url, aren't supported due to the sandboxed environment.
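A minimal sketch of how these fields fit together in a request body. The encryption step itself is defined by the Continuum deployment (for example, performed by continuum-proxy); the base64 transform below is only a placeholder that produces a string of the right shape, not the real scheme.

```python
import base64
import json

def build_chat_request(model: str, encrypted_payload: str) -> str:
    """Assemble the JSON body for POST /v1/chat/completions.

    `encrypted_payload` must already be ciphertext; producing it is the job
    of the Continuum tooling, not this sketch.
    """
    return json.dumps({
        "model": model,
        "messages": encrypted_payload,
    })

# Placeholder stand-in for the real encryption step.
fake_ciphertext = base64.b64encode(b"<plaintext prompt>").decode()
print(build_chat_request("<model-name>", fake_ciphertext))
```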

Returns

The response is a chat completion or chat completion chunk object with one modification: the choices field contains the encrypted response. Other parameters remain consistent with the OpenAI API specifications.

Example request

curl https://api.ai.confidential.cloud/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "<model-name>",
"messages": "<encrypted-payload>"
}'
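The same request can be built in Python with the standard library. Sending it is left commented out here, since the deployment should be verified first (or the request routed through continuum-proxy); `<model-name>` and `<encrypted-payload>` are placeholders as in the curl example.

```python
import json
import urllib.request

# Mirrors the curl example above.
body = json.dumps({
    "model": "<model-name>",
    "messages": "<encrypted-payload>",
}).encode()

req = urllib.request.Request(
    "https://api.ai.confidential.cloud/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Only send after verifying the deployment:
# with urllib.request.urlopen(req) as resp:
#     completion = json.load(resp)
```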

Example response

{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "/model",
"system_fingerprint": "fp_44709d6fcb",
"choices": "<encrypted-response>",
"usage": {
"prompt_tokens": 9,
"completion_tokens": 12,
"total_tokens": 21
}
}
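Parsing the example response above with the standard library shows where the encrypted payload sits. In a real deployment the choices value is ciphertext and must be decrypted (for example, by continuum-proxy) before use.

```python
import json

raw = """
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "/model",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": "<encrypted-response>",
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
"""
completion = json.loads(raw)

# The token accounting adds up as in the plain OpenAI API.
usage = completion["usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]

print(completion["model"], usage["total_tokens"])  # → /model 21
```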

Models

List models

GET /v1/models

This endpoint lists all currently available models.

Returns

The response is a list of model objects. Since the information retrieved from this endpoint isn't confidential, no encryption is applied to the response.

For detailed information, refer to the OpenAI API documentation.
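Because this response is unencrypted, it can be consumed directly. The sample below assumes the response follows OpenAI's list-object convention (`object: "list"` with a `data` array); the exact fields returned by Continuum may differ, and `<model-name>` is a placeholder.

```python
import json

# Hypothetical GET /v1/models response, shaped like OpenAI's list object.
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "<model-name>", "object": "model", "owned_by": "continuum"}
  ]
}
""")

# Collect the identifiers usable as the `model` request parameter.
model_ids = [m["id"] for m in sample["data"]]
print(model_ids)  # → ['<model-name>']
```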