Version: 1.3

End-to-End Prompt Encryption

Continuum uses end-to-end encryption to protect user data from being accessed by the service provider. Prompts are encrypted on the client side, decrypted within runtime-encrypted Continuum workers, and re-encrypted before being returned to the client. Encryption uses a symmetric key with authenticated encryption (AEAD), implemented via AES-GCM.
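
The following minimal sketch illustrates the kind of symmetric authenticated encryption involved, using the Python `cryptography` package. The key size, nonce handling, and API shown here are illustrative assumptions, not Continuum's actual implementation or wire format.

```python
# Sketch: symmetric AES-GCM (authenticated encryption) as used conceptually
# between the client and the worker-side proxy. Details are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the shared "inference secret"
aesgcm = AESGCM(key)

prompt = b"What is confidential computing?"
nonce = os.urandom(12)                     # 96-bit nonce, unique per message
ciphertext = aesgcm.encrypt(nonce, prompt, None)

# The proxy decrypts with the same key; GCM authentication fails loudly
# if the ciphertext was tampered with in transit.
assert aesgcm.decrypt(nonce, ciphertext, None) == prompt
```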

Encryption Workflow

As detailed in the workflow section, users interact with the AI service via an encryption proxy hosted on the Continuum worker. Encryption between the client and the proxy involves the following steps:

  1. Key Exchange: The proxy and user exchange a symmetric AES key, facilitated by the attestation service (AS). The AES keys are also referred to as inference secrets in this documentation. The AS API handles key uploads from clients and grants access to verified worker nodes. Each key has a corresponding ID, which is encoded in the encrypted data so that the decrypting party knows which key was used.
  2. Key Synchronization: The proxy monitors for key updates to stay in sync with the client.
  3. Request Encryption: The client encrypts only the request field containing the prompt text, keeping other details such as token length accessible to the service provider. The encrypted field encodes the ID of the key that was used (see the first sketch after this list).
  4. Request Decryption: The proxy decrypts the prompt field with the key referenced by the encoded key ID.
  5. Sandbox Forwarding: The decrypted prompt is securely transmitted to the sandboxed inference server via a UNIX domain socket (see the second sketch after this list).
  6. Response Encryption: The response from the inference server is returned through the same socket. The proxy then encrypts the response and sends it back to the user.
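
The sketch below illustrates steps 3 and 4: only the prompt field of the request is encrypted, and the ciphertext carries a key ID so the proxy can look up the matching inference secret. The field names, the 4-byte key-ID prefix, and the base64 encoding are assumptions made for illustration; they do not describe Continuum's actual wire format.

```python
# Sketch: field-level request encryption with an embedded key ID (client side)
# and key lookup plus decryption (proxy side). Layout is illustrative only.
import base64
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_request(prompt: str, key_id: int, key: bytes) -> dict:
    """Client side: encrypt only the prompt, leave metadata in plaintext."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, prompt.encode(), None)
    blob = key_id.to_bytes(4, "big") + nonce + ciphertext
    return {
        "model": "example-model",   # stays readable by the service provider
        "max_tokens": 256,          # stays readable by the service provider
        "request": base64.b64encode(blob).decode(),  # encrypted prompt field
    }

def decrypt_request(request: dict, keys: dict[int, bytes]) -> str:
    """Proxy side: look up the key by the encoded ID and decrypt the prompt."""
    blob = base64.b64decode(request["request"])
    key_id = int.from_bytes(blob[:4], "big")
    nonce, ciphertext = blob[4:16], blob[16:]
    return AESGCM(keys[key_id]).decrypt(nonce, ciphertext, None).decode()

key_id, key = 1, AESGCM.generate_key(bit_length=256)
req = encrypt_request("Summarize this document.", key_id, key)
print(json.dumps(req, indent=2))
print(decrypt_request(req, {key_id: key}))
```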
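
This second sketch illustrates steps 5 and 6: the proxy forwards the decrypted prompt to the sandboxed inference server over a UNIX domain socket and re-encrypts the reply before returning it to the client. The socket path and the simple request/response framing are hypothetical, chosen only to show the data flow.

```python
# Sketch: forwarding the decrypted prompt over a UNIX domain socket and
# re-encrypting the sandbox's response. Path and framing are assumptions.
import os
import socket
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SOCKET_PATH = "/run/inference.sock"  # hypothetical path to the sandbox

def forward_and_encrypt(plaintext_prompt: bytes, key: bytes) -> bytes:
    """Send the decrypted prompt to the sandbox, encrypt its response."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall(plaintext_prompt)
        sock.shutdown(socket.SHUT_WR)  # signal end of the request
        response = b"".join(iter(lambda: sock.recv(4096), b""))
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, response, None)
```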

Future Enhancements

Future enhancements will include support for Retrieval Augmented Generation (RAG) use cases, expanding Continuum's encryption framework capabilities.