Version: 1.4

Security properties

This page provides an overview of Continuum's security properties and explains which parties typically have access to your data when using other GenAI API services.

Continuum's highest priority is ensuring the confidentiality of your prompts and responses.

If user privacy and data protection are non-negotiable for you, Continuum is the right fit.

Different roles within an API supply chain

[Figure: Sketch of the entities in a GenAI API supply chain]

To help you understand who can typically access user data in conventional GenAI API services, we provide an overview of the usual parties involved in the supply chain and explain why they often have access to your data.

In most GenAI API services, the following four relevant entities are involved and have direct or indirect access to certain types of sensitive data:

  • The infrastructure provider: Provides the compute infrastructure to run the model and inference code, such as AWS or CoreWeave.
  • The platform provider: Supplies the software environment that runs the AI model, such as Hugging Face.
  • The model provider: Develops and/or supplies the actual AI model, such as Mistral or Anthropic.
  • The service provider: Integrates all components and offers the SaaS to the end user.

In many scenarios, one organization holds several of these roles at the same time. The following table gives three examples.

| Website / SaaS | Service provider | Platform provider | Model provider | Infrastructure provider |
| --- | --- | --- | --- | --- |
| ChatGPT | OpenAI | OpenAI | OpenAI | Microsoft Azure |
| HuggingChat | HuggingFace | HuggingFace | Cohere, Mistral, and others | AWS, GCP, and others |
| Continuum | Edgeless Systems | vLLM | Meta | Microsoft Azure |

In the case of the well-known ChatGPT, OpenAI is the service provider, the platform provider, and the model provider, while Microsoft Azure provides the infrastructure.

HuggingChat is a service similar to ChatGPT that lets the user choose between AI models. The company HuggingFace acts as both the service provider and the platform provider.

Continuum is run by us (Edgeless Systems). The service runs on Microsoft Azure and uses the open-source framework vLLM to serve a Meta AI model.

How these parties can typically access your prompts and replies

Let's examine how these entities can access relevant data within widespread AI applications like ChatGPT or HuggingChat.

The infrastructure provider is highly privileged and controls hardware components and system software like the hypervisor. With this control, the infrastructure provider can typically access all data that's being processed. In the case of a GenAI API service, this includes the user data and the AI model.

On top of the infrastructure runs the software provided by the platform provider. This software has access to both the AI model and the user data. The software may leak data through implementation mistakes, logging interfaces, remote-access capabilities, or even backdoors.
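To make the logging leak path concrete, here is a minimal, purely illustrative Python sketch. All names in it are invented for illustration; it is not vLLM or any real platform's code. The point is that a single routine debug statement is enough to copy raw prompts into a log pipeline that platform operators and log aggregators can read.

```python
# Hypothetical inference handler illustrating an accidental leak path.
# All names here are invented; this does not depict any real platform's code.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference")

def run_model(prompt: str) -> str:
    # Stand-in for the actual model inference call.
    return "model output for: " + prompt

def handle_request(prompt: str) -> str:
    # A seemingly harmless debug statement writes the raw user prompt to the
    # platform's log pipeline, where operators can later read it in plaintext.
    logger.info("received prompt: %s", prompt)
    return run_model(prompt)
```

Remote-access capabilities and backdoors work the same way in principle: any code path that can observe the plaintext prompt can also export it.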

The service provider typically has privileged access to the platform software and to the software (e.g., a web frontend) that receives user data. Correspondingly, the service provider can access both the AI model and the user data. In particular, the service provider may decide to re-train or fine-tune the AI model using the user data. This is often a concern among users, as it may leak one user's data to other users through the AI model's answers. For example, such a case has been reported for ChatGPT.

In the simplest case, the model provider only supplies the raw weights (i.e., numbers) that make up the AI model. In this case, the model provider can't access user data, directly or indirectly. However, when the model provider also supplies additional software, user data may leak through that software in the same ways discussed for the platform provider.

How Continuum is different

In contrast to other GenAI API services, Continuum thoroughly protects against data access by these four parties across the entire supply chain. No one can access your data—not the infrastructure provider, the platform provider, the model provider, or us as the service provider.
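The pattern behind this guarantee is verify-then-encrypt: before releasing any prompt, the client checks that the remote environment runs exactly the expected software, and only then sends data over a channel that terminates inside that verified environment. The sketch below illustrates this pattern conceptually; the function names and the mock measurement check are invented for illustration and do not reflect Continuum's actual client API.

```python
# Conceptual sketch of the verify-then-encrypt pattern used by
# confidential-computing services. Names and values are illustrative only.
import hashlib

# Hypothetical hash of the software stack the client is willing to trust.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted inference image").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    # In a real deployment, the client verifies a hardware-signed attestation
    # report; here we only compare a mock measurement hash.
    return reported_measurement == EXPECTED_MEASUREMENT

def send_prompt(prompt: str, reported_measurement: str) -> str:
    if not verify_attestation(reported_measurement):
        raise RuntimeError("attestation failed: refusing to send prompt")
    # Only after successful verification would the client establish an
    # encrypted channel into the verified environment and release the prompt.
    return "prompt released to verified environment: " + prompt
```

Because the data is only ever decrypted inside the verified environment, none of the four parties in the supply chain can observe it in plaintext.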