Version: 1.0

Security goals of Continuum

Continuum is designed to protect user data and AI models from all relevant entities within AI Software-as-a-Service (SaaS) offerings. This page examines these entities and, based on that, specifies Continuum's security goals.

Sketch of entities

Overview

In most AI SaaS offerings, the following four entities are involved and have direct or indirect access to certain types of relevant data:

  • The service provider
  • The model provider
  • The platform provider
  • The infrastructure provider

The infrastructure provider is the entity that provides the compute infrastructure, for example AWS or CoreWeave. The platform provider is the entity that provides the software that runs the AI model, for example HuggingFace. The model provider is the entity that provides the actual AI model, for example Mistral or Anthropic. The service provider is the entity that ties it all together and offers the SaaS to the end user.

Examples

In many scenarios, one organization holds several of these roles at the same time. The following table gives three examples.

| Website / SaaS | Service provider | Platform provider | Model provider | Infrastructure provider |
| --- | --- | --- | --- | --- |
| ChatGPT | OpenAI | OpenAI | OpenAI | Microsoft Azure |
| HuggingChat | HuggingFace | HuggingFace | Cohere, Mistral, and others | AWS, GCP, and others |
| ai.confidential.cloud | Edgeless Systems | vLLM | Mistral | Microsoft Azure |

In the case of the well-known ChatGPT, OpenAI is the service provider, the platform provider, and the model provider, while Microsoft Azure provides the infrastructure.

HuggingChat is a service like ChatGPT that additionally allows the user to choose between different AI models. The company HuggingFace acts as both the service provider and the platform provider.

ai.confidential.cloud is run by Edgeless Systems. The service runs on Microsoft Azure and uses the open-source framework vLLM to serve a Mistral AI model. It's protected with Continuum.

Analysis

Let's examine how these entities can access relevant data within widespread AI applications like ChatGPT or HuggingChat.

The infrastructure provider is highly privileged and controls hardware components and system software like the hypervisor. With this control, the infrastructure provider can typically access all data that's being processed. In the case of AI SaaS, this includes the user data and the AI model.

On top of the infrastructure runs the software provided by the platform provider. This software has access to both the AI model and the user data. The software may leak data through implementation mistakes, logging interfaces, remote-access capabilities, or even backdoors.

The service provider typically has privileged access to the platform software and to the software (e.g., a web frontend) that receives user data. Correspondingly, the service provider can access both the AI model and the user data. In particular, the service provider may decide to re-train or fine-tune the AI model using the user data. This is often a concern among users, as it may leak one user's data to other users through the AI model's answers. For example, such a case has been reported for ChatGPT.

In the simplest case, the model provider only provides the raw weights (i.e., numbers) that make up the AI model. In this case, the model provider can't, directly or indirectly, access user data. However, in cases where the model provider provides additional software, leaks similar to those discussed for the platform provider may happen for user data.
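The analysis above can be condensed into a simple access matrix. The following Python sketch is purely illustrative (the names and structure are not part of Continuum); it encodes which of the four entities can typically access user data and the AI model in an unprotected AI SaaS deployment, with the model provider assumed to supply only raw weights.

```python
# Illustrative access matrix for an unprotected AI SaaS deployment.
# Maps each entity to (can access user data, can access AI model),
# following the analysis in the text above.
ACCESS = {
    "infrastructure provider": (True, True),   # controls hardware and hypervisor
    "platform provider": (True, True),         # its software processes both
    "service provider": (True, True),          # privileged access to platform and frontend
    "model provider": (False, True),           # simplest case: provides raw weights only
}

def entities_with_user_data_access(access=ACCESS):
    """Return the entities that can read user data under this model."""
    return [entity for entity, (user_data, _) in access.items() if user_data]

print(entities_with_user_data_access())
# → ['infrastructure provider', 'platform provider', 'service provider']
```

Note that if the model provider also ships additional software (as discussed above), its row would change to `(True, True)` as well.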

Security goals

Continuum is designed to protect user data and the AI model against access by all four entities described here, including re-training of the model on user data. These four entities exhaustively cover all actors with access to relevant data in AI SaaS.

To learn how Continuum works from an end-user perspective, see the Overview section. For the full picture, see the architecture section.