
Security

Explore the security features available to protect your account.

ORGN is built on the principle that security must be verifiable, configurable, and cryptographically enforced. Every component of the platform (models, workflows, context, storage, and computation) operates within a protected environment designed to preserve confidentiality and intellectual property.

Security in ORGN is not a policy statement. It is enforced through:

  • Hardware-backed confidential computing
  • Cryptographic attestation
  • Multi-layer encryption
  • Isolated execution environments

Users maintain control over their data and infrastructure choices. Nothing is used for training, and nothing leaves your control.

OLLM: Secure AI Gateway

ORGN supports both standard model deployments and confidential-compute deployments.

When confidential execution is required, inference is routed through OLLM, our proprietary secure AI gateway.

OLLM provides:

  • A unified API layer across model providers
  • Enforcement of security policies per request
  • Isolation of inference workloads
  • Support for cryptographic attestation

When you see OLLM listed as the provider in the model selection interface, those models are deployed on confidential computing chips inside Trusted Execution Environments (TEEs). Models that do not list OLLM as the provider run through their respective standard infrastructure and do not generate confidential compute attestation records.

This means inference is executed inside hardware-backed enclaves rather than a standard cloud runtime.

OLLM ensures that:

  • Sensitive workloads can be routed to confidential infrastructure
  • Standard workloads can use non-confidential infrastructure when appropriate
  • Security posture can be chosen per workflow
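The per-workflow routing decision above can be sketched as a simple rule: sensitive workloads go to the confidential provider, everything else to standard infrastructure. This is an illustrative sketch only; the provider names, the `Workflow` type, and the routing rule are assumptions, not part of the ORGN API.

```python
from dataclasses import dataclass

# Hypothetical provider identifiers, for illustration only.
CONFIDENTIAL_PROVIDER = "OLLM"
STANDARD_PROVIDER = "standard"

@dataclass
class Workflow:
    name: str
    sensitive: bool  # e.g. handles proprietary code or regulated data

def select_provider(workflow: Workflow) -> str:
    """Route sensitive workloads to confidential infrastructure."""
    return CONFIDENTIAL_PROVIDER if workflow.sensitive else STANDARD_PROVIDER

print(select_provider(Workflow("contract-review", sensitive=True)))   # OLLM
print(select_provider(Workflow("docs-summarizer", sensitive=False)))  # standard
```

In practice the sensitivity flag would come from a project-level security policy rather than a hardcoded boolean.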

Model Deployment Types in ORGN

ORGN supports two categories of model deployment:

1. Standard Model Providers

These models run on traditional cloud infrastructure through their respective providers. They are suitable for non-sensitive workflows and general development tasks.

They do not run inside confidential computing environments and do not depend on OLLM.

2. OLLM Confidential Deployments

These models are deployed on confidential computing chips inside Trusted Execution Environments (TEEs).

They provide:

  • Hardware-backed isolation
  • Cryptographic attestation
  • Encrypted execution

When OLLM appears as the provider in the model selector, it indicates that the model is running on confidential compute infrastructure.

Confidential Computing

ORGN supports model execution inside hardware-backed Trusted Execution Environments (TEEs).

These enclaves provide:

Encrypted Computation

All model operations occur inside isolated, tamper-resistant environments.

Memory-Level Protection

Data remains encrypted in memory and cannot be accessed by the host OS or cloud provider.

No Introspection

Neither ORGN nor the infrastructure provider can view plaintext during inference when running under confidential compute.

Cryptographic Attestation

Execution can be verified through enclave attestation records.

Confidential computing eliminates plaintext exposure during model execution.

Attestation Proofs

When using models deployed through OLLM on confidential compute infrastructure, attestation records are generated.

At present:

  • Attestation proofs are available through the OLLM console.
  • Engineers can retrieve enclave verification evidence from OLLM for audit workflows.

These records verify:

  • The enclave identity
  • The integrity of the runtime
  • That the expected code was executed inside a verified environment

This enables compliance teams to validate confidential execution using cryptographic proof rather than relying on trust assumptions.
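The verification logic can be sketched as follows. This is a deliberately simplified illustration: a real TEE attestation is a hardware-signed quote verified against the chip vendor's certificate chain, whereas here an HMAC stands in for that signature, and all field names are hypothetical rather than OLLM's actual schema.

```python
import hashlib
import hmac
import json

def verify_attestation(record: dict, expected_measurement: str, key: bytes) -> bool:
    """Check the record's signature and that the enclave ran the expected code."""
    body = {k: record[k] for k in ("enclave_id", "measurement", "nonce")}
    payload = json.dumps(body, sort_keys=True).encode()
    signature_ok = hmac.compare_digest(
        hmac.new(key, payload, hashlib.sha256).hexdigest(),
        record["signature"],
    )
    # The measurement must equal the hash of the runtime we expected to run.
    return signature_ok and record["measurement"] == expected_measurement

# Build a demo record the way an attestation service might.
key = b"demo-verification-key"
measurement = hashlib.sha256(b"expected-runtime-image").hexdigest()
body = {"enclave_id": "enclave-01", "measurement": measurement, "nonce": "abc123"}
payload = json.dumps(body, sort_keys=True).encode()
record = dict(body, signature=hmac.new(key, payload, hashlib.sha256).hexdigest())

print(verify_attestation(record, measurement, key))  # True
```

The two checks mirror the list above: the signature binds the record to a trusted signer, and the measurement binds the execution to known code.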

Confidential Workspace Runtime

Security in ORGN extends beyond model execution to the workspace runtime itself.

All ORGN workspaces run inside a TDX Sandbox, with CPU state and memory encrypted by Intel TDX.

This ensures that code execution, agent workflows, and runtime processes operate inside hardware-backed confidential infrastructure by default.

Multi-Layer Encryption

ORGN enforces encryption at every stage:

In Transit

All communication between client, backend, models, and storage uses hardened TLS encryption.
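As a rough sketch of what a hardened client-side TLS configuration looks like, Python's standard library can be used as follows. The specific minimum version shown is illustrative, not ORGN's actual configuration.

```python
import ssl

# create_default_context() already enables certificate and hostname
# verification; we additionally refuse legacy protocol versions.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.1 and older
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

print(ctx.minimum_version)
```

A context like this would be passed to the HTTP client when opening connections to the backend or model endpoints.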

At Rest

Files, tasks, context objects, and session artifacts are encrypted using industry-standard algorithms.

In Use

When using confidential compute, data remains encrypted even during model execution.

Plaintext exposure is minimized across all layers of the system.

Zero Data Retention

ORGN does not store or reuse customer data for model training.

  • Prompts, code, and outputs are not used to train models
  • Data is retained only as required to fulfill user workflows
  • Users can clear memory or context at any time
  • Intellectual property remains owned by the user

Only metadata necessary for system operation is retained, and users have full control over their data lifecycle. When using Standard Model Providers, data handling is subject to the provider's policies; when using OLLM Confidential Deployments, no data leaves the confidential environment and no training data is collected.

Cryptographic Verification & Auditability

Trust in ORGN is grounded in verifiable computation.

  • Enclave attestation verifies confidential runtime integrity
  • Agent actions are traceable and logged
  • File diffs and reasoning chains are preserved
  • Changes are reversible
  • Execution history is auditable
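One common way to make an execution history tamper-evident, sketched below, is a hash-chained append-only log: each entry commits to the previous entry's hash, so altering any past action breaks the chain. The field names are invented for illustration and are not ORGN's audit schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log: list, action: dict) -> None:
    """Append an action, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, "action": action}, sort_keys=True)
    log.append({"prev": prev, "action": action,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any tampering with past entries is detected."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"prev": prev, "action": entry["action"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"agent": "builder", "op": "edit", "file": "main.py"})
append_entry(log, {"agent": "builder", "op": "commit"})
print(verify_chain(log))  # True
```

This structure also makes changes reversible in the sense described above: the full ordered history of actions is preserved and verifiable.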

User-Controlled Security Configuration

ORGN allows teams to tune their security posture:

  • Choose between standard model providers and OLLM-backed confidential deployments depending on workload sensitivity.
  • Configure role-based access controls
  • Align infrastructure choices with internal compliance requirements
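A role-based access check like the one mentioned above can be reduced to a role-to-permissions mapping. The role and permission names here are invented for illustration and are not ORGN's actual roles.

```python
# Hypothetical roles and permissions, for illustration only.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "developer": {"read", "write"},
    "admin": {"read", "write", "promote"},  # e.g. promote changes to a repo
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("developer", "promote"))  # False
print(is_allowed("admin", "promote"))      # True
```

Unknown roles default to no permissions, which keeps the check fail-closed.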

Secure Collaboration

All collaboration is encrypted and permission-governed:

  • Chats and agent workflows are encrypted
  • Workspace access is role-controlled
  • Agent authority is scoped
  • Repository operations require explicit user promotion

Autonomy never bypasses human control.

Next Steps

To apply ORGN’s security model in practice:

  • Choose OLLM-based confidential models for sensitive inference workflows.
  • Retrieve attestation records from the OLLM console when audit evidence is required.

Security in ORGN is configurable per project, allowing you to align assurance levels with real-world risk and regulatory requirements.
