Concepts

A complete mental model for how Treza works with the SDK. Keep this open as you build.

KYC & Zero-Knowledge Proofs

What they are

Privacy-preserving identity verification using zero-knowledge cryptography. Users prove they meet KYC requirements (e.g., age verification, identity checks) without revealing underlying personal data. Proofs are cryptographically verifiable, stored in DynamoDB, and optionally anchored on-chain for immutability.

Why it matters

Privacy: Users prove attributes (e.g., "I'm over 18") without sharing passport data, birth dates, or photos.

Compliance: Satisfy regulatory requirements (KYC/AML) while minimizing PII exposure and data breach risk.

Portability: One proof works across multiple services—verify once, use everywhere.

Auditability: Blockchain anchoring provides tamper-proof verification history.

Cost reduction: Eliminate redundant verification flows and reduce storage of sensitive data.

Core ideas

Zero-Knowledge Proofs (ZK)

A cryptographic method for proving a statement is true without revealing the data that makes it true. Example: proving "I'm over 18" without sharing your birth date.

Commitments

64-character hex hash that binds the prover to specific data without revealing it. Acts as a fingerprint for the proof.

Public Inputs

Non-sensitive data visible to verifiers (e.g., commitment hash, proof type). Used to validate the proof without exposing private information.

Proof Lifecycle

  1. Generation: Mobile app or client generates ZK proof from user's credentials

  2. Submission: Proof submitted to Treza API (POST /api/kyc/proof)

  3. Verification: Cryptographic validation + timestamp checks

  4. Storage: Proof stored in DynamoDB with metadata

  5. Blockchain: Optional on-chain submission for immutability

  6. Expiration: Proofs expire after 7 days (configurable)

Key objects & calls

submitProof(request) → Submit new ZK proof for verification

Returns: proofId, blockchainProofId, verificationUrl, expiresAt, chainTxHash

getProof(proofId, includePrivate?) → Retrieve proof details

Returns public data by default; pass includePrivate=true to include the proof signature (requires authentication)

verifyProof(proofId) → Verify proof validity and check expiration

Returns: isValid, publicInputs, chainVerified, expiresAt

Proof structure
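
A plausible submission shape, assembled from the fields described in this section. The field names are illustrative, so confirm them against the SDK's actual request types:

```typescript
// Assumed proof submission shape -- field names are illustrative, not the
// SDK's guaranteed schema.
interface ZkProofSubmission {
  userId: string;
  commitment: string;     // 64-character hex hash binding the prover to the data
  proof: string;          // >= 65 chars, contains the cryptographic signature
  publicInputs: string[]; // non-sensitive hex strings visible to verifiers
  timestamp: number;      // Unix ms; must be within 1 hour of submission
  deviceInfo?: { platform: "ios" | "android" | "web"; version: string };
}

const example: ZkProofSubmission = {
  userId: "user-123",
  commitment: "a".repeat(64),
  proof: "0".repeat(96),
  publicInputs: ["deadbeef"],
  timestamp: Date.now(),
  deviceInfo: { platform: "ios", version: "1.0.0" },
};
```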

For Users (Mobile/Client)

  1. User provides credentials (passport, ID, etc.) to mobile app

  2. App generates ZK proof locally using device secure enclave (iOS: Secure Enclave, Android: TEE)

  3. Call submitProof({ userId, proof, deviceInfo })

  4. Receive proofId and verificationUrl

  5. Share proofId with services requiring KYC verification

For Service Providers (Verifiers)

  1. User provides their proofId during onboarding

  2. Call verifyProof(proofId) to check validity

  3. Check isValid, chainVerified, and expiresAt

  4. Optionally verify on blockchain using chainTxHash

  5. Grant access based on verification result
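
Steps 2, 3, and 5 can be condensed into a small access gate, assuming the verifyProof return fields listed earlier (isValid, chainVerified, expiresAt):

```typescript
// Verifier-side gate; the result shape mirrors the documented verifyProof
// return fields but is an illustrative sketch, not the SDK's own types.
interface VerifyProofResult {
  isValid: boolean;
  chainVerified: boolean;
  expiresAt: string; // ISO timestamp
}

// Grant access only if the proof is valid and not expired.
// requireChain additionally demands on-chain confirmation.
function shouldGrantAccess(
  result: VerifyProofResult,
  requireChain = false,
  now: Date = new Date(),
): boolean {
  if (!result.isValid) return false;
  if (requireChain && !result.chainVerified) return false;
  return new Date(result.expiresAt).getTime() > now.getTime();
}
```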

For Developers (Integration)

  1. Generate proof on client-side (mobile/web)

  2. Submit to /api/kyc/proof endpoint

  3. Store returned proofId with user account

  4. Re-verify periodically using /api/kyc/proof/{proofId}/verify

  5. Handle expiration by requesting new proof from user
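
The client-side half of this flow might look like the following. The client interface is illustrative: the method names come from the Key objects & calls list, but the request and response shapes are assumptions based on the documented return fields:

```typescript
// Illustrative client shape -- confirm against the real SDK's types.
interface SubmitProofResult {
  proofId: string;
  verificationUrl: string;
  expiresAt: string;
}

interface VerifyProofResult {
  isValid: boolean;
  publicInputs: string[];
  chainVerified: boolean;
  expiresAt: string;
}

interface KycClient {
  submitProof(request: { userId: string; proof: string }): Promise<SubmitProofResult>;
  verifyProof(proofId: string): Promise<VerifyProofResult>;
}

// Submit a proof, then re-verify it before storing the proofId
// against the user account.
async function submitAndConfirm(client: KycClient, userId: string, proof: string) {
  const submitted = await client.submitProof({ userId, proof });
  const verification = await client.verifyProof(submitted.proofId);
  return { proofId: submitted.proofId, isValid: verification.isValid };
}
```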

When to use

  • Age verification → prove age without revealing birth date or ID documents

  • Identity checks → verify user identity without storing PII (GDPR/CCPA compliance)

  • Financial services → meet KYC/AML requirements with minimal data exposure

  • Healthcare → verify credentials while maintaining HIPAA compliance

  • Multi-platform access → one proof works across multiple services/platforms

  • Audit requirements → blockchain anchoring provides an immutable verification trail

  • International compliance → satisfy different jurisdictions without data transfer

Validation rules

Commitment

  • Must be exactly 64 characters

  • Hex format only (0-9, a-f, A-F)

  • Unique per user/session

Proof

  • Minimum 65 characters

  • Contains cryptographic signature

  • Generated by trusted client (device secure enclave)

Timestamp

  • Must be within 1 hour of submission

  • Prevents replay attacks

  • Ensures proof freshness

Public Inputs

  • Array of hex strings

  • Contains non-sensitive verification data

  • Minimum 1 input required
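
These rules can be pre-checked client-side so malformed submissions fail fast before reaching the API. A sketch (the server still performs the authoritative cryptographic checks):

```typescript
// Client-side pre-checks mirroring the validation rules above.
const MAX_TIMESTAMP_SKEW_MS = 60 * 60 * 1000; // within 1 hour of submission

function isValidCommitment(commitment: string): boolean {
  // Exactly 64 hex characters (0-9, a-f, A-F).
  return /^[0-9a-fA-F]{64}$/.test(commitment);
}

function isValidProof(proof: string): boolean {
  // Minimum 65 characters; signature validity is checked server-side.
  return proof.length >= 65;
}

function isFreshTimestamp(timestampMs: number, nowMs: number = Date.now()): boolean {
  // Freshness window prevents replay attacks.
  return Math.abs(nowMs - timestampMs) <= MAX_TIMESTAMP_SKEW_MS;
}

function isValidPublicInputs(inputs: string[]): boolean {
  // At least one entry, each a hex string.
  return inputs.length >= 1 && inputs.every((i) => /^[0-9a-fA-F]+$/.test(i));
}
```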

States

  • verified → proof passed cryptographic verification (queryable)

  • pending → proof submitted, awaiting blockchain confirmation (queryable)

  • failed → verification failed: invalid proof or expired timestamp (queryable)

  • expired → proof has passed its 7-day expiration date; queries return HTTP 410

Key capabilities

  • Privacy-first verification: Prove attributes without revealing sensitive data

  • Blockchain immutability: Optional on-chain anchoring for audit trails

  • Automatic expiration: 7-day validity reduces stale verification risk

  • Device tracking: Platform and version info for security analysis

  • Flexible verification: Public endpoints for third-party verification

  • Compliance ready: GDPR, CCPA, HIPAA-friendly architecture

Providers

What they are

Backends capable of running secure enclaves (e.g., AWS Nitro, GCP, Azure). Providers expose supported regions and a config schema (e.g., dockerImage, cpuCount, memoryMiB).

Why it matters

Each provider has different runtime limits, compliance certifications, and regional availability. You pick one and pass provider-specific providerConfig when creating an enclave.

Key objects & calls

  • getProviders() → list all available providers, regions, and config schemas

  • getProvider(providerId) → fetch details for a specific provider

  • providerConfig → per-provider runtime settings (validated against schema)

Typical flow

  1. List providers → choose one based on your compliance/region needs

  2. Read provider's configSchema → understand required fields

  3. Validate your config against schema

  4. Create enclave with providerId + providerConfig
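
The selection-and-validation steps can be sketched as follows. The Provider shape (a regions array and a configSchema with required fields) is an assumption about what getProviders() returns; check the SDK's actual response types:

```typescript
// Assumed provider shape -- illustrative, not the SDK's guaranteed schema.
interface Provider {
  id: string;
  regions: string[];
  configSchema: { required: string[] };
}

// Step 1: choose a provider that serves the region you need.
function findProviderForRegion(providers: Provider[], region: string): Provider | undefined {
  return providers.find((p) => p.regions.includes(region));
}

// Steps 2-3: check your providerConfig against the schema's required fields.
function missingConfigFields(provider: Provider, config: Record<string, unknown>): string[] {
  return provider.configSchema.required.filter((field) => !(field in config));
}
```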


Enclaves

What they are

Isolated, attested compute environments that run your container image in a hardware-protected enclave. They provide cryptographically verifiable isolation from cloud operators and other tenants.

Core ideas

  • Creation: You provide providerId, region, and providerConfig (e.g., Docker image + resources)

  • Ownership: All resources are scoped to your wallet address

  • Integration: Optionally link a GitHub repo/branch for CI/CD workflows

  • Attestation: Every deployed enclave can generate cryptographic proof of its integrity

Key objects & calls

  • createEnclave(request) → deploy a new enclave

  • getEnclave(enclaveId) → fetch enclave details and status

  • getEnclaves(walletAddress) → list all your enclaves

  • updateEnclave(request) → modify configuration or GitHub connection

  • deleteEnclave(enclaveId, walletAddress) → permanently remove

  • getEnclaveLogs(enclaveId, logType, limit) → fetch logs
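
A create-and-poll helper for the calls above. The method names match the list, but the request and response shapes are assumptions:

```typescript
// Illustrative client interface -- shapes assumed, names from the list above.
interface Enclave {
  id: string;
  status: string; // e.g., "DEPLOYING", "DEPLOYED", "FAILED"
}

interface CreateEnclaveRequest {
  walletAddress: string;
  providerId: string;
  region: string;
  providerConfig: Record<string, unknown>;
}

interface EnclaveClient {
  createEnclave(req: CreateEnclaveRequest): Promise<Enclave>;
  getEnclave(enclaveId: string): Promise<Enclave>;
}

// Create an enclave, then poll until it reaches DEPLOYED (or FAILED).
async function createAndWait(
  client: EnclaveClient,
  req: CreateEnclaveRequest,
  pollMs = 5000,
): Promise<Enclave> {
  let enclave = await client.createEnclave(req);
  while (enclave.status !== "DEPLOYED" && enclave.status !== "FAILED") {
    await new Promise((r) => setTimeout(r, pollMs));
    enclave = await client.getEnclave(enclave.id);
  }
  return enclave;
}
```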

What runs inside

Your Docker image (from Docker Hub, private registry, or ECR) plus environment variables you configure through your provider's settings.

When to use

  • Running sensitive computations that require hardware-level isolation

  • Processing confidential data (PII, healthcare, financial)

  • Building zero-trust applications with cryptographic verification

  • Multi-party computation scenarios


Lifecycle

What it is

The operational state machine that manages enclave deployment, operation, and teardown.

States

Deployment Flow:

  • PENDING_DEPLOY → awaiting deployment initiation

  • DEPLOYING → infrastructure provisioning in progress

  • DEPLOYED → enclave is running and ready

Pause/Resume Flow:

  • PAUSING → stopping compute resources

  • PAUSED → enclave suspended, no compute costs

  • RESUMING → restarting from paused state

  • DEPLOYED → back to running state

Termination Flow:

  • PENDING_DESTROY → awaiting termination initiation

  • DESTROYING → infrastructure teardown in progress

  • DESTROYED → enclave removed, resources freed

  • TERMINATED → final state (irreversible)

Error State:

  • FAILED → deployment or operation error (see error_message)

Actions

  • pauseEnclave(enclaveId, walletAddress) → stop compute without destroying

  • resumeEnclave(enclaveId, walletAddress) → restart a paused enclave

  • terminateEnclave(enclaveId, walletAddress) → permanently destroy (irreversible)
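
The legal transitions above can be encoded as guard checks before calling these actions. This is an illustrative sketch, not the SDK's own validation; in particular, which states permit terminate is an assumption worth confirming against the API:

```typescript
// State names from the lifecycle section above.
type EnclaveState =
  | "PENDING_DEPLOY" | "DEPLOYING" | "DEPLOYED"
  | "PAUSING" | "PAUSED" | "RESUMING"
  | "PENDING_DESTROY" | "DESTROYING" | "DESTROYED" | "TERMINATED"
  | "FAILED";

// Pause only a running enclave; resume only a paused one.
const canPause = (s: EnclaveState): boolean => s === "DEPLOYED";
const canResume = (s: EnclaveState): boolean => s === "PAUSED";
// Assumption: terminate is allowed from running, paused, or failed states.
const canTerminate = (s: EnclaveState): boolean =>
  s === "DEPLOYED" || s === "PAUSED" || s === "FAILED";
// The record can only be deleted once the enclave is DESTROYED.
const canDelete = (s: EnclaveState): boolean => s === "DESTROYED";
```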

When to use what

  • Pause → stop spending temporarily while keeping configuration; result: no compute costs, quick restart

  • Resume → bring a paused enclave back online; result: returns to DEPLOYED state

  • Terminate → done with the enclave permanently; result: all data destroyed, cannot undo

  • Delete → remove the enclave record completely; the enclave must be DESTROYED first


Attestation & Verification

What it is

Cryptographic proof that your enclave is running genuine, unmodified code inside a hardware-protected secure enclave. Uses Platform Configuration Registers (PCRs), certificate chains, and signed attestation documents.

Why it matters

Allows you and third parties to verify:

  • The exact code running in your enclave (via PCR measurements)

  • The enclave is running in genuine AWS Nitro hardware

  • No tampering has occurred since deployment

  • Compliance with security standards (FIPS 140-2, SOC2, HIPAA, etc.)

Key concepts

PCR Measurements

Hardware-generated cryptographic hashes:

  • PCR0: Hash of the enclave image file

  • PCR1: Linux kernel and bootstrap hash

  • PCR2: Application/container hash

  • PCR8: Signing certificate hash

Attestation Document

Contains:

  • PCR measurements

  • X.509 certificate for verification

  • Certificate authority bundle

  • Timestamp and module ID

  • Optional user data and nonce for replay protection

Verification Details

  • Trust Level: HIGH, MEDIUM, LOW, or UNKNOWN

  • Integrity Score: 0-100% confidence rating

  • Verification Status: VERIFIED, PENDING, or FAILED

  • Compliance Checks: SOC2, HIPAA, FIPS 140-2, Common Criteria

  • Risk Score: Lower is better (0-100)

Key objects & calls

  • getAttestation(enclaveId) → retrieve attestation document + verification details

  • getVerificationStatus(enclaveId) → quick status check

  • verifyAttestation(enclaveId, request?) → comprehensive verification with checks

  • generateIntegrationSnippet(enclaveId, language) → code for third-party verification

Typical flow

  1. Deploy enclave → wait for DEPLOYED status

  2. Call getAttestation(enclaveId) → get attestation document

  3. Share verification URL with third parties

  4. Third parties call verification endpoint with optional nonce

  5. Receive trust level, PCR hashes, and compliance status

When to use

  • Before processing sensitive data: Verify enclave integrity first

  • Compliance audits: Provide cryptographic proof of secure execution

  • Multi-party scenarios: Let partners verify your enclave independently

  • Zero-trust architectures: Continuously verify, never trust blindly

  • Integration with external systems: Provide verification endpoints to partners

What you get
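
A sketch of what verifyAttestation plausibly returns, assembled from the fields listed under Verification Details. The field names are assumed, not the SDK's guaranteed schema:

```typescript
// Assumed verification result shape -- field names are illustrative.
interface VerificationDetails {
  trustLevel: "HIGH" | "MEDIUM" | "LOW" | "UNKNOWN";
  integrityScore: number;        // 0-100 confidence rating
  verificationStatus: "VERIFIED" | "PENDING" | "FAILED";
  complianceChecks: string[];    // e.g., ["SOC2", "HIPAA", "FIPS 140-2"]
  riskScore: number;             // 0-100, lower is better
  pcrs: Record<string, string>;  // e.g., { PCR0: "<hex>", PCR1: "<hex>" }
}

// Example policy: only trust fully verified, high-trust, low-risk enclaves.
function meetsTrustPolicy(d: VerificationDetails): boolean {
  return (
    d.verificationStatus === "VERIFIED" &&
    d.trustLevel === "HIGH" &&
    d.riskScore <= 20
  );
}
```

The threshold of 20 here is arbitrary; pick one that matches your own risk tolerance.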


Tasks

What they are

Scheduled operations that run inside your enclaves. Tasks use cron expressions for flexible scheduling and can automate recurring workloads.

Core ideas

  • Scheduling: Cron-style expressions (e.g., 0 0 * * * for daily at midnight)

  • Association: Each task is linked to a specific enclave

  • Ownership: Scoped to your wallet address

  • Execution tracking: History of runs with timestamps

States

  • running → task is active and executing on schedule

  • stopped → task is paused, not executing

  • failed → last execution encountered an error

  • pending → task created but not yet started

Key objects & calls

  • createTask(request) → create new scheduled task

  • getTasks(walletAddress) → list all your tasks

  • updateTask(request) → modify schedule, status, or configuration

  • deleteTask(taskId, walletAddress) → remove task

Typical flow

  1. Create enclave → wait for DEPLOYED status

  2. Create task with enclaveId, schedule, and description

  3. Task runs automatically on schedule

  4. Monitor via lastRun timestamp and status

  5. Set the task's status to 'stopped' to pause execution
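
Step 2 might look like the following, with a quick sanity check on the cron expression. The request shape is assumed from the fields mentioned above:

```typescript
// Standard cron has 5 space-separated fields: minute hour day month weekday.
function looksLikeCron(expr: string): boolean {
  return expr.trim().split(/\s+/).length === 5;
}

// Assumed createTask request shape -- illustrative, not the SDK's schema.
interface CreateTaskRequest {
  enclaveId: string;
  schedule: string;     // e.g., "0 0 * * *" for daily at midnight
  description: string;
  walletAddress: string;
}

function buildDailyTask(enclaveId: string, walletAddress: string): CreateTaskRequest {
  const schedule = "0 0 * * *";
  if (!looksLikeCron(schedule)) throw new Error("invalid cron expression");
  return { enclaveId, schedule, description: "Nightly batch job", walletAddress };
}
```

The field-count check only catches gross mistakes; a real integration should validate each cron field's range as well.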

When to use

  • Batch processing: Run data processing jobs nightly

  • Health checks: Periodic monitoring and alerts

  • Data synchronization: Regular backups or sync operations

  • Scheduled maintenance: Cleanup, archival, or rotation tasks

  • Report generation: Daily/weekly/monthly automated reports

What you need

  • A deployed enclave (tasks are linked to an enclaveId and only run once it is DEPLOYED)

  • A cron-style schedule expression (e.g., 0 0 * * *)

  • Your wallet address (tasks are scoped to it)

Logs & Monitoring

What it is

A comprehensive logging system that aggregates logs from multiple sources across your enclave's lifecycle.

Log sources

Application Logs

Stdout/stderr from your Docker container. View what your application prints.

ECS Deployment Logs

AWS ECS service logs showing infrastructure-level events (task starting, stopping, health checks).

Step Functions Logs

Workflow orchestration logs from deployment/termination state machines.

Lambda Logs

Execution logs from trigger functions, validators, and error handlers.

Error Logs

Aggregated errors from all sources for quick troubleshooting.

Key objects & calls

  • getEnclaveLogs(enclaveId, logType, limit) → fetch logs

  • logType: 'all', 'application', 'ecs', 'stepfunctions', 'lambda', 'errors'

  • limit: max entries to return (default 100)

Log structure
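
A plausible log entry shape, inferred from the sources and calls above (field names assumed), plus a small client-side filter:

```typescript
// Assumed log entry shape -- field names are illustrative.
interface LogEntry {
  timestamp: string;  // ISO 8601
  source: "application" | "ecs" | "stepfunctions" | "lambda";
  level: "info" | "warn" | "error";
  message: string;
}

// Client-side convenience: pull only errors out of an 'all' fetch.
function onlyErrors(entries: LogEntry[]): LogEntry[] {
  return entries.filter((e) => e.level === "error");
}
```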

When to use what

  • application → debugging your container code

  • ecs → infrastructure issues (deployment failures)

  • stepfunctions → understanding workflow state transitions

  • lambda → troubleshooting triggers or validators

  • errors → quick overview of all problems

  • all → comprehensive investigation across all sources

Typical flow

  1. Enclave enters FAILED or unexpected state

  2. Call getEnclaveLogs(enclaveId, 'errors', 50)

  3. Identify error source

  4. Call specific log type for detailed context

  5. Fix issue and redeploy


API Keys & Authentication

What they are

Programmatic access credentials for using the Treza SDK. API keys provide scoped permissions and are tied to your wallet address.

Why it matters

Enables CI/CD pipelines, automation scripts, and third-party integrations to manage your enclaves without manual UI interaction.

Permission scopes

  • enclaves:read → list and view enclave details

  • enclaves:write → create, update, delete, pause, resume, terminate enclaves

  • tasks:read → view tasks and execution history

  • tasks:write → create, update, delete tasks

  • logs:read → access logs from all sources

Key objects & calls

  • createApiKey(request) → generate new API key (key shown once!)

  • getApiKeys(walletAddress) → list your API keys

  • updateApiKey(request) → change permissions or status

  • deleteApiKey(apiKeyId, walletAddress) → revoke access
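
A small guard built from the permission scopes above; the key object shape is illustrative, not the SDK's guaranteed schema:

```typescript
// Assumed API key shape -- field names are illustrative.
interface ApiKey {
  id: string;
  status: "active" | "inactive";
  permissions: string[]; // e.g., ["enclaves:read", "logs:read"]
}

// A key can perform an operation only if it is active and holds the scope.
function canPerform(key: ApiKey, scope: string): boolean {
  return key.status === "active" && key.permissions.includes(scope);
}
```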

States

  • active → key is valid and can authenticate requests

  • inactive → key is disabled but not deleted (can be reactivated)

When to use

  • CI/CD pipelines: Automate deployments from GitHub Actions, GitLab CI

  • Infrastructure as Code: Manage enclaves with Terraform or Pulumi

  • Monitoring systems: Automated health checks and log retrieval

  • Multi-user scenarios: Different keys for different team members/systems

  • Security rotation: Regular key rotation without affecting other systems

Best practices

  • Store keys in a secrets manager or environment variables, never in source control

  • Grant each integration only the scopes it needs

  • Rotate keys regularly; set a key inactive before deleting it to confirm nothing breaks

  • Use a separate key per system so revoking one doesn't affect the others

GitHub Integration

What it is

OAuth-based connection linking your enclaves to GitHub repositories and branches, enabling automated deployments triggered by Git pushes.

Core ideas

  • OAuth flow: Secure authentication with GitHub

  • Repository linking: Connect specific repos to enclaves

  • Branch selection: Choose which branch triggers updates

  • Token management: Encrypted storage of access tokens

Key objects & calls

  • getGitHubAuthUrl(state?) → start OAuth flow, get authorization URL

  • exchangeGitHubCode(request) → exchange OAuth code for access token

  • getGitHubRepositories(accessToken) → list user's repos

  • getRepositoryBranches(request) → list branches for a repo

  • updateEnclave({ githubConnection }) → link GitHub to enclave

GitHub connection object
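
A sketch of the githubConnection object passed to updateEnclave, assembled from the repository/branch/token concepts above. The field names are assumed, not the SDK's guaranteed schema:

```typescript
// Assumed githubConnection shape -- field names are illustrative.
interface GitHubConnection {
  isConnected: boolean;
  repository: string;   // e.g., "myorg/myapp"
  branch: string;       // branch that triggers updates, e.g., "main"
  username: string;     // the authorized GitHub user
  // Access tokens are stored encrypted server-side and are typically
  // not echoed back in this object.
}

const example: GitHubConnection = {
  isConnected: true,
  repository: "myorg/myapp",
  branch: "main",
  username: "octocat",
};
```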

Typical flow

  1. User initiates GitHub connection in UI

  2. Call getGitHubAuthUrl() → redirect user to GitHub

  3. User authorizes → GitHub redirects with code

  4. Call exchangeGitHubCode({ code }) → get access_token and user info

  5. Call getGitHubRepositories(access_token) → show user their repos

  6. User selects repo → call getRepositoryBranches() → show branches

  7. User selects branch → call updateEnclave() with githubConnection
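
Steps 4 and 5 condensed into one helper against an illustrative client interface. The method names come from the list above, but the exact signatures and return shapes are assumptions:

```typescript
// Illustrative client interface -- confirm signatures against the real SDK.
interface GitHubFlowClient {
  getGitHubAuthUrl(state?: string): Promise<{ authUrl: string }>;
  exchangeGitHubCode(req: { code: string }): Promise<{ accessToken: string; username: string }>;
  getGitHubRepositories(accessToken: string): Promise<string[]>;
}

// After GitHub redirects back with ?code=..., exchange the code and list
// the user's repositories for selection in the UI.
async function completeGitHubConnection(client: GitHubFlowClient, code: string) {
  const { accessToken, username } = await client.exchangeGitHubCode({ code });
  const repos = await client.getGitHubRepositories(accessToken);
  return { username, repos };
}
```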

When to use

  • Automated deployments: Push to main → enclave updates automatically

  • Development workflows: Feature branch → ephemeral test enclave

  • Team collaboration: Share enclave configs via Git

  • Version tracking: GitHub history = deployment history

  • Rollback capability: Revert Git commit = revert enclave state


Docker Images

What they are

Container images that define your enclave's runtime environment. Treza supports public Docker Hub images, private registries, and ECR.

Why it matters

Your enclave runs whatever's in the Docker image. The image contains your application code, dependencies, runtime, and configuration.

Key concepts

  • Image names: library/hello-world, nginx:latest, myorg/myapp:v1.2.3

  • Tags: Version identifiers (:latest, :v1.0, :sha-abc123)

  • Registries: Docker Hub (public), ECR (private AWS), custom registries

Key objects & calls

  • searchDockerImages(query) → search Docker Hub for images

  • getDockerTags(repository) → list available tags for an image

In provider config
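
An illustrative providerConfig using the example fields named in the Providers section (dockerImage, cpuCount, memoryMiB). The exact keys required depend on your provider's configSchema:

```typescript
// Example runtime settings -- keys follow the fields named in the Providers
// section; your provider's configSchema is the source of truth.
const providerConfig = {
  dockerImage: "nginx:alpine", // public Docker Hub image, handy for a first test
  cpuCount: 2,
  memoryMiB: 512,
};
```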

When to use what

  • Public Docker Hub → testing, demos, open-source tools (e.g., nginx:alpine)

  • Private Docker Hub → your proprietary apps for small teams (e.g., yourorg/app:latest)

  • AWS ECR → production workloads, enterprise (e.g., 123.dkr.ecr.region.amazonaws.com/app:v1)

  • Custom registry → on-prem, air-gapped environments (e.g., registry.internal.com/app)

Best practices

  • Pin image tags (or digests) rather than relying on :latest, so deployments are reproducible

  • Keep images small; smaller images pull and deploy faster

  • Scan images for vulnerabilities before deploying

  • Rebuild and redeploy regularly to pick up base-image security patches
