Prompt Storage allows enterprises to automatically back up Cline conversation history to cloud storage (AWS S3 or Cloudflare R2). This provides a centralized repository for compliance, audit trails, and usage analysis while maintaining local storage as the primary source of truth.

Overview

Every Cline task conversation is stored locally in ~/.cline/data/tasks/<taskId>/api_conversation_history.json. When prompt storage is enabled, a background sync worker automatically uploads these conversation files to your configured S3 or R2 bucket.

Compliance Ready

Maintain conversation records for regulatory requirements and internal policies.

Audit Trail

Track AI interactions across your organization with timestamped conversation logs.

Usage Analysis

Analyze conversation patterns, token usage, and model performance at scale.

Disaster Recovery

Back up conversation history independently of local storage for business continuity.

How It Works

  1. Local Storage First: All conversations are written to local disk immediately
  2. Background Sync: A worker process queues conversation files for upload
  3. Reliable Upload: Automatic retry logic with configurable batch sizes
  4. Cloud Backup: Files are stored in your S3/R2 bucket with the same path structure
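
The steps above can be sketched as a simple queue worker. This is a minimal illustration of the local-first pattern, not Cline's actual internals; class and method names are made up:

```python
import collections

# Minimal sketch of the local-first sync pattern described above.
# SyncQueue, enqueue, and tick are illustrative names, not Cline's API.
class SyncQueue:
    def __init__(self, batch_size=10, max_retries=5, max_queue_size=1000):
        self.batch_size = batch_size
        self.max_retries = max_retries
        self.max_queue_size = max_queue_size
        self.items = collections.deque()  # FIFO: oldest conversation first

    def enqueue(self, path):
        # When the queue is full, evict the oldest entry; the local
        # file on disk is never touched, so no data is lost.
        if len(self.items) >= self.max_queue_size:
            self.items.popleft()
        self.items.append({"path": path, "attempts": 0})

    def tick(self, upload):
        # One sync interval: attempt up to batch_size uploads.
        retry = []
        for _ in range(min(self.batch_size, len(self.items))):
            item = self.items.popleft()
            try:
                upload(item["path"])
            except OSError:
                item["attempts"] += 1
                if item["attempts"] < self.max_retries:
                    retry.append(item)  # re-queued for a later interval
        self.items.extend(retry)
```

Failed uploads stay in the queue until they succeed or exhaust their retries; the local copy remains the source of truth either way.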

Storage Architecture

What Gets Stored

Prompt storage uploads the following files from each task:
| File | Content | Purpose |
| --- | --- | --- |
| api_conversation_history.json | Full conversation in Anthropic MessageParam format | Core conversation data for analysis |
| Task metadata | Task ID, timestamps, model info | Correlation and indexing |

What’s NOT Stored

Prompt storage does not include:
  • ❌ Workspace files not accessed by Cline
  • ❌ API keys or secrets
  • ❌ User credentials or authentication tokens
Conversation history includes all tool inputs and outputs. This means code written via write_to_file, file contents read via read_file, and command outputs are included in the uploaded data. Review your compliance and data classification requirements before enabling.

Storage Path Pattern

Files are uploaded to your bucket following this structure:
s3://your-bucket/tasks/{taskId}/api_conversation_history.json
This mirrors the local storage structure, making it easy to correlate local and cloud data.
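
Because the layouts mirror each other, the bucket key for any local conversation file can be derived directly. A sketch (the helper name is illustrative):

```python
from pathlib import PurePosixPath

# Derive the bucket key for a local conversation file. The layout
# mirrors the path pattern shown above; this helper is illustrative.
def bucket_key(local_path: str) -> str:
    parts = PurePosixPath(local_path).parts
    task_id = parts[parts.index("tasks") + 1]
    return f"tasks/{task_id}/{parts[-1]}"

print(bucket_key("/home/dev/.cline/data/tasks/1234567890/api_conversation_history.json"))
# tasks/1234567890/api_conversation_history.json
```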

Configuration

Prompt storage is configured through Remote Configuration in the enterpriseTelemetry.promptUploading section.

Schema

{
  "enterpriseTelemetry": {
    "promptUploading": {
      "enabled": true,
      "type": "s3_access_keys",
      "s3AccessSettings": {
        "bucket": "your-cline-prompts",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "region": "us-east-1",
        "intervalMs": 30000,
        "maxRetries": 5,
        "batchSize": 10,
        "maxQueueSize": 1000,
        "maxFailedAgeMs": 604800000,
        "backfillEnabled": false
      }
    }
  }
}

Configuration Fields

Core Settings

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| enabled | boolean | Yes | Enable/disable prompt storage |
| type | string | Yes | Storage type: "s3_access_keys" or "r2_access_keys" |

Access Settings (S3/R2)

| Field | Type | Required | Description | Default |
| --- | --- | --- | --- | --- |
| bucket | string | Yes | S3/R2 bucket name | - |
| accessKeyId | string | Yes | AWS/Cloudflare access key ID | - |
| secretAccessKey | string | Yes | AWS/Cloudflare secret access key | - |
| region | string | S3 only | AWS region (e.g., us-east-1) | - |
| endpoint | string | R2 only | Cloudflare R2 endpoint URL | - |
| accountId | string | R2 only | Cloudflare account ID | - |
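
The required fields depend on the storage type: S3 additionally needs region, while R2 needs endpoint and accountId. A small pre-flight check along these lines (the helper name and return shape are assumptions, not part of Cline's API):

```python
# Illustrative check of which access settings are required for each
# storage type, matching the requirements in the table above.
def missing_fields(settings: dict, storage_type: str) -> list:
    required = {"bucket", "accessKeyId", "secretAccessKey"}
    if storage_type == "s3_access_keys":
        required.add("region")
    elif storage_type == "r2_access_keys":
        required.update({"endpoint", "accountId"})
    return sorted(required - settings.keys())

print(missing_fields({"bucket": "your-cline-prompts"}, "r2_access_keys"))
# ['accessKeyId', 'accountId', 'endpoint', 'secretAccessKey']
```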

Sync Worker Settings

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| intervalMs | number | Milliseconds between sync attempts | 30000 (30s) |
| maxRetries | number | Maximum retries before giving up | 5 |
| batchSize | number | Items to process per interval | 10 |
| maxQueueSize | number | Maximum queue size before eviction | 1000 |
| maxFailedAgeMs | number | Time before discarding failed items | 604800000 (7 days) |
| backfillEnabled | boolean | Sync existing tasks on startup | false |
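
These settings bound the worst-case upload rate at batchSize items per intervalMs. A quick back-of-envelope check under the defaults:

```python
# Worst-case time to drain a completely full queue under the defaults above.
interval_ms = 30_000   # intervalMs
batch_size = 10        # batchSize
max_queue = 1_000      # maxQueueSize

intervals_needed = max_queue / batch_size              # 100 sync intervals
drain_minutes = intervals_needed * interval_ms / 1000 / 60
print(drain_minutes)  # 50.0 -- about 50 minutes to clear a full queue
```

If your teams generate conversations faster than this, tune batchSize or intervalMs before the queue starts evicting items.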

Setup Guides

AWS S3 Configuration

Step 1: Create S3 Bucket

Create a dedicated S3 bucket for Cline conversation storage:
aws s3 mb s3://your-cline-prompts --region us-east-1
Enable versioning and encryption:
aws s3api put-bucket-versioning \
  --bucket your-cline-prompts \
  --versioning-configuration Status=Enabled

aws s3api put-bucket-encryption \
  --bucket your-cline-prompts \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }]
  }'
Step 2: Create IAM Policy

Create an IAM policy with minimal required permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::your-cline-prompts/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::your-cline-prompts"
    }
  ]
}
Save this as cline-prompt-storage-policy.json and create the policy:
aws iam create-policy \
  --policy-name ClinePromptStorage \
  --policy-document file://cline-prompt-storage-policy.json
Step 3: Create IAM User

Create a dedicated IAM user and attach the policy:
aws iam create-user --user-name cline-prompt-uploader

aws iam attach-user-policy \
  --user-name cline-prompt-uploader \
  --policy-arn arn:aws:iam::YOUR_ACCOUNT_ID:policy/ClinePromptStorage

aws iam create-access-key --user-name cline-prompt-uploader
Save the AccessKeyId and SecretAccessKey from the output.
Step 4: Configure in Cline Dashboard

In the Cline admin console at app.cline.bot:
  1. Navigate to Settings → Enterprise Telemetry
  2. Enable Prompt Uploading
  3. Select S3 as the storage type
  4. Enter your bucket name, access key ID, secret key, and region
  5. Configure sync worker settings (or use defaults)
  6. Save configuration
Step 5: Test Connection

Use the “Test Connection” button in the admin console to verify:
  • Bucket access
  • Write permissions
  • Credential validity
A test file will be uploaded to and then deleted from your bucket.

Optional: Lifecycle Policies

Configure retention policies for cost management:
{
  "Rules": [
    {
      "Id": "ArchiveOldPrompts",
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        }
      ]
    },
    {
      "Id": "DeleteOldPrompts",
      "Status": "Enabled",
      "Expiration": {
        "Days": 2555
      }
    }
  ]
}
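
As a sanity check on the numbers above: objects transition to Glacier after 90 days and expire after 2555 days, i.e. seven 365-day years:

```python
# Verify the retention arithmetic in the lifecycle policy above.
archive_days = 90    # transition to GLACIER
expire_days = 2555   # expiration
print(expire_days / 365)  # 7.0 -- roughly a seven-year retention window
```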

Sync Worker Behavior

The background sync worker manages the upload queue with these characteristics:

Queue Management

  • FIFO ordering: Files are uploaded in the order they were created
  • Automatic batching: Processes up to batchSize items per interval
  • Queue size limits: Evicts oldest items when maxQueueSize is exceeded
  • Retry logic: Failed uploads are retried up to maxRetries times

Failure Handling

When an upload fails:
  1. Immediate retry: Item stays in queue for next sync interval
  2. Exponential backoff: Retry attempts are spaced out
  3. Maximum retries: After maxRetries attempts, item is marked as permanently failed
  4. Age-based cleanup: Failed items older than maxFailedAgeMs are discarded
  5. No data loss: Local files remain intact regardless of sync status
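
The backoff behavior can be pictured as a doubling delay schedule. The base delay and cap below are illustrative; the worker's actual values are internal:

```python
# Illustrative exponential backoff schedule for maxRetries retries.
# base_ms and cap_ms are assumed values, not Cline's actual constants.
def backoff_delays_ms(max_retries=5, base_ms=1_000, cap_ms=60_000):
    return [min(base_ms * 2 ** attempt, cap_ms) for attempt in range(max_retries)]

print(backoff_delays_ms())  # [1000, 2000, 4000, 8000, 16000]
```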

Backfill Mode

When backfillEnabled is set to true:
  • On first startup, scans all existing tasks in ~/.cline/data/tasks/
  • Queues conversation files that haven’t been uploaded
  • Useful for enabling prompt storage on an existing Cline deployment
  • Can generate significant upload volume — monitor queue size
Enable backfill carefully on large deployments. Consider starting with backfillEnabled: false and monitoring the steady-state queue before enabling backfill.
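
Before flipping backfillEnabled, you can estimate how many items a backfill would queue by counting existing task directories. A sketch (the function name is made up; the path matches the local layout described earlier):

```python
import os

# Count existing tasks that a backfill would queue for upload.
def pending_backfill_count(tasks_dir=None):
    tasks_dir = tasks_dir or os.path.expanduser("~/.cline/data/tasks")
    if not os.path.isdir(tasks_dir):
        return 0
    return sum(
        1
        for task_id in os.listdir(tasks_dir)
        if os.path.isfile(
            os.path.join(tasks_dir, task_id, "api_conversation_history.json")
        )
    )
```

Compare the result against maxQueueSize to decide whether the defaults can absorb the backfill.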

Monitoring & Observability

Integration with OpenTelemetry

While prompt storage operates independently, it integrates with Cline’s observability system:
  • Task lifecycle events: task.created, task.completed track when conversations are generated
  • Conversation events: task.conversation_turn, task.tokens provide usage metrics
  • Local monitoring: Sync worker status is logged but not yet exported as OTel events
See OpenTelemetry for configuring metrics export.

CloudWatch Monitoring (S3)

Monitor S3 upload activity with CloudWatch:
# Track object count growth over time (daily storage metric)
aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name NumberOfObjects \
  --dimensions Name=BucketName,Value=your-cline-prompts Name=StorageType,Value=AllStorageTypes \
  --start-time 2026-03-01T00:00:00Z \
  --end-time 2026-03-08T00:00:00Z \
  --period 86400 \
  --statistics Average

R2 Analytics

Cloudflare R2 provides built-in analytics in the dashboard:
  • Request counts and rates
  • Storage usage over time
  • Bandwidth utilization
  • Error rates

Security & Compliance

Encryption

At Rest:
  • S3: Enable server-side encryption (SSE-S3 or SSE-KMS)
  • R2: Encryption enabled by default
In Transit:
  • All uploads use HTTPS/TLS
  • Credentials are never logged or exposed

Access Control

Recommended IAM policies:
  • Use dedicated IAM users/roles
  • Limit permissions to write-only if read access isn’t needed
  • Enable MFA for credential generation
  • Rotate access keys regularly
Bucket policies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::your-cline-prompts/*",
        "arn:aws:s3:::your-cline-prompts"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

Audit Logging

S3 Server Access Logging:
aws s3api put-bucket-logging \
  --bucket your-cline-prompts \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "your-log-bucket",
      "TargetPrefix": "cline-prompts-access/"
    }
  }'
CloudTrail for API Calls: Enable CloudTrail to track all S3 API operations on your bucket.

Data Retention

Implement retention policies based on your compliance requirements:
  • GDPR: Consider right to erasure
  • SOC 2: Maintain audit trails for required period
  • HIPAA: Ensure appropriate retention and disposal

Troubleshooting

Common Issues

Symptoms: maxQueueSize limit reached, oldest items being evicted
Causes:
  • Upload rate slower than conversation creation rate
  • Network connectivity issues
  • Batch size too small or sync interval too long
Solutions:
  1. Increase batchSize to process more items per interval
  2. Decrease intervalMs to sync more frequently
  3. Check network connectivity and credentials
  4. Temporarily increase maxQueueSize while investigating
Symptoms: Repeated upload failures, items reaching maxRetries
Causes:
  • Invalid or expired credentials
  • Insufficient IAM permissions
  • Bucket policy denying access
Solutions:
  1. Verify credentials are correct in remote config
  2. Check IAM policy includes s3:PutObject permission
  3. Review bucket policies for deny rules
  4. Test with AWS CLI: aws s3 cp test.txt s3://your-bucket/
Symptoms: Connection timeouts, failed uploads
Causes:
  • Incorrect endpoint URL
  • Firewall blocking Cloudflare IPs
  • Invalid account ID
Solutions:
  1. Verify endpoint format: https://<ACCOUNT_ID>.r2.cloudflarestorage.com
  2. Check firewall rules allow HTTPS to Cloudflare IPs
  3. Confirm account ID in Cloudflare dashboard
  4. Test with curl: curl -I https://<ACCOUNT_ID>.r2.cloudflarestorage.com
Symptoms: Queue at max size immediately after enabling backfill
Causes:
  • Large number of existing tasks
  • Backfill queuing faster than upload processing
Solutions:
  1. Disable backfill temporarily: "backfillEnabled": false
  2. Let steady-state queue drain first
  3. Increase batchSize and decrease intervalMs
  4. Consider increasing maxQueueSize during the backfill period
  5. Re-enable backfill once queue is stable

Debug Logging

Enable debug logging to diagnose sync issues:
  1. Check extension developer console (Help → Toggle Developer Tools)
  2. Look for [ClineBlobStorage] and [SyncWorker] log entries
  3. Failed uploads log error messages with details

Testing Configuration

Use the built-in test connection feature:
// Programmatic test (for custom integrations)
import { testPromptUploading } from '@/core/controller/state/testPromptUploading'

await testPromptUploading(controller)
// Returns: { success: boolean, message: string }

Data Format Reference

Conversation File Schema

Uploaded api_conversation_history.json files contain an array of messages:
[
  {
    "role": "user",
    "content": [
      {
        "type": "text",
        "text": "Create a React component for a todo list"
      }
    ]
  },
  {
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "I'll create a todo list component..."
      },
      {
        "type": "tool_use",
        "id": "toolu_123",
        "name": "write_to_file",
        "input": {
          "path": "TodoList.tsx",
          "content": "..."
        }
      }
    ]
  }
]
This follows the Anthropic Messages API format.
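
Once downloaded, these files are straightforward to mine. A small example that counts tool invocations per tool; the sample array stands in for a real api_conversation_history.json:

```python
import json

# Count tool_use blocks per tool name in a conversation file.
# `sample` is a stand-in for a downloaded api_conversation_history.json.
sample = json.loads("""[
  {"role": "user", "content": [{"type": "text", "text": "Create a todo list"}]},
  {"role": "assistant", "content": [
    {"type": "text", "text": "I'll create a todo list component..."},
    {"type": "tool_use", "id": "toolu_123", "name": "write_to_file", "input": {}}
  ]}
]""")

tool_counts = {}
for message in sample:
    for block in message["content"]:
        if block["type"] == "tool_use":
            tool_counts[block["name"]] = tool_counts.get(block["name"], 0) + 1

print(tool_counts)  # {'write_to_file': 1}
```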

Metadata Schema

Task metadata includes:
{
  "taskId": "1234567890",
  "createdAt": "2026-03-05T10:30:00Z",
  "lastModified": "2026-03-05T11:45:00Z",
  "modelInfo": {
    "id": "claude-sonnet-4",
    "provider": "anthropic"
  },
  "tokensUsed": {
    "input": 1250,
    "output": 3400
  }
}
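
For usage analysis, the per-task token counts aggregate naturally. A sketch over two made-up metadata records in the shape above:

```python
# Aggregate token usage across downloaded task metadata records.
# Both records here are hypothetical examples matching the schema above.
tasks = [
    {"taskId": "1234567890", "tokensUsed": {"input": 1250, "output": 3400}},
    {"taskId": "1234567891", "tokensUsed": {"input": 800, "output": 2100}},
]

totals = {
    "input": sum(t["tokensUsed"]["input"] for t in tasks),
    "output": sum(t["tokensUsed"]["output"] for t in tasks),
}
print(totals)  # {'input': 2050, 'output': 5500}
```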

Best Practices

Start Small

Test with a single team or project before rolling out organization-wide.

Monitor Costs

Set up billing alerts and review storage usage monthly.

Secure Credentials

Use dedicated IAM users with minimal permissions and rotate keys regularly.

Plan Retention

Define and implement data retention policies based on compliance needs.

See Also

OpenTelemetry

Configure metrics and logs export for comprehensive observability

Telemetry

Learn about Cline’s built-in anonymous usage tracking

Remote Configuration

Understand the remote configuration system