## Overview
Every Cline task conversation is stored locally in `~/.cline/data/tasks/<taskId>/api_conversation_history.json`. When prompt storage is enabled, a background sync worker automatically uploads these conversation files to your configured S3 or R2 bucket.
- **Compliance Ready:** Maintain conversation records for regulatory requirements and internal policies.
- **Audit Trail:** Track AI interactions across your organization with timestamped conversation logs.
- **Usage Analysis:** Analyze conversation patterns, token usage, and model performance at scale.
- **Disaster Recovery:** Back up conversation history independent of local storage for business continuity.
## How It Works
- Local Storage First: All conversations are written to local disk immediately
- Background Sync: A worker process queues conversation files for upload
- Reliable Upload: Automatic retry logic with configurable batch sizes
- Cloud Backup: Files are stored in your S3/R2 bucket with the same path structure
## Storage Architecture
### What Gets Stored
Prompt storage uploads the following files from each task:

| File | Content | Purpose |
|---|---|---|
| `api_conversation_history.json` | Full conversation in Anthropic `MessageParam` format | Core conversation data for analysis |
| Task metadata | Task ID, timestamps, model info | Correlation and indexing |
### What's NOT Stored
Prompt storage does not include:

- ❌ Workspace files not accessed by Cline
- ❌ API keys or secrets
- ❌ User credentials or authentication tokens
### Storage Path Pattern
Files are uploaded to your bucket using the same relative paths as local storage (for example, `tasks/<taskId>/api_conversation_history.json`).

## Configuration
Prompt storage is configured through Remote Configuration in the `enterpriseTelemetry.promptUploading` section.
### Schema
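A complete configuration might look like the following sketch. The field names and defaults come from the tables below; the exact nesting under `enterpriseTelemetry.promptUploading` is illustrative, and the bucket name and key values are placeholders.

```json
{
  "enterpriseTelemetry": {
    "promptUploading": {
      "enabled": true,
      "type": "s3_access_keys",
      "bucket": "acme-cline-conversations",
      "accessKeyId": "AKIA...",
      "secretAccessKey": "...",
      "region": "us-east-1",
      "intervalMs": 30000,
      "maxRetries": 5,
      "batchSize": 10,
      "maxQueueSize": 1000,
      "maxFailedAgeMs": 604800000,
      "backfillEnabled": false
    }
  }
}
```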
### Configuration Fields
#### Core Settings
| Field | Type | Required | Description |
|---|---|---|---|
| `enabled` | boolean | Yes | Enable/disable prompt storage |
| `type` | string | Yes | Storage type: `"s3_access_keys"` or `"r2_access_keys"` |
#### Access Settings (S3/R2)
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| `bucket` | string | Yes | S3/R2 bucket name | - |
| `accessKeyId` | string | Yes | AWS/Cloudflare access key ID | - |
| `secretAccessKey` | string | Yes | AWS/Cloudflare secret access key | - |
| `region` | string | S3 only | AWS region (e.g., `us-east-1`) | - |
| `endpoint` | string | R2 only | Cloudflare R2 endpoint URL | - |
| `accountId` | string | R2 only | Cloudflare account ID | - |
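For Cloudflare R2, the R2-only fields replace the S3 `region`. A hedged sketch of the access settings, using the field names from the table above with placeholder values:

```json
{
  "enabled": true,
  "type": "r2_access_keys",
  "bucket": "acme-cline-conversations",
  "accessKeyId": "<R2_ACCESS_KEY_ID>",
  "secretAccessKey": "<R2_SECRET_ACCESS_KEY>",
  "endpoint": "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  "accountId": "<ACCOUNT_ID>"
}
```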
#### Sync Worker Settings
| Field | Type | Description | Default |
|---|---|---|---|
| `intervalMs` | number | Milliseconds between sync attempts | 30000 (30s) |
| `maxRetries` | number | Maximum retries before giving up | 5 |
| `batchSize` | number | Items to process per interval | 10 |
| `maxQueueSize` | number | Maximum queue size before eviction | 1000 |
| `maxFailedAgeMs` | number | Time before discarding failed items | 604800000 (7 days) |
| `backfillEnabled` | boolean | Sync existing tasks on startup | false |
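As an illustrative tuning example, a high-volume deployment might tighten the sync loop by overriding a few of these defaults (the specific values here are placeholders, not recommendations):

```json
{
  "intervalMs": 10000,
  "batchSize": 50,
  "maxQueueSize": 5000
}
```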
## Setup Guides
- AWS S3
- Cloudflare R2
### AWS S3 Configuration
#### Create S3 Bucket
Create a dedicated S3 bucket for Cline conversation storage, then enable versioning and encryption:
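A sketch of these steps with the AWS CLI; the bucket name and region are examples (for regions other than `us-east-1`, `create-bucket` also needs `--create-bucket-configuration LocationConstraint=<region>`):

```bash
# Create the bucket (name and region are examples)
aws s3api create-bucket \
  --bucket acme-cline-conversations \
  --region us-east-1

# Enable versioning
aws s3api put-bucket-versioning \
  --bucket acme-cline-conversations \
  --versioning-configuration Status=Enabled

# Enable default server-side encryption (SSE-S3)
aws s3api put-bucket-encryption \
  --bucket acme-cline-conversations \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```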
#### Create IAM Policy
Create an IAM policy with minimal required permissions:
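A minimal write-only policy might look like this; the bucket name in the resource ARN is an example and must match your bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ClinePromptStorageWrite",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::acme-cline-conversations/*"
    }
  ]
}
```

You can then create it with `aws iam create-policy --policy-name cline-prompt-storage --policy-document file://cline-prompt-storage-policy.json` (the policy name is an example).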
Save this as `cline-prompt-storage-policy.json` and create the policy.

#### Create IAM User
Create a dedicated IAM user and attach the policy:
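A sketch of these steps with the AWS CLI; the user and policy names are examples, and `<ACCOUNT_ID>` is your AWS account ID:

```bash
# Create a dedicated user for Cline uploads
aws iam create-user --user-name cline-prompt-storage

# Attach the policy created in the previous step
aws iam attach-user-policy \
  --user-name cline-prompt-storage \
  --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/cline-prompt-storage

# Generate access keys for the user
aws iam create-access-key --user-name cline-prompt-storage
```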
Save the `AccessKeyId` and `SecretAccessKey` from the output.

#### Configure in Cline Dashboard
In the Cline admin console at app.cline.bot:
- Navigate to Settings → Enterprise Telemetry
- Enable Prompt Uploading
- Select S3 as the storage type
- Enter your bucket name, access key ID, secret key, and region
- Configure sync worker settings (or use defaults)
- Save configuration
#### Optional: Lifecycle Policies
Configure lifecycle policies on the bucket for cost management; see the Data Retention section under Security & Compliance.

## Sync Worker Behavior
The background sync worker manages the upload queue with these characteristics:

### Queue Management
- FIFO ordering: Files are uploaded in the order they were created
- Automatic batching: Processes up to `batchSize` items per interval
- Queue size limits: Evicts oldest items when `maxQueueSize` is exceeded
- Retry logic: Failed uploads are retried up to `maxRetries` times
### Failure Handling
When an upload fails:

- Immediate retry: The item stays in the queue for the next sync interval
- Exponential backoff: Retry attempts are spaced progressively further apart
- Maximum retries: After `maxRetries` attempts, the item is marked as permanently failed
- Age-based cleanup: Failed items older than `maxFailedAgeMs` are discarded
- No data loss: Local files remain intact regardless of sync status
### Backfill Mode
When `backfillEnabled` is set to `true`:

- On first startup, scans all existing tasks in `~/.cline/data/tasks/`
- Queues conversation files that haven't been uploaded
- Useful for enabling prompt storage on an existing Cline deployment
- Can generate significant upload volume; monitor queue size
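Before enabling backfill on an existing deployment, you can estimate the upload volume by counting conversation files on disk (the path comes from the Overview; the `CLINE_TASKS_DIR` override is just a convenience for this snippet):

```shell
# Count the conversation files that backfill would queue
TASKS_DIR="${CLINE_TASKS_DIR:-$HOME/.cline/data/tasks}"
find "$TASKS_DIR" -name 'api_conversation_history.json' 2>/dev/null | wc -l
```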
## Monitoring & Observability
### Integration with OpenTelemetry
While prompt storage operates independently, it integrates with Cline's observability system:

- Task lifecycle events: `task.created` and `task.completed` track when conversations are generated
- Conversation events: `task.conversation_turn` and `task.tokens` provide usage metrics
- Local monitoring: Sync worker status is logged but not yet exported as OTel events
### CloudWatch Monitoring (S3)
Monitor S3 upload activity with CloudWatch storage metrics such as `NumberOfObjects` and `BucketSizeBytes`, or enable S3 request metrics to track `PutRequests`.

### R2 Analytics
Cloudflare R2 provides built-in analytics in the dashboard:

- Request counts and rates
- Storage usage over time
- Bandwidth utilization
- Error rates
## Security & Compliance
### Encryption
**At rest:**

- S3: Enable server-side encryption (SSE-S3 or SSE-KMS)
- R2: Encryption enabled by default

**In transit:**

- All uploads use HTTPS/TLS
- Credentials are never logged or exposed
### Access Control
Recommended IAM practices:

- Use dedicated IAM users/roles
- Limit permissions to write-only if read access isn't needed
- Enable MFA for credential generation
- Rotate access keys regularly
### Audit Logging
Enable S3 server access logging to record requests made against the storage bucket.

### Data Retention
Implement retention policies based on your compliance requirements:

- GDPR: Consider the right to erasure
- SOC 2: Maintain audit trails for the required period
- HIPAA: Ensure appropriate retention and disposal
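As an illustration, an S3 lifecycle configuration (applied with `aws s3api put-bucket-lifecycle-configuration`) that expires conversation objects after one year; the retention period here is an example, not a recommendation:

```json
{
  "Rules": [
    {
      "ID": "expire-cline-conversations",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 365 }
    }
  ]
}
```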
## Troubleshooting
### Common Issues
#### Queue size growing continuously
**Symptoms:** `maxQueueSize` limit reached, oldest items being evicted

**Causes:**

- Upload rate slower than conversation creation rate
- Network connectivity issues
- Insufficient batch size or interval

**Solutions:**

- Increase `batchSize` to process more items per interval
- Decrease `intervalMs` to sync more frequently
- Check network connectivity and credentials
- Temporarily increase `maxQueueSize` while investigating
#### Uploads failing with 403 Forbidden
**Symptoms:** Repeated upload failures, items reaching `maxRetries`

**Causes:**

- Invalid or expired credentials
- Insufficient IAM permissions
- Bucket policy denying access

**Solutions:**

- Verify credentials are correct in remote config
- Check that the IAM policy includes the `s3:PutObject` permission
- Review bucket policies for deny rules
- Test with the AWS CLI: `aws s3 cp test.txt s3://your-bucket/`
#### R2 endpoint connection timeout
**Symptoms:** Connection timeouts, failed uploads

**Causes:**

- Incorrect endpoint URL
- Firewall blocking Cloudflare IPs
- Invalid account ID

**Solutions:**

- Verify the endpoint format: `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`
- Check that firewall rules allow HTTPS to Cloudflare IPs
- Confirm the account ID in the Cloudflare dashboard
- Test with curl: `curl -I https://<ACCOUNT_ID>.r2.cloudflarestorage.com`
#### Backfill overwhelming upload queue
**Symptoms:** Queue at max size immediately after enabling backfill

**Causes:**

- Large number of existing tasks
- Backfill queuing faster than upload processing

**Solutions:**

- Disable backfill temporarily: `"backfillEnabled": false`
- Let the steady-state queue drain first
- Increase `batchSize` and decrease `intervalMs`
- Consider a `maxQueueSize` increase during the backfill period
- Re-enable backfill once the queue is stable
### Debug Logging
Enable debug logging to diagnose sync issues:

- Check the extension developer console (Help → Toggle Developer Tools)
- Look for `[ClineBlobStorage]` and `[SyncWorker]` log entries
- Failed uploads log error messages with details
### Testing Configuration
Use the built-in test connection feature to verify credentials and bucket access before enabling sync.

## Data Format Reference
### Conversation File Schema
Uploaded `api_conversation_history.json` files contain an array of messages:
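A sketch of the shape, following Anthropic's `MessageParam` format (`role` plus a list of content blocks); the message text is invented, and real files may include additional block types such as tool use:

```json
[
  {
    "role": "user",
    "content": [
      { "type": "text", "text": "Refactor the login handler" }
    ]
  },
  {
    "role": "assistant",
    "content": [
      { "type": "text", "text": "I'll start by reading the current implementation..." }
    ]
  }
]
```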
### Metadata Schema
Task metadata includes the task ID, timestamps, and model information.

## Best Practices
- **Start Small:** Test with a single team or project before rolling out organization-wide.
- **Monitor Costs:** Set up billing alerts and review storage usage monthly.
- **Secure Credentials:** Use dedicated IAM users with minimal permissions and rotate keys regularly.
- **Plan Retention:** Define and implement data retention policies based on compliance needs.
## See Also
- **OpenTelemetry:** Configure metrics and logs export for comprehensive observability
- **Telemetry:** Learn about Cline's built-in anonymous usage tracking
- **Remote Configuration:** Understand the remote configuration system

