Every interaction with Cline happens within a task. Tasks are self-contained work sessions that capture your entire conversation, code changes, command executions, and decisions.
What are Tasks?
A task begins when you submit a prompt to Cline. Your prompt defines the goal, and Cline works toward it through conversation, code changes, and tool use. The quality of your initial prompt directly affects how well Cline performs - clear, specific prompts lead to better results. Each task:
- Starts with your prompt and builds context through the conversation
- Has a unique identifier and dedicated storage directory
- Contains the full conversation history
- Tracks token usage, API costs, and execution time
- Can be interrupted and resumed across sessions
- Creates checkpoints for file changes through Git-based snapshots
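The properties above amount to a small record kept per task. As an illustrative sketch only - the field names here are assumptions, not Cline's actual storage schema:

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class Task:
    """Hypothetical model of a task record (field names are illustrative)."""
    task_id: str = field(default_factory=lambda: uuid4().hex)  # unique identifier
    messages: list[str] = field(default_factory=list)          # full conversation history
    input_tokens: int = 0                                      # tokens sent to the model
    output_tokens: int = 0                                     # tokens the model produced
    api_cost_usd: float = 0.0                                  # running cost for the task

task = Task()
task.messages.append("Implement user authentication")
```

Because each task is self-contained like this, it can be persisted to its own directory and resumed later with its history intact.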
Scoping Your Tasks
Each task carries its own context: the conversation history, decisions made, and understanding built up over the session. How you scope your tasks directly affects how well Cline can help you.

Think of it this way: one task = one goal. “Implement user authentication” is one task. “Fix an unrelated CSS bug” is a separate task, even if you notice it while working on auth.

A focused task produces better results. When a task tries to cover too many unrelated goals, the context becomes cluttered and responses become less relevant. If you’re unsure, err on the side of starting fresh. You can always find previous sessions in your task history.
Context Window
Every AI model has a context window - a limit on how much information it can process at once. Think of it as Cline’s working memory for the current task. As you work, the context window fills up with:
- Your prompts and Cline’s responses
- File contents Cline reads or edits
- Command outputs and tool results
- System instructions that guide Cline’s behavior (including Cline Rules)
Use a `.clineignore` file to exclude dependencies, build artifacts, and other files Cline doesn’t need. This can dramatically reduce your baseline token usage.
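A `.clineignore` uses the same pattern syntax as `.gitignore`. The entries below are a typical starting point - adjust them to your project:

```gitignore
# Dependencies and build output Cline rarely needs to read
node_modules/
dist/
build/
coverage/

# Logs and local environment files
*.log
.env
```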
New Task vs. Continue
Knowing when to start fresh versus continue can feel unclear at first. As you work with Cline more, you’ll develop an intuition for it. Use this table as a starting point:

| Scenario | Action | Why |
|---|---|---|
| Switching to a different feature | New task | Clean context, focused responses |
| Building on work Cline just completed | Continue | Shared understanding preserved |
| Cline keeps going off-track | New task | Fighting context wastes time |
| Iterating on the same files | Continue | Conversation history helps |
| Explaining what to ignore | New task | Cluttered context hurts quality |
| Refining Cline’s last output | Continue | Momentum and decisions preserved |
Start a fresh session with the /newtask command. Your file changes are preserved through checkpoints, and you can reference previous tasks from history anytime.
Understanding Task Costs
Every cloud-based AI model charges for usage based on tokens, the units of text the model processes. Cline tracks these costs automatically and displays them in the task header so you can monitor spending as you work.

How Costs Are Calculated
When you interact with Cline, the model processes:
- Input tokens: Your prompts, file contents, conversation history, and system instructions
- Output tokens: The model’s responses, code suggestions, and tool calls
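Token billing is simple arithmetic: tokens times the per-token rate, with rates usually quoted per million tokens and input priced lower than output. A sketch with placeholder prices (the $3/$15 rates below are hypothetical, not any provider's actual pricing):

```python
def task_cost(input_tokens: int, output_tokens: int,
              input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD: each token class billed at its own per-million-token rate."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: 50k input tokens and 4k output tokens at hypothetical
# rates of $3 (input) and $15 (output) per million tokens.
cost = task_cost(50_000, 4_000, 3.0, 15.0)
print(f"${cost:.2f}")  # → $0.21
```

Note that input tokens dominate most tasks: the whole conversation history and every file Cline reads are resent on each request, which is why a cluttered context costs more.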
When You Pay
You pay for AI usage when using cloud providers like Anthropic, OpenAI, OpenRouter, or Google. Costs vary significantly:

| Provider Type | Billing Model |
|---|---|
| Cline Provider | Pay-per-use with credits you purchase |
| Direct API keys | Billed by your provider (Anthropic, OpenAI, etc.) |
| OpenRouter/Requesty | Aggregated billing across multiple models |
| Local models | Free (you provide the hardware) |
Free Options
Not ready to pay? Cline offers several free paths:
- Free models: Search “free” in the model selector when using the Cline provider. These models display a FREE tag and work well for learning and experimentation.
- Free tiers: Some providers offer limited free usage when you use your own API key.
- Local models: Run models on your own hardware with zero per-request costs.
Self-Hosted Models
Running models locally means no API costs, ever. Your only expense is the hardware to run them. To run local models effectively, you need:
- 32GB RAM minimum for entry-level models (4-bit quantization)
- 64GB RAM for better quality (8-bit quantization)
- 128GB+ RAM for cloud-competitive performance
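The RAM tiers above follow from how quantization shrinks model weights: memory for the weights alone is roughly parameter count times bits per parameter, divided by 8 bits per byte. This sketch ignores the extra headroom needed for the KV cache and runtime:

```python
def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate memory for model weights alone, in gigabytes."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# A hypothetical 70B-parameter model at different quantization levels:
print(weight_memory_gb(70, 4))   # 4-bit quantization: 35.0 GB
print(weight_memory_gb(70, 8))   # 8-bit quantization: 70.0 GB
```

This is why 4-bit quantization is the usual entry point: it halves the footprint of 8-bit weights at some cost in output quality.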
Task History
Every task you work on is saved automatically to your local machine. You can revisit past conversations, resume interrupted work, or reference successful approaches from earlier sessions.

Finding Your History
Click the History button in the Cline sidebar (clock icon at the top-right) to open the history view. You’ll see all your past tasks with their initial prompt, timestamp, and token usage. Each task card expands to show a preview of the conversation.

Searching Tasks
Use the search bar at the top of the history view to find specific tasks. The fuzzy search looks across everything: your prompts, Cline’s responses, code snippets, and file names. Sort results by:
- Newest/Oldest for chronological browsing
- Most Expensive/Most Tokens to find resource-heavy tasks
- Most Relevant when searching for specific content
- Favorites to show only starred tasks
Resuming Tasks
Cline can resume interrupted tasks with full context:
- Open the task from history
- Cline loads the complete conversation
- File states are checked against checkpoints
- The task continues with awareness of the interruption
- Provide additional context if needed

