Prerequisites

  • Windows, macOS, or Linux computer
  • Cline installed in VS Code

Setup Steps

1. Install Ollama

  • Visit ollama.com
  • Download and install for your operating system
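On Linux, you can also install Ollama with the one-line script documented on ollama.com:
    curl -fsSL https://ollama.com/install.sh | sh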

2. Choose and Download a Model

  • Browse models at ollama.com/search
  • Select a model and copy its run command:
    ollama run [model-name]
    
  • Open your terminal and run the command, for example:
      ollama run llama2
Your model is now ready to use within Cline.
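To download a model without opening an interactive chat session, use ollama pull instead of ollama run; ollama list then confirms which models are installed locally:
    ollama pull llama2
    ollama list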

3. Configure Cline

Open VS Code and configure Cline:
  1. Click the Cline settings icon
  2. Select “Ollama” as your API provider
  3. Base URL: http://localhost:11434/ (default, usually no need to change)
  4. Select your model from the dropdown
For the best experience with Cline, use Qwen3 Coder 30B. This model provides strong coding capabilities and reliable tool use for local development. To download it:
ollama run qwen3-coder:30b
Other capable models include:
  • mistral-small - Good balance of performance and speed
  • devstral-small - Optimized for coding tasks
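To confirm the base URL is reachable before starting a task, you can query Ollama's local HTTP API, which returns the downloaded models as JSON (assuming the default port 11434):
    curl http://localhost:11434/api/tags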

Important Notes

  • Start Ollama before using it with Cline
  • Keep Ollama running in the background
  • The first model download may take several minutes
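The desktop app normally starts the Ollama server automatically; if you installed via the script or the server is not running, you can start it manually:
    ollama serve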

Enable Compact Prompts

For better performance with local models, enable compact prompts in Cline settings. This reduces the prompt size by 90% while maintaining core functionality. Navigate to Cline Settings → Features → Use Compact Prompt and toggle it on.

Troubleshooting

If Cline can’t connect to Ollama:
  1. Verify Ollama is running (see the check below)
  2. Check that the base URL is correct
  3. Ensure the model is downloaded
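A quick way to run the first check from a terminal: ollama ps shows whether the server responds and which models are currently loaded, and the curl command from the configuration section confirms the base URL is reachable:
    ollama ps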
Need more info? Read the Ollama Docs.