Ollama

A quick guide to setting up Ollama for local AI model execution with Cline.


📋 Prerequisites

  • Windows, macOS, or Linux computer

  • Cline installed in VS Code

🚀 Setup Steps

1. Install Ollama

  • Visit ollama.com

  • Download and install for your operating system
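
Once installed, a quick way to confirm the CLI is available is to check its version from a terminal (the exact output varies by release):

    ollama --version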

2. Choose and Download a Model

  • Select a model and copy its run command:

    ollama run [model-name]
  • Open your terminal and run the command:

    • Example:

      ollama run llama2

✨ Your model is now ready to use within Cline!
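
If you want to confirm the download completed, you can list the models stored locally:

    ollama list

The model you just pulled should appear in the output along with its size.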

3. Configure Cline

  1. Open VS Code

  2. Click the Cline settings icon

  3. Select "Ollama" as the API provider

  4. Enter configuration:

    • Base URL: http://localhost:11434/ (default value, can be left as is)

    • Select the model from your available options
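
To verify that Cline will be able to reach Ollama at that base URL, you can query Ollama's local REST API, which returns the models it has available (this assumes the default port 11434):

    curl http://localhost:11434/api/tags

A JSON response listing your downloaded models means the server is up and the URL is correct.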

⚠️ Important Notes

  • Start Ollama before using it with Cline

  • Keep Ollama running in the background

  • The first model download may take several minutes
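
If the Ollama desktop app isn't running, you can also start the server manually from a terminal and leave it running while you work:

    ollama serve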

🔧 Troubleshooting

If Cline can't connect to Ollama:

  1. Verify Ollama is running

  2. Check that the base URL is correct

  3. Ensure the model is downloaded
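
These three checks map to quick terminal commands. Below is a sketch assuming the default port and the llama2 model from the earlier example; substitute your own model name:

    # 1. Is the server up? A running Ollama responds with "Ollama is running"
    curl http://localhost:11434/

    # 2. Which models are downloaded locally?
    ollama list

    # 3. Pull the model again if it is missing from the list
    ollama pull llama2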

Browse models at ollama.com/search

Need more info? Read the Ollama Docs.