Ollama
A quick guide to setting up Ollama for local AI model execution with Cline.
📋 Prerequisites
- Windows, macOS, or Linux computer
- Cline installed in VS Code
🚀 Setup Steps
1. Install Ollama
- Visit ollama.com
- Download and install for your operating system
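To confirm the install worked, you can check the CLI version from a terminal (a quick sanity check; the exact version string will vary):

```bash
# Prints the installed Ollama version, e.g. "ollama version is 0.5.7".
ollama --version
```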
2. Choose and Download a Model
- Browse models at ollama.com/search
- Select a model and copy the command shown on its page
- Open your terminal and run the command
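For example, to download and start a model in one step (the model name below is illustrative; use whichever one you copied):

```bash
# Downloads the model on first use, then opens an interactive session.
# Replace llama3.2 with the model you chose on ollama.com/search.
ollama run llama3.2
```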
✨ Your model is now ready to use within Cline!
3. Configure Cline
- Open VS Code
- Click Cline settings icon
- Select "Ollama" as API provider
- Enter configuration:
  - Base URL: http://localhost:11434/ (default value, can be left as is)
  - Select the model from your available options
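If you're not sure which model names will appear among your available options, you can list everything Ollama has downloaded (assumes the ollama CLI is on your PATH):

```bash
# Prints all locally downloaded models; these names are what Cline can select.
ollama list
```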
⚠️ Important Notes
- Start Ollama before using it with Cline (see the command below)
- Keep Ollama running in the background
- First model download may take several minutes
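If Ollama isn't already running (the desktop app usually starts it automatically), one way to start the server manually is from a terminal. A minimal sketch, assuming the ollama CLI is installed:

```bash
# Starts the Ollama server in the foreground on the default port 11434.
# Leave this terminal open, or use Ollama's desktop app instead.
ollama serve
```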
🔧 Troubleshooting
If Cline can't connect to Ollama:
- Verify Ollama is running
- Check base URL is correct
- Ensure model is downloaded
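You can verify the first and third checks from a terminal. A minimal sketch, assuming curl is installed and Ollama is on its default port:

```bash
# Should print "Ollama is running" if the server is up.
curl http://localhost:11434/

# Returns JSON listing your locally downloaded models.
curl http://localhost:11434/api/tags
```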
Need more info? Read the Ollama Docs.