# Ollama
A quick guide to setting up Ollama for local AI model execution with Cline.
## Prerequisites

- Windows, macOS, or Linux computer
- Cline installed in VS Code
## Installing Ollama

1. Visit ollama.com
2. Download and install the version for your operating system
## Downloading a Model

1. Browse the available models at ollama.com/search
2. Select a model and copy its run command
3. Open your terminal and run the copied command (an example is shown below)
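For instance, if you picked a model from the Llama family, the copied command would look something like the following; substitute the exact model name shown on ollama.com/search:

```bash
# Downloads the model on first run, then starts an interactive session with it;
# replace "llama3" with the model name you copied
ollama run llama3
```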
✨ Your model is now ready to use within Cline!
## Configuring Cline

1. Open VS Code
2. Click the Cline settings icon
3. Select "Ollama" as the API provider
4. Enter the configuration:
   - Base URL: http://localhost:11434/ (the default value, which can usually be left as is)
5. Select your model from the available options
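To confirm that the Base URL points at a running Ollama server, you can query Ollama's local API. This is a quick sanity check assuming the default port; the model names it returns are the ones Cline should offer in its model dropdown:

```bash
# Lists the models Ollama has downloaded locally
curl http://localhost:11434/api/tags
```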
## Important Notes

- Start Ollama before using it with Cline
- Keep Ollama running in the background (one way to do this from the terminal is sketched below)
- The first model download may take several minutes
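On macOS and Windows the Ollama desktop app normally keeps the server running for you; if you installed only the CLI (common on Linux) and the server isn't already running as a service, you can start it yourself. A minimal sketch, assuming a default installation:

```bash
# Start the Ollama server in the foreground on the default port (11434);
# leave this terminal open, or run it under your service manager instead
ollama serve
```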
## Troubleshooting

If Cline can't connect to Ollama:

- Verify that Ollama is running (see the checks below)
- Check that the Base URL in Cline matches the address Ollama is listening on
- Ensure the model you selected has been downloaded
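Two quick terminal checks cover most of these cases; they assume the default port and the standard Ollama CLI, so adjust them if you changed the Base URL:

```bash
# The root endpoint replies with "Ollama is running" when the server is up
curl http://localhost:11434/

# Shows the models downloaded locally; the model selected in Cline must appear here
ollama list
```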