🤖 Setting Up LM Studio with Cline
A quick guide to setting up LM Studio so you can run AI models locally with Cline.
📋 Prerequisites
Windows, macOS, or Linux computer with AVX2 support (a quick check is sketched below)
Cline installed in VS Code
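If you're unsure whether your x86 CPU supports AVX2, here is a best-effort check; a rough sketch, not an official LM Studio tool. It only covers Linux and Intel macOS; on Windows, compare your CPU model against the vendor's spec sheet instead.

```python
import platform
import subprocess

def has_avx2() -> bool:
    """Best-effort AVX2 detection on Linux and Intel macOS."""
    system = platform.system()
    if system == "Linux":
        # CPU flags are listed lowercase in /proc/cpuinfo.
        with open("/proc/cpuinfo") as f:
            return "avx2" in f.read()
    if system == "Darwin":
        # Intel Macs report AVX2 under this sysctl key; on Apple
        # Silicon the key is absent, so this returns False there.
        result = subprocess.run(
            ["sysctl", "-n", "machdep.cpu.leaf7_features"],
            capture_output=True, text=True,
        )
        return "AVX2" in result.stdout
    return False  # On Windows, check your CPU model's spec sheet.

print("AVX2 supported:", has_avx2())
```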
🚀 Setup Steps
1. Install LM Studio
Visit lmstudio.ai
Download and install it for your operating system
2. Launch LM Studio
Open the installed application
You’ll see four tabs on the left: Chat, Developer (where you will start the server), My Models (where your downloaded models are stored), and Discover (where you add new models)
3. Download a Model
Browse the “Discover” page
Select and download your preferred model
Wait for the download to complete
4. Start the Server
Navigate to the “Developer” tab
Toggle the server switch to “Running”
Note: the server will run at http://localhost:1234; you can verify it with the sketch below
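To confirm the server is actually listening, you can query its OpenAI-compatible /v1/models endpoint. A minimal sketch, assuming the default address shown above:

```python
import json
from urllib.request import urlopen

# Assumes LM Studio is serving at its default address.
with urlopen("http://localhost:1234/v1/models") as resp:
    models = json.load(resp)

# The OpenAI-compatible response lists models under "data".
for model in models.get("data", []):
    print(model["id"])
```

If this prints at least one model id, the server is up and a model is available.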
5. Configure Cline
Open VS Code
Click the Cline settings icon
Select “LM Studio” as the API provider
Select your model from the available options
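Before sending tasks from Cline, you can optionally sanity-check the endpoint end to end with a single chat completion. A minimal sketch, assuming the default port; the "model" value here is a hypothetical placeholder, since LM Studio generally routes requests to whichever model is loaded:

```python
import json
from urllib.request import Request, urlopen

payload = {
    "model": "local-model",  # placeholder id; the loaded model answers
    "messages": [{"role": "user", "content": "Reply with one word: ready?"}],
    "temperature": 0.7,
}
req = Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```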
⚠️ Important Notes
Start LM Studio before using it with Cline
Keep LM Studio running in the background
The first model download may take several minutes, depending on model size
Models are stored locally after download
🔧 Troubleshooting
If Cline can’t connect to LM Studio:
Verify the LM Studio server is running (check the Developer tab)
Ensure a model is loaded
Check that your system meets the hardware requirements
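The sketch below automates the first two checks: it confirms the server is reachable and that at least one model is loaded, again assuming the default address.

```python
import json
from urllib.request import urlopen

BASE = "http://localhost:1234"  # LM Studio's default server address

try:
    with urlopen(f"{BASE}/v1/models", timeout=5) as resp:
        loaded = json.load(resp).get("data", [])
except OSError:
    # Connection refused or timed out: the server isn't reachable.
    print("Can't reach LM Studio. Is the server toggled to "
          "'Running' in the Developer tab?")
else:
    if loaded:
        print("Server is up. Models:", [m["id"] for m in loaded])
    else:
        print("Server is up, but no model is loaded. Load one in LM Studio.")
```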