Getting an API Key
- Sign Up/Sign In: Go to Nebius AI Studio. Create an account or sign in.
- Navigate to API Keys: Access the API key section in your dashboard.
- Create a Key: Generate a new API key.
- Copy the Key: Copy the API key immediately and store it securely.
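A common way to store the key securely is to keep it out of source code and read it from an environment variable. The sketch below assumes a variable named `NEBIUS_API_KEY`; the name is illustrative, not something Cline or Nebius requires.

```python
import os

# Hypothetical environment variable name -- any name works, as long as the
# key itself is never hard-coded or committed to source control.
api_key = os.environ.get("NEBIUS_API_KEY")
if api_key is None:
    raise RuntimeError("NEBIUS_API_KEY is not set; export it before running.")

# Print only a short prefix for confirmation, never the full key.
print("Key loaded:", api_key[:6] + "...")
```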
Supported Models
Cline supports the following Nebius models:
DeepSeek Models
- `deepseek-ai/DeepSeek-V3` - General-purpose model ($1.50 per 1M tokens)
- `deepseek-ai/DeepSeek-V3-0324-fast` - Fast variant ($6.00 per 1M tokens)
- `deepseek-ai/DeepSeek-R1` - Reasoning model ($2.40 per 1M tokens)
- `deepseek-ai/DeepSeek-R1-fast` - Fast reasoning ($6.00 per 1M tokens)
- `deepseek-ai/DeepSeek-R1-0528` - Latest reasoning version (163K context, $2.40 per 1M tokens)
- `deepseek-ai/DeepSeek-R1-0528-fast` - Fast latest reasoning ($6.00 per 1M tokens)
Qwen Models
- `Qwen/Qwen3-Coder-480B-A35B-Instruct` - 480B coding model (262K context, $1.80 per 1M tokens)
- `Qwen/Qwen3-235B-A22B` - 235B MoE model ($0.60 per 1M tokens)
- `Qwen/Qwen3-235B-A22B-Instruct-2507` - Latest instruct version (262K context, $0.60 per 1M tokens)
- `Qwen/Qwen3-32B` / `Qwen/Qwen3-32B-fast` - Dense 32B model
- `Qwen/Qwen3-30B-A3B` / `Qwen/Qwen3-30B-A3B-fast` - Compact MoE model
- `Qwen/Qwen3-4B-fast` - Small fast model ($0.24 per 1M tokens)
- `Qwen/Qwen2.5-Coder-32B-Instruct-fast` - Coding-optimized ($0.30 per 1M tokens)
- `Qwen/Qwen2.5-32B-Instruct-fast` (Default) - General-purpose ($0.40 per 1M tokens)
Other Models
- `moonshotai/Kimi-K2-Instruct` - Kimi K2 with prompt caching (131K context, $2.40 per 1M tokens)
- `openai/gpt-oss-120b` - OpenAI's 120B open-weight model ($0.60 per 1M tokens)
- `openai/gpt-oss-20b` - OpenAI's 20B open-weight model ($0.20 per 1M tokens)
- `zai-org/GLM-4.5` / `zai-org/GLM-4.5-Air` - Z AI models with prompt caching
- `meta-llama/Llama-3.3-70B-Instruct-fast` - Fast Llama 3.3 ($0.75 per 1M tokens)
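Model availability and IDs change over time. Because Nebius AI Studio exposes an OpenAI-compatible API, you can confirm which IDs your account currently sees with a listing call like the sketch below. The base URL shown is an assumption; check the Nebius documentation for the current endpoint.

```python
import os
from openai import OpenAI  # pip install openai

# Assumed OpenAI-compatible base URL for Nebius AI Studio -- verify in the Nebius docs.
client = OpenAI(
    base_url="https://api.studio.nebius.com/v1/",
    api_key=os.environ["NEBIUS_API_KEY"],  # hypothetical env var from the key step above
)

# Print every model ID visible to this API key.
for model in client.models.list():
    print(model.id)
```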
Configuration in Cline
- Open Cline Settings: Click the settings icon (⚙️) in the Cline panel.
- Select Provider: Choose “Nebius AI Studio” from the “API Provider” dropdown.
- Enter API Key: Paste your Nebius API key.
- Select Model: Choose your desired model from the “Model” dropdown.
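If Cline reports an authentication or model error, it can help to test the key and model outside the editor. The following is a minimal sanity check against the OpenAI-compatible endpoint, assuming the same base URL as above and the default `Qwen/Qwen2.5-32B-Instruct-fast` model; swap in whichever model you selected.

```python
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://api.studio.nebius.com/v1/",  # assumed endpoint; confirm in the Nebius docs
    api_key=os.environ["NEBIUS_API_KEY"],          # hypothetical env var holding your key
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-32B-Instruct-fast",  # the default model in Cline's dropdown
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
    max_tokens=10,
)
print(response.choices[0].message.content)
```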
Tips and Notes
- Speed Tiers: Models with the `-fast` suffix offer faster inference at higher prices.
- Wide Selection: Access models from DeepSeek, Qwen, Meta, Moonshot, OpenAI, and Z AI.
- Competitive Pricing: Generally lower prices than direct provider APIs.
- Pricing: Check the Nebius documentation for current rates.
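As a rough worked example of what the per-1M-token rates mean in practice: a request that sends 8,000 tokens and returns 2,000 tokens uses 10,000 tokens total, which at the default model's $0.40 per 1M tokens comes to about $0.004. The helper below is a back-of-the-envelope estimate that assumes a single flat rate per model; real billing may distinguish input and output tokens, so treat the Nebius documentation as the source of truth.

```python
def estimate_cost(total_tokens: int, price_per_million: float) -> float:
    """Rough cost estimate in dollars, assuming one flat rate per token."""
    return total_tokens / 1_000_000 * price_per_million

# 8,000 prompt tokens + 2,000 completion tokens at the default model's rate.
print(f"${estimate_cost(10_000, 0.40):.4f}")  # -> $0.0040
```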

