
Documentation Index

Fetch the complete documentation index at: https://docs.cline.bot/llms.txt

Use this file to discover all available pages before exploring further.

This page covers providers that follow the same basic setup pattern in Cline.

Shared Configuration in Cline

  1. Open Cline settings (⚙️).
  2. Select your provider from API Provider.
  3. Paste your API key/token in the matching credential field.
  4. Choose a model from Model.
For the full auth flow (IDE + CLI), see Authorization & Model Selection.
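Many of the providers below expose OpenAI-compatible HTTP APIs, so a key that fails inside Cline can be sanity-checked with a plain request outside it. A minimal sketch; `$BASE_URL` and `$API_KEY` are placeholders for the provider's API root and the credential you pasted into Cline:

```shell
# Hypothetical smoke test for an OpenAI-compatible provider.
# Replace $BASE_URL with the provider's API root (e.g. https://api.example.com/v1)
# and $API_KEY with your key.
curl -s "$BASE_URL/models" \
  -H "Authorization: Bearer $API_KEY"
# A JSON list of model IDs means the key works; a 401 means it is wrong or inactive.
```

The exact base URL and auth header vary by provider, so check each provider's API reference before relying on this shape.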

Providers

AIHubMix

AIHubMix is an OpenAI-compatible model aggregator that gives you one API surface for multiple model backends. Website: https://aihubmix.com/

Basic Setup

  1. Create/sign in to your AIHubMix account.
  2. Generate an API key from the dashboard.
  3. In Cline, choose AIHubMix and paste the key.

AskSage

AskSage is focused on enterprise and government AI access with compliance-oriented controls. Website: https://www.asksage.ai/

Basic Setup

  1. Sign in to AskSage and create API credentials.
  2. Confirm your workspace/organization has model access enabled.
  3. In Cline, select AskSage and enter the key.

Baseten

Baseten provides hosted model APIs and deployment infrastructure for production inference. Website: https://www.baseten.co/products/model-apis/

Basic Setup

  1. Create a Baseten account and open the API keys section.
  2. Generate an API key with the required permissions.
  3. In Cline, choose Baseten and paste the key.

Cerebras

Cerebras offers very fast hosted inference for supported model families. Website: https://cloud.cerebras.ai/

Basic Setup

  1. Sign in to Cerebras Cloud.
  2. Create or retrieve your API key.
  3. In Cline, select Cerebras and enter the key.
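To confirm the key outside Cline, you can list models directly; a sketch, assuming `https://api.cerebras.ai/v1` is the OpenAI-compatible base URL for your account:

```shell
# Lists the models your Cerebras key can access.
curl -s https://api.cerebras.ai/v1/models \
  -H "Authorization: Bearer $CEREBRAS_API_KEY"
```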

Dify.ai

Dify.ai is a workflow-centric AI platform with app and pipeline capabilities. Website: https://dify.ai/

Basic Setup

  1. Create a Dify workspace and API credential.
  2. Confirm your Dify endpoint/provider settings are active.
  3. In Cline, select Dify.ai and paste your key.

Doubao

Doubao is ByteDance’s model family, typically accessed through Volcengine services. Website: https://www.volcengine.com/

Basic Setup

  1. Sign in to Volcengine and enable model access.
  2. Generate API credentials for Doubao endpoints.
  3. In Cline, select Doubao and paste credentials.

Fireworks AI

Fireworks AI provides hosted inference for open models and performance-focused deployment. Website: https://fireworks.ai/

Basic Setup

  1. Create a Fireworks account.
  2. Generate an API key in your project/account settings.
  3. In Cline, choose Fireworks AI and enter the key.
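A quick way to verify the key before entering it: query the model list, assuming Fireworks' OpenAI-compatible base URL `https://api.fireworks.ai/inference/v1`:

```shell
# Returns the models available to this Fireworks key.
curl -s https://api.fireworks.ai/inference/v1/models \
  -H "Authorization: Bearer $FIREWORKS_API_KEY"
```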

GCP Vertex AI

Vertex AI is Google Cloud’s enterprise model platform with IAM and project-level governance. Website: https://cloud.google.com/vertex-ai

Basic Setup

  1. Set up a GCP project with Vertex AI enabled.
  2. Configure authentication (service account or application default credentials).
  3. In Cline, select GCP Vertex AI and provide required config values.
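Steps 1–2 can be done from the terminal with the gcloud CLI; a sketch using Application Default Credentials, where `YOUR_PROJECT_ID` is a placeholder:

```shell
# Enable the Vertex AI API in your project.
gcloud services enable aiplatform.googleapis.com --project YOUR_PROJECT_ID

# Set up Application Default Credentials that local tools can pick up.
gcloud auth application-default login
```

Service-account authentication (a key file plus `GOOGLE_APPLICATION_CREDENTIALS`) is the usual alternative for non-interactive environments.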

Groq

Groq is a low-latency inference provider for supported model families. Website: https://groq.com/

Basic Setup

  1. Sign in to Groq Console.
  2. Create an API key.
  3. In Cline, select Groq and paste the key.
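If Cline reports an authentication error, the key can be tested against Groq's OpenAI-compatible endpoint; a sketch, assuming the base URL `https://api.groq.com/openai/v1`:

```shell
# Lists the models available to this Groq key.
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY"
```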

Hicap

Hicap provides OpenAI-compatible access with multimodal support. Website: https://hicap.ai

Basic Setup

  1. Create your Hicap account.
  2. Generate an API key/token.
  3. In Cline, choose Hicap and enter the credential.

Huawei Cloud MaaS

Huawei Cloud MaaS provides model-as-a-service access within Huawei Cloud. Website: https://www.huaweicloud.com/

Basic Setup

  1. Sign in to Huawei Cloud and open MaaS services.
  2. Create API credentials for model access.
  3. In Cline, select Huawei Cloud MaaS and add credentials.

Hugging Face

Hugging Face provides hosted inference access for open-source models. Website: https://huggingface.co/

Basic Setup

  1. Sign in to Hugging Face.
  2. Create an access token with inference permissions.
  3. In Cline, select Hugging Face and paste the token.
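A token's validity can be checked against the Hub's `whoami` endpoint before pasting it into Cline:

```shell
# Returns your account details if the token is valid, 401 otherwise.
curl -s https://huggingface.co/api/whoami-v2 \
  -H "Authorization: Bearer $HF_TOKEN"
```

Note that inference access also depends on the token's permission scope, which this check does not fully exercise.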

Mistral

Mistral provides direct API access to its model lineup. Website: https://mistral.ai/

Basic Setup

  1. Create/sign in to your Mistral platform account.
  2. Generate an API key.
  3. In Cline, choose Mistral and paste the key.
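To verify the key independently of Cline, list models against Mistral's API; a sketch, assuming the base URL `https://api.mistral.ai/v1`:

```shell
# Lists the models available to this Mistral key.
curl -s https://api.mistral.ai/v1/models \
  -H "Authorization: Bearer $MISTRAL_API_KEY"
```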

Moonshot

Moonshot provides API access to Kimi model families. Website: https://platform.moonshot.ai/

Basic Setup

  1. Sign in to Moonshot platform.
  2. Create an API key in account settings.
  3. In Cline, select Moonshot and enter the key.
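A hedged check for the key, assuming your account uses the international endpoint `https://api.moonshot.ai/v1` (mainland-China accounts use a different host):

```shell
# Lists the Kimi models available to this Moonshot key.
curl -s https://api.moonshot.ai/v1/models \
  -H "Authorization: Bearer $MOONSHOT_API_KEY"
```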

Nebius AI Studio

Nebius AI Studio offers managed hosted model APIs. Website: https://studio.nebius.com/

Basic Setup

  1. Create/sign in to Nebius AI Studio.
  2. Generate API credentials.
  3. In Cline, choose Nebius AI Studio and provide the credential.

Nous Research

Nous Research provides access to Hermes-family model offerings. Website: https://nousresearch.com/

Basic Setup

  1. Get provider access/credentials for Nous-hosted endpoints.
  2. Confirm available model IDs for your account.
  3. In Cline, select NousResearch and add credentials.

Oracle Code Assist

Oracle Code Assist provides AI-powered coding assistance through Oracle Cloud Infrastructure (OCI) Generative AI service. Website: https://www.oracle.com/application-development/code-assist/

Basic Setup

  1. Create an Oracle Cloud Infrastructure (OCI) account.
  2. Enable the OCI Generative AI service in your tenancy.
  3. Configure OCI authentication (API key, config file, or instance principal).
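For API-key authentication, the OCI CLI can generate the standard config file interactively; a sketch, assuming the CLI is installed:

```shell
# Walks you through creating ~/.oci/config and an API signing key pair.
# OCI SDKs and the CLI read this file by default.
oci setup config
```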

Qwen Code

Qwen Code provides coding-oriented access to Qwen models. Website: https://chat.qwen.ai/

Basic Setup

  1. Sign in to Qwen platform.
  2. Obtain API access credentials.
  3. In Cline, select Qwen Code and enter the credential.

Requesty

Requesty provides multi-provider routing behind a single API surface. Website: https://www.requesty.ai/

Basic Setup

  1. Create/sign in to Requesty.
  2. Generate an API key.
  3. In Cline, select Requesty and paste the key.

SambaNova

SambaNova offers hosted inference and enterprise AI platform capabilities. Website: https://sambanova.ai/

Basic Setup

  1. Create/sign in to your SambaNova account.
  2. Create an API key/token.
  3. In Cline, choose SambaNova and enter credentials.

SAP AI Core

SAP AI Core is SAP’s enterprise AI platform for model integration and governance. Website: https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/what-is-sap-ai-core

Basic Setup

  1. Set up SAP AI Core / generative AI hub access.
  2. Configure your service key/endpoint credentials.
  3. In Cline, select SAP AI Core and provide required values.

Together

Together provides hosted inference for many popular open models. Website: https://together.ai/

Basic Setup

  1. Create/sign in to your Together account.
  2. Generate an API key.
  3. In Cline, choose Together and paste the key.
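To confirm the key before use, query the model list; a sketch, assuming Together's OpenAI-compatible base URL `https://api.together.xyz/v1`:

```shell
# Lists the models available to this Together key.
curl -s https://api.together.xyz/v1/models \
  -H "Authorization: Bearer $TOGETHER_API_KEY"
```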

Vercel AI Gateway

Vercel AI Gateway gives one API for multiple upstream model providers. Website: https://vercel.com/

Basic Setup

  1. Sign in to Vercel and open AI Gateway.
  2. Create a Gateway API key.
  3. In Cline, select Vercel AI Gateway and add the key.

VS Code Language Model API

VS Code Language Model API support lets Cline use models exposed by the VS Code host environment. Website: https://code.visualstudio.com/api/extension-guides/language-model

Basic Setup

  1. Run Cline in VS Code alongside an extension that exposes models through the Language Model API (for example, GitHub Copilot).
  2. Grant Cline access to the language model when VS Code prompts for consent.
  3. In Cline, select VS Code Language Model API.

xAI (Grok)

xAI provides Grok models through its API platform. Website: https://x.ai/

Basic Setup

  1. Sign in to xAI platform.
  2. Generate an API key.
  3. In Cline, choose xAI (Grok) and paste the key.
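A key can be verified against xAI's API directly; a sketch, assuming the base URL `https://api.x.ai/v1`:

```shell
# Lists the Grok models available to this xAI key.
curl -s https://api.x.ai/v1/models \
  -H "Authorization: Bearer $XAI_API_KEY"
```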