
Overview

LiteLLM and LemonData fit together in two common ways:
  • use LemonData as an OpenAI-compatible upstream behind LiteLLM
  • put LiteLLM in front of LemonData when your team wants an internal gateway layer for routing, virtual keys, or centralized observability
For LemonData, the cleanest default is LiteLLM’s custom OpenAI / OpenAI-compatible path pointed at https://api.lemondata.cc/v1.
If you specifically need Claude-native or Gemini-native request shapes, prefer LemonData’s dedicated native integrations instead of forcing those workflows through LiteLLM’s OpenAI-compatible abstraction.
Type: Framework or Platform
Primary Path: OpenAI-compatible upstream
Support Confidence: Supported path

Install

pip install 'litellm[proxy]'

Proxy Configuration

Create a litellm-config.yaml like this:
model_list:
  - model_name: lemondata-gpt-5.4
    litellm_params:
      model: custom_openai/gpt-5.4
      api_base: https://api.lemondata.cc/v1
      api_key: os.environ/OPENAI_API_KEY

  - model_name: lemondata-claude-sonnet
    litellm_params:
      model: custom_openai/claude-sonnet-4-6
      api_base: https://api.lemondata.cc/v1
      api_key: os.environ/OPENAI_API_KEY
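If you also want LiteLLM's virtual-key layer in front of LemonData, a general_settings block can go in the same file. This fragment is an optional, illustrative sketch; the master key value is a placeholder you pick yourself:

```yaml
general_settings:
  # clients must send this key to the proxy instead of the raw LemonData key
  master_key: sk-litellm-master-key-placeholder

litellm_settings:
  # retry transient upstream failures before surfacing an error
  num_retries: 3
```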
Start the proxy:
export OPENAI_API_KEY="sk-your-lemondata-key"
litellm --config litellm-config.yaml --port 4000

Call LiteLLM Through OpenAI SDK

from openai import OpenAI

client = OpenAI(
    api_key="anything",  # the proxy ignores this unless a master key is configured
    base_url="http://127.0.0.1:4000"
)

response = client.chat.completions.create(
    model="lemondata-gpt-5.4",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
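Because the proxy speaks plain OpenAI-style HTTP, any client works, not just the OpenAI SDK. Below is a minimal sketch using only the Python standard library; the actual network call is commented out since it assumes the proxy from the previous step is running on port 4000:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    # Assemble an OpenAI-style chat completion request for the LiteLLM proxy.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json", "Authorization": "Bearer anything"},
    )

req = build_chat_request("http://127.0.0.1:4000", "lemondata-gpt-5.4", "Hello!")
# with urllib.request.urlopen(req) as resp:  # requires the proxy to be running
#     print(json.load(resp)["choices"][0]["message"]["content"])
```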

Direct Python Usage

If you are using LiteLLM as a Python library instead of the proxy, keep the same LemonData base URL:
import litellm

response = litellm.completion(
    model="custom_openai/gpt-5.4",
    api_base="https://api.lemondata.cc/v1",
    api_key="sk-your-lemondata-key",
    messages=[{"role": "user", "content": "Summarize this repo."}]
)

print(response.choices[0].message.content)

Best Practices

  • Treat LemonData as an OpenAI-compatible upstream unless you have a specific reason to build a more complex provider mapping.
  • LiteLLM makes sense when your platform needs virtual keys, extra routing policy, or centralized logs in front of LemonData.
  • OpenAI-compatible translation layers are great for broad compatibility, but they are not the right place to promise every provider-native feature.

Troubleshooting

  • Verify api_base is exactly https://api.lemondata.cc/v1
  • Make sure LiteLLM can reach LemonData over the public internet
  • If you run the proxy locally, verify the OpenAI client points to your LiteLLM port instead of LemonData directly
  • Check that LiteLLM is reading the right OPENAI_API_KEY
  • Confirm the LemonData key starts with sk-
  • Confirm the key is active in the LemonData dashboard
  • Verify the LemonData model name in custom_openai/<model>
  • Keep your LiteLLM model_name alias separate from the real LemonData model id
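The key-format and base-URL checks above can be folded into a small offline preflight helper you run before starting the proxy. This is a sketch: the preflight function and its messages are made up here, not part of LiteLLM or LemonData:

```python
EXPECTED_BASE = "https://api.lemondata.cc/v1"

def preflight(api_key: str, api_base: str) -> list[str]:
    # Return a list of problems found before any network call is made.
    issues = []
    if not api_key.startswith("sk-"):
        issues.append("LemonData keys are expected to start with 'sk-'")
    if api_base.rstrip("/") != EXPECTED_BASE:
        issues.append(f"api_base should be exactly {EXPECTED_BASE}, got {api_base!r}")
    return issues

print(preflight("sk-your-lemondata-key", EXPECTED_BASE))  # no issues -> []
```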