
Overview

This endpoint provides native Anthropic Messages API compatibility. Use it with Claude models to access Anthropic-native features such as extended thinking.
Base URL for Anthropic SDK: https://api.lemondata.cc (no /v1 suffix)

Request Headers

x-api-key
string
required
Your LemonData API key. Alternative to Bearer token.
anthropic-version
string
required
Anthropic API version. Use 2023-06-01.

Request Body

model
string
required
Claude model ID (e.g., claude-sonnet-4-6 or claude-opus-4-6).
messages
array
required
Array of message objects with role and content. For Claude models with vision support, content can be either a plain string or an array of content blocks. To send images, use structured content blocks rather than placing image URLs or Base64 strings directly into plain text. Example content blocks:
  • text block: { "type": "text", "text": "Describe this image" }
  • image block via URL: { "type": "image", "source": { "type": "url", "url": "https://example.com/image.jpg" } }
  • image block via Base64: { "type": "image", "source": { "type": "base64", "media_type": "image/png", "data": "iVBORw0KGgoAAA..." } }
max_tokens
integer
required
Maximum tokens to generate.
system
string
System prompt (separate from messages array).
temperature
number
default:"1"
Sampling temperature (0-1).
stream
boolean
default:"false"
Enable streaming responses.
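When stream is true, the endpoint returns server-sent events instead of a single JSON body. A minimal decoding sketch, assuming the Anthropic streaming event shapes (content_block_delta carrying a text_delta); the sample line below is illustrative:

```python
import json

def parse_sse_line(line: str):
    """Parse one SSE line; return the event dict for 'data:' lines, else None."""
    if line.startswith("data: "):
        return json.loads(line[len("data: "):])
    return None

# Illustrative delta event in the Anthropic streaming format:
event = parse_sse_line(
    'data: {"type": "content_block_delta", "index": 0, '
    '"delta": {"type": "text_delta", "text": "Hello"}}'
)
```

In a real client you would read the HTTP response line by line, feed each line through a parser like this, and append every text_delta fragment to the running reply.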
thinking
object
Extended thinking configuration (supported on Claude Opus models, e.g., claude-opus-4-6).
  • type (string): "enabled" to enable
  • budget_tokens (integer): Token budget for thinking
tools
array
Available tools for the model.
tool_choice
object
How the model should use tools. Options: auto, any, tool (specific tool).
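A request body combining tools and a forced tool_choice might look like the following sketch (the get_weather tool is purely illustrative; the shapes follow the Anthropic tools format):

```python
# Sketch: define one tool and force the model to call it.
payload = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "tools": [{
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    # "auto" and "any" take only {"type": ...}; "tool" also names the tool.
    "tool_choice": {"type": "tool", "name": "get_weather"},
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
}
```

With tool_choice of type tool, the response's content array will contain a tool_use block whose input matches the tool's input_schema.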
top_p
number
Nucleus sampling parameter. Use either temperature or top_p, not both.
top_k
integer
Only sample from the top K options for each token.
stop_sequences
array
Custom stop sequences that will cause the model to stop generating.
metadata
object
Metadata to attach to the request for tracking purposes.

Response

id
string
Unique message identifier.
type
string
Always message.
role
string
Always assistant.
content
array
Array of content blocks (text, thinking, tool_use).
model
string
Model used.
stop_reason
string
Why generation stopped (end_turn, max_tokens, tool_use).
usage
object
Token usage with input_tokens and output_tokens.
Example request:

curl -X POST "https://api.lemondata.cc/v1/messages" \
  -H "x-api-key: sk-your-api-key" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "messages": [
      {"role": "user", "content": "Hello, Claude!"}
    ]
  }'

Example response:

{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! How can I help you today?"
    }
  ],
  "model": "claude-sonnet-4-6",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 15,
    "output_tokens": 10
  }
}
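Since content is an array of blocks, client code typically filters for text blocks when it only needs the reply text. A small sketch against the response shape shown above:

```python
# Sketch: extract the reply text from a parsed Messages response body.
response = {
    "content": [{"type": "text", "text": "Hello! How can I help you today?"}],
    "stop_reason": "end_turn",
    "usage": {"input_tokens": 15, "output_tokens": 10},
}

# Join all text blocks; thinking and tool_use blocks are skipped.
text = "".join(b["text"] for b in response["content"] if b["type"] == "text")
```

The same filter keeps working when extended thinking is enabled, since thinking blocks carry their content under a different key.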

Vision Input Example

For Claude models with vision support, place images inside messages[].content as structured image blocks. Two variants follow: one referencing the image by URL, one embedding Base64 data.
{
  "model": "claude-sonnet-4-6",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Please describe this image."
        },
        {
          "type": "image",
          "source": {
            "type": "url",
            "url": "https://example.com/demo.jpg"
          }
        }
      ]
    }
  ]
}

The same request with the image embedded as Base64 data:

{
  "model": "claude-sonnet-4-6",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Please describe this image."
        },
        {
          "type": "image",
          "source": {
            "type": "base64",
            "media_type": "image/jpeg",
            "data": "/9j/4AAQSkZJRgABAQ..."
          }
        }
      ]
    }
  ]
}

Extended Thinking Example

import anthropic

# Client pointed at LemonData's Anthropic-compatible base URL (no /v1 suffix).
client = anthropic.Anthropic(
    api_key="sk-your-api-key",
    base_url="https://api.lemondata.cc",
)

message = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000
    },
    messages=[{"role": "user", "content": "Solve this math problem..."}]
)

for block in message.content:
    if block.type == "thinking":
        print(f"Thinking: {block.thinking}")
    elif block.type == "text":
        print(f"Response: {block.text}")

Anthropic Message Batches

LemonData now exposes the native Anthropic Message Batches flow alongside /v1/messages. Available routes:
  • POST /v1/messages/batches
  • GET /v1/messages/batches
  • GET /v1/messages/batches/:message_batch_id
  • GET /v1/messages/batches/:message_batch_id/results
  • POST /v1/messages/batches/:message_batch_id/cancel
  • DELETE /v1/messages/batches/:message_batch_id
Operational notes:
  • Use the same LemonData API key plus Anthropic-native headers.
  • If batch items reference file_id, also include anthropic-beta: files-api-2025-04-14.
  • Batch jobs keep Anthropic-native request/response shapes while LemonData tracks their internal settlement lifecycle.
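A batch creation body for POST /v1/messages/batches could be sketched as follows (the custom_id values are illustrative; each params object is an ordinary Messages request as documented above):

```python
# Sketch: body for creating a message batch. Each entry pairs a caller-chosen
# custom_id (used to match results later) with a standard Messages request.
batch_request = {
    "requests": [
        {
            "custom_id": "req-1",
            "params": {
                "model": "claude-sonnet-4-6",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": "Hello!"}],
            },
        },
        {
            "custom_id": "req-2",
            "params": {
                "model": "claude-sonnet-4-6",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": "Summarize SSE."}],
            },
        },
    ],
}
```

You would POST this body with the same x-api-key and anthropic-version headers as /v1/messages, then poll GET /v1/messages/batches/:message_batch_id until processing finishes and fetch the per-request outputs from the results route.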