Type: Framework or Platform
Primary Path: OpenAI-compatible via OpenAILike
Support Confidence: Supported via OpenAILike
For LemonData, the more robust LlamaIndex setup is to use the OpenAI-compatible integrations rather than the built-in OpenAI classes. Current LlamaIndex docs explicitly recommend OpenAILike for third-party OpenAI-compatible endpoints, because the built-in OpenAI classes infer model metadata from official OpenAI model names. In other words: treat OpenAILike as the supported LemonData path, not the built-in OpenAI classes.
```python
from llama_index.core.llms import ChatMessage

# `llm` is the OpenAILike client configured for LemonData
messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="What is the capital of France?"),
]
response = llm.chat(messages)
print(response.message.content)
```
```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is in my documents?")
print(response)
```
```python
chat_engine = index.as_chat_engine(chat_mode="condense_question")

response = chat_engine.chat("What is LemonData?")
print(response)

response = chat_engine.chat("How many models does it support?")
print(response)
```
Prefer llama_index.llms.openai_like.OpenAILike and llama_index.embeddings.openai_like.OpenAILikeEmbedding for LemonData and other third-party OpenAI-compatible gateways.
Set api_base explicitly
Pass api_base="https://api.lemondata.cc/v1" directly in code instead of relying on older OpenAI environment-variable names.
Keep model roles separated
Use chat/reasoning models for synthesis and text-embedding-3-small or text-embedding-3-large for retrieval.