
Overview

Type: Coding Tool
Primary Path: OpenAI Responses
Support Confidence: Supported with model/channel limits
OpenAI Codex is an open-source command-line coding agent that can read, modify, and run code directly in the terminal. It is built on GPT models and optimized for code generation. For LemonData, Codex CLI is best used against /v1/responses. That path works well, but some Responses-native features still depend on whether the selected model and routed channel support native passthrough.

System Requirements

  • OS: macOS, Linux (official support), Windows via WSL
  • Node.js: Version 18+
  • npm: Version 10.x.x or higher

Installation

npm install -g @openai/codex
Verify installation:
codex --version

Configuration

Step 1: Set API Key

Temporary (current session):
export OPENAI_API_KEY="sk-your-lemondata-key"
Permanent configuration: Add to ~/.bashrc, ~/.zshrc, or ~/.bash_profile:
export OPENAI_API_KEY="sk-your-lemondata-key"
Then reload:
source ~/.zshrc  # or source ~/.bashrc
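A mistyped key is the most common setup failure, so it can be worth checking the format before exporting. A minimal sketch: the check_key_format helper below is hypothetical (not part of Codex or LemonData) and only verifies the sk- prefix noted under Troubleshooting.

```shell
# Hypothetical helper: sanity-check the key format before exporting it.
# LemonData keys are assumed to use the usual "sk-" prefix; adjust if your
# dashboard issues keys in a different format.
check_key_format() {
  case "$1" in
    sk-*) echo "key format looks OK" ;;
    *)    echo "warning: key does not start with sk-" >&2; return 1 ;;
  esac
}

check_key_format "sk-your-lemondata-key"   # prints: key format looks OK
```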

Step 2: Configure config.toml

Edit ~/.codex/config.toml:
model_provider = "lemondata"
model = "gpt-5.4"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
fast_mode = true
model_context_window = 1000000
model_auto_compact_token_limit = 900000
sandbox_mode = "danger-full-access"
approval_policy = "never"

disable_response_storage = false
personality = "friendly"
service_tier = "fast"

[model_providers.lemondata]
env_key = "OPENAI_API_KEY"
name = "lemondata"
base_url = "https://api.lemondata.cc/v1"
wire_api = "responses"
supports_websockets = true
websocket_connect_timeout_ms = 15000

[features]
responses_websockets = true
responses_websockets_v2 = true
If the config file doesn’t exist, run codex once to generate it, then edit the file. Restart Codex completely after changing config.toml so the new provider settings are reloaded.
Codex is deprecating chat/completions support for custom providers. Keep wire_api = "responses" for LemonData unless you are intentionally using an older compatibility path.
If a request uses Responses-native-only fields that your chosen model or routed channel does not support, LemonData returns an explicit error instead of silently downgrading the request.
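To see what handling that rejection might look like, the sketch below scans a saved response body for the error marker. The JSON shape here is an assumption for illustration; only the unsupported_request_field string comes from this page.

```shell
# Illustrative only: the exact error schema is not documented here, but the
# unsupported_request_field marker is what this page says to look for.
body='{"error":{"type":"unsupported_request_field","message":"field not supported by routed channel"}}'

if printf '%s' "$body" | grep -q 'unsupported_request_field'; then
  echo "Responses-native field rejected: remove it or switch model/channel"
fi
```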

Basic Usage

Start interactive mode:
codex
Direct command:
codex "Fix the bug in main.py line 42"
Specify model:
codex -m gpt-5.4 "Build a REST API server"
Model                Best For
gpt-5.4              Best default choice for coding and reasoning
gpt-5-mini           Faster, cheaper fallback for coding workflows
claude-sonnet-4-6    Code review, documentation
deepseek-r1          Algorithm design, reasoning

Interactive Commands

Command            Description
/help              Display help
/exit or Ctrl+C    Exit
/clear             Clear conversation
/config            View configuration
/model <name>      Switch model
/tokens            View token usage

Verify Configuration

# Check environment variable
echo $OPENAI_API_KEY

# Test API connection
codex "Hello, Codex!"

# View configuration
cat ~/.codex/config.toml
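The manual checks above can be scripted. A minimal sketch, assuming the default config path; the check_lemondata_config helper is hypothetical and only greps for the two provider settings this page requires.

```shell
# Hypothetical helper: confirm the two settings LemonData requires are
# present in config.toml. Pass an explicit path, or let it default to
# ~/.codex/config.toml.
check_lemondata_config() {
  local cfg="${1:-$HOME/.codex/config.toml}"
  local key
  for key in 'base_url = "https://api.lemondata.cc/v1"' 'wire_api = "responses"'; do
    if grep -qF "$key" "$cfg" 2>/dev/null; then
      echo "OK: $key"
    else
      echo "MISSING: $key"
    fi
  done
}

check_lemondata_config   # checks ~/.codex/config.toml by default
```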

Common Use Cases

Code review:
git diff | codex "Review these code changes"
Generate commit messages:
git diff --staged | codex "Generate a commit message for these changes"
Fix errors:
codex "Fix the TypeScript errors in src/components/"
Explain code:
cat main.py | codex "Explain what this code does"

Troubleshooting

Connection problems:

  • Verify base_url in config.toml is exactly https://api.lemondata.cc/v1
  • Check network connectivity
  • Ensure no proxy interference
  • Verify env_key = "OPENAI_API_KEY" is present in ~/.codex/config.toml

Authentication problems:

  • Verify the OPENAI_API_KEY environment variable is set
  • Check that the key starts with sk-
  • Ensure the key is active in the LemonData dashboard

Unsupported-field errors:

  • Some fields are only available when the selected model and routed channel support native /v1/responses passthrough
  • If you see an error mentioning unsupported_request_field or native passthrough, remove the field or switch to a compatible model/channel