How I cut my OpenAI API costs by 87% using a single gateway

I was paying around $200 per month for OpenAI API calls. Not a massive bill, but enough to notice every time the invoice came through.

The real pain point was not the absolute cost; it was the friction. Every time I wanted to try a cheaper model, the process looked like this:

  1. Research alternatives. DeepSeek, GLM, Qwen, MiniMax, Kimi. Each has different capabilities and pricing.
  2. Sign up for a new provider and generate an API key.
  3. Install a new SDK or figure out their HTTP API format.
  4. Rewrite my integration code to work with a different request and response structure.
  5. Test that everything still works. Handle new error codes. Update retry logic.

That friction meant I stayed on OpenAI far longer than I should have. The switching cost outweighed the savings.

Then I found a different approach. Instead of switching providers, I added a layer between my code and the models. One gateway that speaks the OpenAI protocol, but routes to multiple backends behind the scenes.

This is not a product review. It is a cost engineering story. And it changed how I think about building with AI models.


The problem: OpenAI pricing at scale

My app used OpenAI for three distinct capabilities.

Chat completions for general reasoning and code generation. This was the biggest cost driver. At OpenAI's official pricing, GPT-5.4 costs $2.50 per million input tokens and $15.00 per million output tokens.

Embeddings for semantic search across a knowledge base. Smaller cost, but it adds up.

Image generation for blog post thumbnails. At OpenAI's official pricing, GPT-image-2 costs $8.00 per million input tokens, $2.00 per million cached input tokens, and $30.00 per million output tokens.

Total: around $200 per month in API usage, including subscriptions.

The obvious fix was to switch to cheaper models for at least some of these workloads. DeepSeek offered similar quality at a fraction of the price. GLM, Qwen, and Kimi had strengths for specific tasks. But each switch meant a new integration, a new authentication flow, a new set of quirks.

Three integrations. Three auth flows. Three potential failure points.

I procrastinated for months.


The alternative: a unified gateway

A gateway sits between your application code and multiple model providers. Your app calls it the same way it calls OpenAI. Same base URL pattern, same Bearer token authentication, same request and response format.

Behind the scenes, the gateway routes your request to whichever model you specify in the model parameter.

The key insight: you specify the model name, not the provider. The gateway handles the routing.

Here is what the setup looks like with the standard OpenAI Python SDK:

import openai

client = openai.OpenAI(
    api_key="your-gateway-key",
    base_url="https://chinallmapi.com/v1"
)

# Same SDK, different backends:
client.chat.completions.create(model="gpt-5.4", messages=[...])
client.chat.completions.create(model="deepseek-v4-flash", messages=[...])
client.chat.completions.create(model="glm-4.7", messages=[...])
client.chat.completions.create(model="kimi-k2.5", messages=[...])

Same SDK. Same method signatures. Different backend. Zero code changes beyond the base URL and the model name.

This is what ChinaLLM does. It is an OpenAI-compatible gateway that routes to both OpenAI models and China-native providers.
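
Before committing, you can sanity-check what a gateway actually exposes. The standard SDK's model listing works against any endpoint that implements /v1/models. A quick sketch, assuming ChinaLLM proxies that route (the key is a placeholder):

import openai

client = openai.OpenAI(
    api_key="your-gateway-key",
    base_url="https://chinallmapi.com/v1"
)

# /v1/models is part of the OpenAI-compatible surface;
# listing it shows every backend model the gateway can route to.
for model in client.models.list():
    print(model.id)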


The cost comparison with real numbers

I pulled the actual pricing from both OpenAI's official pricing page (openai.com/api/pricing/) and ChinaLLM's public pricing page (chinallmapi.com/pricing). Here is what I found.

GPT-5.4 per 1 million tokens:

Rate           OpenAI official   ChinaLLM   Savings
Input          $2.50             $0.325     87%
Output         $15.00            $1.95      87%
Cached input   $0.25             $0.033     87%

GPT-5.5 per 1 million tokens:

Rate           OpenAI official   ChinaLLM   Savings
Input          $5.00             $0.65      87%
Output         $30.00            $3.90      87%
Cached input   $0.50             $0.065     87%

The 1.3x OpenAI group multiplier on ChinaLLM is already reflected in these prices. Even with the markup, you are paying roughly 13% of OpenAI's official rate for the exact same model.

China-native models available through the same gateway:

Model               Input (per 1M)   Output (per 1M)   Group
deepseek-v4-flash   $0.147           $0.294            DeepSeek 1.05x
deepseek-v4-pro     $0.924           $1.848            DeepSeek 1.05x
glm-4.7             $0.660           $2.585            CodingPlan 1.1x
glm-5               $0.990           $3.553            CodingPlan 1.1x
GLM-5.1             $1.197           $4.200            ZAI 1x
kimi-k2.5           $0.660           $3.410            CodingPlan 1.1x
MiniMax-M2.5        $0.352           $1.375            CodingPlan 1.1x
qwen3.5-plus        $1.320           $3.850            CodingPlan 1.1x

For my use case -- coding assistance and general reasoning -- I tested deepseek-v4-flash, deepseek-v4-pro, and glm-4.7:

  • deepseek-v4-flash ($0.147 input / $0.294 output) handled about 80% of my prompts acceptably. Code generation, simple Q&A, drafting emails and summaries all worked fine.
  • deepseek-v4-pro ($0.924 input / $1.848 output) handled about 95% at near-GPT quality. Technical explanations, debugging assistance, and documentation generation all worked well.
  • I only needed gpt-5.4 for complex multi-step reasoning and creative writing where nuance really matters.

My switching strategy: incremental migration

I did not switch everything at once. I migrated in phases, testing quality at each step.
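
Before each phase, I ran the same prompts through the incumbent model and the candidates and compared the outputs by hand. A minimal sketch of that side-by-side check (the test prompt is just an example, not a benchmark):

import openai

client = openai.OpenAI(
    api_key="your-gateway-key",
    base_url="https://chinallmapi.com/v1"
)

# One prompt, several candidate models, outputs printed for manual review
CANDIDATES = ["gpt-5.4", "deepseek-v4-flash", "deepseek-v4-pro"]

def compare(prompt):
    for model in CANDIDATES:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {model} ---")
        print(response.choices[0].message.content)

compare("Write a Python function that deduplicates a list while preserving order.")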

Phase 1: High-volume, low-risk calls to deepseek-v4-flash

Changed my chat completions to use deepseek-v4-flash for:

  • Code generation (syntax is deterministic, quality is fine)
  • Simple Q&A (factual questions that don't need nuanced reasoning)
  • Drafting emails and summaries (good enough for a first pass)

Saved: about $120 per month on these workloads

Phase 2: Medium-risk calls to deepseek-v4-pro

Used deepseek-v4-pro for:

  • Technical explanations (needed more depth than flash provided)
  • Debugging assistance (needed to follow logic chains)
  • Documentation generation (needed structure and completeness)

Saved: about $40 per month while maintaining near-GPT quality.

Phase 3: Keep premium for edge cases with gpt-5.4 through the gateway

Kept GPT-5.4 for complex reasoning chains and creative writing, but routed it through the gateway instead of calling OpenAI directly. At $0.325 input / $1.95 output through the gateway versus $2.50 input / $15.00 output from OpenAI directly, I saved 87% even on the same model.

Volume dropped to less than 10% of total usage, but the per-call savings were massive.
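
To make the per-call math concrete, here is the back-of-the-envelope calculation (the token volumes are made up for illustration; the rates come from the tables above):

# Illustrative month: 10M input tokens and 2M output tokens on gpt-5.4
input_m, output_m = 10, 2  # millions of tokens

direct  = input_m * 2.50  + output_m * 15.00  # OpenAI direct, $ per 1M tokens
gateway = input_m * 0.325 + output_m * 1.95   # same model through the gateway

print(f"direct:  ${direct:.2f}")               # direct:  $55.00
print(f"gateway: ${gateway:.2f}")              # gateway: $7.15
print(f"savings: {1 - gateway / direct:.0%}")  # savings: 87%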

Phase 4: Embeddings and images

Migrated embeddings to DeepSeek through the gateway. Kept images on gpt-image-2 ($0.039 per image through the gateway).
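
Both calls stayed on the standard SDK surface. A sketch, assuming the gateway proxies the usual /v1/embeddings and /v1/images/generations endpoints (the embedding model ID below is a placeholder; check the gateway's model list for the real name):

import openai

client = openai.OpenAI(
    api_key="your-gateway-key",
    base_url="https://chinallmapi.com/v1"
)

# Embeddings routed to DeepSeek (model ID is hypothetical)
vectors = client.embeddings.create(
    model="deepseek-embedding",  # placeholder name
    input=["semantic search query goes here"],
)
print(len(vectors.data[0].embedding))  # dimensionality of the vector

# Images stayed on gpt-image-2, also via the gateway
image = client.images.generate(
    model="gpt-image-2",
    prompt="A minimalist thumbnail for a post about API cost optimization",
)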


The total savings

Metric                    Before                            After                        Difference
Monthly API cost          ~$200                             ~$50                         ~$150 saved
Cost relative to before   100%                              25%                          75% reduction
GPT-5.4 usage             100% of calls via OpenAI direct   <10% of calls via gateway    87% saved per call

The key insight: I did not rewrite any integration code. I changed model strings in configuration files and let the gateway handle the routing.


The code change was minimal

Before:

# config.py
MODEL = "gpt-5.4"

# app.py
response = client.chat.completions.create(model=MODEL, messages=prompts)

After:

# config.py
MODEL_SIMPLE = "deepseek-v4-flash"    # coding, simple tasks
MODEL_ADVANCED = "deepseek-v4-pro"    # technical explanations
MODEL_PREMIUM = "gpt-5.4"             # complex reasoning

# app.py
def get_response(prompt_type, messages):
    # Route each prompt type to the cheapest model that handles it well;
    # anything unclassified falls back to the premium tier.
    model = {
        "code": MODEL_SIMPLE,
        "explain": MODEL_ADVANCED,
        "complex": MODEL_PREMIUM,
    }.get(prompt_type, MODEL_PREMIUM)
    return client.chat.completions.create(model=model, messages=messages)

That is the entire change. Same client. Same method. Same response handling.
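
Call sites pass a prompt type instead of a model name, so the routing policy lives in one place:

response = get_response(
    "code",
    [{"role": "user", "content": "Refactor this loop into a list comprehension."}],
)
print(response.choices[0].message.content)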


When this approach makes sense

This is not for everyone. You need:

Enough volume to care about cost. If you are spending less than $50 per month on API usage, the absolute savings may not justify the migration effort.

Flexibility in quality requirements. If every single call needs GPT-5.5 level quality, you are locked into premium pricing. The savings come from routing different workloads to different quality tiers.

Multiple model use cases. If you only use chat completions for one type of task, a simpler direct integration might be cleaner. The gateway approach shines when you have multiple capabilities -- chat, embeddings, images -- and want to optimize each independently.

Trust in the gateway. You are adding a middle layer that handles your keys and traffic. ChinaLLM has public documentation and transparent pricing, which helped with my evaluation.


The trade-offs

What I gained:

  • 75% cost reduction on my API bill
  • Zero integration code changes -- the SDK stayed the same
  • The ability to test new models instantly by just changing the model string
  • 87% savings on the same OpenAI models when routed through the gateway

What I accepted:

  • A gateway layer between me and the providers
  • Slightly higher latency from the routing overhead
  • Different quality profiles for different models that required testing and tuning

The net result was clearly positive.


Final takeaway

If your API costs are noticeable:

  • Do not integrate each provider separately. The switching cost is too high.
  • Use a unified gateway. One SDK, multiple backends, model-level routing.
  • Migrate incrementally. Start with high-volume, low-risk calls. Test the quality. Expand gradually.

The math: my ~$200/month bill became ~$50/month. The code change was updating configuration values.

For complete code examples in Python, Node.js, and curl, see the GitHub repo.

All pricing data in this article was sourced from OpenAI's official pricing page and ChinaLLM's public pricing page, accessed May 2026.


This is a cost engineering story, not a product endorsement. The approach -- gateway-based model routing -- is what matters. ChinaLLM is one implementation of that pattern, publicly documented with transparent pricing.
