Article · help.openai.com · 4 min read

Understanding Codex Credit Rates Across Different Plans

AI Summary

Codex credit rates vary by plan (Plus, Pro, Business, and Enterprise/Edu), with pricing based on API token usage. This token-based model maps credits to actual usage more precisely than the older per-message estimates. Credit consumption depends on the mix of input, cached input, and output tokens, with specific rates for each token type across models such as GPT-5.4 and GPT-5.3-Codex. For instance, GPT-5.4 consumes 62.50 credits per million input tokens, while GPT-5.3-Codex consumes 43.75.

Fast mode doubles credit consumption, and code reviews typically use GPT-5.3-Codex. Typical Codex cost is roughly $100-$200 per developer per month, though this varies with model choice, the number of concurrent instances, and fast-mode usage. Users can track their token usage in the Codex settings.
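The rate structure above can be sketched as a small calculation: credits scale with each token type's rate per million tokens, and fast mode doubles the total. A minimal sketch in Python — the input rates come from this article, but the cached-input and output rates (and the function/field names) are hypothetical placeholders, since the article does not list them:

```python
# Credits per million tokens. Input rates are from the article;
# cached-input and output rates are HYPOTHETICAL illustrations only.
RATES = {
    "gpt-5.4":       {"input": 62.50, "cached_input": 6.25,  "output": 500.0},
    "gpt-5.3-codex": {"input": 43.75, "cached_input": 4.375, "output": 350.0},
}

def credits_used(model: str, input_toks: int, cached_toks: int,
                 output_toks: int, fast_mode: bool = False) -> float:
    """Estimate credits for one task from its token mix."""
    r = RATES[model]
    total = (input_toks * r["input"]
             + cached_toks * r["cached_input"]
             + output_toks * r["output"]) / 1_000_000
    # Fast mode doubles credit consumption across the board.
    return total * 2 if fast_mode else total

# Example: 1M input tokens on GPT-5.4 costs 62.5 credits,
# or 125.0 credits with fast mode enabled.
print(credits_used("gpt-5.4", 1_000_000, 0, 0))                  # 62.5
print(credits_used("gpt-5.4", 1_000_000, 0, 0, fast_mode=True))  # 125.0
```

This also illustrates why output-heavy tasks consume more credits: output tokens are typically billed at a higher per-million rate than input tokens.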

The legacy rate card, still used by existing Plus/Pro and Enterprise/Edu customers, calculates credits per message or pull request, offering rough averages for planning. Enterprise admins will be notified before migration to the new token-based rates.

The transition to token-based pricing aims to align Codex pricing with token-based metering, providing clearer visibility into credit usage. This change affects pricing based on workload mix, with output-heavy tasks and fast mode consuming more credits.

Key Concepts

Token-Based Pricing

A pricing model where costs are determined based on the number of tokens processed, including input, cached input, and output tokens.

Credit Consumption

The amount of credits used, which can vary based on the type and number of tokens processed during tasks.

Category

Technology

Summarized by Mente