How to Get Started with GPT-5.4 in ChatGPT, API, and Codex
TL;DR
- Switch to GPT-5.4 Thinking in ChatGPT (Plus/Team/Pro) for native computer-use and 1M token context
- Update your API calls to gpt-5.4 or gpt-5.4-pro for improved reasoning, coding, and agent workflows
- Leverage new tool search, browser capabilities, and token-efficient performance for professional tasks like spreadsheets, documents, and interactive debugging
Prerequisites
- An OpenAI account with ChatGPT Plus, Team, or Pro subscription (for the ChatGPT interface)
- API access with sufficient credits (GPT-5.4 and GPT-5.4 Pro are available immediately in the API)
- Basic familiarity with the OpenAI API or ChatGPT interface
- For advanced agent use: access to tools, browser environments, or coding sandboxes (Playwright recommended for frontend workflows)
Step-by-step instructions
1. Access GPT-5.4 Thinking in ChatGPT
- Log in to chatgpt.com
- Click the model selector at the top of the chat window
- Choose GPT-5.4 Thinking (it is rolling out now to Plus, Team, and Pro users and replaces GPT-5.2 Thinking)
- Start a new conversation
Tip: The legacy GPT-5.2 Thinking model will remain available under "Legacy Models" until June 5, 2026.
2. Use Native Computer-Use Capabilities
GPT-5.4 is OpenAI's first general-purpose model with built-in computer-use. Try these practical prompts right away:
"Open the spreadsheet 'Q1_budget.xlsx', analyze the revenue trends for the last 6 months, create a new column with month-over-month growth, and generate a bar chart."
"Go to our company Notion page, read the latest product requirements document, then open Figma and update the login screen according to the new specs."
The model can now interact with spreadsheets, presentations, documents, and software environments natively.
3. Update Your API Code to Use GPT-5.4
Replace your existing model name with the new identifiers:
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
model="gpt-5.4", # or "gpt-5.4-pro" for higher performance
messages=[
{"role": "system", "content": "You are a helpful professional assistant."},
{"role": "user", "content": "Analyze this 400-page research report and summarize key findings."}
],
max_tokens=4096,
temperature=0.7
)
print(response.choices[0].message.content)
4. Take Advantage of 1 Million Token Context
For long-running agent tasks or large codebases:
response = client.chat.completions.create(
model="gpt-5.4",
messages=[...],
max_tokens=8192,
# The model now supports up to 1M tokens of context
)
This is especially useful for agents that need to maintain state across long documents, entire repositories, or multi-step workflows.
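Before sending a large document or repository into the context window, it helps to sanity-check the token budget. The sketch below uses a rough ~4-characters-per-token heuristic (for exact counts, use a tokenizer library such as tiktoken); the 1M figure is the context size described above, and the helper names are illustrative.

```python
# Rough token budgeting before sending a large document to the model.
# Assumes ~4 characters per token as a heuristic; for exact counts use a
# tokenizer such as tiktoken. Leave headroom for the model's response.

CONTEXT_LIMIT = 1_000_000     # context window described above
RESPONSE_HEADROOM = 8_192     # reserve room for max_tokens in the reply

def estimate_tokens(text: str) -> int:
    """Crude estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def fits_in_context(document: str, prompt_overhead: int = 500) -> bool:
    """Check whether a document plus prompt scaffolding fits the window."""
    budget = CONTEXT_LIMIT - RESPONSE_HEADROOM - prompt_overhead
    return estimate_tokens(document) <= budget

# Example: a 2-million-character report (~500k tokens) fits comfortably.
report = "x" * 2_000_000
print(fits_in_context(report))  # True
```

If the document does not fit, split it into chunks and summarize incrementally rather than truncating silently.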
5. Improve Coding Workflows with GPT-5.4 in Codex
GPT-5.4 matches or beats GPT-5.3-Codex on SWE-Bench Pro at lower latency. Try these practical coding tasks:
- Frontend generation with improved accuracy
- Playwright-based interactive debugging:
"Write a Playwright test that logs into our web app, navigates to the dashboard, and verifies that the new pricing table renders correctly on mobile."
- Interactive debugging sessions where the model can control the browser to reproduce and fix bugs.
6. Use the New Tool Search Feature
When working with large tool ecosystems, simply tell the model what tools you have available:
"Search for the best available tools to extract data from PDFs and then import into Google Sheets. Then execute the workflow."
GPT-5.4 will now intelligently search and select from your tool collection.
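The tool search itself happens inside the model, but for large catalogs you can also pre-filter which tool definitions you send with the request. The sketch below shows one simple client-side approach, scoring tool descriptions by word overlap with the user request; all tool names here are hypothetical.

```python
# Illustrative client-side pre-filter for a large tool catalog: score each
# tool's description against the user request and pass only the top matches
# to the API. All tool names and descriptions are hypothetical.

TOOLS = {
    "pdf_extract": "Extract tables and text from PDF files",
    "sheets_import": "Import rows into a Google Sheets spreadsheet",
    "image_resize": "Resize and crop image files",
    "slack_notify": "Post a message to a Slack channel",
}

def search_tools(request: str, top_k: int = 2) -> list:
    """Rank tools by how many request words appear in their description."""
    words = set(request.lower().split())
    scored = sorted(
        TOOLS,
        key=lambda name: len(words & set(TOOLS[name].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

print(search_tools("extract data from pdfs and import into google sheets"))
# ['sheets_import', 'pdf_extract']
```

A production version would use embeddings rather than word overlap, but the shape is the same: narrow the catalog first, then let the model choose among the shortlisted tools.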
Tips and best practices
- Token efficiency: GPT-5.4 is more token-efficient than GPT-5.2. You can often achieve the same results with shorter prompts or fewer follow-up messages.
- Professional documents: Use it for complex spreadsheets, slide decks, and long-form reports. Provide the model with direct file access when possible.
- Agent workflows: Combine the 1M context window with browser and computer-use capabilities for autonomous multi-hour tasks.
- Temperature settings: Use lower temperature (0.2β0.5) for coding and analysis, higher (0.7β1.0) for creative or brainstorming tasks.
- Fallback strategy: Keep GPT-5.2 Thinking in your legacy model list for the next three months while you migrate critical workflows.
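The fallback strategy above can be sketched as a thin wrapper that retries a failed call against the legacy model. The model identifiers and the injected create_fn are assumptions for illustration; in practice create_fn would be client.chat.completions.create.

```python
# Sketch of the fallback strategy: try the primary model first and retry
# once with the legacy identifier if the call fails. Model names are
# assumptions; in practice create_fn is client.chat.completions.create.

def call_with_fallback(create_fn, messages,
                       primary="gpt-5.4", fallback="gpt-5.2-thinking"):
    """Call the primary model; on any error, retry once with the fallback."""
    try:
        return create_fn(model=primary, messages=messages)
    except Exception:
        return create_fn(model=fallback, messages=messages)

# Usage with a stub that simulates the primary model being unavailable:
def flaky_create(model, messages):
    if model == "gpt-5.4":
        raise RuntimeError("model not yet rolled out")
    return f"answered by {model}"

print(call_with_fallback(flaky_create, []))  # answered by gpt-5.2-thinking
```

Log which model actually served each request so you know when the fallback path is being exercised.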
Common issues
- Model not appearing in ChatGPT: The rollout is gradual. Refresh the page or log out and back in. If still missing, confirm you have an active Plus/Team/Pro subscription.
- Rate limits: GPT-5.4 Pro has higher performance but stricter rate limits. Start with the standard gpt-5.4 model for most tasks.
- Long context costs: Although the model supports 1M tokens, using the full context increases cost. Test with smaller contexts first.
- Computer-use permissions: When using native computer control in ChatGPT, you may need to grant additional screen-sharing or automation permissions in your browser/OS.
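When you do hit rate limits, retrying with exponential backoff is the standard pattern. The delay schedule below is a common convention, not an OpenAI-documented policy; wrap your API call in a loop that sleeps for each delay before retrying.

```python
# Minimal exponential-backoff sketch for handling rate-limit errors.
# The schedule (base * 2^attempt, capped) is a common convention,
# not an OpenAI-documented policy.

def backoff_delays(retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Yield capped exponential delays: base * 2^attempt seconds."""
    for attempt in range(retries):
        yield min(cap, base * (2 ** attempt))

print(list(backoff_delays()))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

Adding random jitter to each delay helps avoid synchronized retries when many clients back off at once.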
Next steps
- Explore building persistent agents that can work across multiple days using the 1M token context
- Integrate GPT-5.4 into your internal tools for document processing, code review, and data analysis
- Test the improved Playwright debugging workflows on your existing test suites
- Compare performance of gpt-5.4 vs gpt-5.4-pro on your most common professional tasks
- Begin migrating production systems from GPT-5.2/GPT-5.3-Codex to GPT-5.4