October 11, 2025

ChatGPT-5 Feels Dumber? Here’s How to Make It Brilliant

ChatGPT-5 rolled out in the middle of summer.

I was in a pool with my kids when I noticed my usual model buddies—o3, 4o, and o4-mini—had vanished from my phone.

I tried GPT-5. It was… disappointing. The rollout felt chaotic, “thinking” wasn’t obviously available, and the answers were oddly bland. I posted a rant on social, then moved my personal day-to-day to Gemini—which, to its credit, has personality and happily argues with me. Loved it.

Since then, ChatGPT-5 has gotten much better—but more importantly, we’ve gotten better at using it. I use different tools for different jobs: Gemini for sparring and contrarian ideas; ChatGPT when I need deep search, concise reasoning, and didactic, source-aware outputs.

When my team started wiring GPT-5 into our AI systems, we had to learn to wield it. The “drop” in performance we felt? Mostly how we were communicating with the model—not the model itself.

I'll share with you everything I've learned about using GPT-5 for both personal productivity and production-grade AI systems.

GPT-5 is actually stronger. But two under-the-hood shifts broke our old prompting habits.

The fix: a 5-part system, stacked from easiest → hardest.

Let’s get into it.

What changed (and why your old prompts underperform)

Update 1: Model consolidation + an invisible router

  • Fewer visible choices for you (e.g., GPT-5, GPT-5 Thinking Mini, GPT-5 Thinking).
  • Behind the scenes, a router decides which brain handles your request.
  • If your prompt is vague, you might get the fast, shallow brain instead of the slow, brilliant one.
Key takeaway: Don’t leave routing to chance. A tiny nudge can steer you to deeper reasoning.

Update 2: GPT-5 follows instructions… a little too well

  • Trained for agent-like tasks, it obeys with surgical precision.
  • That’s great for “Insert a row in Sheet ABC, cell D12.”
  • It’s terrible for “Write something great about marketing.”
Key takeaway: GPT-5 guesses less than older models. Imprecise in → imprecise out.

The 5 Fixes (stackable, from easiest → hardest)

1) Router Nudge Phrases (effort: low)

Add a short directive that reliably triggers deeper reasoning:

  • Think hard about this.
  • Think deeply about this.
  • Think carefully.

These beat mushy signals like "This is important." GPT-5 is literal: "important" is subjective; "think" is an action.

Real-world example (personal finance):

Weak prompt:

“Pros/cons of a low-cost index fund vs. a money market account?”

Strong prompt:

“Pros/cons of a low-cost index fund vs. a money market account? Think hard about this. Include second-order effects (tax drag, reinvestment risk, sequence-of-returns). End with a decision tree for 1–3 vs. 3–7 year horizons.”

What you’ll see: a visible “thinking” step (or a noticeably richer chain of reasoning) and insights you didn’t ask for but needed. That’s the thinking mode kicking in.

Pro tip: Even with a “Thinking” model, nudges still help. Free users see the biggest jump.
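If you pipe prompts through scripts or a text expander, you can bake the nudge in so you never forget it. A minimal sketch — the helper name and phrase list are my own, not anything official:

```python
# Router-nudge helper: append a deliberate "think" directive to any prompt.
NUDGES = {
    "hard": "Think hard about this.",
    "deep": "Think deeply about this.",
    "careful": "Think carefully.",
}

def nudge(prompt: str, level: str = "hard") -> str:
    """Return the prompt with a router-nudge phrase appended on its own line."""
    return f"{prompt.rstrip()}\n\n{NUDGES[level]}"

print(nudge("Pros/cons of a low-cost index fund vs. a money market account?"))
```

Same prompt, one extra line — and the router has a concrete reason to hand your request to the deeper model.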

2) Verbosity Control (effort: low)

There’s a separate dial for how much it says. Turn it with intent.

Keep these power phrases in a text expander:

  • Low verbosity:

Give me the bottom line in ≤100 words. Use Markdown for clarity.

Use for exec Slacks, tl;drs.

  • Medium verbosity:

Aim for a concise 3–5 paragraph explanation with key takeaways.

Use for team updates, cross-functional notes.

  • High verbosity:

Provide a comprehensive, detailed breakdown (600–800 words).

Use for briefs, research summaries, SOPs.

Example (Slack to CMO):

Draft a Slack update on Q3 performance.
Give me the bottom line in ≤100 words.
Use Markdown for clarity.
Include: revenue Δ, CTR ↓ rationale, conversion ↑ rationale, top 3 actions.
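If you'd rather not retype these, keep the three dials in a tiny helper instead of a text expander. A sketch, with my own (unofficial) names:

```python
# Verbosity presets as reusable prompt suffixes.
VERBOSITY = {
    "low": "Give me the bottom line in <=100 words. Use Markdown for clarity.",
    "medium": "Aim for a concise 3-5 paragraph explanation with key takeaways.",
    "high": "Provide a comprehensive, detailed breakdown (600-800 words).",
}

def with_verbosity(prompt: str, level: str) -> str:
    """Append the chosen verbosity directive to the prompt."""
    return f"{prompt.rstrip()}\n{VERBOSITY[level]}"

print(with_verbosity("Draft a Slack update on Q3 performance.", "low"))
```

One argument decides whether you get an exec-ready Slack blurb or a full brief.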

3) OpenAI’s Prompt Optimizer (or the free “meta-prompt” hack) (effort: medium)

OpenAI ships a prompt optimizer that rewrites your prompt to be clearer, structured, and error-aware. If you don’t want a dev account, use this meta-prompt directly inside chat:

Meta-prompt (copy/paste):

You are an expert prompt engineer for the GPT-5 Thinking model.
**Rewrite my prompt to improve clarity, structure, and completeness.**
Add explicit sections, eliminate vagueness, and include error handling (ask for missing inputs).
Then output the improved prompt and a one-paragraph rationale.

**Here’s my original prompt:** [paste yours]

What it fixes by default:

  1. Structure (turns walls of text into labeled parts)
  2. Specificity (replaces “make it good” with criteria)
  3. Error handling (asks for missing constraints)

Here’s an example:

My original prompt:

Create an infographic svg explaining the attention mechanism in transformers

The optimized prompt:

Generate a high-quality **SVG infographic** that visually and conceptually explains the **attention mechanism in transformer models** (e.g., those used in GPT and BERT).

### Instructions

1. **Content Requirements**
    - Clearly illustrate **how attention works** in a transformer, including:
        - The concept of **queries, keys, and values**.
        - How **attention weights** are calculated and applied.
        - The difference between **self-attention** and **multi-head attention**.
        - The **flow of information** between layers or tokens.
    - Include concise **text labels and annotations** for all key components.
    - Use **arrows, nodes, and color-coded highlights** to enhance comprehension.
2. **Design Guidelines**
    - Output format: **SVG** (scalable vector graphic).
    - Visual style: clean, minimalistic, educational (suitable for presentations or technical blogs).
    - Ensure all text elements are **legible** and properly aligned.
    - Maintain a logical **top-to-bottom or left-to-right layout**.
3. **Clarity & Readability**
    - Avoid unnecessary visual clutter.
    - Keep terminology consistent with standard NLP literature.
    - Include a small **title** and **legend (if applicable)**.
4. **Error Handling & Missing Input Checks**
    - If the user has **not specified**:
        - The **target audience** (e.g., beginners, ML practitioners, researchers),
        - The **complexity level** (e.g., high-level overview vs. mathematical detail),
        - The **color scheme or branding style**,
        - The **intended use case** (e.g., educational slide, internal training, article illustration),
    - then **ask the user to clarify these before generating** the SVG.
5. **Output Format**
    - Return a **single SVG code block** ready for rendering.
    - If multiple layouts are possible, briefly **describe 2–3 layout options** and ask which to proceed with.

This is a powerful professional prompt that you can use to generate compelling infographics that match your writing style (or your design patterns).

Note: For this specific use case, I recommend generating the content in HTML first, then converting it to the desired format.
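If you run this hack often, you can wrap any rough prompt in the meta-prompt programmatically. A sketch — the template is the meta-prompt above; the function name is my own:

```python
# Wrap a rough prompt in the meta-prompt, ready to paste into chat.
META_PROMPT = """You are an expert prompt engineer for the GPT-5 Thinking model.
Rewrite my prompt to improve clarity, structure, and completeness.
Add explicit sections, eliminate vagueness, and include error handling (ask for missing inputs).
Then output the improved prompt and a one-paragraph rationale.

Here's my original prompt: {original}"""

def optimize_request(original_prompt: str) -> str:
    """Return the full meta-prompt with the user's rough prompt slotted in."""
    return META_PROMPT.format(original=original_prompt)

print(optimize_request("Create an infographic svg explaining attention"))
```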

4) Build an XML Sandwich (effort: medium)

Think bento box, not soup. Label your inputs so GPT-5 knows what’s what.

This is a SUPERPOWER.

Reusable template

<TASK>
[One clear, testable instruction. Role + objective.]
</TASK>

<CONTEXT>
[Only what’s needed. Keep it lean.]
</CONTEXT>

<INPUTS>
  <PRIMARY>[paste]</PRIMARY>
  <SECONDARY>[paste]</SECONDARY>
</INPUTS>

<OUTPUT_FORMAT>
[Bullets? Table? JSON? Numbered steps? Tell it.]
</OUTPUT_FORMAT>

<TONE>
[Direct, warm, practical, etc.]
</TONE>

Example (PM interview drill)

<TASK>
Act as a hiring manager. Based on my resume and this JD, ask 3 likely interview questions with 2 follow-ups each.
</TASK>

<INPUTS>
  <RESUME>[paste]</RESUME>
  <JOB_DESCRIPTION>[paste]</JOB_DESCRIPTION>
</INPUTS>

<OUTPUT_FORMAT>
Numbered list. After questions, add a 5-bullet coaching checklist and a 3-bullet “common pitfalls.”
</OUTPUT_FORMAT>

<TONE>
Supportive, candid, no fluff.
</TONE>
Pro tips:
- Save 3–5 XML templates (interviews, briefs, market scans, emails).
- Add a permanent <TONE> tag that matches your brand voice.
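You can also generate the sandwich from named fields so every prompt you send is labeled the same way. A minimal sketch (the builder and its section names mirror the template above; the function itself is my own illustration):

```python
def xml_sandwich(task, context=None, inputs=None, output_format=None, tone=None):
    """Assemble a labeled prompt from named sections; empty sections are skipped."""
    parts = [f"<TASK>\n{task}\n</TASK>"]
    if context:
        parts.append(f"<CONTEXT>\n{context}\n</CONTEXT>")
    if inputs:  # inputs: dict of label -> content, e.g. {"resume": "..."}
        tagged = "\n".join(f"  <{k.upper()}>{v}</{k.upper()}>" for k, v in inputs.items())
        parts.append(f"<INPUTS>\n{tagged}\n</INPUTS>")
    if output_format:
        parts.append(f"<OUTPUT_FORMAT>\n{output_format}\n</OUTPUT_FORMAT>")
    if tone:
        parts.append(f"<TONE>\n{tone}\n</TONE>")
    return "\n\n".join(parts)

print(xml_sandwich(
    "Act as a hiring manager. Ask 3 likely interview questions.",
    inputs={"resume": "[paste]", "job_description": "[paste]"},
    tone="Supportive, candid, no fluff.",
))
```

Bento box, programmatically: every input arrives pre-labeled, so GPT-5 never has to guess which blob is the resume and which is the job description.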

5) The Perfection Loop (effort: high)

GPT-5 is fantastic at critiquing itself. Make it define excellence, grade its own work, and iterate privately until it hits that bar.

This technique has worked wonders for me.

Universal Perfection Loop (append to any prompt)

Before generating the final output, draft a brief internal rubric (5–7 criteria) for what ‘excellent’ looks like for this task. Iterate privately (do not show drafts) until your work scores 10/10 against the rubric. Then output the final result. Afterward, show the rubric and a one-paragraph post-mortem on what changed from your first internal draft to the final.

Example 1 — Market analysis

  • Task: “Write a market analysis on the enterprise AI industry.”
  • With loop: You get structured sections (TAM/SAM/SOM, buyer personas, pricing models, regulatory risk, go-to-market plays), not just vibes.

Example 2 — QBR outline

  • Task: “Draft my QBR outline.”
  • With loop: GPT-5 builds a rubric (Outcomes, Learnings, Pipeline, Risks, Asks), iterates, and hands you leadership-ready slides.

When to use: 0→1 tasks (finished docs, production code).

When to skip: Quick answers. It’s overkill.
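Since the loop is a fixed clause, it's worth storing once and appending on demand. A sketch, using my own helper name:

```python
# The Universal Perfection Loop as a reusable suffix.
PERFECTION_LOOP = (
    "Before generating the final output, draft a brief internal rubric "
    "(5-7 criteria) for what 'excellent' looks like for this task. "
    "Iterate privately (do not show drafts) until your work scores 10/10 "
    "against the rubric. Then output the final result. Afterward, show the "
    "rubric and a one-paragraph post-mortem on what changed from your first "
    "internal draft to the final."
)

def with_perfection_loop(prompt: str) -> str:
    """Append the Perfection Loop clause to any 0->1 task prompt."""
    return f"{prompt.rstrip()}\n\n{PERFECTION_LOOP}"

print(with_perfection_loop("Write a market analysis on the enterprise AI industry."))
```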

The “Do-This-Now” Toolbox (Copy/Paste)

Router nudges

  • Think hard about this.
  • Think deeply about this.
  • Think carefully.

Verbosity controls

  • Low: Bottom line in ≤100 words. Use Markdown.
  • Medium: Concise 3–5 paragraphs with key takeaways.
  • High: Comprehensive 600–800 words.

Meta-prompt (optimizer)

  • “Rewrite my prompt for clarity, structure, completeness, with error handling; show the improved prompt + a brief rationale.”

XML Sandwich tags

  • <TASK> <CONTEXT> <INPUTS> <OUTPUT_FORMAT> <TONE>

Perfection Loop

  • “Create a rubric → iterate privately → 10/10 → final + post-mortem.”

How these stack (and why stacking matters)

You don’t have to choose. Stack them:

  1. Nudge the router for deeper reasoning.
  2. Lock in the right verbosity for the audience.
  3. Run an optimizer/meta-prompt to tighten your ask.
  4. Wrap in an XML Sandwich so nothing’s ambiguous.
  5. Add the Perfection Loop when quality really matters.
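The stack above is just concatenation in a fixed order, which is why it automates so well. A sketch that composes steps 1, 2, 4, and 5 — every name here is my own illustration, not an official API:

```python
# Stack the fixes: XML task wrapper -> router nudge -> verbosity -> loop.
def stacked_prompt(
    task: str,
    verbosity_directive: str,
    nudge_phrase: str = "Think hard about this.",
    perfection_loop: bool = False,
) -> str:
    """Compose a fully stacked prompt from its labeled parts."""
    parts = [f"<TASK>\n{task}\n</TASK>", nudge_phrase, verbosity_directive]
    if perfection_loop:
        parts.append(
            "Before generating the final output, draft a brief internal rubric, "
            "iterate privately until your work scores 10/10 against it, then "
            "output the final result plus the rubric and a short post-mortem."
        )
    return "\n\n".join(parts)

print(stacked_prompt(
    "Create a cross-functional project brief for our AI strategy.",
    "Provide a comprehensive, detailed breakdown (600-800 words).",
    perfection_loop=True,
))
```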

Complete Stack Example — Project Brief

<TASK>
Create a cross-functional project brief to overhaul our (currently nonexistent) AI strategy. Audience: Execs, Eng, GTM.
</TASK>

<CONTEXT>
Company: Mid-market SaaS, $40M ARR. Churn +2pts. Competitors shipping AI features. Need clear ROI within 2 quarters.
</CONTEXT>

<OUTPUT_FORMAT>
Sections: Objective, Success Metrics, Risks, Milestones (Gantt-style table), Resourcing, Budget, Comms Plan, Open Questions. 700 words.
</OUTPUT_FORMAT>

<TONE>
Direct, practical, user-centric. No fluff.
</TONE>

Appended directives:

  • Think hard about this. Aim for 600–800 words.
  • Perfection Loop clause (rubric → iterate privately → 10/10 → final + post-mortem).

What you get: A document people can actually execute—second-order effects included.

Swipe File: Mini Scripts for Everyday Work

1) Exec summary (low verbosity)

“Summarize this initiative for the COO. ≤100 words, Markdown bullets. Include: goal, status, risk, next step. Think carefully.

[paste]”

2) Sales email variant test (medium)

“Rewrite this email into 3 variants for a CFO buyer. Concise 3–5 paragraphs each, with one bolded metric, one CTA. Think hard about this.

[paste]”

3) Research synthesis (high)

“Synthesize these notes into a comprehensive 700-word brief with assumptions, unknowns, and next experiments. Think deeply about this.

[paste]

Perfection Loop.”

4) Interview drill (XML)

<TASK>
Ask me 5 behavioral interview questions based on this resume, with STAR coaching per question.
</TASK>
<INPUTS><RESUME>[paste]</RESUME></INPUTS>
<OUTPUT_FORMAT>
Numbered list. After each question: 3 STAR prompts + one “red flag to avoid.”
</OUTPUT_FORMAT>
<TONE>Supportive, candid.</TONE>

Common mistakes (and fast fixes)

  • Mistake: “Just be creative.”

Fix: Define creative: audience, tone, constraints, and examples.

  • Mistake: One giant paragraph.

Fix: XML Sandwich with clear <OUTPUT_FORMAT>.

  • Mistake: Fishing for long answers in Slack.

Fix: Low verbosity directive + bullets.

  • Mistake: Accepting the first draft.

Fix: Perfection Loop for 0→1 tasks.

  • Mistake: Expecting one prompt to do it all.

Fix: Build a workflow (analyze → test small batch → scale).

Tiny Dialogue (a reminder you’re human)

You: “Why do I have to do all this? Shouldn’t AI just, you know, be smart?”

Me: “It is. You’re driving a race car now. You still need a steering wheel.”

GPT-5 (probably): “Please keep hands and feet inside the XML at all times.”

TL;DR — Pin this

  • Router matters. Nudge it: Think hard / deeply / carefully.
  • Control the length. Use low/medium/high verbosity phrases.
  • Optimize first. Meta-prompt to clean your ask.
  • Label everything. XML Sandwich = clarity.
  • Force excellence. Rubric → iterate → 10/10 → final.

Close the tab after you copy the toolbox into your AI OS system and prompt library—and then actually use it today. Your future self will thank you.

Have a great day :)

— Charafeddine (CM)
