Context Isn't Optional
Why AI breaks when it lacks stakes, and how to add tension, constraints, and consequences
Hey Alchemists,
I already showed you how structure controls AI output. How giving AI a framework (role, context, constraints, format) turns vague requests into precise directions.
But here's what I didn't tell you: Structure without context is just a fancy template.
Today we're diving into Prompt Crafting Principle #3: Context Isn't Optional.
This is where prompting stops being mechanical and starts being strategic. Where AI goes from giving you technically correct answers to giving you answers that actually work.
The $5K Lesson
Last month, I watched my friend blow a major opportunity because of bad prompting.
But let me give you the backstory first.
This dude had been hearing everywhere that "ChatGPT can do everything for you." Social media, podcasts, YouTube: everyone saying AI was gonna handle all the hard stuff.
So when his product launch went sideways, he figured he'd just let ChatGPT fix it.
Didn't even think to ask me for help when he knows I live and breathe this AI stuff …
Just went straight to the AI like it was some magic solution machine.
Here's what happened:
Product launch day. Total disaster. 200 pre-orders. Only 12 actual sales. Panic mode.
My friend asks ChatGPT:
"Write a professional email explaining why our product launch didn't go as planned and what we're doing to fix it."
Gets back the most generic, corporate-speak garbage you've ever seen.
All about "unforeseen challenges" and "working diligently to resolve issues."
No second thought. No "maybe I should run this by someone." Just copy, paste, send.
Customers read it. Feel like they're talking to some faceless corporation, not the person they believed in and gave their money to.
Refund requests started flooding in.
And you know what? The email wasn't wrong. It just wasn't right for the audience.
What Actually Happened Next
Two weeks later, my friend finally calls me. Probably after realizing ChatGPT wasn't the magic fix-everything button he thought it was.
"I don't get it," he says. "The email was professional. It covered everything."
Yeah, that was the problem.
So I helped him rewrite the prompt:
"You are writing as the founder of a 6-person startup to 200 customers who pre-ordered a productivity app. Launch day: app crashes, payment system fails, customers can't access what they paid for. These aren't corporate buyers; they're solopreneurs who scraped together $49 because they believed in us. They're not just disappointed, they're embarrassed they trusted a small company. Write an email that takes real ownership, shows we understand what happened to their money and their trust, and proves we're fixing this like our business depends on it, because it does. No corporate speak. No 'we apologize for any inconvenience.' Sound like a human who screwed up, not a company managing PR."
The output? Honest. Direct. Human.
Three customers forwarded it to friends saying "This is how you handle a crisis."
Same structure. Completely different result.
The difference wasn't format. It was stakes.
Why Context Changes Everything
AI doesn't just need to know what to write. It needs to know why it matters.
When you tell AI "write a sales email," it optimizes for generic best practices.
When you tell AI "write a sales email for a consultant who's been ghosted by this prospect for two weeks, this is their last outreach before moving on, and they need to salvage a relationship that could be worth $50K annually," it optimizes for your actual situation.
Same basic request. Completely different strategy.
The Three Layers of Context That Matter
Layer 1: Situational Stakes (What's Really Happening)
This isn't just background information. This is the pressure, the timeline, the consequences.
Generic: "Write a project update for my client."
With Stakes: "Write a project update for a client who's already questioning our timeline because their boss is breathing down their neck about launch date. Project is 10 days behind, but we just solved the main technical issue. They need reassurance without false promises, and they need to be able to defend our timeline to their boss."
Layer 2: Relationship Reality (Who You're Actually Talking To)
Not demographics. Not personas. The actual human psychology of your specific situation.
Generic: "Write content for small business owners."
With Relationship Context: "Write for restaurant owners who've been burned by marketing agencies before, are stressed about post-pandemic recovery, and immediately skeptical when anyone promises quick results. They don't trust outsiders, but they're desperate for something that actually works."
Layer 3: Constraint Reality (What You Can't Do)
These are your real limitations. Not just "don't be salesy," but your actual boundaries.
Generic: "Write a persuasive email."
With Constraints: "Write a persuasive email without mentioning price (they already know it), without using urgency tactics (they hate pressure), and without comparing us to competitors (it sounds defensive). Focus on long-term partnership, not quick wins."
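If it helps to see the three layers as moving parts, here's a minimal sketch of a prompt builder that stacks them onto a base task. The function name, parameter names, and example text are all illustrative assumptions, not part of any library or official method:

```python
# Illustrative sketch: layering situational stakes, relationship reality,
# and constraints onto a bare task before sending it to an AI.
# All names and example strings here are hypothetical.

def build_prompt(task, stakes, relationship, constraints):
    """Combine a base task with the three context layers into one prompt."""
    sections = [
        f"Task: {task}",
        f"Situational stakes: {stakes}",
        f"Who you're talking to: {relationship}",
        f"Constraints: {constraints}",
    ]
    # Blank lines between sections keep each layer visually distinct.
    return "\n\n".join(sections)

prompt = build_prompt(
    task="Write a follow-up email to a prospect who went quiet.",
    stakes="Last outreach before we move on; the relationship could be "
           "worth $50K annually.",
    relationship="They've been ghosting us for two weeks and hate "
                 "pressure tactics.",
    constraints="No price talk, no urgency tactics, no competitor "
                "comparisons.",
)
print(prompt)
```

The point isn't the code; it's that every layer is a required argument. If you can't fill one in, you haven't finished thinking about the situation yet.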
Context in Action: The Before and After
Let me show you how context transforms the same basic request.
Without Context:
"Write a LinkedIn post about delegation for entrepreneurs."
With Context:
"You're writing for entrepreneurs who built their businesses by doing everything themselves and now run teams of 8-15 people but still can't let go. They work 65-hour weeks while their employees work 40, then wonder why the business isn't scaling. They know they should delegate but secretly believe no one will do it as well as they do. They don't need motivation to delegate; they need permission to accept 'good enough' and a framework that doesn't feel like losing control. Write something that makes them feel understood, not judged."
The first prompt gets you generic delegation advice.
The second gets you content that speaks to someone's actual internal struggle.
The Context Stack (My Real Process)
Here's how I layer context into every prompt:
Layer 1: The Immediate Reality
What just happened?
What pressure are they under right now?
What's the timeline?
Layer 2: The Relationship History
How do they feel about you/your brand?
What's worked/failed with them before?
What are they secretly worried about?
Layer 3: The Real Constraints
What can't you say?
What would backfire?
What resources do you actually have?
Layer 4: The Success Criteria
What would make this work?
How will you know if it's effective?
What's the real goal beyond the obvious one?
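The four layers above work well as a pre-flight checklist. Here's a hypothetical sketch of the stack as data, with a helper that tells you which layers your draft prompt still hasn't answered. The structure and names are my own illustration of the process, not an existing tool:

```python
# Hypothetical sketch of the four-layer "Context Stack" as a checklist.
# Layer names and questions mirror the process above; nothing here is
# from a real library.

CONTEXT_STACK = [
    ("Immediate reality", ["What just happened?",
                           "What pressure are they under right now?",
                           "What's the timeline?"]),
    ("Relationship history", ["How do they feel about you or your brand?",
                              "What's worked or failed with them before?",
                              "What are they secretly worried about?"]),
    ("Real constraints", ["What can't you say?",
                          "What would backfire?",
                          "What resources do you actually have?"]),
    ("Success criteria", ["What would make this work?",
                          "What's the real goal beyond the obvious one?"]),
]

def missing_layers(answers):
    """Return the names of layers the prompt draft hasn't answered yet."""
    return [name for name, _ in CONTEXT_STACK if not answers.get(name)]

draft = {"Immediate reality": "Launch slipped 10 days; client's boss "
                              "is pushing on the date."}
print(missing_layers(draft))
```

Running this on the draft above would flag the three unanswered layers, which is exactly the nudge you want before hitting send on a half-contextualized prompt.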
Context Kills Generic Output
Here's what most people don't get: AI's default mode is average.
When you ask for "a good email" without context, AI averages across millions of emails and gives you something that could work for anyone.
Which means it probably won't work for your specific situation.
But when you add context (real stakes, actual relationships, specific constraints), AI stops optimizing for generic "good" and starts optimizing for your version of good.
The Context Reality Check
Want to know if you're adding real context? Ask yourself:
Would this prompt work for someone else's generic situation? If yes, add more specific context.
Does it include what could go wrong? If not, you're missing the stakes.
Would AI need to ask you clarifying questions? If yes, answer them in the prompt.
Does it sound like a real human situation? If not, add more relationship reality.
Your Context Homework
Take this generic prompt:
"Write a follow-up email to a potential client who hasn't responded to my proposal."
Now add three layers of context:
What's the real situation? (Timeline, stakes, what's happened)
What's the relationship reality? (Their mindset, your history, their concerns)
What are your constraints? (What you can't do, won't do, shouldn't do)
Rewrite it with all three layers. See how different the output becomes.
Don't just read this. Actually do it.
Because here's the truth: Structure tells AI what to build. Context tells it why it matters.
And when AI understands why something matters? That's when you get output that doesn't just follow your format; it serves your purpose.
Next Post: Guide Thinking, Not Just Asking - how to get AI to work through problems step by step instead of just giving you first-draft answers.
Drop a comment: What's one situation where AI keeps giving you "technically correct but strategically useless" output? Let's add the context that would fix it.