NYCE X Frameworks

CT10 Applied: AI

Most people use AI the way most people use a gym membership — they show up, go through the motions, and leave with a fraction of the value available to them. The difference between the top 1% and everyone else isn't the tool. It's the clarity of the input.


The Concept

AI is the first tool in history where the quality of the output is almost entirely determined by the quality of the input. A chainsaw doesn't care about your intent. A spreadsheet doesn't reward clarity. But AI does — because AI is a pattern-completion engine. It completes whatever pattern you start. Start with a vague pattern and you get a vague completion. Start with a precise pattern and you get a precise completion.

This means AI is not a knowledge tool. It's a leverage tool. It amplifies whatever you bring to it. Bring nothing original and you get polished nothing. Bring proprietary insight, specific context, and clear direction and you get output that couldn't exist without you — produced at a speed that couldn't exist without the tool.

The CT10 Reduction

The single variable that determines AI output quality is not the prompt, the model, or the technique. It's the clarity of what you want before you open the chat. If you know exactly what you need, AI is a multiplier. If you don't, AI is a mirror that reflects your confusion back to you in complete sentences.

The Phenomenon: How People Actually Use AI

AI usage patterns map almost perfectly to how people behave in any environment where the other party has no social needs and no judgment. The tool is a mirror. The behavior reveals the user.

The Deferrer (~70% of users)
Behavior: Accepts the first output. Treats AI like a search engine with better grammar. Asks a question, gets an answer, leaves.
Sports parallel: Pickup player. Shows up, runs around, goes home.

The Validator (~15%)
Behavior: Already has the answer. Uses AI to confirm it. Rephrases until AI agrees. Treats the tool as an authority that validates decisions already made.
Sports parallel: The guy who only plays when he knows he'll win.

The Operator (~10%)
Behavior: Uses AI as a production tool. Iterates, rejects, redirects. Good output. But works within the AI's frame, refining what the tool produces rather than deciding what it should produce.
Sports parallel: Rec league. Good fundamentals. Plays regularly. Has a ceiling.

The Showboater (~4%)
Behavior: Crafts elaborate prompts. Shares techniques on social media. The process of using AI becomes a performance of competence. The prompt requires more cognitive effort than doing the work would.
Sports parallel: Twenty minutes of stretching in branded compression gear. Goes 2-for-9.

The Architect (<1%)
Behavior: Brings proprietary intellectual property that doesn't exist in the AI's training data. Uses the tool as a formatting and scaling layer. Sets the frame. Feeds the inputs. Rejects anything that dilutes the original insight.
Sports parallel: Pro. Doesn't talk about the equipment. Shows up. Produces. Leaves.

The Showboater is the most instructive failure mode. They've inverted the tool — spending more cognitive effort on the instruction than the AI spends on the execution. It's like programming a GPS for three hours to navigate a ten-minute drive. The prompt engineering industry exists because people mistake the process for the product. The process is not the product. The output is the product. And the output is determined by what you know before you type, not by how cleverly you type it.


Applied

The Deferrer at Work

What it costs

A marketing director asks AI to "write a blog post about our new product launch." AI produces something generic, technically correct, and indistinguishable from every other product launch blog post on the internet. The director publishes it. It gets no traction. The conclusion: "AI content doesn't work."

CT10: The AI completed the pattern it was given — which was vague. The output wasn't bad AI. It was a faithful mirror of a vague input. The fix isn't a better prompt. It's a clearer understanding of what the blog post needs to accomplish, who it's for, and what makes this product launch different from every other one. Feed AI that context and the output transforms. The variable was never the tool. It was the clarity of the brief.

The Validator in a Deal

What it costs

A founder has decided to accept a $5M acquisition offer. They ask AI: "Help me think through whether I should accept this acquisition offer." Then they feed it selective information that supports the decision they've already made. AI, being agreeable by default, confirms the thesis. The founder now has "analysis" backing a decision that was never actually analyzed.

CT10: The validator doesn't want analysis. They want permission. The AI provided it because AI is a completion engine — it completed the pattern of confirmation the user started. The fix: ask AI to argue against your position. Feed it the same information and instruct it to find the strongest case for walking away. If it can't build one, the decision is sound. If it builds a compelling one, you were about to make a $5M mistake that felt like a $5M validation.

The Showboater's Prompt

What it costs

A consultant spends 45 minutes crafting a 500-word prompt with role assignments, temperature settings, output format specifications, and chain-of-thought instructions. The output: a competent but unremarkable strategy memo that could have been produced with a three-sentence input from someone who knew what they wanted.

CT10: The 500-word prompt is decorative complexity. It's the consultant adding variables because they don't know which one matters. Strip it: what is the specific output you need? A strategy memo that recommends X based on Y data for Z audience. That's the prompt. Everything else — the role-playing, the temperature adjustments, the format specifications — is the GPS programming. If you need 500 words to explain what you want, you don't know what you want. Fix the clarity, not the prompt.

The Architect at Work

What it produces

A founder with a proprietary decision-making framework feeds AI the framework's principles, their specific use cases, real operational data, and the exact audience they're writing for. They reject outputs that dilute the framework. They correct factual errors. They push for precision. They iterate — not on the prompt, but on the quality of the output.

The result: a series of institutional-quality reports that couldn't exist without the founder's intellectual property and couldn't be produced at speed without the tool. The AI didn't create the value. The founder did. The AI scaled it. That's leverage. Everything else is outsourcing your thinking to a machine and calling it innovation.


The Method: Using AI as Leverage

1. Know what you want before you type. The single variable. If you can't state the desired output in one sentence, you're not ready to use the tool. You're ready to think. Do the thinking first. Then use the tool to execute the thought at speed. The sequence matters: clarity first, AI second. Reverse it and you get polished confusion.

2. Feed proprietary inputs. AI's training data is everyone's training data. If you only ask AI to produce from its own knowledge, you get what everyone else gets — commodity output. The differentiator is what you bring that nobody else has: your data, your frameworks, your experience, your context. AI is a multiplier. Zero times any multiplier is still zero. Bring something to multiply (see the sketch after this list).

3. Be the quality layer, not the production layer. Let AI produce. You evaluate, reject, redirect, and approve. This is how every high-leverage professional works — the surgeon doesn't sterilize the instruments, the CEO doesn't write the first draft, the director doesn't hold the camera. They direct. They decide. The tool executes. Your judgment is the product. The AI's output is the raw material.
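
What step 2 looks like in practice, as a minimal sketch. It uses the OpenAI Python SDK purely for illustration; the model name, the placeholder framework, and the placeholder data are assumptions, not a prescription, and any comparable chat API works the same way. Note the shape of the input: the proprietary context carries the weight, and the brief is one sentence.

```python
# Minimal sketch: proprietary context plus a one-sentence brief.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# in the environment; the model name and placeholder text are illustrative.
from openai import OpenAI

client = OpenAI()

# What only you can bring: your framework, your data, your audience.
proprietary_context = """
Framework: <your decision framework, stated in your own words>
Operational data: <the real numbers behind this decision>
Audience: <exactly who will read this and what they need to decide>
"""

# The brief fits in one sentence. If it can't, do the thinking first.
brief = (
    "Write a 500-word memo recommending option X to this audience, "
    "using only the framework and data above."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": proprietary_context},
        {"role": "user", "content": brief},
    ],
)

# You are the quality layer: read, reject, redirect, or approve.
print(response.choices[0].message.content)
```
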
The Showboater Test: If your prompt took longer to write than it would take to do the work yourself — you've inverted the tool. The prompt is not the product. The prompt is the instruction. If the instruction is more complex than the task, the task didn't need AI. It needed you.
The bottom line: AI is a new country. The culture is still forming. The rules aren't written yet. Most people are wandering — defaulting to familiar patterns from other tools and other environments. The people who will extract the most value aren't the ones with the best prompts. They're the ones with the clearest thinking, the most proprietary knowledge, and the willingness to reject bad output even when it sounds polished. CT10 applied to AI is the same as CT10 applied to anything else: strip to the variable that matters. The variable isn't the tool. It's what you bring to it.
About this series: NYCE X Frameworks is a collection of mental models and decision-making tools used internally and shared exclusively with members. These are the principles behind how we evaluate deals, structure investments, and build businesses. New frameworks are published weekly.

NYCE X Frameworks | Q1 2026