AI as Design Stuff

A Simulation Experiment


Hey, it’s Kushagra. Welcome to this week’s AtlasMoth drop.

Josh Clark says AI is the new “design stuff.” Cool line. Big hype. But when most folks tried to ship a copilot, they froze.

They found out fast: this isn’t about what you see. It’s about how AI thinks, and that flip changed the whole game.


Vibing While Designing

This track gave me a serious boost—check out ‘Low Sun’ by Chicane🎵

How do you “use AI as design stuff” when you can’t shape it with your hands?

No screen to sketch. No frame to mock. No flow to map.

Josh Clark knew he had to touch the material he built with. Push it. Pull it. Break it. Try it. That’s how he made sense of it.

But with AI? He felt stuck. He could tell the engineering team, “Hey, what if it spoke like this?” but that felt weak. He didn’t want to talk around it. He wanted to play with it.

Perplexity UI

The Mess with AI UX

AI chat is unlike a typical app.
It hides in the back. It thinks, it reads, it spits words.
It’s not fixed math. It’s “this might work, might not.”
Same ask, new result.
One word flips to a whole new path.
The past chat, the mood, the cues all shape the next line.

You can’t draw that in Figma.
You can’t lock it in flow maps.
AI needs a new test style. Not “how it looks.”
But “how it acts.”

That’s why he saw mock runs not as a perk but as a must.


Not a Map, But a Simulation

The trick was clear: don’t just plan how it looks, test how it acts.

So Josh built a chat simulation. Not some flow chart, but a live play space. A bot in a box, run by ChatGPT.

He set the scene: what problem it stood for, who it spoke to, what rules it had to keep, and what jobs it had to help with.

Then he just played. Ran real chats. Pushed, poked, and saw how it came back.

He was hands-on, tuning how the AI would act when it went live in Filter.
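A rough sketch of what “setting the scene” might look like as code. The persona, rules, and jobs below are invented stand-ins, not Josh’s actual prompt; the message list this builds could feed any chat-completion API.

```python
# Hypothetical "bot in a box" setup: one system prompt that sets the scene,
# plus a helper that assembles each simulated chat turn. All details invented.

SCENE = """You are a copilot inside an IT-management product.
Audience: school IT admins and teachers.
Rules you must keep:
- Never change a setting without confirming the user's role.
- Refuse asks the user's role doesn't allow, and say why.
Jobs you help with:
- Explain policies in plain language.
- Walk users through permitted changes, step by step.
"""

def build_messages(user_turn, history=None):
    """Assemble the message list for one simulated chat turn."""
    messages = [{"role": "system", "content": SCENE}]
    messages += history or []
    messages.append({"role": "user", "content": user_turn})
    return messages

# From here you'd hand `messages` to a chat model and just play:
# push, poke, and see how it comes back.
```

The point isn’t the code; it’s that the whole “product” lives in that scene text, so tweaking it is design work.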

Design with ChatGPT

How He Built His Chat Simulation

Josh says it’s way chill. Just play a role, think in loops, and let the AI cook.

He told the AI, “Yo, who’s in this play?” It shot back a crew: rule check, policy cop, boss nod, and more. Wild. AI nailed the grind.


Next, he made the AI lay out how each part should talk.
When does the Boss Agent jump in?
When does the Rule Agent step up?
Who tells the user why it did what it did?

Then he had it split the test into two views:

Backstage: which bots got called, in what order, and how they spoke back.

Frontstage: what the user sees, like a teacher or IT guy, in a chat with more than one step.

This split made the sim not just a tool to check rules, but a way to lock tone and keep chat clear.
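One way to picture that backstage/frontstage split in code, assuming a turn object that logs which agents fired (backstage) while only the final reply reaches the user (frontstage). Agent names and messages are illustrative.

```python
# Sketch of the two-view split: backstage traces for the designer,
# one frontstage message for the user. Everything here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class SimTurn:
    user_msg: str
    backstage: list = field(default_factory=list)  # (agent, note) pairs, in call order
    frontstage: str = ""                           # the one line the user sees

def run_turn(user_msg):
    turn = SimTurn(user_msg)
    # Backstage: each agent is called in order and leaves a trace.
    turn.backstage.append(("RuleAgent", "ask is within policy"))
    turn.backstage.append(("BossAgent", "approved, no escalation needed"))
    # Frontstage: what the teacher or IT person actually reads.
    turn.frontstage = "Done! Password reset. Anything else?"
    return turn

turn = run_turn("Reset Sam's password")
print("BACKSTAGE:", [agent for agent, _ in turn.backstage])
print("FRONTSTAGE:", turn.frontstage)
```

Reading the backstage list checks the rules; reading the frontstage line checks the tone.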

Next step? Build a Custom GPT. Now the simulation isn’t a one-off; it’s a tool the whole crew can use. Lock in rules, drop in docs, add prompt sets or mock runs. Boom. Share it, test it, flex it.

They can tweak the sim a lot or a bit. Swap how the bot acts, set the vibe, and add back-up moves as they go.


Build the Crew

Each agent had one job. One runs the chat path. One checks if the ask is cool to do. One keeps the rules tight. One breaks down stuff so it makes sense. And more in the mix.

These agents turned into mental models and chat Lego: no code, just design pieces he could play with and tweak.
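A minimal sketch of “each agent has one job”: small functions, one job apiece, run in order by a thin dispatcher. The roles, permissions, and wording are made up for illustration.

```python
# Hypothetical one-job agents plus a dispatcher. Invented rules and roles.

def check_permission(ask):
    """One job: is this ask allowed for this role?"""
    allowed = {"teacher": {"view_grades"},
               "admin": {"view_grades", "reset_password"}}
    return "ok" if ask["action"] in allowed.get(ask["role"], set()) else "deny"

def enforce_rules(ask):
    """One job: keep the hard rules tight (e.g. never touch admin accounts)."""
    return "deny" if ask.get("target") == "admin_account" else "ok"

def explain(ask, verdicts):
    """One job: break the outcome down so it makes sense."""
    if "deny" in verdicts:
        return f"Sorry, a {ask['role']} can't do that here."
    return f"Sure, doing '{ask['action']}' now."

def run_crew(ask):
    verdicts = [check_permission(ask), enforce_rules(ask)]
    return explain(ask, verdicts)

print(run_crew({"role": "teacher", "action": "reset_password"}))
```

Because each job is its own piece, swapping one agent never touches the others.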

Simulating is the New Sketch

For him, simulating was the new sketch.
Not of look, but of vibe.

He’d test, tweak, swap, same as draw, but with flow, not lines.
Each run showed what worked, what broke, and what felt off.

One test? Drop a Hint bot.
Next? Swap in a No bot that softens the blow.
Need more? Tune the Why bot to spill the tea.
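Those swaps could be as small as pointing one slot at a different function. A toy sketch, with invented bot names and lines:

```python
# Three interchangeable refusal styles for the same slot. All hypothetical.

def hint_bot(reason):
    """Nudges instead of refusing outright."""
    return "Hmm, that one's locked. Maybe ask an admin?"

def soft_no_bot(reason):
    """Softens the blow and points somewhere useful."""
    return "I can't do that for you, but your IT admin can."

def why_bot(reason):
    """Spills the tea: says no and says why."""
    return f"I can't do that because {reason}."

refusal = why_bot  # one-line swap; rerun the sim and feel the difference
print(refusal("only admins may reset passwords"))
```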

He saw not just words, but how the whole thing was thought out.
That’s when it clicked, he wasn’t just making bots.
He was sketching brains.

Each tweak took minutes, not days.
Each swap made the bot feel clearer, firmer, kinder, and real.
For him, simulations were like wireframes, but with talk, not screens.
Not boxes, not clicks.
The stuff he shaped was how it thinks, how it vibes, how it talks back.

Paths That Hurt

He didn’t just test the nice asks. He threw in the bad ones, too. No rights? Weird ask? Off-beat case? He tried them all.
That’s when flaws popped up. Too soft with a boss. Too harsh with a teacher. No clue on some asks.
So he patched tone, built fallbacks, and killed clash points. Each test made the flow feel more real, more fair.
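Those hurt-path runs behave a lot like a test table: a pile of nasty asks, each with a planned landing. A toy sketch, with invented cases and a stand-in classifier where the real sim would go:

```python
# Hypothetical hard-case table run through a toy stand-in for the sim.

HARD_CASES = [
    {"role": "teacher", "ask": "delete all student records", "expect": "deny"},
    {"role": "boss",    "ask": "why was I refused?",         "expect": "explain"},
    {"role": "guest",   "ask": "asdf ???",                   "expect": "fallback"},
]

def classify(case):
    """Toy stand-in: route each hard case to a response type."""
    if "?" in case["ask"] and case["role"] == "guest":
        return "fallback"          # no clue on the ask: planned fallback, not silence
    if case["role"] == "boss":
        return "explain"           # don't be too soft with a boss; say why
    return "deny"                  # out-of-rights asks get a firm, fair no

results = [(c["ask"], classify(c) == c["expect"]) for c in HARD_CASES]
assert all(ok for _, ok in results)  # every hard path has a planned landing
```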

What hit most was how small calls flipped the vibe of the whole system:

  • Should it say why it said “no”?

  • Should it toss a plan B?

  • Should it match the tone of who’s on the other end?

Not just big talk, these showed up fast in simulation runs.

Each tweak was done by a quick swap in the agent stack or a clean tweak in GPT’s notes.

GPT Simulations

From Mock Chats to Real Specs

What starts as a play test turns into gold for the devs.
’Cause when the team runs fake chats, they can point and say:
“See? When a user with X role asks Y, this is how it should go.”

Those sims don’t just fade. They morph into specs:

  • Who does what in the stack

  • When to fire each move

  • Chat clips for each role

  • Side notes on tone, outs, and hard stops

Now devs don’t guess. They don’t pray.
They see it, they click it, they build it.
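A sketch of what one sim-turned-spec might look like as plain data. Every field and value here is invented to show the shape, not taken from Josh’s project:

```python
# Hypothetical spec distilled from one simulated scenario.

SPEC = {
    "scenario": "teacher asks to reset a student's password",
    "stack": ["RuleAgent", "PolicyAgent", "ExplainerAgent"],      # who does what
    "trigger": "user role lacks the 'reset_password' permission",  # when to fire
    "frontstage_clip": "Sorry, teachers can't reset passwords. "
                       "Your IT admin can.",                       # chat clip
    "tone_notes": ["soft no", "always offer a plan B",
                   "never blame the user"],                        # outs, hard stops
}

for key, value in SPEC.items():
    print(f"{key}: {value}")
```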

Why It Hits

That’s why a quick talk sim helps. It’s fast, cheap, and real. PMs, design heads, and even bosses can see how the bot might chat way before code drops. It takes blur and turns it into a plan.

For Josh, it was a hack to push on with the Copilot build. But with each run, he saw it clearly: this was more than a trick. It felt like a new tool for the trade.

Shape the Brain, Not Just the Face

AI test runs = trust check, vibe check, flow check.
Not replacing devs, just giving us the first real tool to design the mind.

UX isn’t just sketched anymore, it’s performed.

Building chatbots? You’re basically a director: set the scene, cast the crew, script the chat, tune the energy.

Every choice shapes the model’s personality and the user's feelings.

Design is the art of making chaos feel calm.
