Six months ago, I started running AI agents as part of the actual operations at AtheonX and Funnel Duo Media.

Not testing. Not experimenting in a sandbox.

Running them as part of real client delivery and internal workflows.

Here is what I learned that I did not expect.


What I Did Not Expect

There is a version of the AI agent story that goes: hire digital workers, automate your business, multiply your output, sit back.

That is not wrong, exactly. But it is missing most of the picture.

What actually happens when you run agents in a real business: every gap in your processes becomes visible immediately.

Agents do not improvise. They do not read between the lines. They do not figure out what you probably meant.

If your brief is unclear, they do the wrong thing perfectly.

If your process has an undocumented step that a human would intuitively fill in, the agent skips it.

If the handoff between one part of your workflow and another relies on someone "just knowing" what comes next, the agent stops and waits.

I have had more clarity about where my operations were actually weak in the last six months than in the three years before that. Not because the agents told me. Because they failed at exactly the places where my systems had invisible assumptions.


Lesson 1: The Brief Is the Job

Running agents taught me that whoever writes the brief is doing the most important work.

I used to think the "smart" work was building the agent, setting up the workflow, configuring the tools, getting the integrations running.

That is the easy part.

The hard part is writing a brief clear enough that an AI agent can execute it without ambiguity.

A vague brief produces vague output. The agent is not confused. It is doing exactly what you asked. The problem is that what you asked for was not precise enough.

The skill that actually matters in an AI-native operation is not prompting. It is the ability to translate a fuzzy human intention into a precise, unambiguous instruction with explicit success criteria.

That skill is surprisingly rare. And it turns out it is the same skill that makes a great manager.

If you can write a brief that an AI agent can execute without asking clarifying questions, you probably have a brief that a human team member could execute the same way.
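
To make that concrete: here is a rough sketch, in Python, of what a brief looks like when the success criteria are explicit. The field names are mine, not from any framework. The point is that "done" becomes a set of checkable statements instead of a feeling.

```python
from dataclasses import dataclass

@dataclass
class Brief:
    """Illustrative structure only; not a real framework's schema."""
    objective: str               # what the output must achieve, not just what it is
    audience: str                # who the output is for
    constraints: list[str]       # hard requirements: length, tone, format
    success_criteria: list[str]  # statements a reviewer can check as true or false
    out_of_scope: list[str]      # what the agent must not do

example = Brief(
    objective="600-word client-facing summary of the Q3 campaign results",
    audience="A non-technical marketing director",
    constraints=["Plain English, no jargon", "Ends with one clear recommendation"],
    success_criteria=[
        "Every number cited appears in the source report",
        "The recommendation is stated in a single sentence",
    ],
    out_of_scope=["Speculating about next quarter's budget"],
)
```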


Lesson 2: Accountability Does Not Automate

This one took me longer to figure out.

When you automate a task, it is tempting to think you have also automated the accountability for that task.

You have not.

Someone still needs to own the output. Someone still needs to review it, catch errors, flag when something is off, and care whether the output meets the standard.

In the early months, I made the mistake of treating "automated" as synonymous with "handled." It is not.

The agent produces the output. A human still needs to verify it meets the requirement.

What this actually changes: who does the verification work, and when. You are not removing oversight. You are shifting where the human attention goes, from doing the task to reviewing the output.

That is still a significant time saving. But if you structure your operations assuming the agent handles everything, you will eventually publish something wrong, send a client something embarrassing, or miss a problem until it is large.

The agents I trust most are the ones in workflows where I have clear human review gates at the right points.
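
The gate itself can be simple. A minimal sketch of the shape I mean, with hypothetical function names: the agent never ships directly, it hands a draft to a step a human clears.

```python
def run_with_review_gate(brief: str, agent, reviewer):
    """Run an agent, then hold its draft at a human gate before anything ships.

    `agent` and `reviewer` are stand-ins: any callable that drafts from a
    brief, and any human step that returns ("approve", "") or ("revise", notes).
    """
    draft = agent(brief)
    while True:
        verdict, notes = reviewer(draft)  # a human looks before anything ships
        if verdict == "approve":
            return draft
        # fold the reviewer's notes back into the instruction, not a side channel
        draft = agent(brief + "\n\nReviewer notes:\n" + notes)
```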


Lesson 3: Speed Surfaces Problems You Were Hiding

Agents work fast. That sounds like a benefit. It usually is.

But it also means that if something in your process is wrong, it compounds faster.

In the first month, one of our content agents was producing output that technically matched the brief but missed the intent entirely. Reasonable people could read the brief and produce what the agent produced. But it was not what we wanted.

Because the agent worked fast, we had 14 of those incorrect outputs before we caught the pattern and fixed the brief.

With a human doing the same work, we probably would have caught it on output two or three, because the human would have asked a clarifying question or flagged that something felt off.

The speed of AI agents is a multiplier. It multiplies quality when the inputs are right. It multiplies problems when they are not.

Build quality checks before you optimize for speed.
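
In practice that means mechanical checks that run on every output before anything ships. Another sketch, with placeholder checks and the hypothetical brief structure from earlier; the specific checks will be different for every workflow.

```python
def failed_checks(output: str, brief) -> list[str]:
    """Return the list of failed checks; empty means the output may proceed.

    `brief` is the hypothetical structured brief from earlier, extended with
    max_words, required_phrases, and banned_phrases. The checks themselves are
    placeholders; what matters is that they run on every output, so a bad
    brief fails loudly on output one instead of quietly on output fourteen.
    """
    failures = []
    if len(output.split()) > brief.max_words:
        failures.append("over the word limit")
    for phrase in brief.required_phrases:   # e.g. the client's product name
        if phrase not in output:
            failures.append(f"missing required phrase: {phrase}")
    for phrase in brief.banned_phrases:     # e.g. competitor names, banned jargon
        if phrase in output:
            failures.append(f"contains banned phrase: {phrase}")
    return failures
```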


Lesson 4: Management Clarity Gets Tested

Here is the uncomfortable one.

A lot of what I thought was management was actually me being present and filling in gaps in real time.

I would give a brief that was underspecified. A team member would hit an ambiguity. They would ask me. I would clarify. The work got done.

That is not a system. That is me as a patch for my own unclear thinking.

AI agents cannot ask me questions in real time. They work from what they have.

The first time I handed complex work to an agent and it failed, my instinct was that the agent was not capable enough.

Looking back at most of those failures: the agent was doing exactly what I had specified. The problem was the specification.

That is a hard thing to look at honestly. But it is also the most useful thing I have gotten from the last six months.

When an agent fails at a task consistently, I now start with the assumption that the brief is wrong. I fix that first. Most of the time, the agent succeeds once the brief is right.


Lesson 5: Small Teams With Agents Outrun Big Teams Without Them

I run Funnel Duo Media with my brother Reeve and a small team. We produce a volume of content and client work that teams twice our size struggle to match.

A year ago, that was not true.

What changed is not that we "added AI." It is that we redesigned how work moves through the team with agents as a layer of the operation.

Every routine, repeatable task (first drafts, formatting, distribution, reporting) has an agent or automation handling it.

The human time goes toward judgment calls, client relationships, strategy, and review.

That is the right use of human time.

I have talked to business owners who think AI is going to make their team obsolete. That is not how it plays out in practice.

What I have seen: AI agents make a clear-thinking small team more productive. They do not replace the team. They change what the team spends time on.

If your team is spending significant time on work that a clear brief could hand to an agent, that time is available for higher-leverage work.

That is the shift. Not replacement. Reallocation.


What I Am Still Sitting With

Six months in, the questions I ask about my operations have changed.

Before: "How do we get this done?"

Now: "Could an agent do this if we wrote a clear enough brief? And if not, why not?"

That second question surfaces problems differently than the first. It forces precision. It surfaces assumptions. It makes the quality of your thinking visible in a way that human-to-human work often does not.

I do not have clean answers for every challenge that comes with running agents in production. The field is moving fast. Things that were true three months ago are not necessarily true today.

But I am clearer about my operations than I have ever been. And I think a big part of that is what agents expose when they fail.

That might be the most valuable thing they have given me.


If you are working through how to integrate AI agents into your business without things breaking constantly, book a call with my team.

We have spent six months working out what actually holds up in production. Happy to share what we have learned.

Jackson