Mike Schwarz
CRO · 14 min read

Everything I Learned About Agentic Development on a 6-Week Hyper-Focused Bender

In six weeks, I went from zero to doing enterprise-level software development as fast as 100 developers could a year ago, by literally having conversations at a microphone with a computer. What a bizarre time.

I need to tell you something that sounds insane. Six weeks ago, I couldn't write a single line of code. Not Python, not JavaScript, not HTML — nothing. I'm a business guy. I've been running digital companies for 26 years, building teams, managing products, selling solutions. But I have never been a developer. Not even close.

Today, I am personally shipping enterprise-grade software — full-stack applications, dynamically adaptive websites, multi-agent coordination systems — at a pace that would have required a team of a hundred developers just twelve months ago. And I'm doing it by talking into a microphone.

I know how that sounds. I wouldn't have believed it either. But here's the thing: this isn't hype. This is what I'm actually doing, every single day, right now. And I want to share everything I've learned — because I think it's going to change how every business builds software within the next year.

Digital illustration of a hyper-focused AI coding marathon command center with holographic code windows and agent orbs

The Bender

Let me set the scene. In mid-January 2026, I decided to go all in on what I'm calling "agentic development" — the practice of directing AI agents to build software for you. Not using AI as an autocomplete for code. Not asking ChatGPT to write a function. I mean sitting in a chair, opening a voice-to-text tool called Wispr Flow, and having full-blown conversations with AI agents who then go and build, test, deploy, and iterate on real production software.

I put my head down and just hyper-focused. Four to six hours a day minimum, sometimes twelve. Every single day. Weekends included. I was obsessed. My wife thought I'd lost it. My team thought I was on some kind of Silicon Valley bender. They weren't wrong.

But here's what I can tell you after coming out the other side: if you want to learn anything in this world, just do it every day for four hours until you collapse. You will very quickly get good at it. This one has a really steep learning curve — but layer by layer, it gets more and more powerful. And it is just a freaking game changer.

"I'm probably four times faster than I was two weeks ago. And two weeks before that, I couldn't do this at all. The acceleration is exponential."

— Mike Schwarz, Founder & CEO, MyZone AI

How It Actually Works: Voice to Production Code

Let me walk you through what a typical session looks like, because it's genuinely bizarre when you see it for the first time.

I open Claude — Anthropic's desktop app — and I start a session. I give the agent a name, a personality, a specialisation. Sometimes it's a front-end developer. Sometimes it's a security auditor. Sometimes it's an architect who's going to plan a complex feature before any code gets written. I use Wispr Flow, which is a voice-to-text tool that runs on my Mac, and I just... talk. I describe what I want built. I describe how it should work. I ask it questions. It asks me questions back. We go back and forth like two colleagues at a whiteboard.

And then I hit enter, and the agent goes and does it. It writes the code. It creates the files. It runs the tests. It commits to GitHub. It coordinates with other agents through Slack. It updates the project board in Asana. And twenty minutes later, there's a live feature on a production website that I just described out loud in my office.

I am building dynamically adaptive websites and full-stack software applications by literally having conversations at a microphone with a computer. Read that sentence again. That's where we are.


Wispr Flow — the voice-to-text tool that makes talking to AI agents feel natural

Digital illustration of voice-to-code transformation with sound waves passing through an AI translation engine

The Four Rules That Changed Everything

Through a lot of trial and error — and I mean a lot — I distilled everything I've learned down to four rules. These aren't theoretical. These are hard-won lessons from six weeks in the trenches, and they apply whether you're building a landing page or an enterprise SaaS platform.

Rule One: Build small. This is the single most important lesson. AI agents are brilliant, but they lose coherence when you give them too much at once. Every task should be a single, well-defined deliverable. Not "build me a CRM" — that's a project. More like "build the contact form component with validation and error handling." Small, atomic, testable. When I started breaking everything into bite-sized pieces, my success rate went through the roof.

Rule Two: Invest in plans, not API tokens. Before the agent writes a single line of code, have it create a plan. I literally tell the agent: "Before you do anything, interview me about what I want. Ask me questions. Then write up a plan and get my approval before you touch a file." This interview pattern is everything. It forces clarity. It catches misunderstandings before they become broken code. And it creates a document that other agents can reference later.
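Here is a minimal sketch of what that interview-first instruction can look like when wrapped into a reusable prompt. The `build_plan_prompt` helper and its exact wording are my own illustration, not a fixed recipe:

```python
# Hypothetical helper: wraps any task in the interview-then-plan
# instruction pattern described above. Wording is illustrative.
def build_plan_prompt(task: str) -> str:
    return (
        "Before you do anything, interview me about what I want.\n"
        "Ask clarifying questions one at a time.\n"
        "Then write up a plan as a document and wait for my explicit\n"
        "approval before you touch a single file.\n\n"
        f"The task: {task}"
    )

prompt = build_plan_prompt("Build the contact form component with validation")
print(prompt)
```

The point is that the same few sentences get prepended to every task, so the agent never jumps straight to code.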

Rule Three: Challenge your agents. AI agents are confident. Annoyingly confident. They'll tell you they've completed something perfectly when they've actually hallucinated half the implementation. So I challenge them constantly. "Show me proof." "Run the test." "Take a screenshot." "Read back what you just wrote." The agents that get challenged produce dramatically better work than the ones you just trust blindly.

Rule Four: Always create handoffs. Every agent session should end with a handoff document — a clear summary of what was done, what's left, and what the next agent needs to know. Think of it like a shift change at a hospital. The incoming doctor doesn't re-diagnose the patient from scratch. They read the chart. Same principle. Without handoffs, you lose continuity and agents start repeating work or contradicting each other.
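A handoff document can be as simple as three sections. The sketch below generates one; the field names (`done`, `remaining`, `context`) are my own choice, mirroring the "what was done, what's left, what the next agent needs" structure described above:

```python
# Hypothetical handoff-document generator for the end of an agent session.
from datetime import date

def write_handoff(agent: str, done: list[str], remaining: list[str],
                  context: str) -> str:
    lines = [f"# Handoff from {agent} — {date.today().isoformat()}", ""]
    lines.append("## Done")
    lines += [f"- {item}" for item in done]
    lines += ["", "## Remaining"]
    lines += [f"- {item}" for item in remaining]
    lines += ["", "## Context for the next agent", context]
    return "\n".join(lines)

doc = write_handoff(
    agent="Bob (front-end)",
    done=["Contact form component", "Client-side validation"],
    remaining=["Server-side validation", "Error-state screenshots"],
    context="Form posts to /api/contact; schema lives in contact.schema.json.",
)
print(doc)
```

Like the hospital chart, the incoming agent reads this instead of re-diagnosing the codebase from scratch.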

"Build small. Plan first. Challenge everything. Hand off cleanly. Those four rules took me from chaos to shipping production software daily."

— Mike Schwarz

The Persona Technique: Bringing in the Experts

One of the most powerful things I discovered is what I call the persona technique. Instead of just saying "write me some code," I create a fully fleshed-out expert persona for the agent. I'll say something like: "You are Bob, a senior front-end developer with 15 years of experience specialising in accessible, responsive design systems. You are meticulous about semantic HTML and you always test at four breakpoints." And then Bob goes and does his thing — and the quality is dramatically different from a generic prompt.
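To make this concrete, here is a sketch of a persona rendered as a system prompt. With Anthropic's Messages API this string would typically be passed as the `system` parameter; the code below only builds the prompt, makes no API call, and the helper name is my own:

```python
# Hypothetical persona builder. The resulting string would be used as
# the system prompt for the agent session; no API call is made here.
def persona_prompt(name: str, role: str, years: int, traits: list[str]) -> str:
    trait_lines = "\n".join(f"- {t}" for t in traits)
    return (
        f"You are {name}, a {role} with {years} years of experience.\n"
        f"Your working principles:\n{trait_lines}"
    )

bob = persona_prompt(
    "Bob",
    "senior front-end developer specialising in accessible, "
    "responsive design systems",
    15,
    ["Meticulous about semantic HTML", "Always test at four breakpoints"],
)
print(bob)
```

Because the persona lives in a small function like this, the same Bob can be summoned identically in every session instead of being re-described from memory.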

I have different personas for different jobs. A security auditor. A QA tester. A documentation writer. An architect. A DevOps engineer. Each one has a name, a specialisation, and a personality. And here's the wild part: they coordinate. One agent builds a feature. Another agent reviews it. A third agent writes the tests. A fourth runs a security scan. It's like managing a team — except the team works 24 hours a day and never calls in sick.

Mike Schwarz
CEO of MyZone.AI
26 years in digital transformation, now building AI-powered operations for businesses ready to scale without scaling headcount.

Frequently Asked Questions

What is agentic development and how does it differ from traditional coding?

Agentic development is the practice of directing AI agents to build software on your behalf through natural language conversation rather than writing code manually. Instead of typing lines of JavaScript or Python, you describe what you want built — the feature, the behavior, the constraints — and an AI agent writes the code, creates the files, runs the tests, and commits the changes. It is fundamentally different from using AI as a code autocomplete or asking ChatGPT to generate a snippet.

In agentic development, the AI operates as an autonomous collaborator with access to your codebase, your version control, and your deployment pipeline. You direct the work at a strategic level — architecture decisions, user experience requirements, business logic — while the agent handles implementation details. This means people without traditional programming backgrounds can ship production-grade software by focusing on what to build rather than how to build it.

How many AI agents can work on a project simultaneously?

There is no hard technical limit on the number of AI agents that can work on a project at the same time, but practical coordination becomes the bottleneck. Most agentic developers run between two and six agents concurrently, each handling a different aspect of the work — one on front-end components, another on back-end logic, a third on testing, and a fourth on documentation. The key is giving each agent a clearly scoped task that does not overlap with what another agent is editing.

The coordination layer matters more than raw agent count. Using tools like GitHub for version control, Slack for inter-agent communication, and Asana for task tracking, you can orchestrate agent teams that work in parallel without stepping on each other. Some advanced setups even allow agents to run autonomously overnight on well-defined tasks, effectively giving you a 24-hour development cycle.
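One way to enforce the "no overlapping scopes" rule is a small pre-flight check before launching agents in parallel. This is a sketch under my own assumptions (agent names, file paths, and the `find_scope_conflicts` helper are all illustrative):

```python
# Hypothetical guardrail: each agent gets an explicit file scope, and we
# flag any session plan where two scopes share a file.
def find_scope_conflicts(
    scopes: dict[str, set[str]],
) -> list[tuple[str, str, set[str]]]:
    agents = sorted(scopes)
    conflicts = []
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            shared = scopes[a] & scopes[b]
            if shared:
                conflicts.append((a, b, shared))
    return conflicts

scopes = {
    "frontend": {"src/ContactForm.tsx", "src/styles.css"},
    "backend": {"api/contact.py"},
    "tests": {"tests/test_contact.py", "src/ContactForm.tsx"},  # overlap!
}
for a, b, files in find_scope_conflicts(scopes):
    print(f"{a} and {b} both touch: {sorted(files)}")
```

Running a check like this before kicking off a parallel session is a cheap way to catch two agents about to edit the same file.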

What are the key rules for managing multi-agent development sessions?

Four rules consistently separate successful agentic development from chaotic failure. First, build small — every task given to an agent should be a single, well-defined deliverable rather than an entire feature or system. Second, plan before coding — have the agent interview you about requirements, write up a plan, and get your approval before touching any files. Third, challenge your agents — AI agents are confident even when wrong, so demand proof, run tests, and verify output rather than trusting blindly.

Fourth, always create handoffs — every agent session should end with a summary document that tells the next agent what was done, what is left, and what context it needs. These rules prevent the most common failure modes: scope creep that exhausts the agent's context window, misunderstood requirements that produce the wrong feature, hallucinated implementations that look correct but break in production, and lost continuity between sessions that causes agents to repeat or contradict previous work.

How do you maintain code quality when AI agents write the code?

Code quality in agentic development comes from the same principles as traditional development — review, testing, and standards enforcement — but applied differently. You assign dedicated agent personas for different quality gates: one agent builds a feature, a second agent reviews the code for issues, a third agent writes and runs tests, and a fourth performs a security audit. Each persona has specific expertise and instructions that make it thorough in its domain.

Beyond agent-to-agent review, the human director serves as the final quality gate. You challenge agents to show proof of their work, run the tests themselves, and demonstrate that the feature works at multiple breakpoints. The combination of persona-based review, automated testing, and human oversight produces code quality that is comparable to — and in some cases better than — traditional team development, because agents never skip steps out of time pressure or fatigue.

What kind of projects are best suited for agentic development?

Agentic development excels at projects that can be broken into many small, well-defined tasks with clear acceptance criteria. Web applications, marketing websites, internal tools, API integrations, automation workflows, and CRUD-based business software are all strong candidates. These projects have established patterns that AI agents handle reliably, and the work decomposes naturally into independent components that multiple agents can build in parallel.

Projects that are less suited include highly novel algorithm research, performance-critical systems programming with nanosecond-level optimization requirements, and work that requires deep institutional knowledge not captured in documentation. That said, even complex projects can benefit from agentic development when you use agents for the 80% of work that follows established patterns and reserve human expertise for the 20% that requires genuine creativity or domain-specific judgment.

Stay Ahead of the AI Curve

Get weekly insights on AI automation, strategy, and business transformation. Plus early access to upcoming workshops.

Join 500+ business leaders. No spam, unsubscribe anytime.

Or explore our upcoming workshops →