Lessons on designing an AI software development workflow
The experiment shows that AI-assisted development becomes reliable only when grounded in structured documentation rather than exploratory prompting. Early output appeared productive but lacked coherence, revealing that incomplete context leads to fragile systems and inefficient iteration. By shifting to short cycles where documentation, constraints and specifications are continuously refined, teams gain more predictable implementation outcomes. Over time, the workflow evolves into a controlled system where AI operates within clearly defined boundaries, reinforcing the role of product reasoning and engineering discipline in shaping results.
Earlier this year, one of our teams participated in an internal AI hackathon to experiment with what an AI software development workflow might look like in practice.
Six developers had three days to build part of a new product flow using AI tooling. The business team prepared around twenty-five documents describing the product: personas, journeys and business rules. The idea was simple. Use AI as much as possible and see how far we could get.
Most of us ignored the documentation and did what many teams do when they first approach AI coding: we started prompting. We summarized the idea on a whiteboard and began experimenting.
Three days later we had produced more than 100,000 lines of code.
We also had something that barely worked.
The code was not the valuable output
The amount of code we generated made the experiment look successful at first glance.
Parts of three steps in a seven-step user journey were implemented, and two developers spent another week stabilizing the code so we could demonstrate something that resembled a working flow.
But the codebase itself was fragile. Small changes often meant starting new prompt chains rather than improving the existing implementation. The system behaved more like a sequence of generated artifacts than a coherent piece of software.
That observation shifted how we evaluated the experiment. Instead of looking at the volume of code produced, we started looking at the process that produced it.
For the purpose of this experiment we define an AI software development workflow as the process through which AI systems receive structured context, generate implementation artifacts and iterate on tasks within a controlled engineering environment.
Viewed through that lens, the outcome of the hackathon was different from what the code suggested. The most useful result was not the implementation itself but the list of things that clearly did not work.
The fastest way forward was to discard the code and rethink the workflow.
The insight that changed the experiment
At that point, the stakeholder supporting the project suggested something simple: when you try something new and fail, the next attempt usually fails differently. The value comes from repeating the attempt and learning each time.
Instead of treating the hackathon as a one-off experiment, we turned it into an iterative process. Each iteration would:
- incorporate the lessons from the previous one
- improve the documentation and constraints
- generate new code from scratch
The output of each iteration was disposable. The process was what we were refining.
Where AI productivity actually comes from
One pattern became obvious quickly: AI productivity depends more on documentation quality than on prompting technique. When product context, architecture and constraints are documented clearly, AI systems produce consistent results. Without that structure, output becomes unpredictable.
Early attempts relied heavily on prompting and ad hoc context. Results were inconsistent and difficult to steer. Once the documentation improved, the output became predictable.
The shift was straightforward: instead of asking AI what to build, we described the product and the system clearly enough that implementation became the obvious next step.
How the AI software development workflow evolved
After the first iteration, we redesigned the workflow around documentation.
Step 1: Curate the business context
The original twenty-five business documents were reduced to eight. These describe personas, journeys, business needs and rules. They contain no user stories and no technical implementation details.
Step 2: Strengthen the technical documentation
Technical documentation focuses on areas that are not visible from code:
- why decisions were made
- high-level architecture
- patterns to follow and patterns to avoid
- testing strategy
The goal is to remove ambiguity before implementation begins.
Step 3: Generate a technical specification
Using the documentation as context, AI produces a detailed technical specification for the new build. This specification is reviewed carefully before any implementation begins.
Step 4: Create structured PRDs
From the specification, we generate Product Requirement Documents. Each PRD contains a small number of well-defined stories with acceptance criteria and granular tasks.
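The shape of such a PRD can be sketched as a small data structure. This is an illustrative sketch, not our exact format; the class and field names below are assumptions made for the example.

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    """A granular unit of work small enough to implement and validate in one pass."""
    description: str
    done: bool = False


@dataclass
class Story:
    """A well-defined story with explicit acceptance criteria."""
    title: str
    acceptance_criteria: list[str]
    tasks: list[Task] = field(default_factory=list)


@dataclass
class PRD:
    """A Product Requirement Document derived from the technical specification."""
    title: str
    stories: list[Story]

    def is_complete(self) -> bool:
        # A PRD is done when every task in every story is marked done.
        return all(t.done for s in self.stories for t in s.tasks)
```

Keeping stories small and tasks granular is what lets the implementation loop in the next step track progress mechanically rather than by judgment calls.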
Step 5: Execute implementation loops
AI systems work through the PRDs, implementing tasks, validating results and marking progress.
When issues appear we do not patch them mid-iteration. We capture the learning and improve the documentation for the next cycle.
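The loop above can be sketched as follows. This is a simplified outline, not our actual tooling: `implement` and `validate` stand in for calls to the AI system and the test suite, and the key detail is that failures are logged as learnings for the next cycle rather than patched in place.

```python
def run_iteration(tasks, implement, validate):
    """Work through a PRD's tasks, marking progress and capturing learnings.

    implement(task) -> artifact : AI-generated implementation for one task
    validate(artifact) -> (ok, feedback) : e.g. tests or acceptance criteria
    """
    completed = []
    learnings = []  # fed into the documentation before the next cycle
    for task in tasks:
        artifact = implement(task)
        ok, feedback = validate(artifact)
        if ok:
            completed.append(task)  # mark progress
        else:
            # No mid-iteration patching: record the lesson and move on.
            learnings.append((task, feedback))
    return completed, learnings
```

The returned learnings are what get folded back into the documentation, so each iteration starts from a sharper context than the last.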
The results improved quickly
By the third iteration the workflow was producing meaningful results: two complete user journeys were implemented, including modifiers and the provider side of the flow. The implementation took four days.
The code is still treated as experimental, but the process behind it is becoming reliable, which gives us confidence that the next iteration may be ready for production.
What this means for engineering teams experimenting with AI
Teams often approach AI-assisted development as a prompting problem. The assumption is that better prompts will produce better code.
Our experience suggests a different conclusion: AI systems respond far more reliably to structured context than to clever prompts.
Clear product documentation, explicit technical constraints and small iteration cycles create the conditions for useful output.
Without that structure AI becomes another source of noise inside the development process.
Why this matters for product development
For product teams the lesson is familiar: progress rarely comes from producing large volumes of output quickly. It comes from tightening the feedback loop between learning and execution.
AI accelerates implementation. It does not replace the need for product reasoning or engineering discipline.
The teams that benefit most from these tools are the ones that already value clarity, documentation and iteration. Those habits translate well into an AI-assisted workflow.
Where we are now
We are currently running the fourth iteration of this experiment.
The objective is not to produce more code faster. The objective is to build a repeatable AI software development workflow where AI can contribute predictably inside a well-defined engineering system.
The experiment started with prompts and a large codebase. It is now focused on documentation, constraints and iteration. That shift has made all the difference.
Navigate AI adoption with our assistance
If you want to understand whether AI can strengthen your architecture or whether it would amplify existing issues, we can help you assess that.