Portfolio Updates · 10 min read · April 15, 2026

Seven Products, Twelve Months: What I Learned Building AI Applications from Scratch

A candid account of building seven commercial-grade AI products without a software engineering background — the methodology that worked, the assumptions that broke, and the lessons that matter for anyone considering the same transition.

I built seven AI products in twelve months. I had no formal software engineering background when I started. I had twenty years of enterprise project management and regulatory transformation experience, a documented set of workflow problems I knew needed solving, and a conviction that generative AI had finally lowered the barrier enough to make building viable.

Here is what I actually learned — not the version that makes the transition sound inevitable, but the honest account.

What I Got Right First

Starting with problem briefs rather than technology choices was the single best decision I made.

Before writing any code for any of the seven products, I wrote a one-page document that answered four questions: Who is struggling? With what, specifically? Why do existing solutions fall short? What does a minimally valuable outcome look like?

This discipline — which is just requirements engineering applied to product development — filtered out a significant number of ideas that would have been technically interesting but commercially useless. It also gave each product a definition of done that wasn't "we've shipped features" but "a specific user can accomplish a specific outcome they couldn't accomplish before."

The second thing I got right early was treating each product as a programme with a delivery cadence. Two-week sprint cycles. Defined scope for each cycle. A RAID log adapted for product development. The enterprise governance instincts, applied to solo product development, produced a velocity that surprised me.

What Broke My Assumptions

Assumption 1: The AI Is the Product

The first product I built spent too much of its surface area on the AI interaction itself. The interface was essentially a sophisticated chat window. Users had to know what to ask and how to frame their questions.

It failed to gain traction, and the reason was obvious in retrospect: users didn't want to interact with an AI. They wanted to accomplish a task. The AI was the mechanism, not the product. The product was the outcome.

Every subsequent product hid the AI. The user interface shows structured outputs — tables, scored assessments, formatted recommendations — not raw AI responses. The user never writes a prompt. The product writes it for them, informed by structured inputs. This change transformed adoption.
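As a minimal sketch of what "the product writes the prompt" can mean in practice (the field names, template wording, and output format here are illustrative, not from any of the seven products), the user fills in a couple of form fields and the product assembles the prompt behind the scenes:

```python
from dataclasses import dataclass

# Hypothetical structured input collected via form fields;
# the user never sees or writes the prompt itself.
@dataclass
class AssessmentRequest:
    domain: str
    risk_area: str
    word_limit: int = 200

PROMPT_TEMPLATE = (
    "You are a compliance analyst. Produce a scored risk assessment "
    "for the {domain} domain, focusing on {risk_area}. "
    "Respond only as lines of 'criterion: score (1-5) - rationale', "
    "no more than {word_limit} words in total."
)

def build_prompt(req: AssessmentRequest) -> str:
    """Turn structured form inputs into the prompt the model receives."""
    return PROMPT_TEMPLATE.format(
        domain=req.domain, risk_area=req.risk_area, word_limit=req.word_limit
    )

print(build_prompt(AssessmentRequest("healthcare", "data retention")))
```

The point of the pattern is that prompt quality becomes the builder's responsibility, fixed once in a template, rather than a skill every user has to develop.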

Assumption 2: Technical Correctness Is the Quality Bar

Early products were technically correct and practically frustrating. The AI gave accurate responses, but those responses required interpretation, had inconsistent formats, and occasionally produced outputs that were right in content but wrong in structure for the context they'd be used in.

Quality for an AI product is not accuracy of the underlying model. Quality is the precision of the output in relation to the specific use case. A medical decision support tool that gives technically correct information in a format that a clinician can't quickly scan during a consultation has failed at the only quality metric that matters.

The quality bar is: does this output let the user take action immediately? If the user has to do cognitive work to interpret it, the product needs more work.

Assumption 3: Shipping Is the Finish Line

The first time a product went live, I made the mistake of treating deployment as completion. The product was functional, it was online, it was accessible. I moved on to the next build.

The products I've learned most from are the ones I've stayed with past the initial deployment — monitoring usage patterns, interviewing users, iterating on the output format, adjusting the input interface. The gap between "technically functional" and "genuinely useful" is only visible after real users have spent time with the product.

The Technical Learning Curve

I want to be honest about this because the prevailing narrative — "anyone can build AI products now" — undersells the genuine effort required.

Frontend development was harder than I expected. Not because the concepts are particularly complex, but because the accumulated knowledge of a software developer — knowing which problems have established patterns, knowing when to reach for a library versus build from scratch, knowing when a specific behaviour is a bug versus intended — comes from years of practice. I made expensive mistakes early by not having that knowledge. The mistakes were educational but not free.

LLM orchestration at scale is non-trivial. Prompting a model in isolation and building a product that reliably prompts a model under variable real-world conditions — different input lengths, different user behaviours, edge cases the development environment never surfaced — are different problems. The second one requires defensive engineering that doesn't feel necessary until production proves it is.
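The flavour of that defensive engineering can be sketched in a few lines. Everything here is illustrative — the limits, the backoff, and the fallback message are placeholders, and the flaky backend is simulated — but the three guards (input truncation, bounded retries, graceful degradation) are the kind that only feel necessary after production fails without them:

```python
import time

MAX_INPUT_CHARS = 8_000   # illustrative context budget
MAX_RETRIES = 3

def call_model_defensively(call_fn, user_text: str) -> str:
    """Wrap any model call (call_fn takes a prompt string) with input
    truncation, bounded retries with backoff, and a safe fallback."""
    prompt = user_text[:MAX_INPUT_CHARS]  # never trust input length
    for attempt in range(MAX_RETRIES):
        try:
            return call_fn(prompt)
        except Exception:
            time.sleep(2 ** attempt * 0.01)  # short backoff for the demo
    return "The service is busy; please try again."  # graceful degradation

# Simulated flaky backend: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_model(prompt: str) -> str:
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated timeout")
    return f"OK ({len(prompt)} chars)"

print(call_model_defensively(flaky_model, "x" * 10_000))  # → "OK (8000 chars)"
```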

Infrastructure taught me humility. Environment management, secret handling, rate limiting, cost management, monitoring — none of this is intellectually complex, but all of it is necessary, and learning it through production failures is an expensive school.

The Methodology That Worked

Looking back at the seven products, the ones that have the strongest adoption and the clearest product-market fit share a development pattern:

  1. Problem brief first, always. The technology choice came last, after the problem definition was locked.
  2. User workflow before features. Before building anything, I mapped the workflow the product would sit inside. The product was designed to fit the workflow, not the other way around.
  3. Output design before input design. What does the ideal output look like? Work backwards from there to determine what inputs are needed.
  4. Build the boring parts first. Error handling, edge case management, graceful degradation — the parts that don't make the demo impressive but that make the product reliable. These went in first.
  5. Ship early, iterate honestly. Every product launched before it was "ready." Every product became significantly better after six weeks of real user feedback that no amount of internal testing had surfaced.

What I Would Tell Someone Starting Now

The transition from enterprise leader to product builder is genuinely achievable. The tools are better than they were twelve months ago. The learning resources are better. The community is more accessible.

But go in with clear eyes about what the enterprise background gives you and what it doesn't.

It gives you: a problem-definition discipline that most first-time builders lack, a risk instinct that produces more reliable products, and a stakeholder/adoption intuition that turns technically functional products into organisationally embedded ones.

It doesn't give you: frontend engineering fundamentals, LLM orchestration experience, or the operational knowledge that only comes from running production systems. Those have to be earned, and they take longer than any online course suggests.

The combination — enterprise expertise plus genuine building capability — is rare and valuable. Building that combination is hard and worth it.

Richard Leclézio


Enterprise Transformation & AI Delivery Leader
