
Design your data model before writing a single line of code

Finance professionals are building their own tools with AI for the first time. The prototypes are exciting. Then they break. Here's what I learned hitting that wall — and the concrete steps others used to avoid it.



A few weeks ago I came back from a four-month career break. The thing that struck me most was AI. Specifically, how much had become possible for people like me — finance professionals who think in systems and logic but have never written production code — to actually build the tools we'd always wanted and never had the technical means to make.


One of the first things I wanted to build was a cash flow app with proper budget tracking. Four months earlier, building a cash flow app from scratch without a developer would have meant a long Jira backlog and a conversation about budget. Coming back, it meant opening Claude Code, describing what I needed in plain language, and watching working software appear. The first day felt like a superpower. Features appeared faster than I could test them. The dashboard looked exactly how I'd imagined it.


Then, around feature eight or nine, the numbers stopped making sense.


Not dramatically — no error messages, no crashes. Just quietly wrong totals. A cash balance that didn't match. A budget variance that seemed off. The kind of subtle incorrectness that in finance you learn to distrust immediately, because it usually means something structural is broken underneath.


I had to stop, dig in, and find out what had happened. What I found taught me the most important thing I know about building with AI.


What was actually broken


When I traced the problems back, they weren't random. They were three specific structural failures — each invisible during the build, each obvious in retrospect.


  • "Type" and "category" meant different things in different tabs. I'd used the words interchangeably when describing features to Claude Code. In the transactions view, "type" referred to whether something was income or expense. In the budget tab, "type" referred to the category grouping. The AI built exactly what I described each time — but across tabs, the same field name pointed to different concepts. Filters broke. Rollups pulled the wrong data. The system was internally consistent in each feature and structurally incoherent as a whole.


  • Income and incoming transfers looked identical to the model. A salary payment and a transfer from a savings account both arrived as positive numbers. The data model had no field to distinguish them — both were "incoming," full stop. So monthly income figures included transfers between my own accounts. The cash flow summary was overstating income, silently, every month. This is the kind of error that's easy to miss when you're building fast and only notice when a number feels wrong at the end of the month.


  • Absolute values were summed, ignoring sign. In some calculations, expenses were stored as positive numbers and subtracted at the display layer. In others, they were stored as negatives. When I added a feature that summed across both, it was adding absolute values regardless of sign — so a €500 expense and a €500 income both contributed €500 to the total. The balance looked healthy. It wasn't. This one took the longest to find because the logic looked correct on the surface.
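The second and third failures are easy to reproduce in a few lines. Here's a minimal sketch — the field names (`kind`, `is_transfer`) and the sample rows are illustrative, not my app's actual schema:

```python
# Hypothetical transactions. Convention problems: the rent row stores the
# expense as negative, the groceries row as positive, and the savings
# transfer arrives looking exactly like income.
transactions = [
    {"amount": 3000.0, "kind": "income",  "is_transfer": False},  # salary
    {"amount": 1500.0, "kind": "income",  "is_transfer": True},   # from savings
    {"amount": -500.0, "kind": "expense", "is_transfer": False},  # rent (negative)
    {"amount": 500.0,  "kind": "expense", "is_transfer": False},  # groceries (positive)
]

# Bug: summing absolute values — every row, expense or income, inflates the total.
broken_balance = sum(abs(t["amount"]) for t in transactions)  # 5500.0

# Bug: internal transfers counted as income.
broken_income = sum(t["amount"] for t in transactions if t["kind"] == "income")  # 4500.0

# With one convention enforced everywhere (expenses negative, transfers excluded):
def signed(t):
    amount = abs(t["amount"])
    return -amount if t["kind"] == "expense" else amount

true_income = sum(signed(t) for t in transactions
                  if t["kind"] == "income" and not t["is_transfer"])  # 3000.0
true_balance = sum(signed(t) for t in transactions
                   if not t["is_transfer"])  # 2000.0
```

The broken versions return 5500 and 4500 without a single error message — which is exactly why these bugs survive until a month-end number feels wrong.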


This is what makes financial data uniquely unforgiving as a domain. A category has to mean the same thing in a transaction, a budget line, and a monthly report. A sign convention — are expenses positive or negative? — has to be decided once and enforced everywhere. Skip any of these decisions and the AI will make them for you, inconsistently, across dozens of features.


What the builders who got it right did first


After my rebuild, I went looking for people who'd successfully built financial or data tools with AI coding assistants without hitting the same wall. The pattern was consistent: they treated the data model as a design document to be written before the first prompt, not a structure to emerge from the build.


The concrete steps — before you build anything


  1. Map your entities and relationships on paper first

    Name every core object in your system and draw how they connect. Don't open the AI tool yet. This is a finance thinking exercise, not a technical one — and finance professionals are better equipped to do it than most developers because you already know what the data needs to answer.
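To make the exercise concrete: a paper entity map for a cash flow app, once you do eventually open the tool, might translate into something like the sketch below. The entity and field names are my illustration, not a prescribed schema — the point is that every box and arrow was decided before any feature existed:

```python
from dataclasses import dataclass
from datetime import date

# Each class is one box on the paper diagram; each *_id field is one arrow.

@dataclass
class Account:
    account_id: str
    name: str

@dataclass
class Category:
    category_id: str
    name: str  # "category" means this, and only this, in every view

@dataclass
class Transaction:
    transaction_id: str
    account_id: str    # -> Account
    category_id: str   # -> Category
    amount: float      # expenses negative, income positive — decided once
    booked_on: date
    is_transfer: bool  # distinguishes transfers from real income/expense

@dataclass
class BudgetLine:
    category_id: str   # the same Category a Transaction points to
    period: str        # always a calendar month, e.g. "2024-06"
    amount: float
```

Notice that the map bakes in the fixes for all three failures above: one meaning for "category", an explicit `is_transfer` flag, and a single sign convention on `amount`.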


  2. Standardize your field names before the first prompt

    Decide what things are called and write it down. "Category" must mean the same thing everywhere — in a transaction, a budget line, and a monthly report. "Period" must be consistently monthly or quarterly, never both. The AI will follow your naming if you give it one; it will invent its own if you don't.
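One way to write it down — and this is a sketch of how you *might* pin the vocabulary, not a quote from my app — is a small glossary you keep next to your prompts, with the closed vocabularies expressed as enums:

```python
from enum import Enum

# One place that fixes what "type" means. A closed list: nothing else is valid.
class TransactionType(Enum):
    INCOME = "income"
    EXPENSE = "expense"
    TRANSFER = "transfer"  # never counted as income or expense

# The written glossary — short enough to paste into every prompt.
GLOSSARY = {
    "type": "TransactionType: income, expense, or transfer — nothing else",
    "category": "spending group (e.g. groceries); identical meaning in "
                "transactions, budget lines, and reports",
    "period": "always a calendar month, formatted YYYY-MM",
}
```

Three entries took me further than thirty features built on vocabulary the AI had to guess.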


  3. Stress-test the schema against future features before you start

    List the three features you'll most likely want in three months — forecasting, variance by cost center, multi-currency support, whatever — and check whether your entity map supports them without a structural rewrite. If it doesn't, fix the model now. Fixing it later means rebuilding.
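The stress test can even be mechanical. A minimal sketch, assuming a draft `Transaction` schema and three planned features (both are illustrative — substitute your own):

```python
from dataclasses import dataclass, fields
from datetime import date

@dataclass
class Transaction:
    amount: float
    currency: str      # only here because multi-currency is planned
    category_id: str
    cost_center: str   # only here because cost-center variance is planned
    booked_on: date

# Fields each planned feature will need from the schema.
PLANNED_FEATURES = {
    "multi-currency support": {"currency"},
    "variance by cost center": {"cost_center", "category_id"},
    "monthly forecasting": {"amount", "booked_on"},
}

present = {f.name for f in fields(Transaction)}
gaps = {feature: needed - present
        for feature, needed in PLANNED_FEATURES.items()
        if needed - present}
# An empty dict means the schema survives all three features without a rewrite.
```

If `gaps` comes back non-empty, you've just found — in thirty seconds, on paper — the structural rewrite you would otherwise have discovered at feature nine.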


  4. Hand the AI a blueprint, not a blank canvas

    Paste your entity map and field definitions into the first prompt. Describe your system's structure before describing any feature. One Claude Code handbook captures the principle well: you're not asking the AI to guess at your architecture — you're handing it a blueprint. The more detailed the blueprint, the less time you spend correcting wrong turns later.
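In practice the blueprint is just text at the top of your first prompt. A sketch of what mine evolved into — the wording and entity list are illustrative, not a quoted Claude Code prompt:

```python
# The blueprint: entity map plus conventions, stated before any feature.
BLUEPRINT = """\
System structure (follow these names exactly; do not invent fields):

Entities:
- Account(account_id, name)
- Category(category_id, name)
- Transaction(transaction_id, account_id, category_id, amount,
  booked_on, is_transfer)
- BudgetLine(category_id, period, amount)

Conventions:
- amount: expenses are negative, income positive — everywhere.
- is_transfer=True rows are never counted as income or expense.
- period is always a calendar month, formatted YYYY-MM.
"""

# Every feature request starts from the same structure.
first_prompt = BLUEPRINT + (
    "\nFeature 1: a transactions view with filters by category and month."
)
```

The blueprint stays constant; only the feature line changes. That repetition is the point — the AI re-reads your structure before every change instead of inferring it from the last feature it built.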


  5. Use Plan Mode before any feature that touches multiple entities

    Plan Mode (Shift+Tab) forces the AI into a read-only phase where it outlines what it's about to do before touching any files. For features that cross entity boundaries — anything involving joins, rollups, or period calculations — always run Plan Mode first and review the plan before approving execution. One wrong assumption at this stage cascades across every report built on top of it.


The wider point is this: AI removes the coding barrier. It doesn't remove the judgment barrier. Finance professionals who are winning at this are the ones who bring the domain judgment first — who know what questions the data will need to answer in six months, and build the structure to support those questions before the first feature is written.


Design the model. Then build.




©2022 by Startup Finances.
