Place2Page
Why We Wrote a Risk Map Before Adding More Tests

Engineering

Instead of chasing raw test counts, Place2Page used a risk map to choose which API paths needed DB-backed integration coverage first.

It is easy to talk about test quality in the abstract.

Add more tests. Raise coverage. Catch regressions earlier.

None of that is wrong.

But once a product has auth, billing, webhooks, streaming, and external API calls, raw test count stops being a good planning tool.

That is why Place2Page wrote a risk map before adding another broad wave of tests.

Coverage numbers hide the question that matters

The useful question was not "how many tests do we have?"

It was:

"Which failures would actually hurt the product most if the current test suite missed them?"

That shifts the conversation.

A mocked happy-path test and a DB-backed idempotency test do not reduce the same kind of risk. They may both count as one test, but they protect very different things.
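To make that contrast concrete, here is a minimal sketch of a DB-backed idempotency check. Everything in it (the `billing_events` table, the `consume_credit` helper, SQLite as the store) is a hypothetical stand-in, not Place2Page's actual schema. The point is that the double-charge protection lives in a real uniqueness constraint, which a mocked test would never exercise.

```python
import sqlite3

def consume_credit(conn, request_id, amount):
    """Idempotent billing consume (hypothetical schema): a real
    UNIQUE constraint on request_id rejects the duplicate insert."""
    try:
        with conn:
            conn.execute(
                "INSERT INTO billing_events (request_id, amount) VALUES (?, ?)",
                (request_id, amount),
            )
        return "charged"
    except sqlite3.IntegrityError:
        return "duplicate"

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE billing_events ("
    " request_id TEXT PRIMARY KEY,"
    " amount INTEGER NOT NULL)"
)

# The DB-backed assertion: a second call with the same request_id
# must hit the constraint, not charge again.
assert consume_credit(conn, "req-1", 5) == "charged"
assert consume_credit(conn, "req-1", 5) == "duplicate"
total = conn.execute("SELECT SUM(amount) FROM billing_events").fetchone()[0]
assert total == 5  # no double-charge
```

A mocked version of this test could only assert that some charge function was called once; it could not tell you whether the constraint actually exists in the schema.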

The map started with request flows, not frameworks

The first helpful move was to describe the risky areas in product terms:

  • auth and authorization
  • billing consume and idempotency
  • webhook deduplication
  • external API resilience
  • abuse and rate limiting

That framing mattered because it tied tests to failure modes, not to modules.

Instead of saying "we need more tests around this router," the team could say "this path can double-charge, leak access, or fail under concurrent writes."

That is much better language for prioritization.

P0 paths got DB-backed integration tests first

Once the map was explicit, the first investment was obvious.

The highest-risk flows were the ones where mocks could hide real problems:

  • owner versus admin access decisions
  • duplicate billing request_id handling
  • webhook receipt uniqueness
  • rate-limit windows that depend on real inserts and timestamps

Those are exactly the places where the database is part of the behavior, not just a storage detail.

So the strategy became "move the dangerous paths onto DB-backed integration tests first."

That is more useful than adding another layer of mocked coverage around the same assumptions.
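The rate-limit case shows why. A sliding window computed from real stored rows behaves differently from an in-memory counter a mock could fake. The sketch below assumes a hypothetical `request_events` table and an SQLite store; none of these names come from the actual codebase.

```python
import sqlite3
import time

WINDOW_SECONDS = 60
LIMIT = 3

def allow_request(conn, user_id, now=None):
    """Sliding-window rate limit read from real stored events
    (hypothetical schema), not from an in-memory counter."""
    now = time.time() if now is None else now
    with conn:
        count = conn.execute(
            "SELECT COUNT(*) FROM request_events WHERE user_id = ? AND ts > ?",
            (user_id, now - WINDOW_SECONDS),
        ).fetchone()[0]
        if count >= LIMIT:
            return False
        conn.execute(
            "INSERT INTO request_events (user_id, ts) VALUES (?, ?)",
            (user_id, now),
        )
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE request_events (user_id TEXT, ts REAL)")

t0 = 1_000_000.0
# Three requests pass; the fourth inside the same window is refused.
assert all(allow_request(conn, "u1", now=t0 + i) for i in range(3))
assert allow_request(conn, "u1", now=t0 + 3) is False
# Once old rows age out of the window, requests pass again.
assert allow_request(conn, "u1", now=t0 + WINDOW_SECONDS + 1) is True
```

A test against this behavior has to insert real rows with real timestamps, which is exactly why these paths were moved onto DB-backed fixtures first.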

The PR chain made the work easier to review

One detail I like in the testing risk map is the PR tracker.

Each stage had:

  • a narrow scope
  • a status
  • one sentence describing the risk it reduced

That sounds administrative, but it is actually a strong engineering tool.

It keeps the work reviewable. It prevents the testing effort from turning into one vague "quality improvement" branch. And it forces clarity about why each batch of tests exists.

This approach also protected production code simplicity

There is a trap in test-improvement work.

Sometimes the test plan becomes so ambitious that production code starts bending around the harness.

The better result here was more conservative:

  • keep production behavior explicit
  • use real DB-backed fixtures where risk justifies it
  • mock unstable external systems, not core internal logic

That is a healthier way to grow a test suite in a product that is still moving quickly.

What this changed culturally

A risk map does more than organize tests.

It changes how the team talks about confidence.

Instead of saying "the suite is pretty broad," you can say:

  • auth and project access now have a DB-backed matrix
  • billing deduplication is covered under real uniqueness constraints
  • webhook duplication races are part of the harness
  • rate-limit window behavior is checked against actual stored events

That is a sharper and more defensible statement.
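For the access matrix, "DB-backed" means each (user, expected-outcome) pair is asserted against real membership rows rather than stubbed role lookups. A minimal sketch, with an assumed `memberships` table and an assumed rule (owners delete, admins do not) that may not match the real policy:

```python
import sqlite3

def can_delete_project(conn, user_id, project_id):
    """Access decision read from real rows (hypothetical schema and rule):
    only owners may delete a project."""
    row = conn.execute(
        "SELECT role FROM memberships WHERE user_id = ? AND project_id = ?",
        (user_id, project_id),
    ).fetchone()
    return row is not None and row[0] == "owner"

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memberships (user_id TEXT, project_id TEXT, role TEXT)"
)
conn.executemany(
    "INSERT INTO memberships VALUES (?, ?, ?)",
    [("alice", "p1", "owner"), ("bob", "p1", "admin")],
)

# The matrix: owner, admin, and outsider, each checked against real rows.
matrix = [("alice", True), ("bob", False), ("mallory", False)]
for user, expected in matrix:
    assert can_delete_project(conn, user, "p1") is expected
```

A table of cases like this is easy to extend as new roles appear, and each row fails with a specific, nameable gap in the access policy.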

Closing

More tests are not automatically better. More risk reduction is better.

For Place2Page, writing the map first made it easier to spend effort where the product was actually fragile.

That is the part worth keeping, even if the specific tools or routes change later.