
Engineering — How We Work

The Cheat Sheet · v1.0

TL;DR · The 10 Rules That Matter Most

If you only remember ten things…

A high-performing engineering org isn't an act of heroism. It's the steady application of a small number of disciplined practices. These ten are the load-bearing ones at SolidProfessor — every card below is just the detail behind one of these rules.

  1. Ship to trunk daily. Branches live less than a day. Long branches are technical debt.
  2. One PR at a time. Stay with it through QA & UAT — finished work beats busy work.
  3. Validate in Preview before merge. Main is always releasable; staging is not a quality gate.
  4. Branch name = <work-type>/<TICKET>. The prefix tells the system what kind of work it is.
  5. Test every endpoint. 200 / 401 / 403 / 422. System tests under 2 seconds.
  6. Feature-flag everything risky. camelCase keys. Clean up the flag after launch.
  7. AI is a first-draft generator, not an author. If you can't explain every line, you can't ship it.
  8. Tech debt is product-quality work. Pod-owned. Visible to PMs. Not a backlog trash bin.
  9. Test behavior, not implementation. Public APIs only. State, not interactions. DAMP, not DRY.
  10. Discovery is a weekly habit. Talk to customers. You hear what no one else does.
01 / TECHNICAL

Technical Practices

How code moves from local to production: branching, testing, deploying, and keeping the codebase healthy.

Technical · DORA Capability

Trunk-Based Development

Short-lived branches integrated daily into trunk; eliminates merge hell, enforces small changes, and is a prerequisite for continuous integration.

Confluence ↗
Core Rules
  • Branches live less than a day before being merged back to trunk.
  • Only fast-forward merges to trunk; squash short-lived branches before merging.
  • Rebase your branch against current trunk to keep it fresh — never merge trunk back into a feature branch.
  • No hotfixing. Fix forward; gate with a feature flag if other features shouldn't ship yet.
  • Mention the ticket number in every commit.
  • Understand git internals before relying on graphical tools (GitKraken, etc.).
Why TBD
TBD is a required practice for continuous integration. CI = TBD + a fast suite of automated tests run after each commit to trunk. "…fewer than three active branches in a code repository; branches and forks having very short lifetimes (e.g., less than a day) before being merged." — Accelerate, Forsgren / Humble / Kim
Technical

Branching Strategy

Standardized branch naming makes work types visible and surfaces bottlenecks in delivery data.

Confluence ↗
Naming Convention
<work-type>/<TICKET-NUMBER>
# examples
feature/SPPLT-1234        # new feature
enhancement/ASE-5678      # improving existing functionality
bugfix/ASE-123            # prod bug
defect/ASE-3456           # caught in QA, pre-release
techdebt/SPPLT-4567       # refactor / cleanup
maintenance/ASE-6789      # dep updates, routine ops
Guidelines
  • Always include the JIRA ticket number after the prefix.
  • Keep names short and clear — no extra description in the branch.
  • Make sure the prefix matches the actual work. A bug fix is not a feature.
Why It Matters
Categorized prefixes feed delivery analytics — they make it visible what proportion of capacity is going to new work vs. unplanned work vs. routine maintenance, and reveal where the system is bottlenecked.
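The convention is strict enough to check mechanically. A minimal sketch (the `isValidBranchName` helper and the prefix list are illustrative, not an existing tool in our pipeline):

```javascript
// Allowed work-type prefixes from the naming convention above.
const WORK_TYPES = ['feature', 'enhancement', 'bugfix', 'defect', 'techdebt', 'maintenance'];

// <work-type>/<TICKET-NUMBER>, e.g. feature/SPPLT-1234
const BRANCH_RE = new RegExp(`^(${WORK_TYPES.join('|')})/[A-Z]+-\\d+$`);

function isValidBranchName(name) {
  return BRANCH_RE.test(name);
}

console.log(isValidBranchName('feature/SPPLT-1234')); // true
console.log(isValidBranchName('feature/add-login'));  // false — no ticket number
```

A check like this could run as a pre-push hook or CI step so malformed names never reach the delivery data.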
Technical

Semantic Versioning

MAJOR.MINOR.PATCH — a shared rhythm tied to year, sprint, and in-sprint cadence.

Confluence ↗
MAJOR

Bumped at the start of each new year. 1.0.0 → 2.0.0

MINOR

Bumped at the end of each sprint for features and enhancements. 1.0.0 → 1.1.0

PATCH

Bumped within a sprint for bug fixes, defects, tech debt. 1.0.0 → 1.0.1

Why It Matters
A version number is a contract. By tying MAJOR/MINOR/PATCH to year/sprint/in-sprint, anyone glancing at a version can tell when a change shipped and what kind of change it was without reading the changelog.
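The cadence above can be written down as a tiny helper. A hedged sketch (`bumpVersion` and the event names are invented for illustration; release tooling actually owns the numbers):

```javascript
// Bump a MAJOR.MINOR.PATCH version per the cadence above:
// new year → MAJOR, end of sprint → MINOR, in-sprint fix → PATCH.
function bumpVersion(version, event) {
  const [major, minor, patch] = version.split('.').map(Number);
  switch (event) {
    case 'new-year':      return `${major + 1}.0.0`;
    case 'sprint-end':    return `${major}.${minor + 1}.0`;
    case 'in-sprint-fix': return `${major}.${minor}.${patch + 1}`;
    default: throw new Error(`Unknown event: ${event}`);
  }
}

console.log(bumpVersion('1.0.0', 'new-year'));      // '2.0.0'
console.log(bumpVersion('1.0.0', 'sprint-end'));    // '1.1.0'
console.log(bumpVersion('1.0.0', 'in-sprint-fix')); // '1.0.1'
```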
Technical · LaunchDarkly

Feature Flagging

A disciplined lifecycle (Create → Develop → Test → Rollout → Cleanup) so flags decouple deploy from release without rotting in the codebase.

Confluence ↗
The Five Stages
01 · CREATE
In LaunchDarkly
Descriptive name. camelCase key. Enable Client-side SDK if FE.
02 · DEVELOP
Wire it in
Implement on FE (Vue/Nuxt) and/or BE (Laravel) with safe fallbacks.
03 · TEST
All envs
Validate dev → staging → prod. Ensure toggling has immediate effect.
04 · ROLLOUT
Canary first
Internal users → small % of prod. Be ready to flip off if errors spike.
05 · CLEANUP  ·  this is the one that gets skipped
Delete the flag once it's permanent
Remove every code reference, delete the flag in LaunchDarkly, code-review the removal, deploy & monitor. Flags left behind become silent tech debt.
Naming Conventions (LaunchDarkly)
  • Name can be anything human-readable (e.g. "Show Portfolio Link").
  • Key must be a valid JS property name in camelCase — e.g. showPortfolioLink NOT show-portfolio-link. This lets it be used with dot notation in Vue components and templated as {{ showPortfolioLink }}.
  • Skill assessment flags: name Skill Assessment: {Skill}, key skill-assessment-{hyphenated-skill-tag}, type Kill Switch, tag skill-assessment. The key cannot be changed after creation — verify before saving.
Vue Consumption Patterns
Vue 3 — official SDK

Use the LaunchDarkly Vue SDK directly.

Vue 2 — custom plugin

In components: mapFeatureFlags(), featureFlags computed. Outside: $getFeatureFlag(key, fallback), $getFeatureFlagAsync().

// Inside a component
computed: {
  ...mapFeatureFlags('myFeature', 'myOtherFeature'),
  // with fallbacks
  ...mapFeatureFlags({ canUploadPhotos: false, minimumPhotos: 5 })
}

// In a Nuxt middleware / store
if (await this.$getFeatureFlagAsync('achievementsEnabled')) { /* … */ }
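The fallback semantics of `$getFeatureFlag(key, fallback)` boil down to "use the flag's value when the client has it, otherwise the caller's safe default." A self-contained sketch of that behavior (the `flags` object and `getFeatureFlag` here are illustrative stand-ins, not the plugin's actual internals):

```javascript
// Flags as delivered by the flag service for the current user (illustrative values).
const flags = { achievementsEnabled: true, minimumPhotos: 5 };

// Resolve a flag, falling back to the caller's default when the key is
// absent (e.g. the SDK hasn't initialized yet, or the flag was deleted).
function getFeatureFlag(key, fallback) {
  return Object.prototype.hasOwnProperty.call(flags, key) ? flags[key] : fallback;
}

console.log(getFeatureFlag('achievementsEnabled', false)); // true
console.log(getFeatureFlag('canUploadPhotos', false));     // false — falls back
```

This is why every call site should pass a fallback that fails safe: a missing flag should degrade to the current behavior, never to an error.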
Flags vs. Doppler Configurations
Use a Feature Flag when…
  • You want immediate on/off without redeploy
  • You're rolling out gradually or A/B testing
  • The value is per-user or per-segment
Use a Doppler Config when…
  • It's an environment-specific constant (URLs, IDs)
  • An app restart is acceptable to apply
  • It's an API key or secret
Technical · Laravel

Database Change Management

Treat schema like code: version-controlled, migration-driven, with one-time operations for production data changes.

Confluence ↗
Core Rules
  • All schema changes live in version control next to the application code they belong to.
  • Use a tool that records which changes ran in which environments and the result (Laravel migrations).
  • Migrations are for schema only. No data seeding, no command execution inside a migration.
  • Required data seeding → a Seeder class registered in DatabaseSeeder.
  • One-time production data changes → a One-Time Operation class run as part of the release.
  • Always design for rollback so a failed change can be reverted.
Laravel One-Time Operations

For one-off production tasks (data backfills, cache priming) the laravel-one-time-operations package automatically runs each operation exactly once and tracks completion.

# 1. Generate
php artisan make:one-time-operation UpdateUserRoles

# 2. Implement (app/OneTimeOperations/…)
class UpdateUserRoles extends OneTimeOperation
{
    public function process(): void
    {
        // Backfill: give legacy users the default role
        User::whereNull('role')->update(['role' => 'user']);
    }
}

# 3. Run (fires automatically as part of deploy)
php artisan one-time-operations:run
Best Practices for OTOs
  • Commit the operation file — it's the audit trail.
  • Test locally and in staging before letting prod run it.
  • Name them descriptively — the class name is the deploy log.
  • Delete the file once it has run successfully in every environment so the codebase stays clean.
Technical · DORA Capability

Continuous Integration

Frequent integration to main with automated tests; integration errors are found in minutes, not weeks.

Confluence ↗
How We Practice CI
  • Trunk-based development — work integrates to main at minimum daily.
  • Every change has automated tests before merge.
  • Work is tested with all other work automatically on merge.
  • When the build is red, all feature work stops until it's green.
  • New work does not break delivered work.
Prerequisites for Continuous Delivery
  • Your work must have test coverage.
  • Shared understanding that we will never push untested changes.
  • Use BDD to define tests, TDD to implement them.
  • When something breaks the pipeline, harden the gate so it can't break the same way again.
"The key is to automate absolutely everything and run the process so often that integration errors are found quickly. As a result everyone is more prepared to change things when they need to, because they know that if they do cause an integration error, it's easy to find and fix." — Martin Fowler
Technical · Quality

Continuous Testing & What Makes a "Good Test"

Quality is built in, not bolted on. Tests should be robust, behavioral, and unchanging until requirements change.

Confluence ↗
Continuous Testing Principles
  • The Beyoncé Rule: "If you liked it, then you shoulda put a test on it." — SWE@Google.
  • Run faster tests first. Unit before component, system, functional. — Continuous Integration.
  • Write a test for every defect so it can never resurface.
  • One assert per test. Faster diagnosis when something fails.
  • When an integration test fails, add the unit test that would have caught it cheaper. — The DevOps Handbook.
  • Visualize coverage. Fail the validation suite if it drops below threshold.
  • Developers own testing. Testers exist for exploratory, usability, and acceptance work — not as the safety net.
Anatomy of a Good Test
Robust, not brittle

Doesn't fail on unrelated production changes.

Stable & unchanging

Once written, only changes when requirements do.

Public APIs only

Invokes the system the way a user would.

State > interactions

Tests the result, not how the system got there.

Clear & immediate

Failure reason is obvious without spelunking.

Concise & complete

All info needed, nothing distracting.

Behavior, not methods

One test per behavior, not one per method.

No logic in tests

No conditionals, loops, or operators.

Strive for Unchanging Tests

When production code changes, tests should react like this:

Change Type       Existing Tests   New Tests?
Pure refactor     Unchanged        No
New feature       Unchanged        Yes
Bug fix           Unchanged        Add the test that would have caught it
Behavior change   Update           Maybe

Only the last row touches existing tests. If you find yourself rewriting tests during a refactor, the tests were too tied to implementation.

DAMP, not DRY
In production code, DRY (Don't Repeat Yourself) wins. In tests, DAMP — Descriptive And Meaningful Phrases — wins. A little duplication is fine if it makes the test self-contained and obvious.
Technical · Pod-Owned

Tech Debt Management

Tech debt is product-quality work — owned by the pod, visible to PMs, advocated for in every sprint.

Confluence ↗
Principles
  • Tech debt is owned by each pod and stream-aligned.
  • Creating a tech-debt ticket is fast, easy, and accessible.
  • Tech debt is visible to PMs and discussed in refinement.
  • Tech debt is not a backlog trash bin — it's curated with intent.
  • It becomes a Tech Initiative when it spans pods, requires architectural change, or exceeds 1–2 days per engineer.
Pod Workflow
01
DEFINE
As soon as you spot debt, create a Tech Debt ticket. Label with your pod + fe-tech-debt/be-tech-debt as needed.
02
REFINEMENT PREP
Engineers review their pod's TD board before sprint planning and advocate for the high-priority tickets.
03
SPRINT BACKLOG
Selected tickets get pulled into the sprint and treated like any other product work.
04
PROMOTE
If it's bigger than a pod can absorb → escalate to a Tech Initiative.
Tech Debt Ticket Checklist
  • Clear title — what and where.
  • Description & context — why the debt exists.
  • Impact — technical / product risk, performance, delivery speed.
  • Acceptance criteria for resolution.
  • Optional subtasks for complex work.
  • Pod label + optional fe-tech-debt / be-tech-debt.
When to Create a Tech Debt Ticket
  • You hit a workaround or shortcut introduced under deadline pressure.
  • You see unaddressed issues from prior code, migrations, or deprecated patterns.
  • Code violates team standards, lacks tests, or is hard to maintain.
  • You spot duplicated logic / copy-paste blocks that should be centralized.
  • A tech decision is slowing delivery or complicating testing.
  • You postponed a refactor or test-coverage task to unblock a feature.
Why It Matters
Tech debt left invisible becomes a silent quality killer that compounds. Pod ownership + PM visibility + sprint advocacy is what turns "we should clean this up someday" into "we cleaned it up last sprint."
Technical · AI / LLM

AI Coding Standards

AI is a first-draft generator, not an author. Context is the multiplier; humans own architecture and learning.

Confluence ↗
"If you cannot explain why every line exists to a teammate, you don't understand it well enough to ship it." — SP AI Coding Standards
Three Guiding Policies
  • AI is a First-Draft Generator, Not an Author. Treat output as a starting point requiring human refinement.
  • Context is the Multiplier. Output quality is a direct function of input context — references to existing patterns, explicit constraints, scope boundaries, relevant docs.
  • Human Review Owns Architecture and Learning. AI helps with syntax and boilerplate. Humans own architectural fit, pattern consistency, and knowledge transfer.
Every Prompt Must Include
Tool / Context

Who the AI is and what it knows.

Constraints

DO/DO NOT rules — SOLID, patterns, dependencies.

Task

The specific deliverable.

Custom Context

Relevant existing files and docs (use the Memory Bank: docs/ai-context/).
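A hedged example of what such a prompt might look like in practice (the project files and task details below are invented for illustration):

```
Tool / Context: You are assisting on a Laravel + Vue 2 codebase.
  Relevant patterns: docs/ai-context/ (Memory Bank)
Constraints:
  - DO follow the existing FormRequest validation pattern.
  - DO NOT add new dependencies.
Task: Add a 422 validation test for the profile-update endpoint.
Custom Context: attached — the endpoint's FormRequest class and
  the existing test file it should match.
```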

Review Checklist for AI-Generated Code
  • Necessity. Is every line necessary? Could this be simpler?
  • Patterns. Does it match our established patterns?
  • Readability. Would a teammate understand it without explanation?
  • Efficiency. Is this reasonably efficient, or just "working"?
  • Test scope. Are tests focused, or bloated and redundant?
Test Standards for AI Output
  • No test file over 300 lines.
  • No redundant test cases. No vague descriptions like "should work correctly".
  • Specify exact scenarios upfront — don't let AI decide coverage.
  • Delete excess when AI over-generates (it will).
Commit / PR Standards
  • AI-assisted commit messages are fine if accurate; generic messages are unacceptable regardless of source.
  • PR description: the what can be AI-assisted; the why and any non-obvious decisions must be human-written.
  • If a PR is >50% AI-generated, note it in the description.
  • Keep PRs focused. AI makes large changes easy to generate — that doesn't make them good.
Anti-Patterns to Avoid
Don't
  • Copy-paste without comprehension
  • Let AI decide test coverage
  • Skip human review on "simple" changes
  • Use AI to avoid learning
  • Over-rely on AI for architecture
Mental Model
  • Think of AI as a very fast junior engineer
  • It's read everything but knows nothing about your codebase
  • Your job: provide context, validate, learn
  • Outsource typing, never understanding
Technical · Taxonomy

Types of Work

Shared vocabulary for what we're working on. Names matter — they drive analytics, capacity planning, and prioritization.

Confluence ↗
Type          Definition
Feature       Distinct functionality that delivers user/stakeholder value — identified, planned, implemented, tested.
Enhancement   Adding capability or improving UX of something that already exists.
Bug           Flaw, error, or unexpected behavior found in production.
Defect        Same as a bug, but caught pre-release (during QA / UAT).
Tech Debt     Refactoring/optimization to improve maintainability or scalability.
Maintenance   Routine ops — package updates, framework upgrades, deprecation cleanup, monitoring.
Support       User migrations, content migrations, customer conversions (SSO), localization.
Refactor vs. Rewrite
Refactoring

Restructuring internals without changing external behavior. Part of every implementation — it lives inside Feature, Enhancement, Bug, or Defect tickets pre-release. Post-release it's Maintenance or Tech Debt.

Rewrite

Redesigning a significant component. A planned effort, prioritized as a Feature (if it adds new capability) or Enhancement (if it improves an existing one).

02 / PROCESS

Process Practices

How work flows through the team: batch size, approval gates, and the path from PR to production.

Process · DORA Capability

Working in Small Batches

Break features into releasable increments shipped to trunk at least daily. Practice it like a skill — it gets easier.

Confluence ↗
Daily Practice

Begin each day by asking yourself:

  • What code can I push to production today?
  • What's the smallest improvement I can make to this code?
  • Which part of this feature can I release today?
Evolutionary Coding Methods
Keystone Interfaces (Dark Launching)

Deploy code to production invisibly. Review metrics before exposing to users.

Branch by Abstraction

Replace frameworks or behaviors while continuing to deliver — no long-running branches.

Feature Flags

Temporary or permanent toggles for controlled rollout and A/B testing.

Stop Starting, Start Finishing

Constrain WIP. Pull a thing through to done before starting the next.
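Branch by Abstraction in miniature — a hedged JavaScript sketch (the reporting example and option name are invented): introduce a seam, route every caller through it, then swap implementations behind it with no long-running branch.

```javascript
// Old and new implementations live side by side on trunk.
function legacyRenderReport(data) {
  return `legacy:${data.join(',')}`;
}

function newRenderReport(data) {
  return `v2:${data.join('|')}`;
}

// The abstraction seam: every caller goes through here, so switching
// over is a one-flag change on trunk, not a long-lived branch merge.
function renderReport(data, { useNewRenderer = false } = {}) {
  return useNewRenderer ? newRenderReport(data) : legacyRenderReport(data);
}

console.log(renderReport([1, 2, 3]));                           // 'legacy:1,2,3'
console.log(renderReport([1, 2, 3], { useNewRenderer: true })); // 'v2:1|2|3'
```

Once the new path is proven in production, the seam and the legacy implementation are deleted in a small follow-up change.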

"Working in small batches is a skill; practice makes perfect. Start by thinking in small steps and embrace it as a challenge. With each practice session, it will become easier, and the benefits become clearer." — SP — Working in Small Batches
Process · Preview-First

Change Approval — Preview-First Workflow

QA & UAT happen in the preview environment before merge. Main is always releasable; staging is no longer the quality gate.

Confluence ↗
"Nothing merges to Main until it's been validated in the Preview Environment." — SP Change Approval Guidelines
The Four Steps
01
Open PR
Preview environment spins up automatically.
02
Stay With Your PR
The most important part. Don't move on. Shepherd it through.
03
QA & UAT in Preview
Reviewers validate in preview, not staging. Issues fixed in the same PR — no new tickets.
04
Approval → Merge → Deploy
Auto-deploys to staging and production.
While Your PR Is Open
Do
  • One PR at a time
  • Notify reviewers when preview is ready
  • Be available to answer questions
  • Fix issues immediately when found
  • Push updates to the same PR (preview auto-updates)
  • Treat "waiting for QA & UAT" as active work
  • Fix forward — found an issue? Fix it in the same PR.
Don't
  • Open a second PR before the first merges
  • Merge without QA or UAT approval
  • Let a PR sit idle for hours
  • Use staging as a quality gate
  • File a separate ticket for in-PR defects
Why It Works
The old way merged first and reviewed after. Defects meant new tickets and context-switching, while the original engineer moved on. The preview-first way keeps the engineer in context for fast fixes, gives reviewers one thing at a time instead of a backlog, keeps main clean, and ships smaller batches more often. "We're trading fake activity (tickets sitting idle on staging) for real speed (work actually shipping). The goal is finished work, not busy work."
Process · Release

Branch For Release

Release to production via cherry-picked release branches → preprod → prod, automated by CI/CD, gated by feature flags, communicated through Teams.

Confluence ↗
Release Flow (mostly automated)
01
Cherry-pick → release branch
From main into release/<ticket> (automated).
02
Deploy to preprod
Validate in a production-like environment.
03
PR to prod
Submit (automated). Validate work in prod.
04
Communicate & document
Teams channels + JIRA release. Release branch deleted (automated).
CLI Reference
# Cut a release branch from up-to-date prod
git checkout prod
git pull
git checkout -b release/<ticket-identifier>
git cherry-pick -m 1 <merge-commit-hash>
# …repeat the cherry-pick for each merge commit you need
git push -u origin release/<ticket-identifier>

# Promote to preprod
git checkout preprod
git merge release/<ticket-identifier>
git push
Recommended Practices
  • Squash before merging — keep history clean.
  • Make frequent, manageable changes to keep reviews simple and conflicts rare.
  • All work must meet testing requirements before submission.
  • Finish what you started before pulling new work.
  • Every release goes behind a feature flag or toggle to avoid redeployment risk.
  • Coordinate through Release Coordination + Product Releases Teams channels.
  • Track release versions in JIRA.
"There are no bad teams, only bad leaders." — Jocko Willink, Extreme Ownership
Process · Backend

Platform Backend Standards

Backend PRs must include endpoint test coverage (200/401/403/422), one approval, passing tests, and system tests under 2 seconds.

Confluence ↗
Endpoint Test Matrix (every new endpoint)
200 — Success

Happy path returns the expected payload.

401 — Unauthorized

No / invalid auth is rejected.

403 — Forbidden

Authenticated but lacks permission.

422 — Validation

Bad input is rejected with the expected errors.

Plus tests for any associated Actions, Jobs, Mailables, Notifications. How to run your tests must be in the PR description.

Bug Tickets
  • Either add a new test covering the bug, or fix an existing broken test.
Merge Requirements
  • 1 approval.
  • Passing tests.
  • System tests must complete in <2 seconds. If yours don't, refactor the logic or mock the slow interactions.
"If you don't test your work, someone will. It could be QA or it could be the end user (yikes!)." — SP Platform Backend Conventions
03 / CULTURE

Culture Practices

How we work together: how leaders show up, how engineers stay close to customers, and how discovery happens every week.

Culture · For Leaders

Transformational Leadership

Leaders serve their teams through humility, respect, and trust — and build self-sufficient organizations that don't depend on them.

Confluence ↗
The Five Dimensions (DORA)
Vision

Knows where the team and org are going — and where they want to be in five years.

Inspirational Communication

Says positive things about the team. Frames change as opportunity.

Intellectual Stimulation

Challenges old assumptions. Invites new ways of thinking.

Supportive Leadership

Considers personal feelings; accommodates needs and interests.

Personal Recognition

Commends above-average work. Acknowledges improvement.

The Three "Always"

Always Be Deciding — find the right trade-off and iterate. Always Be Leaving — build a team that runs without you. Always Be Scaling — protect time, attention, energy.

Patterns to Adopt
  • Lose the ego. Be a Zen master, a catalyst, a teacher.
  • Remove roadblocks. Set clear goals. Be honest.
  • Track happiness. It's a leading indicator.
Antipatterns to Avoid
Don't
  • Hire pushovers
  • Ignore low performers
  • Ignore human issues
  • Be everyone's friend
  • Compromise the hiring bar
  • Treat your team like children
Why "Always Be Leaving"?
  • A SPOF is vulnerable to attrition
  • A SPOF burns out
  • A team built around you collapses without you
  • Build a self-driving team instead
"The best leaders work to serve their team using the principles of humility, respect, and trust. Great managers worry about what things get done — and trust their team to figure out how." — SP Transformational Leadership
Culture · Discovery

Continuous Discovery

Engineers are essential partners in customer discovery, not late-stage implementers. Your technical lens prevents waste and shapes better products.

Confluence ↗
Core Beliefs
  • Discovery is continuous, not a phase at the start of a project.
  • The Product Trio — PM, designer, senior engineer — shares responsibility for product decisions.
  • Half of product ideas don't deliver real value. Early engineer involvement prevents waste.
  • Engineers hear what others miss — the technical signal under a customer's pain.
  • Great solutions are customer-inspired and technology-enabled. You know the system best.
  • You reduce risk before we build by asking the "what if this fails?" questions.
  • You gain fulfillment — direct view of the problem and the impact your code has.
The Four Risks to Mitigate
Value

Will customers actually want this?

Usability

Will they be able to use it?

Feasibility

Can we build it with what we have?

Viability

Does it work for our business?

Why It Matters
In traditional product development, engineers were brought in late — usually to implement fixed features. That's risky and wasteful. Modern teams use continuous discovery where engineers, PMs, and designers work together early. Your involvement catches details others miss, proposes innovative-yet-practical approaches, challenges risky assumptions, and gives you autonomy, mastery, and purpose in your work.
Culture · Practitioner Toolkit

Discovery Toolkit

Five companion practices for engineers: customer interviews, an interview script, assumption mapping, rapid prototyping, and the opportunity solution tree.

Continuous Discovery ↗
Customer Interviewing for Engineers

Interview to learn, not to sell or validate. Ask about the past — past behavior is more reliable than future predictions. Make it a weekly rhythm.

  • Ask about the past: "Can you walk me through the last time you did [task]?"
  • Bring your technical lens. Slow page load = DB bottleneck or architectural limit. You hear the technical signal under the pain.
  • Discover opportunities, don't validate ideas.
  • Shift from mercenary to missionary. Solve real problems, not just code tasks.
Confluence ↗
Interview Script — the practical flow
  01 · Open warmly. "I'm not here to sell you anything or show you a new feature. I want to learn how you do your work."
  02 · The Golden Question. "Can you walk me through the last time you tried to [specific task]?"
  03 · Probe deeper. "What were you hoping to achieve?" "What was the most challenging part?" "What did you do to work around it?" "How did it make you feel?"
  04 · Engineer's Ear. Listen for technical pain, technical assumptions, and recurring problems that hint at underlying flaws.
Avoid
  • Hypotheticals: "Would you use a feature that…?"
  • Leading questions: "Did you like the new design?"
  • Future guesses: "What features do you want?"
  • Premature solutioning
Use
  • "Tell me about the last time you struggled to…"
  • "Walk me through your process for…"
  • "What was the hardest part of that experience?"
  • You're a detective, not a salesperson
Confluence ↗
Assumption Mapping — find the riskiest unknown

An assumption is any belief about your customer, market, business, or technical solution that you haven't proven with real evidence. Plot them on Importance × Certainty:

Important & Unknown

Danger zone. Test these first.

Important & Known

Verify lightly; trust your evidence.

Unimportant & Unknown

Park it. Not worth testing yet.

Unimportant & Known

Ignore.

Your role as an engineer: bring the analytical mindset, ask disconfirming questions, surface feasibility risks (time, skills, tech).

Confluence ↗
Rapid Prototyping — learn the most with the least

Prototypes are the fastest, cheapest way to test an idea. Goal: learn as much as possible with the least effort. Always ask "what do we need to learn?"

Type           Purpose                                        Key Question
Low-fi User    Test workflow with sketches/mockups            Will users understand this?
High-fi User   Test visual design with a polished sim         Do users think this solves their problem?
Feasibility    De-risk technical unknowns with quick code     Can we build this?
Live-Data      Test in the real world with real users         Do users actually use this and drive our outcome?

Engineer's role: own Feasibility (small, throwaway code, prevents "discovery boondoggles") and Live-Data prototypes (ship to a small group, gather analytics, get the most reliable validation).

Confluence ↗
Opportunity Solution Tree — tie work to outcomes

A visual framework (Teresa Torres) connecting business outcomes to opportunities, solutions, and validated experiments. If an idea doesn't connect to the tree, it's a distraction.

1 · Outcome

One measurable goal. "Increase DAU by 15%," not "Launch the dashboard."

2 · Opportunities

Unmet customer needs surfaced through interviews — not pre-chosen solutions.

3 · Solutions

Multiple ideas per opportunity. Anyone on the team contributes; each must link back to an opportunity.

4 · Experiments

Small tests (prototype, spike, survey) targeting the riskiest assumption of each solution.

How to build it (team sport): define one measurable goal → run ongoing customer interviews → brainstorm multiple ideas per opportunity → pick the riskiest assumption per solution and design a quick test.

Your role: participate in every step, especially feasibility and experiment design.

Confluence ↗