A high-performing engineering org isn't an act of heroism. It's the steady application of a small number of disciplined practices. These twelve are the load-bearing ones at SolidProfessor — every card below is the detail behind one of them.
Technical · 5 Rules
Ship to trunk daily. Branches live less than a day. Long branches are technical debt.
Default to adopt, not build. For undifferentiated infrastructure, the burden of proof is on building. Document what you're choosing not to build.
Test behavior, not implementation. Cover 200 / 401 / 403 / 422 on every endpoint. System tests < 2s. Beyoncé Rule: if you liked it, put a CI test on it.
Engineer-mode is the default. Code is read more than written. Boy Scout Rule. AI accelerates programming, not engineering — decomposition discipline is the safety net.
Operate as a Trio. PM, Designer, Engineer co-discover. "What evidence would change your mind?" is the unlock question. Discovery is a weekly habit.
Systems Owned — services and capabilities the stream is responsible for.
Data Ownership — what data domains the stream owns and who consumes them.
Boundary Clarifications / Areas of Contention — where ownership overlaps and how the line is drawn.
Dependencies — what other streams this one depends on.
When to Consult
Cross-team work — knowing whose code you're touching.
Ownership disputes — boundary clarifications resolve "whose feature is this?"
Dependency questions — what your stream consumes vs. owns.
Onboarding to a new domain — use the /value-stream Claude skill to load all pages at once.
Technical · Investment
Build vs. Adopt
Every engineering hour carries an opportunity cost. Default to adopting proven solutions for infrastructure; build only where it creates meaningful differentiation.
Undifferentiated heavy lifting (AWS) — work necessary for the product to function but creating zero competitive advantage. SSR, CI/CD, auth flows, build pipelines — table-stakes infrastructure that doesn't separate us from competitors. Differentiation is the learning experience we build on top: library search, recommendations, lesson player, retention features.
Always ask: "Does building this ourselves give us a meaningful advantage over adopting a proven solution?" If no, every hour spent building is an hour traded away from work that moves the needle.
The Maintenance Multiplier
70%+
of total cost of ownership is maintenance, not initial build (Westarete, Appinventiv).
40%
of IT budget consumed by tech debt (Gartner).
35%
of large custom IT projects abandoned (McKinsey).
4–5×
revenue growth in top-quartile dev-velocity orgs vs peers (McKinsey).
The real cost of custom infrastructure is the years of maintenance, not the weeks of building: upstream upgrades, production debugging, onboarding drag. When we adopt a framework, thousands of OSS engineers find and fix edge cases before we do — leverage we can't replicate at our size.
Our Guiding Policy
Default to Adopt for undifferentiated infrastructure. The burden of proof is on building, not adopting.
Build only where it creates meaningful differentiation — the learning experience or a domain problem nothing existing solves well.
Evaluate with a total-cost-of-ownership lens. Ongoing maintenance + onboarding + opportunity cost — not just the build estimate.
Document what we're choosing not to build. Keep the opportunity cost visible. Make every decision intentional.
"Mature engineering organizations invest in rigorous selection, not invention."
— SP Build vs. Adopt
Technical · DORA Capability
Trunk-Based Development
Short-lived branches integrated daily into trunk; eliminates merge hell, enforces small changes, and is a prerequisite for continuous integration.
Branches live less than a day before being merged back to trunk.
Only fast-forward merges to trunk; squash short-lived branches before merging.
Rebase your branch against current trunk to keep it fresh — never merge trunk back into a feature branch.
No hotfixing. Fix forward; gate with a feature flag if other features shouldn't ship yet.
Mention the ticket number in every commit.
Understand git internals before relying on graphical tools (GitKraken, etc.).
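A minimal command sketch of that flow — the branch name, ticket number, and commit message are illustrative only, and the prefix conventions are covered below:
# create a short-lived, prefixed, ticketed branch off trunk
git switch -c feature/PROJ-123-portfolio-link
git commit -m "PROJ-123: add portfolio link to nav"
# keep it fresh against trunk (rebase — never merge trunk into the branch)
git fetch origin
git rebase origin/main
# squash to a single commit, then fast-forward trunk
git rebase -i origin/main
git switch main
git merge --ff-only feature/PROJ-123-portfolio-link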
Why TBD
TBD is a required practice for continuous integration. CI = TBD + a fast suite of automated tests run after each commit to trunk.
"…fewer than three active branches in a code repository; branches and forks having very short lifetimes (e.g., less than a day) before being merged." — Accelerate, Forsgren / Humble / Kim
Branch Naming Conventions
Always include the JIRA ticket number after the prefix.
Keep names short and clear — no extra description in the branch.
Make sure the prefix matches the actual work. A bug fix is not a feature.
Why It Matters
Categorized prefixes feed delivery analytics — they make it visible what proportion of capacity is going to new work vs. unplanned work vs. routine maintenance, and reveal where the system is bottlenecked.
Technical
Semantic Versioning
MAJOR.MINOR.PATCH — platforms version on a calendar/sprint cadence; published packages version by change type.
MAJOR
Bumped on breaking changes to the public API. 2.0.1 → 3.0.0
MINOR
Bumped on backwards-compatible features. 2.0.1 → 2.1.0
PATCH
Bumped on bug fixes. 2.0.1 → 2.0.2
Released via npm run release:patch | release:minor | release:major from the package repo (see auth README → Publishing).
Why It Matters
A version number is a contract — but the contract differs by audience. Platforms ship to end users on a fixed cadence, so the calendar rule lets anyone read a version and know when it shipped. Packages ship to other apps that depend on them, so MAJOR/MINOR/PATCH must signal the kind of change (breaking, feature, fix) so consumers can upgrade safely.
Technical · LaunchDarkly
Feature Flagging
A disciplined lifecycle (Create → Develop → Test → Rollout → Cleanup) so flags decouple deploy from release without rotting in the codebase.
01 · CREATE
Descriptive name. camelCase key. Enable the Client-side SDK if the flag is used on the FE.
02 · DEVELOP
Wire it in
Implement on FE (Vue/Nuxt) and/or BE (Laravel) with safe fallbacks.
03 · TEST
Local → Preview → Prod
Validate locally and in the preview environment, then in prod after merge. Ensure toggling has immediate effect.
04 · ROLLOUT
Canary first
Internal users → small % of prod. Be ready to flip off if errors spike.
05 · CLEANUP · this is the one that gets skipped
Delete the flag once it's permanent
Remove every code reference, delete the flag in LaunchDarkly, code-review the removal, deploy & monitor. Flags left behind become silent tech debt.
Naming Conventions (LaunchDarkly)
Name can be anything human-readable (e.g. "Show Portfolio Link").
Key must be a valid JS property name in camelCase — e.g. showPortfolioLink NOT show-portfolio-link. This lets it be used with dot notation in Vue components and templated as {{ showPortfolioLink }}.
Skill assessment flags: name Skill Assessment: {Skill}, key skill-assessment-{hyphenated-skill-tag}, type Kill Switch, tag skill-assessment. The key cannot be changed after creation — verify before saving.
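Tying the naming convention and the "safe fallbacks" step together, a minimal Laravel-side sketch — FeatureFlags is a hypothetical wrapper around the LaunchDarkly SDK, not our actual helper, and the view and route are illustrative:
// The last argument is the safe fallback: if LaunchDarkly is unreachable or the
// flag has been deleted, the code behaves as if the flag is off.
$showPortfolioLink = FeatureFlags::enabled('showPortfolioLink', $user, false);

return view('profile.show', [
    'showPortfolioLink' => $showPortfolioLink, // the Vue side reads the same camelCase key
]);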
Database Migrations
All schema changes live in version control next to the application code they belong to.
Use a tool that records which changes ran in which environments and the result (Laravel migrations).
Migrations are for schema only. No data seeding, no command execution inside a migration.
Required data seeding → a Seeder class registered in DatabaseSeeder.
One-time production data changes → a One-Time Operation class run as part of the release.
Always design for rollback so a failed change can be reverted.
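A minimal sketch of what "schema only, designed for rollback" looks like in a Laravel migration — table and column names are illustrative:
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up(): void
    {
        Schema::table('users', function (Blueprint $table) {
            // schema change only — no data seeding, no command execution
            $table->string('role')->nullable();
        });
    }

    public function down(): void
    {
        Schema::table('users', function (Blueprint $table) {
            // the exact reverse of up(), so a failed release can be reverted
            $table->dropColumn('role');
        });
    }
};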
Laravel One-Time Operations
For one-off production tasks (data backfills, cache priming) the laravel-one-time-operations package automatically runs each operation exactly once and tracks completion.
# 1. Generate
php artisan make:one-time-operation UpdateUserRoles
# 2. Implement (app/OneTimeOperations/…)
class UpdateUserRoles extends OneTimeOperation {
    public function handle() {
        User::whereNull('role')->update(['role' => 'user']);
    }
}
# 3. Run (this fires automatically as part of deploy)
php artisan one-time-operations:run
Best Practices for OTOs
Commit the operation file — it's the audit trail.
Test locally and in the preview environment before letting prod run it.
Name them descriptively — the class name is the deploy log.
Keep the file after it has run — the package tracks completion, and the file stays as the permanent audit record.
Trunk-based development — work integrates to main at minimum daily.
Every change has automated tests before merge.
Work is tested with all other work automatically on merge.
When the build is red, all feature work stops until it's green.
New work does not break delivered work.
Prerequisites for Continuous Delivery
Your work must have test coverage.
Shared understanding that we will never push untested changes.
Use BDD to define tests, TDD to implement them.
When something breaks the pipeline, harden the gate so it can't break the same way again.
"The key is to automate absolutely everything and run the process so often that integration errors are found quickly. As a result everyone is more prepared to change things when they need to, because they know that if they do cause an integration error, it's easy to find and fix."
— Martin Fowler
The Beyoncé Rule: "If you liked it, then you shoulda put a test on it." — SWE@Google.
Run faster tests first. Unit before component, system, functional. — Continuous Integration.
Write a test for every defect so it can never resurface.
One assert per test. Faster diagnosis when something fails.
When an integration test fails, add the unit test that would have caught it cheaper. — The DevOps Handbook.
Visualize coverage. Fail the validation suite if it drops below threshold.
Developers own testing. Testers exist for exploratory, usability, and acceptance work — not as the safety net.
Anatomy of a Good Test
Robust, not brittle
Doesn't fail on unrelated production changes.
Stable & unchanging
Once written, only changes when requirements do.
Public APIs only
Invokes the system the way a user would.
State > interactions
Tests the result, not how the system got there.
Clear & immediate
Failure reason is obvious without spelunking.
Concise & complete
All info needed, nothing distracting.
Behavior, not methods
One test per behavior, not one per method.
No logic in tests
No conditionals, loops, or operators.
Strive for Unchanging Tests
When production code changes, tests should react like this:
Pure refactor — existing tests unchanged; no new tests.
New feature — existing tests unchanged; new tests added.
Bug fix — existing tests unchanged; add the test that would have caught it.
Behavior change — existing tests updated; new tests maybe.
Only the last row touches existing tests. If you find yourself rewriting tests during a refactor, the tests were too tied to implementation.
DAMP, not DRY
In production code, DRY (Don't Repeat Yourself) wins. In tests, DAMP — Descriptive And Meaningful Phrases — wins. A little duplication is fine if it makes the test self-contained and obvious.
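A sketch of the difference in Pest-style syntax (adjust if the suite is PHPUnit; the model, route, field names, and helpers are hypothetical). The DAMP version repeats a little setup, but the behavior is obvious without opening any helpers:
// DRY taken too far — the reader must open two helpers to learn what is asserted
it('updates the profile', function () {
    [$user, $payload] = $this->makeProfileFixture();
    $this->assertProfileUpdated($user, $payload);
});

// DAMP — self-contained, state-focused, one assert
it('lets a user update their own display name', function () {
    $user = User::factory()->create(['display_name' => 'Old Name']);

    $this->actingAs($user)->patchJson('/api/profile', ['display_name' => 'New Name']);

    expect($user->fresh()->display_name)->toBe('New Name');
});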
You hit a workaround or shortcut introduced under deadline pressure.
You see unaddressed issues from prior code, migrations, or deprecated patterns.
Code violates team standards, lacks tests, or is hard to maintain.
You spot duplicated logic / copy-paste blocks that should be centralized.
A tech decision is slowing delivery or complicating testing.
You postponed a refactor or test-coverage task to unblock a feature.
Why It Matters
Tech debt left invisible becomes a silent quality killer that compounds. Pod ownership + PM visibility + sprint advocacy is what turns "we should clean this up someday" into "we cleaned it up last sprint."
Technical · AI / LLM
AI Coding Standards
AI is a first-draft generator, not an author. Context is the multiplier; humans own architecture and learning.
"If you cannot explain why every line exists to a teammate, you don't understand it well enough to ship it."
— SP AI Coding Standards
Three Guiding Policies
AI is a First-Draft Generator, Not an Author. Treat output as a starting point requiring human refinement.
Context is the Multiplier. Output quality is a direct function of input context — references to existing patterns, explicit constraints, scope boundaries, relevant docs.
Human Review Owns Architecture and Learning. AI helps with syntax and boilerplate. Humans own architectural fit, pattern consistency, and knowledge transfer.
Every Prompt Must Include
Tool / Context
Who the AI is and what it knows.
Constraints
DO/DO NOT rules — SOLID, patterns, dependencies.
Task
The specific deliverable.
Custom Context
Relevant existing files and docs (use the Memory Bank: docs/ai-context/).
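As an illustration only — the file paths and constraints below are hypothetical, apart from the Memory Bank directory — a prompt with all four parts might read:
Tool / Context: You are working in our Laravel API. Read docs/ai-context/ and the existing service classes before answering.
Constraints: DO follow the existing service/repository pattern. DO NOT add new dependencies or touch the database schema.
Task: Add a method that returns a customer's open invoices, plus a feature test covering 200 and 403.
Custom Context: The current invoice controller and its tests, for conventions to match.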
Review Checklist for AI-Generated Code
Necessity. Is every line necessary? Could this be simpler?
Patterns. Does it match our established patterns?
Readability. Would a teammate understand it without explanation?
Efficiency. Is this reasonably efficient, or just "working"?
Test scope. Are tests focused, or bloated and redundant?
Test Standards for AI Output
No test file over 300 lines.
No redundant test cases. No vague descriptions like "should work correctly".
Specify exact scenarios upfront — don't let AI decide coverage.
Delete excess when AI over-generates (it will).
Commit / PR Standards
AI-assisted commit messages are fine if accurate; generic messages are unacceptable regardless of source.
PR description: the "what" can be AI-assisted; the "why" and non-obvious decisions must be human-written.
If a PR is >50% AI-generated, note it in the description.
Keep PRs focused. AI makes large changes easy to generate — that doesn't make them good.
Anti-Patterns to Avoid
Don't
Copy-paste without comprehension
Let AI decide test coverage
Skip human review on "simple" changes
Use AI to avoid learning
Over-rely on AI for architecture
Mental Model
Think of AI as a very fast junior engineer
It has read everything but knows nothing about your codebase
Your job: provide context, validate, learn
Outsource typing, never understanding
Technical · Taxonomy
Types of Work
Shared vocabulary for what we're working on. Names matter — they drive analytics, capacity planning, and prioritization.
User migrations, content migrations, customer conversions (SSO), localization.
Refactor vs. Rewrite
Refactoring
Restructuring internals without changing external behavior. Part of every implementation — it lives inside Feature, Enhancement, Bug, or Defect tickets pre-release. Post-release it's Maintenance or Tech Debt.
Rewrite
Redesigning a significant component. A planned effort, prioritized as a Feature (if it adds new capability) or Enhancement (if it improves an existing one).
02 / PROCESS
Process Practices
How work flows through the team: batch size, approval gates, and the path from PR to production.
Process · DORA Capability
Working in Small Batches
Break features into releasable increments shipped to trunk at least daily. Practice it like a skill — it gets easier.
Solid scope. Manageable cognitive load and reviewability.
Fair — 371–698 lines changed
Review thoroughness drops; reviewers start to skim instead of analyze.
Needs Focus — > 698 lines changed
> 17% change-failure rate, > 38h review time, > 8% rework. Decompose before starting.
90-minute rule: if a PR can't be reviewed, tested, and understood in under 90 minutes, it hasn't been decomposed enough. Reviewers can hold ~200–400 LOC in working memory — past that, bugs slip through. Source: LinearB 2026 Engineering Benchmarks (8.1M PRs, 4,800+ orgs) + DORA / Accelerate.
AI makes this more important, not less. AI-assisted coding accelerates the production of large, risky batches without decomposition discipline. Small PRs are the safety net for AI-assisted velocity.
Decomposition Techniques (before you start)
SPIDR Slicing
Split by Spike, Paths, Interface, Data, or Rules.
Hamburger Method
Ship the thinnest end-to-end slice across all layers first; layer on after.
Walking Skeleton
Get the full path working with stubs, then fill in the real implementation.
One Concern Rule
Each PR does exactly one thing. If you'd describe it with "and", split it.
Evolutionary Coding Methods (delivery mechanics)
Keystone Interfaces (Dark Launching)
Deploy code to production invisibly. Review metrics before exposing to users.
Branch by Abstraction
Replace frameworks or behaviors while continuing to deliver — no long-running branches.
Feature Flags
Temporary or permanent toggles for controlled rollout and A/B testing.
Stop Starting, Start Finishing
Constrain WIP. Pull a thing through to done before starting the next.
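A minimal sketch of Branch by Abstraction in PHP — the interface and class names are illustrative: introduce a seam, keep the old implementation as the default, build the new one behind it, and keep shipping to trunk the whole time.
interface SearchEngine {
    public function search(string $query): array;
}

class LegacySqlSearch implements SearchEngine {
    public function search(string $query): array {
        return []; // placeholder for the existing query, which stays unchanged
    }
}

class NewIndexedSearch implements SearchEngine {
    public function search(string $query): array {
        return []; // the replacement, built incrementally behind the seam
    }
}

// In a service provider: bind the seam to the current implementation first,
// then flip (or feature-flag) the binding once the new one is ready, and
// delete LegacySqlSearch afterwards.
$this->app->bind(SearchEngine::class, LegacySqlSearch::class);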
"Working in small batches is a skill; practice makes perfect. Start by thinking in small steps and embrace it as a challenge. With each practice session, it will become easier, and the benefits become clearer."
— SP — Working in Small Batches
"Nothing merges to Main until it's been validated in the Preview Environment."
— SP Change Approval Guidelines
The Four Steps
01
Open PR
Preview environment spins up automatically.
02
Stay With Your PR
The most important part. Don't move on. Shepherd it through.
03
QA & UAT in Preview
Reviewers validate in preview, not staging. Issues fixed in the same PR — no new tickets.
04
Approval → Merge → Deploy
Merge to main auto-deploys to staging, preprod, and production.
While Your PR Is Open
Do
One PR at a time
Notify reviewers when preview is ready
Be available to answer questions
Fix issues immediately when found
Push updates to the same PR (preview auto-updates)
Treat "waiting for QA & UAT" as active work
Fix forward — found an issue? Fix it in the same PR.
Don't
Open a second PR before the first merges
Merge without QA or UAT approval
Let a PR sit idle for hours
Use staging as a quality gate
File a separate ticket for in-PR defects
Why It Works
The old way merged first and reviewed after. Defects meant new tickets and context-switching, while the original engineer moved on. The preview-first way keeps the engineer in context for fast fixes, gives reviewers one thing at a time instead of a backlog, keeps main clean, and ships smaller batches more often.
"We're trading fake activity (tickets sitting idle on staging) for real speed (work actually shipping). The goal is finished work, not busy work."
Process · Review
Code Review Practices
Review through orthogonal lenses — each lens covers a dimension the others explicitly ignore. Time-boxed at 90 minutes per PR.
Pick 3–4 Orthogonal Lenses
Each lens has a focus (what to examine) and explicit exclusions (what to ignore). If two lenses would flag the same issue, sharpen the boundary or merge them. Prefer lenses that are non-obvious for the subject — the author can spot surface-level issues themselves.
Do
Surface critical correctness, security, and design issues.
Time-box at 90 minutes — if you can't, the PR isn't decomposed enough.
Ask clarifying questions before assuming bad intent.
Approve fast on Elite-tier PRs to keep flow.
Say "nothing notable" when there's nothing — no padding.
Don't
Bikeshed style — Pint and oxlint own formatting.
Hold up a PR for taste preferences.
Manufacture issues to look thorough.
Rubber-stamp large PRs you didn't actually read.
Block on changes that belong in a follow-up PR.
For Deep Reviews
Use the /deep-review Claude skill on architectural changes or risky refactors — it launches parallel agents, each assigned one lens with explicit exclusions, and synthesizes their findings into a deduplicated report.
"If you find nothing notable for your lens, say so — don't manufacture issues."
— SP /deep-review skill
Process · Backend
Platform Backend Standards
Backend PRs must include endpoint test coverage (200/401/403/422), one approval, passing tests, and system tests under 2 seconds.
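A hedged sketch of that status-code coverage for one hypothetical endpoint, in Pest-style syntax — the route, models, and owner relation are illustrative, not our actual schema:
it('returns 200 for the project owner', function () {
    $project = Project::factory()->create();
    $this->actingAs($project->owner)
        ->getJson("/api/projects/{$project->id}")
        ->assertStatus(200);
});

it('returns 401 when unauthenticated', function () {
    $this->getJson('/api/projects/1')->assertStatus(401);
});

it('returns 403 when the user does not own the project', function () {
    $project = Project::factory()->create();
    $this->actingAs(User::factory()->create())
        ->getJson("/api/projects/{$project->id}")
        ->assertStatus(403);
});

it('returns 422 when the payload is invalid', function () {
    $project = Project::factory()->create();
    $this->actingAs($project->owner)
        ->patchJson("/api/projects/{$project->id}", ['name' => ''])
        ->assertStatus(422);
});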
Boy Scout Rule — leave it cleaner than you found it.
Beyoncé Rule — "if you liked it, you should have put a CI test on it." Tests are first-class engineering artifacts.
"What if we could…?" — co-discover with PM & Designer; expand the option space, don't just deliver the one inside it.
"What evidence would change your mind?" — the unlock question for stalled debate.
Document what we're choosing not to build — keep opportunity cost visible.
AI accelerates programming, not engineering. Decomposition discipline is the safety net — not optional.
The 1-on-1 Question
Stop asking "what did you ship?" Start asking "what did you make easier to change?" Asked consistently, that question rewires what the team believes the job actually is.
What We Owe You in Return
Decision authority on how to build it, after genuine debate.
Time horizon that protects space for repaying debt and selection rigor.
Default to adopt for undifferentiated infrastructure — no pressure to invent.
Recognition that compounds — celebrate the refactor that paid back, the bug that didn't ship, the migration that didn't break.
The Trio — engineering belongs in the room from discovery, not handed a spec at delivery.
"The longer a piece of software exists, the more the engineering decisions matter relative to the initial programming decisions."
— Software Engineering at Google
Culture · Collaboration
The Product Trio
PM, Designer, and Engineer operate as one unit — co-discovering the problem, not handing specs across roles. You succeed or fail together.
PM — Value & Viability
Customers will use it; the business can sell, support, and profit from it. Brings strategic context.
Designer — Usability & Desirability
Users can figure it out and it solves a real need. Shapes how we learn from customers.
Engineer — Feasibility & Technical Insight
We can build it without crippling debt. Contributes possibilities the others don't know exist.
The overlap is the point. PM is in customer interviews. Designer understands technical constraints. Engineer engages with customer problems. Clean separations = three individuals, not a trio.
Decision Rights (when consensus fails)
What problem to solve
PM has final call — business value & strategy.
How to solve it (UX)
Designer has final call — user experience integrity.
How to build it (technical)
Engineer has final call — technical sustainability.
The unlock question
"What evidence would change your mind?" Then go get it.
Default to consensus through evidence. Final call is the tiebreaker, not the starting point. Escalate only when the problem space is wrong, not for solution disagreements.
What Good Looks Like
Good Trio
PM brings a customer quote to planning.
Designer sketches three options before the team picks one.
Engineer says "what if we tried X — faster, and we'd learn Y?"
All three were in the last customer interview.
OST updated weekly with real customer evidence.
Not a Trio
PM writes tickets and hands them to Engineer.
Designer receives requirements, produces mockups.
Engineer only speaks up about technical concerns, never customer needs.
Only one person talks to customers.
OST is a doc the PM maintains alone.
The Test
When was the last time we were all in the same customer conversation?
Can each of us name the top 3 customer opportunities we've discovered?
Have we tested any assumptions in the last two weeks?
If asked why we're building X, can we point to customer evidence?
Culture · For Leaders
Transformational Leadership
Leaders serve their teams through humility, respect, and trust — and build self-sufficient organizations that don't depend on them.
Always Be Deciding — find the right trade-off and iterate. Always Be Leaving — build a team that runs without you. Always Be Scaling — protect time, attention, energy.
Patterns to Adopt
Lose the ego. Be a Zen master, a catalyst, a teacher.
Remove roadblocks. Set clear goals. Be honest.
Track happiness. It's a leading indicator.
Antipatterns to Avoid
Don't
Hire pushovers
Ignore low performers
Ignore human issues
Be everyone's friend
Compromise the hiring bar
Treat your team like children
Why "Always Be Leaving"?
A SPOF (single point of failure) is vulnerable to attrition
A SPOF burns out
A team built around you collapses without you
Build a self-driving team instead
"The best leaders work to serve their team using the principles of humility, respect, and trust. Great managers worry about what things get done — and trust their team to figure out how."
— SP Transformational Leadership
Discovery is continuous, not a phase at the start of a project.
The Product Trio — PM, designer, senior engineer — shares responsibility for product decisions.
Half of product ideas don't deliver real value. Early engineer involvement prevents waste.
Engineers hear what others miss — the technical signal under a customer's pain.
Great solutions are customer-inspired and technology-enabled. You know the system best.
You reduce risk before we build by asking the "what if this fails?" questions.
You gain fulfillment — direct view of the problem and the impact your code has.
The Four Risks to Mitigate
Value
Will customers actually want this?
Usability
Will they be able to use it?
Feasibility
Can we build it with what we have?
Viability
Does it work for our business?
Why It Matters
In traditional product development, engineers were brought in late — usually to implement fixed features. That's risky and wasteful. Modern teams use continuous discovery where engineers, PMs, and designers work together early. Your involvement catches details others miss, proposes innovative-yet-practical approaches, challenges risky assumptions, and gives you autonomy, mastery, and purpose in your work.
Culture · Practitioner Toolkit
Discovery Toolkit
Five companion practices for engineers: customer interviews, an interview script, assumption mapping, rapid prototyping, and the opportunity solution tree.
The Interview Script
01 · Open warmly. "I'm not here to sell you anything or show you a new feature. I want to learn how you do your work."
02 · The Golden Question. "Can you walk me through the last time you tried to [specific task]?"
03 · Probe deeper. "What were you hoping to achieve?" "What was the most challenging part?" "What did you do to work around it?" "How did it make you feel?"
04 · Engineer's Ear. Listen for technical pain, technical assumptions, and recurring problems that hint at underlying flaws.
Assumption Mapping
An assumption is any belief about your customer, market, business, or technical solution that you haven't proven with real evidence. Plot them on Importance × Certainty:
Important & Unknown
Danger zone. Test these first.
Important & Known
Verify lightly; trust your evidence.
Unimportant & Unknown
Park it. Not worth testing yet.
Unimportant & Known
Ignore.
Your role as an engineer: bring the analytical mindset, ask disconfirming questions, surface feasibility risks (time, skills, tech).
Rapid Prototyping
Prototypes are the fastest, cheapest way to test an idea. Goal: learn as much as possible with the least effort. Always ask "what do we need to learn?"
Low-fi User — test a workflow with sketches/mockups. Key question: will users understand this?
High-fi User — test visual design with a polished simulation. Key question: do users think this solves their problem?
Feasibility — de-risk technical unknowns with quick code. Key question: can we build this?
Live-Data — test in the real world with real users. Key question: do users actually use this and drive our outcome?
Engineer's role: own Feasibility (small, throwaway code, prevents "discovery boondoggles") and Live-Data prototypes (ship to a small group, gather analytics, get the most reliable validation).
The Opportunity Solution Tree
A visual framework (Teresa Torres) connecting business outcomes to opportunities, solutions, and validated experiments. If an idea doesn't connect to the tree, it's a distraction.
1 · Outcome
One measurable goal. "Increase DAU by 15%," not "Launch the dashboard."
2 · Opportunities
Unmet customer needs surfaced through interviews — not pre-chosen solutions.
3 · Solutions
Multiple ideas per opportunity. Anyone on the team contributes; each must link back to an opportunity.
4 · Experiments
Small tests (prototype, spike, survey) targeting the riskiest assumption of each solution.
How to build it (team sport): define one measurable goal → run ongoing customer interviews → brainstorm multiple ideas per opportunity → pick the riskiest assumption per solution and design a quick test.
Your role: participate in every step, especially feasibility and experiment design.