There's a question every technology company will have to answer in the next few years: Are you using AI, or just talking about it?
At SolidProfessor, we've spent the last year actually answering it. Not with a press release or a pilot program, but with a deliberate, disciplined transformation in how we organize, how we lead, and how we build software. Behind every deployment, every workflow, and every line of code is a manufacturing engineer trying to get better at their job, an instructor trying to deliver a great learning experience, or a manager trying to develop their team. That's who this work is for. Building faster and more reliably isn't an internal goal. It's how we keep our promise to them.
The numbers are strong. But the numbers are a consequence of the work, not the cause of it. AI has accelerated what we've built. This is that story.
Great outcomes don't come from tools. They come from teams that are clear on what they're doing, why it matters, and who's responsible for what. That's where we started.
Structural clarity sounds simple. Getting there wasn't.
Our first pass at structuring the product organization surfaced something valuable: ambiguity. Teams weren't always clear on where their ownership ended and another team's began. Rather than paper over those problems, we treated them as feedback. We iterated on the model, worked through the boundary questions, and built a cleaner, more intentional structure as a result.
That structure, version 2.0, launches in Q2. Starting April 1st, our product organization will be structured as focused, mission-driven teams, each with a clear purpose and unambiguous ownership.
| Stream | Mission |
|---|---|
| Library | Be the single source of truth for what learning content exists and help users find the right content. |
| Learning Experience | Deliver a world-class learning experience that helps users build skills through engaging, measurable learning moments. |
| Organized Learning | Empower educational institutions and organizations to deliver structured, measurable learning programs. |
| Platform Access | Own the "front door" experience for how users discover, access, and manage their relationship with SolidProfessor. |
| VAR | Enable resellers to efficiently manage their customers and grow their business on our platform. |
| Live-Training | Deliver exceptional live learning experiences that connect learners with expert instructors in real time. |
| Performance Management | Empower managers to develop their teams by identifying skill gaps, tracking growth, and driving targeted up-skilling. |
| Talent Marketplace | Eliminate the gap between a professional's true capability and how hiring managers perceive them. |
Two platform teams provide the shared capabilities that underpin all of them:
| Platform Team | Mission |
|---|---|
| Core Platform | Provide reliable, self-service infrastructure and shared services that enable product streams to build and ship faster. |
| Skills Validation | Provide reliable, scalable skills intelligence that powers career development, performance management, and learning validation across the platform. |
Every team knows exactly what they own, who they serve, and what success looks like. That clarity is not just an organizational nicety. It's a performance multiplier. When engineers aren't debating ownership, they're shipping.
Underpinning all of it is a deliberate investment in shared foundations. Rather than each team building the same capabilities independently, we're creating common building blocks for things like authentication, permissions, and interface components that any team can use. Less duplicated effort. Faster delivery. A more consistent experience for our users.
Structural clarity matters. So does leadership that listens.
Kevin Pimentel, Favour Anifowose, Will Gooch, and Elizabeth Thomas built a repeatable feedback process rooted in direct, honest conversation. Starting from our Q4 2025 engagement data, the team used AI to generate targeted surveys and structured questions, then took those directly into 1:1 conversations with every engineer. AI helped us go deeper, surfacing themes, organizing insights, and identifying patterns across eleven individual conversations that would have taken far longer to synthesize manually. What we heard shaped what we did. Concrete actions followed: sprint recognition rituals and clearer team boundaries.
Here is how those actions moved the needle on our engineering team's engagement scores.
| Engagement Dimension | Q4 2025 Percentile | Q1 2026 Percentile | Change |
|---|---|---|---|
| Coworkers committed to quality | 28th | 59th | +31 pts |
| Opportunity to do what I do best | 46th | 81st | +35 pts |
| Satisfaction with company | 63rd | 80th | +17 pts |
| My opinions count | 64th | 78th | +14 pts |
| Manager cares about me | 68th | 82nd | +14 pts |
| Know what's expected of me | 35th | 53rd | +18 pts |
| Overall Engagement | 68th | 73rd | +5 pts |
Our overall engagement now sits at the 73rd percentile globally. The question that moved the most, "Opportunity to do what I do best," jumped 35 points. When people have clarity on their role and the tools to do their best work, it shows.
The same model is now being brought to the product and design side of the organization, led by Taylor Anderson, who joins us on the Platform Leadership team. The playbook is proven. The next chapter is beginning.
The other cultural shift was in how we think about shipping. We moved to a model of small, frequent changes delivered daily rather than large batches every two weeks. That's not a technical decision. It's a discipline. It requires trust, coordination, and a shared commitment to keeping quality high even when moving fast.
That discipline shows up in the numbers.
| Month | Deployments | Month-over-Month |
|---|---|---|
| Baseline (prior year) | ~2 | — |
| November 2025 | 60 | — |
| December 2025 | 76 | +27% |
| January 2026 | 82 | +8% |
| February 2026 | 109 | +33% |
We went from shipping once every two weeks to 109 deployments in a single month, in less than a year. That's not a sprint. That's a new operating rhythm.
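The month-over-month figures in the table above reduce to a simple calculation. As an illustrative sketch (the month labels and deployment counts are taken directly from the table), a few lines of Python reproduce them:

```python
# Monthly deployment counts from the table above.
deployments = {"Nov 2025": 60, "Dec 2025": 76, "Jan 2026": 82, "Feb 2026": 109}

months = list(deployments)
for prev, cur in zip(months, months[1:]):
    # Month-over-month percentage change, relative to the prior month.
    change = (deployments[cur] - deployments[prev]) / deployments[prev] * 100
    print(f"{cur}: {deployments[cur]} deploys ({change:+.0f}% MoM)")
```

Running this prints +27%, +8%, and +33%, matching the table.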
Once the foundation was in place (the clarity, the culture, the discipline), AI gave us the ability to do more with it. Not as a shortcut, but as a force multiplier for a team that had already done the hard work of getting organized.
Alex Vakhovski, a mid-level engineer with a deep passion for AI, built a library of 8 production-ready AI workflows that encode our platform's conventions, patterns, and best practices directly into the development environment.
| Workflow | Purpose |
|---|---|
| domain-explorer | Navigate platform domains and understand the codebase |
| feature-scaffold | Generate complete API features following our conventions |
| module-scaffold | Create new modules following project structure |
| write-tests | Generate comprehensive tests for existing code |
| deep-review | Perform architectural code reviews |
| health-check | Verify code quality before commits |
| migration | Database migration utilities |
| bug-fix | Systematic bug diagnosis and resolution |
New engineers can scaffold complete API features, run architectural code reviews, and generate comprehensive tests from day one, following the same conventions our most senior engineers use. Institutional knowledge is no longer locked in anyone's head. It's encoded in repeatable, accessible workflows.
We track a metric called Time to First Commit as a quiet indicator of onboarding health.
| Engineer | Start Date | First Commit | Working Days |
|---|---|---|---|
| Engineers #1–3 | Dec 9, 2025 | Jan 2–9, 2026 | 14–19 days |
| Engineer #4 | Jan 27, 2026 | Feb 2, 2026 | 4 days |
| Industry benchmark (high-performing) | — | — | 3–5 days |
Our most recent hire made their first production contribution in 4 working days. As this library matures, we expect onboarding speed to keep improving.
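Time to First Commit is just a business-day count between two dates. Here is a minimal sketch, assuming weekends are the only non-working days and that the start date itself isn't counted (the exact convention our tracking uses may differ):

```python
from datetime import date, timedelta

def working_days_to_first_commit(start: date, first_commit: date) -> int:
    """Count weekdays after the start date, up to and including the first commit."""
    days = 0
    d = start
    while d < first_commit:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday–Friday only
            days += 1
    return days

# Engineer #4: started Jan 27, 2026; first commit Feb 2, 2026.
print(working_days_to_first_commit(date(2026, 1, 27), date(2026, 2, 2)))  # → 4
```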
As we transition from our legacy platform into 2.0, our Customer Success and Sales teams need to be able to support customers confidently in a space that looks and behaves very differently from what they're used to. Elizabeth built a comprehensive internal help center (50+ articles across 16 categories), created entirely with AI assistance. The goal was straightforward: don't drop our internal teams into unfamiliar territory without support. The help center ensures that as we move forward, our Customer Success Managers and Sales team have everything they need to show up confidently for our customers.
| Scope | Detail |
|---|---|
| Total articles | 50+ |
| Categories | 16 |
| Topics covered | Account management, roles/permissions, SAML, LTI 1.3, compliance, school management, troubleshooting |
| Primary audience | Customer Success, Sales |
One of the most consequential things AI helped us do this quarter wasn't about shipping code. It was about understanding the quality of our assessments.
Using AI to analyze our question content through the lens of Bloom's Taxonomy, we were able to classify, at scale, whether our assessments were testing lower-order thinking (recall and recognition) or higher-order thinking (application, analysis, and judgment). The findings are directly shaping the data foundation required for our upcoming SP Careers and SP Develops initiatives.
| Assessment Tier | Lower-Order | Higher-Order |
|---|---|---|
| Current state | High | Low |
| SP Target | 40–45% | 55–60% |
| Industry benchmark (CompTIA, PMP, CISSP) | 35–45% | 55–65% |
Employers need to know more than whether someone can recall a fact. They need to know if someone can solve a problem. Engagement metrics and course completions tell you what someone did. Verified, domain-level competency data tells you what someone can do. AI helped us identify that gap, and that work will shape how we think about skills and competency data as our platform evolves.
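To make the classification step concrete: once each question carries a Bloom's level (in our case, the labeling itself came from an AI model), tallying lower- versus higher-order is straightforward. A hypothetical sketch; the level names and grouping below follow the standard taxonomy, not any internal schema:

```python
from collections import Counter

# Standard Bloom's levels, grouped the way the analysis describes them.
# In practice each label would come from an AI classification pass per question.
LOWER_ORDER = {"remember", "understand"}
HIGHER_ORDER = {"apply", "analyze", "evaluate", "create"}

def summarize(labels):
    """Return the percentage of questions in each tier."""
    counts = Counter("lower" if l in LOWER_ORDER else "higher" for l in labels)
    total = sum(counts.values())
    return {tier: round(100 * n / total, 1) for tier, n in counts.items()}

# Toy example with five hand-labeled questions:
print(summarize(["remember", "apply", "analyze", "remember", "evaluate"]))
# → {'lower': 40.0, 'higher': 60.0}
```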
Our infrastructure work, led by John Piccirillo, migrated our frontend preview deployments from Vercel to Cloudflare Pages, consolidating fragmented infrastructure and unlocking capabilities that weren't possible before.
| Dimension | Before | After |
|---|---|---|
| Preview environments | Siloed per app | Linked across all platforms |
| Infrastructure | Vercel + AWS + Route 53 | Consolidated: DNS, CDN, deploys in one place |
| Cost per additional app surface | $250/month | Eliminated |
| Baseline monthly savings | — | $300+/month |
| Annual savings per app surface | — | ~$3,500+ |
To put that in concrete terms: as we continue to build out our platform, we can identify at least 6 distinct application surfaces. At $250/month each, that's $1,500/month or $18,000/year in fees alone, before factoring in the baseline savings from consolidating our infrastructure.
| Cost Scenario | Monthly | Annual |
|---|---|---|
| Vercel (6 surfaces at $250/mo each) | $1,500 | $18,000 |
| Cloudflare | Eliminated | Eliminated |
| Baseline infrastructure savings | $300+ | $3,600+ |
| Total estimated savings | $1,800+ | $21,600+ |
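The savings math in the table is simple enough to verify directly. The figures below come from the tables above; the 6-surface count is the estimate given earlier:

```python
surfaces = 6                # distinct application surfaces we expect to run
vercel_per_surface = 250    # $/month per additional app surface on Vercel
baseline_savings = 300      # $/month from consolidating DNS, CDN, and deploys

monthly_savings = surfaces * vercel_per_surface + baseline_savings
annual_savings = monthly_savings * 12

print(monthly_savings, annual_savings)  # → 1800 21600
```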
When intentional transformation meets the right accelerants, it shows up in the metrics.
| Metric | Our Performance | DORA Elite Benchmark | Status |
|---|---|---|---|
| Deployment Frequency | 19.2 PRs/week | On-demand | High |
| PR Cycle Time | 11h 24m | < 1 day | Elite |
| Change Failure Rate | 0.6% (2 of 327) | < 5% | Elite |
| Mean Time to Restore | Process improving | < 1 hour | In Progress |
Our change failure rate of 0.6% means that out of 327 production deployments, only 2 caused an issue. We are also actively improving our incident response process so that Mean Time to Restore becomes a metric we can report with full confidence, a priority for the next reporting period.
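The change failure rate figure follows directly from the deployment counts:

```python
def change_failure_rate(failed: int, total: int) -> float:
    """Percentage of production deployments that caused an issue."""
    return 100.0 * failed / total

# 2 issue-causing deployments out of 327 total.
print(f"{change_failure_rate(2, 327):.1f}%")  # → 0.6%
```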
We want to be honest about where we are: this transformation is still unfolding. The habits are forming. The culture is building. Some of our metrics, like Mean Time to Restore, aren't yet reliable because our process for capturing them needs work. When a critical incident occurs, we aren't always creating the tracking ticket right away, which means the clock doesn't start when it should. Fixing that process is a priority so we can measure and improve our incident response accurately.
But the trajectory is clear. In less than a year, we went from biweekly deployments to over 100 a month. We went from institutional knowledge locked in senior engineers' heads to AI-encoded workflows any engineer can use. We went from a new hire taking three weeks to make their first commit to four days. And we did it while improving our team's engagement and reducing our failure rate.
The engine is built, the team is aligned, and the right leadership is already in the room.