SolidProfessor Certified Engineer

A Foundry Investigation
Kevin Pimentel | Tech Lead

What this document is

This is a Foundry investigation into professional certification for mechanical engineers — whether the opportunity is real, whether we can build it, and whether the bet is worth making. It is not a product spec or a roadmap. It answers three questions:

  1. Does the problem exist? — Employer interviews and data analysis say yes. Engineering hiring has no trusted, standardized way to prove CAD competency.
  2. Can we solve it? — Technical feasibility analysis of proctored exams, automated CAD grading, and our existing infrastructure says yes, with known risks.
  3. Is it worth the bet? — That's the decision this document is meant to inform.

Everything below is research, analysis, and a first-bet proposal. It is scoped to what The Foundry can validate — not what a product team would ship.

1. The Opportunity

There is no Cisco CCNA/CCIE equivalent for mechanical engineering. Networking has CCNA. Project management has PMP. Cloud has AWS certifications. Mechanical engineering — an industry facing an annual talent shortfall of 1.7 million skilled workers — has no trusted, standardized way to prove someone can actually engineer. That gap is the opportunity.
1.7M
Annual shortfall of skilled workers. 2.9M openings vs. 1.25M qualified graduates — every single year.
2.1M
Manufacturing jobs could go unfilled by 2030. A potential $1 trillion hit to the US economy.
77%
Of manufacturers expect ongoing extreme difficulty attracting and retaining workers. Talent sourcing is 36% harder than in 2018.

The problem isn't compensation — Ford's CEO can't fill 5,000 mechanic jobs at $120K/year. The problem is verification. Employers can't tell who is actually qualified, and qualified engineers can't prove what they know because their best work is locked behind NDAs. Existing credentials (CSWA, CSWP) test whether candidates arrive at correct geometry in a single vendor's tool — not whether their modeling approach is production-ready or transferable across platforms. The market lacks a credible signal.

Why this matters for SolidProfessor. SP sits at the center of this ecosystem — training hundreds of thousands of engineers on the tools they use every day. But today, that training produces course completions and quiz scores. It does not produce a credential that employers trust as proof of capability. The question this investigation answers: should SP build that credential?

2. What Employers Told Us

6 interviews. 6 companies. One consistent message: existing credentials don't influence hiring decisions. We spoke with engineering managers, talent acquisition leaders, and CAD administrators at Micron, Clorox, Karman Space & Defense, Wagstaff, Knapheide, and Aristocrat Gaming. Every conversation pointed to the same gaps -- and the same opportunities.
Raised by 3 of 6 employers

The IP Paradox

Experienced engineers can't show their best work. Everything they've built is proprietary. This creates a credibility vacuum -- the people with the most skill have the least proof.

"Employees can't show anything they've worked on, it's all proprietary."
-- Julie Snyder, Talent Acquisition, Micron
"Concerned that candidates can't safely share real project examples due to proprietary IP."
-- Gautam Shah, Engineering Lead, Karman Space & Defense
The Opportunity: Build a controlled assessment environment where engineers prove production-level modeling capability without revealing proprietary designs. Proctored, standardized, verifiable — a portfolio equivalent that IP restrictions can't block.
Raised by 6 of 6 employers

The Credibility Gap

Certifications signal motivation, not capability. Every employer acknowledged them as "a plus" -- but not one said they trust certifications as proof that someone can do the job.

"SolidWorks certification? That's a plus. It tells me they've done the work."
-- Julie Snyder, Talent Acquisition, Micron
"We need to make sure portfolios show real, hands-on experience. No fake or fluff projects."
-- Dave Stockdale, CAD/PDM Admin, Wagstaff
"I'm 20% faster now because of SolidProfessor."
-- Brent McClearen, Principal ME, Aristocrat Gaming
But SP has no way to capture, verify, or present this claim.
The Opportunity: A proctored, performance-based exam that produces measurable outcomes — not completion badges — would give employers the capability signal they're asking for. The credential that closes this gap becomes the hiring standard.
Critical for enterprise adoption

The Standards Requirement

Organizations with training budgets and compliance requirements need external validation. They won't adopt a certification that exists only within SP's ecosystem -- no matter how good the content is.

"We follow ASE and NIMS. We'd need to see how your content maps directly to those standards."
-- Marty Ohnemus, Sr. PLM/CAD Admin, Knapheide
Note: Likely referencing ASME (engineering design standards) rather than ASE (automotive). Knapheide is a truck body manufacturer — their shop floor runs ASME Y14.5 tolerancing and NIMS machining standards. ASE and ASME are commonly conflated in vocational contexts. The strategy is identical either way.
"We can't push anything experimental -- we need something proven, used successfully by other schools."
-- Marty Ohnemus, Sr. PLM/CAD Admin, Knapheide
The Opportunity: A certification built on ISO/IEC 17024 accreditation, governed by an independent advisory board, mapped to ASME/NIMS standards, and psychometrically validated would meet the bar these organizations require. It would shift from "something experimental" to an institutional standard.

Two Audiences, Two Needs

The interviews revealed a clear split in how different personas evaluate candidates:

Persona What They Need
Recruiters
Julie (Micron), Stella (Clorox)
Screening signal -- quick filter for qualified candidates
Engineering Managers
Dave (Wagstaff), Gautam (Karman), Brent (Aristocrat)
Capability proof -- can this person actually engineer?

Julie Snyder explicitly said she would pass structured assessment results to hiring managers for deeper evaluation.

Research observation: These two personas likely require different product surfaces. Recruiters need an ATS-visible, pass/fail screening signal. Engineering managers need granular capability data — domain-level scores, CAD analysis results, performance task breakdowns. A product team should investigate whether a single credential experience serves both, or whether different touchpoints are needed.

The Champion Dependency Risk

Today, SP adoption at a company depends on finding one internal champion willing to advocate up the chain. This is fragile.

"I'll be running this by Pat later this week... please send materials I can share."
-- Dave Stockdale, CAD/PDM Admin, Wagstaff
"It's hard to get some people to use the platform."
-- Brent McClearen, Principal ME, Aristocrat Gaming

If SolidProfessor has independent institutional credibility (ISO accreditation, standards body endorsement, employer advisory board), it doesn't need a champion at every company. The credential speaks for itself -- the way CCNA does for networking or PMP does for project management.

The portfolio gap is SolidProfessor's biggest market opening. Julie Snyder (Micron): "I could count on two hands the number of portfolios I've seen in four years." Luke MacFarlan (Clorox): "25% of our roles need design or CAD proficiency -- people who've used the tools hands-on." Portfolios are rare, desperately wanted, and blocked by IP restrictions. A SolidProfessor proctored CAD exam creates the portfolio-equivalent that the market is asking for.

3. Question Bank Analysis

Both question banks were designed for learning reinforcement, not competency certification. We analyzed a 346-question sample of SP's general bank (12,183 questions total) and the complete Skills Assessment bank (1,501 questions, from solid_career_skill_assessment_test_questions) using Bloom's Taxonomy. The Skills Assessment bank is marginally better -- but the gap to certification-grade remains massive.

General Bank (12,183 questions, 346 sampled)

Course quizzes. Designed to verify students watched the video.

L1 - Remember
77.5%
L2 - Understand
11.3%
L3 - Apply
7.5%
L4 - Analyze
2.9%
L5 - Evaluate
0.9%
L6 - Create
0%

Lower-order: 88.7%  |  Higher-order: 11.3%

Skills Assessment Bank (1,501 questions, full analysis)

Skills Assessment product. More applied -- but still recall-dominated.

L1 - Remember
69.4%
L2 - Understand
11.3%
L3 - Apply
14.3%
L4 - Analyze
2.4%
L5 - Evaluate
2.5%
L6 - Create
0%

Lower-order: 80.7%  |  Higher-order: 19.3%  |  +8.0 points vs. general bank

Side-by-Side: Where the Assessment Bank Improved

Bloom's Level General Bank Assessment Bank Delta What It Means
L1 - Remember 77.5% 69.4% -8.0% Less pure recall, but still dominant
L2 - Understand 11.3% 11.3% -- No change
L3 - Apply 7.5% 14.3% +6.8% More "which tool for this task?" -- the biggest gain
L4 - Analyze 2.9% 2.4% -0.5% No meaningful change
L5 - Evaluate 0.9% 2.5% +1.7% Slightly more "best practice" questions
L6 - Create 0% 0% -- Still zero. Nobody asked to design anything.

The improvement is real but insufficient. The assessment bank shifted ~8% from recall to application -- mostly "which tool/option would you use for this scenario?" That's better than "press the _____ key" but still tests software operation, not engineering judgment.
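For reference, the lower-/higher-order splits reported throughout this section reduce to a simple aggregation over Bloom's-tagged items. A minimal sketch of that computation, assuming a bank where every item carries a Bloom's tag (which, per Appendix F, neither existing bank has today; the types here are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

enum Bloom { Remember = 1, Understand, Apply, Analyze, Evaluate, Create }

record Item(string Id, Bloom Level);

static class BloomReport
{
    // Percentage of items at each Bloom's level, plus the lower-/higher-order split.
    public static void Print(IReadOnlyList<Item> bank)
    {
        double Pct(Func<Item, bool> p) => 100.0 * bank.Count(p) / bank.Count;

        foreach (Bloom level in Enum.GetValues<Bloom>())
            Console.WriteLine($"L{(int)level} {level}: {Pct(i => i.Level == level):F1}%");

        // L1-L2 = lower-order; L3-L6 = higher-order (the split used in the tables above).
        Console.WriteLine($"Lower-order:  {Pct(i => i.Level <= Bloom.Understand):F1}%");
        Console.WriteLine($"Higher-order: {Pct(i => i.Level >= Bloom.Apply):F1}%");
    }
}
```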

Assessment Bank: Strongest Questions

The best questions in the assessment bank hint at what SolidProfessor needs more of.

L5 - Evaluate (2.5%)
"What is the most cost effective way to check mass produced parts?"
L5 - Evaluate
"According to best practices, when should non-structural fillets be added?"
L3 - Apply (14.3%)
"Which of the identified axes would make an ideal datum for the bolt-hole tolerances?"

Assessment Bank: Still Dominant (69.4%)

The vast majority still test recall of facts, T/F statements, and UI identification.

L1 - Remember (69.4%)
"What does GD&T stand for?"
"The minimum for a CNC mill is 5 axis." (T/F)
"A finer mesh will decrease the solution time." (T/F)

SP Current (Both Banks) vs. What Credible Certifications Require

Dimension General Bank Assessment Bank Credible Certs (CCNA, AWS, PMP, CSWP)
Lower-order (L1+L2) 88.7% 80.7% Minority of exam -- baseline screening only
Higher-order (L3-L6) 11.3% 19.3% Majority of exam -- scenario-based, applied
Evaluate + Create (L5+L6) 0.9% 2.5% Significant portion -- design, troubleshoot, justify
Performance-based tasks None None Lab simulations / hands-on scenarios
Proctoring None None Required (Pearson VUE, PSI, etc.)
What it proves "Watched the content" "Knows the tools" "Can perform the job"

Proof of Concept: Skills Analyzer (Voltron) — 73 Questions

The Skills Analyzer is a fixed-set, hard-coded assessment that SP customers already use today. Engineering managers have given it overwhelmingly positive feedback — and a Bloom's analysis reveals why. It's the best assessment content SP has, even though it still has vast room for improvement.

Basics Tier (24 questions)

L1 Remember
33%
L2 Understand
21%
L3 Apply
33%
L4 Analyze
8%
L5 Evaluate
4%

Higher-order: 46% — closest to certification-grade

Advanced Tier (25 questions)

L1 Remember
60%
L2 Understand
24%
L3 Apply
8%
L4 Analyze
4%
L5 Evaluate
4%

Higher-order: 16% — weakest tier, mostly recall

Expert Tier (25 questions)

L1 Remember
44%
L2 Understand
28%
L3 Apply
24%
L5 Evaluate
4%

Higher-order: 28% — strong L3 from applied scenarios

Side-by-Side: Three Assessment Products Compared

Metric General Bank (12,183 questions) Skills Assessment (1,501 questions) Skills Analyzer (73 questions) Certification Target
Lower-order (L1+L2) 88.7% 80.7% 71.2% Minority of exam
Higher-order (L3-L6) 11.3% 19.3% 28.8% Majority of exam
L3 Apply 7.5% 14.3% 21.9% Significant portion
L5 Evaluate 0.9% 2.5% 4.1% Significant portion
Customer reception Expected (quizzes) Neutral Overwhelmingly positive
We don't need to prove the approach works — we already have proof of concept. Engineering managers rave about the Skills Analyzer, and it's not even that good yet. The Basics tier — where questions are built around visual scenarios and applied reasoning — hits 46% higher-order thinking, approaching certification territory. The Advanced tier, which relies on fill-in-the-blank and T/F questions, drops to 16%. The pattern is clear: when questions test judgment instead of vocabulary, customers respond. A certification program takes what's already working and applies it with psychometric rigor, balanced Bloom's distribution, and proctored delivery. The gap from 28.8% to certification-grade (50%+) is a content quality problem, not a product concept problem — and it's a solvable one.
Existing data becomes the on-ramp, not the exam. All three banks serve different roles: the general bank (12,183 questions) as study prep, the Skills Assessment bank (1,501 questions) as diagnostics, and the Skills Analyzer (73 questions) as the closest thing we have to applied assessment. The Skills Analyzer proves the concept — customers want harder, more applied tests. The infrastructure is the foundation. The content layer is what must change.

4. Why the Content Layer Must Change

Repackaging existing data will not close the credibility gap. Technical certificates, course completions, video views, and streak data measure engagement with our platform. They do not measure engineering competency. Presenting them as if they do will reinforce the exact perception employers already have: "a plus, but not proof."

What Existing Data Actually Measures

Data Point What It Tells Us What It Does NOT Tell Us
Course completions User watched the videos and clicked through Whether they understood or can apply the content
Technical certificates User passed quizzes that are 81-89% recall-level (L1+L2) questions Whether they can model a production-ready part
Video views / time on platform User consumed content (or left a tab open) Whether learning transferred to job performance
Skills Assessment scores User can identify tools and recall procedures (69% L1). Randomized per attempt but without Bloom's or difficulty controls. Whether they can make engineering judgments under constraints
Streaks / engagement metrics User is consistently active on the platform Anything about capability

A code-level review of our two existing assessment systems confirms neither was designed with certification-grade controls (Bloom's tagging, psychometric tracking, proctored delivery, blueprint-driven selection). See Appendix F for the full infrastructure audit.

The Temptation

It is natural to reach for what we already have. We have 12,183 questions, 1,501 assessment items, and a working assessment engine. The instinct is: "Let's package this into something employers can see."

But that approach has a ceiling. The data was collected to measure learning engagement, not professional competency. Repackaging it — even with better UI, dashboards, or badges — does not change what it fundamentally measures.

This is the difference between a Fitbit and a medical exam. Both use data. One tracks activity. The other diagnoses capability. Employers are asking for the medical exam.

What Customers Are Actually Asking For

Across 6 employer interviews, the request was consistent and specific:

  • "Can this person actually model?" — Not "did they complete a course"
  • "Show me measurable impact" — Not watch time or quiz averages
  • "We need something proven" — Not another self-issued certificate
  • "Hands-on experience, no fluff" — Not completion badges

Every one of these demands requires data we do not currently produce. The gap isn't in presentation — it's in what we're measuring.

The easiest path and the right path diverge here. Shipping existing data in a new wrapper is faster. But it produces exactly what employers told us they don't trust: another credential that signals effort without proving capability. The credibility gap isn't a packaging problem — it's a measurement problem. Solving it requires investing in what we measure, not how we display it.

5. Where SPCE Fits in the Engineering Credentialing Landscape

The SolidProfessor Certified Engineer (SPCE) is the credential we're evaluating — a vendor-neutral, tiered certification that proves an engineer can execute in modern CAD/CAM software. Existing certifications (CSWA/CSWP) verify that a candidate can arrive at correct geometry — but not how they built it (see Appendix G for a detailed grading comparison). Here's where SPCE fits relative to the credentials that already exist.

SPCE does not compete with the PE license. It complements it. The PE proves an engineer understands the physics. SPCE proves they can execute in the software. Together, they cover the full engineering lifecycle.
PE LICENSE: Legal & Theoretical Standard

The Authority to Sign

A state-issued, legally binding credential that grants an engineer authority to prepare, sign, seal, and submit engineering plans. The PE shoulders ultimate legal responsibility for safety, structural integrity, and public welfare.

Requirements:
4-year ABET degree · FE exam · PE exam · Supervised experience
Validates: Physics, math, ethics, legal accountability
Does NOT validate: CAD/CAM proficiency, digital twin workflows, software execution
SPCE: Applied Execution Standard

The Ability to Build

A vendor-neutral credential that proves an engineer can build complex 3D parametric models, run finite element analysis (FEA), generate manufacturing toolpaths, and execute production-ready digital workflows in modern CAD/CAM software.

Requirements:
SP training · Proctored exam · CAD model grading · Hands-on labs
Validates: Software mastery, parametric modeling, simulation, DFM, production-readiness
Does NOT validate: Legal authority, theoretical fundamentals, liability

The Vocational / CTE Pipeline

In the vocational and Career and Technical Education (CTE) market, the same complementary positioning applies — but with ASME and NIMS instead of the PE:

ASME Y14.5
The Design Language

The universal mathematical language for engineering drawings — GD&T, tolerancing, design intent communication. Dictates how a part must be specified.

Validates: Can you communicate design intent correctly?
SPCE
The Digital Bridge

Validates that the student has the digital fluency to operate CAD/CAM software — the tool that bridges ASME design intent to NIMS physical execution.

Validates: Can you execute in the software?
NIMS
The Execution Standard

Industry-recognized competency standards for CNC operation, machining, and CAM programming. Defines what a skilled manufacturer must be able to do on the shop floor.

Validates: Can you physically manufacture the part?
SPCE does not compete with NIMS or ASME — it's the missing middle. A student can pass NIMS machining on a manual lathe and still not know how to generate a CNC toolpath in Fusion 360. Neither ASME nor NIMS validates software proficiency. SPCE fills that gap — and by mapping exam domains to ASME Y14 clauses and NIMS duty areas, it becomes the digital prerequisite that schools can justify alongside the standards they already teach.

How SP Serves Both Audiences

A manufacturing firm needs both. Here's how SP integrates into the entire engineering department:

PE-Licensed Engineers
  • Assume legal liability
  • Sign off on final designs
  • Need PDH credits to maintain license (commonly ~15/year; varies by state)
  • SP provides approved CE credits
SP serves both
SPCE-Certified Engineers
  • Execute digital prototyping
  • Reduce software-related scrap
  • Accelerate time-to-market
  • SP provides training + certification

6. Technical Feasibility

SPCE is technically feasible. The investigation confirms we can build it, with known risks. The Foundry conducted a detailed technical feasibility assessment covering existing infrastructure, proctoring models, automated CAD grading, and human grading alternatives. Full analysis is available in the appendix for the pod team.

What Already Exists

We are not starting from scratch. The platform has production-ready assessment delivery (Skills Analyzer), event architecture, certification tracking, RBAC, LTI 1.3 integration, and a Vue 3 assessment UI. The engineering foundation is in place — what's missing is the content layer and exam controls.

See Appendix A for the full infrastructure inventory.

Proctored Exam Delivery

A hybrid proctoring model covers all tiers: third-party AI-recorded for high-volume Associate exams, Cloud VDI (Azure Virtual Desktop) for Expert/Master tiers where candidates interact with desktop CAD software. Standard lockdown browsers cannot support CAD applications — Cloud VDI solves this by streaming a locked-down VM to the candidate's browser.

See Appendix B for vendor comparison and architecture details.

Automated CAD Analysis Engine

Automated CAD model grading is technically feasible using the SOLIDWORKS API, with an existing commercial product (Graderworks) and published research (ASEE 2024) proving viability. A 4-layer scoring system (robustness, feature efficiency, constraint quality, design history) can objectively grade production-readiness — something no other certification does. CSWA/CSWP checks mass properties and dimensions; the CAD analysis engine checks feature tree quality, constraint health, and rebuild robustness (see Appendix G for the full comparison). Known risks include COM API instability and cold start latency, with identified mitigations. Estimated infrastructure cost: $18-37K/year.

See Appendix C for the scoring system, API methods, pipeline design, and cost estimates.

SME Grading as a Bridge

Human SME grading can replace the CAD engine at pilot scale, eliminating the riskiest technical dependency from the first bet. This is the insight that makes Bet 1 possible without solving the hardest technical problem first.

With SME grading, SPCE Associate could include a performance-based modeling task from day one — without the automated CAD analysis engine being ready. That immediately differentiates from CSWA/CSWP, which are multiple-choice only. The performance task becomes the credibility signal employers are asking for, graded by the kind of engineers who would be hiring the candidate.

"Graded by practicing engineers" is arguably more credible than "graded by algorithm" when the brand is unproven — this is how Cisco's CCIE lab exams work. SME scoring patterns become training data for the automated rubric in Phase 2, creating a smooth transition path: human grading validates the approach, then automation scales it.

See Appendix D for the full bridge strategy, phase diagram, and cost model.

7. What It Takes to Test This

If the problem is real -- and the interviews say it is -- what would a first bet look like? SPCE Associate: a single-tier, proctored assessment that answers "Can this person model?" Here's what's involved.

Bet 1
Proctored Assessment
Bloom's-balanced question bank (not our existing L1-heavy quizzes). Third-party proctoring integration. Credential verification portal. This is the minimum to test whether employers and candidates adopt it.
Bet 2
CAD Analysis Engine
The differentiator. Automated 4-layer CAD quality grading via SOLIDWORKS API. This is what makes SPCE more than another multiple-choice cert. Technically feasible (see Appendix C) but requires .NET/SolidWorks expertise.
If validated
Expand
Professional tier (design challenges). Expert/Master tiers (Cloud VDI proctoring, full-day labs). Each contingent on the previous bet proving demand.

What SPCE Associate Requires

Five things must exist before the first candidate sits for the exam:

1. Job Task Analysis (JTA)

Conducted with the advisory board and 200+ practitioners. Defines what the exam tests — without it, there is no defensible blueprint. Requires a JTA facilitator (external hire or consultant).

2. Bloom's-Balanced Question Bank

~300-400 candidate items mapped to JTA domains, distilled to a validated production bank through review and piloting (see Appendix E). AI-assisted authoring compresses SME work from months to weeks, but psychometric validation — including formal cut-score setting via the Angoff Method — requires a psychometrician (external hire).

Longest pole — content and certification science, not engineering.
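For reference, the Angoff computation itself is simple once the judgments are collected: each panelist estimates, per item, the probability that a minimally competent candidate answers it correctly; a panelist's recommended raw cut score is the sum of their estimates, and the panel cut score is the mean across panelists. A toy sketch with made-up numbers (the real process adds rater training, discussion rounds, and impact data):

```csharp
using System;
using System.Linq;

static class Angoff
{
    // ratings[rater][item] = estimated probability (0..1) that a minimally
    // competent candidate answers the item correctly.
    public static double CutScore(double[][] ratings) =>
        ratings.Select(rater => rater.Sum()).Average();
}

class Demo
{
    static void Main()
    {
        // 3 raters x 5 items (illustrative numbers only).
        var ratings = new[]
        {
            new[] { 0.9, 0.7, 0.6, 0.8, 0.5 },  // rater A: sums to 3.5
            new[] { 0.8, 0.6, 0.5, 0.9, 0.6 },  // rater B: 3.4
            new[] { 0.9, 0.8, 0.4, 0.7, 0.5 },  // rater C: 3.3
        };
        // Panel cut score = mean of rater sums = 3.4 of 5 items (68%).
        Console.WriteLine($"Cut score: {Angoff.CutScore(ratings):F1} / 5");
    }
}
```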

3. Third-Party Proctoring

Identity verification and exam security. Remote proctoring is acceptable under ISO 17024 as long as controls are documented and demonstrably equivalent to in-person testing.

4. Exam Delivery Upgrades

Timed sessions, question randomization, cooldown periods, and audit trails — built on the existing Skills Analyzer or Skills Assessment infrastructure.

5. Credential Issuance & Verification Portal

Employers validate certifications with one click. Open Badges 3.0 integration for LinkedIn, ATS systems, and digital portfolios.

The optional CAD analysis engine (Bet 2) could ship as a fast-follow rather than a day-one requirement. Alternatively, SME grading can deliver performance-based testing at pilot scale without the engine — eliminating the riskiest technical dependency from the first bet.

See Appendix E for the detailed implementation breakdown.

8. Credibility Roadmap

A certification is only as valuable as the trust behind it. SPCE must earn institutional credibility to move from "SolidProfessor's certification" to "the industry's certification." Five pillars make that happen.

1 ISO/IEC 17024 & ANSI Accreditation

Earn formal accreditation under ISO/IEC 17024 (the international standard for personnel certification bodies) and ANAB national accreditation. This proves SPCE is impartial, psychometrically valid, and governed by documented processes -- not just a marketing tool. Accreditation makes SPCE credible to employers, government workforce programs, and regulators worldwide.

Critical prerequisite: Because SP provides both training and certification, ISO 17024 requires a documented firewall policy — formal separation between training staff and certification/exam staff. The people who build SP courses cannot write exam questions or make certification decisions. ANAB also requires 6-12 months of operating data (real candidates, real exam results) before they will conduct an onsite assessment. This means Year 1 is build + launch + collect data; accreditation application happens in Year 2.

Timeline: Year 1-2 (launch Year 1, ANAB application Year 2)

2 Independent Standards Body

Assemble an advisory board of industry experts, academics, and professional associations (target: ASME, SME) to govern exam content. This board's first task: conduct a formal Job Task Analysis (JTA) — a structured study with 200+ practitioners that defines the actual tasks, knowledge, and skills required for the role. The JTA becomes the exam blueprint, with domains mapped to established competency frameworks: ASME Y14.5 (GD&T, tolerancing, design intent communication) and NIMS duty areas (CNC programming, machining execution, CAM proficiency). Starting from recognized standards rather than SP curriculum gives the advisory board a defensible foundation — and gives ISO 17024 auditors exactly what they want to see.

Timeline: Year 1 (form board + complete JTA during Phase 1)

3 Secure, Performance-Based Testing

Partner with enterprise proctoring infrastructure like Pearson VUE for global test center access alongside our online proctoring. Include hands-on lab challenges -- not just multiple choice -- at every tier. The automated CAD quality analysis engine for Expert/Master gives SPCE genuine weight with hiring managers that no other certification can match.

Timeline: Year 1-2 (Pearson VUE partnership)

4 Continuing Education Provider

Get approved as a PDH/CPC continuing education provider so licensed Professional Engineers (PEs) can count SolidProfessor courses toward their mandatory renewal hours. This instantly embeds SP into the professional engineering ecosystem -- engineers must take CE courses anyway, and SP becomes an approved source. Creates a recurring touchpoint with every licensed engineer.

Timeline: Year 1-2 (state-by-state approval)

5 Verifiable Digital Badges (Open Badges Standard)

Issue credentials using the Open Badges 3.0 standard -- the W3C-backed specification for verifiable digital credentials. Each SPCE badge is cryptographically signed, machine-readable, and one-click verifiable on LinkedIn, resumes, and employer ATS systems. Unlike a PDF certificate, an Open Badge can be independently verified by anyone without contacting SolidProfessor, creating a network effect: every badge shared is a trust signal that drives industry-wide recognition.

Shareable
LinkedIn, email signatures, portfolios
Verifiable
One-click validation, no phone calls
Machine-Readable
ATS/HRIS systems parse automatically
Network Effect
Every share increases industry visibility
Timeline: Phase 1 (ships with Associate launch)
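For concreteness, here is roughly what a verifiable SPCE badge payload could look like. Open Badges 3.0 credentials are JSON-LD Verifiable Credentials; the sketch below is abridged from public 1EdTech OB 3.0 examples, uses a hypothetical issuer URL and values, and omits the cryptographic proof block a production badge carries. Field names should be verified against the current spec.

```csharp
// Abridged Open Badges 3.0 payload (illustrative only; verify field names
// against the 1EdTech OB 3.0 spec). The `proof` signature block is omitted,
// and the issuer URL is hypothetical.
static class SampleBadges
{
    public const string SpceAssociate = """
    {
      "@context": [
        "https://www.w3.org/ns/credentials/v2",
        "https://purl.imsglobal.org/spec/ob/v3p0/context-3.0.3.json"
      ],
      "type": ["VerifiableCredential", "OpenBadgeCredential"],
      "issuer": {
        "id": "https://certs.solidprofessor.example/issuers/1",
        "type": ["Profile"],
        "name": "SolidProfessor Certification Body"
      },
      "validFrom": "2026-06-01T00:00:00Z",
      "credentialSubject": {
        "type": ["AchievementSubject"],
        "achievement": {
          "type": ["Achievement"],
          "name": "SPCE Associate",
          "description": "Proctored, performance-based CAD competency certification."
        }
      }
    }
    """;
}
```

Because the payload is machine-readable and signed, any verifier (an ATS, a recruiter's browser plugin) can validate it without ever calling SolidProfessor, which is where the network effect described above comes from.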
Even if accreditation fails, the process wins. The work required to pursue ISO 17024 — Job Task Analysis, psychometric validation, Bloom's-balanced question banks, formal cut-score setting, firewall policies — produces assessment infrastructure that transforms every SP product regardless of the accreditation outcome. SP Develop gets real competency data instead of course completions. The Intelligence Engine gets signal worth selling. Employers get structured results they can use in hiring. Today, 81-89% of our questions test recall. Getting to certification-grade measurement is a massive upgrade from where we are — and that upgrade doesn't require ANAB's stamp to deliver value. Accreditation is the credibility multiplier on top. The foundation underneath it is worth building either way.

9. Problems We Must Solve

Five problems stand between the research findings and a credible certification. None are insurmountable, but all must be solved — and they are sequenced, not parallel.

1 Content, not engineering

No amount of infrastructure fixes a test where 81% of questions are recall. Building a Bloom's-balanced item bank requires SME authoring, psychometric validation, and pilot testing — content work that engineering can't shortcut.

2 No psychometric expertise in-house

Cut-score setting (Angoff Method), item analysis, and Bloom's blueprinting require a qualified psychometrician. Niche external hire, 3-6 month sourcing timeline.

3 ISO 17024 requires proof before accreditation

ANAB needs 6-12 months of operating data — real candidates, real results — before an onsite assessment. We can't apply until we've launched.

4 CAD engine requires niche talent

.NET/SolidWorks COM API expertise is rare. SME grading bridges this for Bet 1, but the automated engine is the long-term differentiator.

5 No proof of demand until the first cohort completes

Employer interviews say the need exists, but adoption is unproven until candidates sit for the exam and employers act on the results.

How They're Sequenced

Year 1
2026
Prove competency assessment works
JTA with 200+ practitioners · Bloom's-balanced question bank · SME-graded modeling tasks · Third-party proctoring · Advisory board + firewall policy · ISO 17024 process begins
Year 2
2027
Scale trust beyond manual grading
CAD engine calibrated on SME data · Automated scoring replaces human bottleneck · Employer verification API · ISO 17024 accreditation achieved
Year 3
2028
Assess expertise, not just competency
Expert + Master tiers · Cloud VDI proctoring · Multi-CAD coverage · Employer analytics
Year 4-5
2029-30
Become the hiring standard
ATS/HRIS integration · Curriculum crosswalk (ASME Y14.5 + NIMS) · CTE/vocational partnerships · "SPCE required" on job postings

10. Risks & Mitigations

Risk Severity Mitigation
SolidWorks COM crashes / memory leaks HIGH Process isolation per analysis, auto-restart, warm VM pool
Performance-based exam security CRITICAL Custom Electron lockdown is fundamentally bypassable. Pivot to Cloud VDI (Azure Virtual Desktop) — candidates receive only a video stream, zero local code execution. Lockdown enforced at cloud network level, not endpoint
Third-party vendor lock-in MEDIUM Strategy pattern for technical portability. But data/contractual lock-in is the bigger risk — MSA must require: non-exclusive data licensing, mandatory data portability (video/biometric export at no cost), no minimum volume commitments
SPCE credibility / adoption speed HIGH Risk-free pilots through VARs + enterprise customers are necessary but passive. Must also pursue: academic pipeline integration (subsidized university partnerships a la CompTIA), aggressive differentiation from CSWP ("production-ready modeling across platforms, not single-vendor output verification"), employer advisory board endorsements
AI proctoring false positives HIGH Post-exam human review is mandatory (industry best practice). But if AI flags 60% of exams, QA team is overwhelmed. Require demographic audits of vendor AI models before MSA signing. Build frictionless accommodation workflow for neurodivergent/disabled candidates. Implement transparent appeals process with guaranteed secondary human review
Vendor-neutral scope too broad MEDIUM Mechanical engineering is fragmented (mold design vs. structural vs. aerospace) unlike networking (standardized protocols). SPCE Associate starts narrow: SOLIDWORKS as the initial delivery platform (85% of SP users), parametric solid modeling for general mechanical design, industry-agnostic. Expansion to other CAD software (Fusion 360, Creo, NX) happens only as tiers mature and market feedback validates demand
Psychometrician & JTA facilitator sourcing HIGH Niche expertise with 3-6 month sourcing timeline. These roles are the longest pole — without them, there is no defensible exam blueprint or cut-score. Mitigation: begin sourcing in parallel with advisory board formation. Consider contract engagement for Phase 1

11. Conclusion

The Research Points in One Direction

Across quantitative analysis and qualitative interviews, every data point converges on the same conclusion: there is real, unmet demand for a credible way to prove mechanical engineering competency — and what we have today does not satisfy it.

The problem exists.

Six employers confirmed that existing credentials — including ours — don't influence hiring decisions. Engineering managers evaluate candidates on capability, not certificates. Experienced engineers can't demonstrate their skills because their best work is locked behind NDAs. The market has no trusted, standardized way to prove "this person can model."

Our current data doesn't solve it.

81-89% of our questions test recall-level knowledge (Bloom's L1-L2). Zero questions ask anyone to create or design anything, and fewer than 3% ask them to evaluate. No proctoring. No performance tasks. No external validation. This content was built for learning reinforcement — and it does that job well. But repackaging it as competency proof will not change what it measures.

We can solve it.

The infrastructure exists. The assessment engine, event architecture, credential system, and Vue 3 UI are production-ready. What's missing is the content layer: Bloom's-balanced questions, proctored delivery, and eventually a CAD analysis engine that no other certification provider has. The technical feasibility is there — this is an investment decision, not a capability question.

The bet is whether it's worth the investment.

That's what The Foundry is here to answer. The research says the demand is real and the existing market (CSWA/CSWP) leaves a gap. The question isn't whether the problem exists — it's whether SolidProfessor is willing to build the thing that actually solves it.

Appendix: Technical Feasibility Analysis

Reference Material for Pod

The following sections document the Foundry's technical feasibility assessment. They are reference material for the pod, not architectural decisions. The product trio owns solution design. This analysis confirms that SPCE is technically feasible and identifies known risks — it does not prescribe how the team should build it.

Appendix A: What Already Exists

We are not starting from scratch. The platform backend has significant assessment, certification, and skills infrastructure already built and in production.
Built
Skills Analyzer
Test delivery, attempts, scoring, timing, events
Built
Assessments
Question bank, types, answers, categories
Built
Certifications
Certificate model, statuses, UserAchievement tracking
Built
Event Architecture
Event-driven with listeners, queued jobs
Built
RBAC
Permission-first authorization
Built
LTI 1.3
LMS integration for institutions
Built
Assessment UI
Complete Vue 3 assessment UX (Voltron)
Built
Auth/Authz Packages
@solidprofessorhub/auth and authz
Designed
CAD Analysis Schema
DB schema, job design, service interface

Appendix B: Proctored Exam Delivery

Candidates do take exams on webcam; the model varies by tier: AI-recorded proctoring for high-volume Associate exams, live human proctoring for high-stakes Expert and Master exams where candidates interact with desktop CAD software.

Recommended: Hybrid Approach

Third-party proctoring for Associate/Professional (browser-based MCQ). Cloud VDI with live proctoring for Expert/Master (desktop CAD software, streamed from a locked-down cloud VM).

SPCE Tier What It Proves Proctoring Why This Model
Associate Can this person model? Core knowledge + fundamentals Third-party (AI-recorded) High-volume, browser-based MCQ + short tasks
Professional Can they solve design problems? Applied engineering judgment Third-party (hybrid AI + human) Design challenges need real-time monitoring
Expert Can they own a product? Complex assemblies, simulation, PDM Cloud VDI (Azure Virtual Desktop) CAD runs in locked-down cloud VM, streamed to candidate
Master Can they lead engineering? Full-day lab, production-grade output Cloud VDI + live proctor (8 hrs) CCIE-equivalent prestige. Proves senior capability
The CAD Problem Standard proctoring vendors use lockdown browsers that block all desktop apps. SPCE Expert/Master exams require running SolidWorks/Inventor/NX. Custom Electron lockdown apps are fundamentally bypassable (VM obfuscation, DLL injection, HDMI splitters) and building endpoint security is outside our core competency. The proven approach: Cloud VDI (Azure Virtual Desktop) — candidates log in via browser, SolidWorks runs in a locked-down cloud VM with zero outbound internet, no clipboard sync, and no local storage access. The candidate receives only a video stream. This also eliminates the need for candidates to own a SolidWorks license.

Options Compared

Dimension A: Third-Party Only B: Build In-House C: Hybrid (Recommended)
Relative speed Fastest (integration only) Slowest (full build) Fast start (third-party first, build later)
CAD software support Poor Excellent Excellent (Expert/Master)
8-hour Master exam Uncertain Full support Full support

Appendix C: Automated CAD Quality Analysis Engine

No other certification program in the world can objectively score CAD model production-readiness. This is the technological moat that makes SPCE unassailable.

4-Layer Weighted Scoring System

Layer Weight What It Measures
Robustness 30% Does the model survive dimension changes without breaking?
Feature Efficiency 25% Feature count vs benchmark, pattern usage, reference geometry
Constraint Quality 25% Fully-defined sketches, under/over-constraint detection
Design History Clarity 20% Feature naming, folder organization, documentation
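Mechanically, the composite grade is just a weighted sum of the four layer scores. A minimal sketch, assuming each layer is normalized to a 0-100 score upstream (types and scale are illustrative, not a committed design):

```csharp
using System;

// Per-layer results, each normalized to 0-100 by its own rubric upstream.
record CadScores(double Robustness, double FeatureEfficiency,
                 double ConstraintQuality, double DesignHistory)
{
    // Weights from the 4-layer scoring system above (30/25/25/20).
    public double Composite =>
        0.30 * Robustness +
        0.25 * FeatureEfficiency +
        0.25 * ConstraintQuality +
        0.20 * DesignHistory;
}

class Demo
{
    static void Main()
    {
        var scores = new CadScores(82, 90, 75, 60);
        // 0.30*82 + 0.25*90 + 0.25*75 + 0.20*60 = 24.6 + 22.5 + 18.75 + 12 = 77.85
        Console.WriteLine($"Composite: {scores.Composite:F1}");
    }
}
```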

PLM/CAD Connector Options Compared

Recommended: SOLIDWORKS API first -- the only option with 100% coverage of all 4 grading layers.

Approach 4-Layer Coverage Timeline Annual Infra Cost Verdict
SOLIDWORKS API 100% 10-15 weeks $18-37K [1] Recommended
Multi-CAD (SW+Autodesk+NX) 100% SW, 40-60% others 30-42 weeks $29-73K [2] Year 2+
STEP/Neutral format ~5% 8-12 weeks $1-4K [3] Supplement only
Cloud/Onshape ~65-75% 15-21 weeks $7-21K [4] Investigate Y2
Sources:
[1] 2-3 SW Pro licenses × $4,150/yr + 2-3 Azure NV6ads A10 v5 VMs @ $670/mo on-demand ($400/mo reserved)
[2] SW licenses above + 1-2 Siemens NX seats @ $9-11.6K/yr + Inventor @ ~$2.4K/yr + additional VMs
[3] Open-source parsers (STEPcode, BSD license) + Linux Docker containers
[4] 2-5 Onshape Pro seats @ $2,100/yr + lighter compute (cloud-native, no Windows VMs)
Why NOT STEP/neutral format? STEP files are "dead" geometry. No feature tree, no sketch constraints, no parametric rebuild. Cannot grade 3 of 4 layers. Would make SPCE no better than CSWP — which only validates output correctness, not modeling quality.
Why NOT Onshape/cloud? Translation fidelity risk. You'd grade a translated model, not the original. If a SolidWorks extrude becomes different operations in Onshape, the grading scores become unreliable. Investigate in Year 2.

Technical Feasibility: Layer-by-Layer

Each grading layer maps to specific SolidWorks API methods. An existing commercial product (Graderworks, official SolidWorks Solution Partner) and published research (ASEE 2024) prove automated SolidWorks grading is viable.

Layer Weight API Methods Difficulty Status
Feature Efficiency 25% GetFeatureTreeRootItem2, ITreeControlItem for tree traversal. Feature count, pattern detection, reference geometry all accessible. MEDIUM Proven
Constraint Quality 25% ISketch.GetConstrainedStatus returns fully-defined, under-defined, or over-defined per sketch. Direct API support. EASY Proven
Robustness 30% ForceRebuild3 / EditRebuild3 exist but have a known bug: they return true even when rebuild errors occur. Workaround: compare geometry before/after dimension changes, check for suppressed/failed features. HARD Needs workaround
Design History 20% Feature names, folder structure, comments all accessible via standard tree traversal APIs. EASY Proven

MVP approach: Ship with Layers 1, 2, and 4 (70% of weighted score) first. Add robustness testing as an iteration once the rebuild detection workaround is validated. Even without Layer 3, this scores more dimensions than CSWP.
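To make the "EASY / Proven" layer concrete, here is a sketch of the Layer 2 constraint audit using the interop calls named in the table. It is illustrative only: the status-code mapping is an assumption to verify against the swConstrainedStatus_e documentation, and real code would run inside the isolated worker process described under Known Technical Risks below.

```csharp
using SolidWorks.Interop.sldworks;
using SolidWorks.Interop.swconst;

static class ConstraintAudit
{
    // Counts fully-defined vs. total sketches in a part.
    // Assumes `app` is an already-connected ISldWorks instance.
    public static (int fullyDefined, int total) Run(ISldWorks app, string partPath)
    {
        int errors = 0, warnings = 0;
        ModelDoc2 doc = (ModelDoc2)app.OpenDoc6(partPath,
            (int)swDocumentTypes_e.swDocPART,
            (int)swOpenDocOptions_e.swOpenDocOptions_Silent, "", ref errors, ref warnings);

        int fullyDefined = 0, total = 0;
        Feature feat = (Feature)doc.FirstFeature();
        while (feat != null)
        {
            // Sketch features expose ISketch via GetSpecificFeature2.
            if (feat.GetSpecificFeature2() is Sketch sketch)
            {
                total++;
                // GetConstrainedStatus: fully / under / over-defined per sketch.
                // Assumed mapping: 3 = fully defined. Verify against the interop docs.
                if (sketch.GetConstrainedStatus() == 3) fullyDefined++;
            }
            feat = (Feature)feat.GetNextFeature();
        }
        return (fullyDefined, total);
    }
}
```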

Two-Tier Processing Pipeline

The full COM API is unstable at scale. A bifurcated architecture balances speed with depth — most files can be triaged without ever launching SolidWorks.

Tier Technology Function Speed Stability
Tier 1: Triage Document Manager API File integrity, mass properties, metadata, file references Milliseconds Very High (no GUI)
Tier 2: Deep Analysis Out-of-Process COM API Feature tree traversal, sketch constraints, model rebuilds Seconds-Minutes Low (forced termination per file)

The DM API reads file data without launching SolidWorks or its GUI. It cannot rebuild models or check constraint status, but it handles metadata extraction and file validation with extreme reliability. Files that fail Tier 1 checks never reach the expensive Tier 2 pipeline.

Infrastructure Cost Estimate

Component Annual Cost Notes
SolidWorks Professional licenses $8K-12.5K 2-3 seats × $4,150/yr subscription
Azure NV6ads A10 v5 VMs $10K-24K 2-3 VMs × $670/mo on-demand ($400/mo with 1-yr reserved)
Total infrastructure $18K-37K Scales with exam volume

MVP volumes (Associate only) likely land at the low end ($18K with 2 reserved VMs). Scale trigger: >50 concurrent analyses requires additional VM capacity.

Known Technical Risks

COM API Instability: The full SolidWorks COM API leaks memory at the unmanaged C++ level (OpenDoc6, Save methods). Process isolation must mean spawning a fresh sldworks.exe per file and forcefully terminating the entire process tree after each analysis. AppDomain isolation alone is insufficient. Mitigation: two-tier pipeline — use the stable Document Manager API for fast triage (metadata, mass properties), reserve the full COM API only for deep analysis (feature tree traversal, constraint checks, rebuilds).
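The isolation pattern itself is plain .NET. A sketch, where CadAnalysisWorker.exe is a hypothetical console app that hosts one COM session, analyzes one file, and exits:

```csharp
using System;
using System.Diagnostics;

static class IsolatedAnalysis
{
    // Runs one analysis in a disposable child process so COM leaks die with it.
    // "CadAnalysisWorker.exe" is a hypothetical worker that owns the sldworks.exe session.
    public static bool RunOne(string partPath, TimeSpan timeout)
    {
        using var worker = Process.Start(new ProcessStartInfo
        {
            FileName = "CadAnalysisWorker.exe",
            Arguments = $"\"{partPath}\"",
            UseShellExecute = false,
        })!;

        if (!worker.WaitForExit((int)timeout.TotalMilliseconds))
        {
            // Hung rebuild or leaked COM session: kill the whole process tree
            // (.NET Core 3.0+), including any SolidWorks instance the worker spawned.
            worker.Kill(entireProcessTree: true);
            return false;
        }
        return worker.ExitCode == 0;
    }
}
```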
Cold Start Latency: SolidWorks takes 30-60 seconds to launch per instance. One-file-per-process isolation amplifies this cost. Mitigation: warm VM pool with pre-launched instances, queue-based job distribution. Azure NV-series VMs with NVIDIA GRID drivers required — standard compute VMs will hang on certain API calls that invoke the OpenGL pipeline.
Hiring Risk: A .NET developer with SolidWorks API experience is a niche skill set. Estimated 4-8 weeks of senior developer time to build the wrapper. Finding that person is the bottleneck.

Appendix D: SME Grading as a Bridge to Automation

What if we don't need the CAD engine on day one? The automated CAD analysis engine is the riskiest, most expensive component of SPCE. But the same grading can be done by human subject matter experts — at least at pilot scale. This creates a path to performance-based testing without solving the COM API stability problem first.

Why SME Grading Works for the Pilot

  • Eliminates the COM API risk entirely. No Azure NV-series VMs, no SolidWorks licenses for grading infrastructure, no niche .NET hire.
  • SMEs evaluate what the API cannot. Design intent, manufacturing awareness, "would I trust this person on my team" — judgment calls that matter to employers.
  • Faster to market. The bottleneck becomes hiring/contracting graders, not building software.
  • Arguably more credible early on. "Graded by practicing engineers" carries more weight with employers than "graded by algorithm" when the brand is unproven.
  • This is how CCIE lab exams work. Cisco's most prestigious certification uses human graders, not automated scoring.

Where SME Grading Hits a Ceiling

  • Doesn't scale. 30K+ Associate exams/year would require an army of graders. Sustainable for hundreds, not thousands.
  • Introduces subjectivity. Two graders may score the same model differently without rigorous calibration and rubric enforcement.
  • Slower turnaround. Days instead of minutes. Candidates waiting for results is a UX problem.
  • Higher per-exam cost. Each grading event requires human time, which directly eats into margins as volume grows.

The Bridge Strategy

SME grading and automated analysis are not mutually exclusive. The smartest path uses one to build the other.

Phase 1: Pilot
SME-Graded
First 500-1,000 exams
Phase 2: Calibrate
SME + Engine
Validate algorithm vs. expert judgment
Phase 3: Scale
Automated
Engine handles volume, SMEs handle appeals

In Phase 1, SME scoring patterns become the training data for the automated rubric. You learn what matters before you encode it in software. In Phase 2, you run both in parallel — every automated score gets a human check — until the engine matches expert judgment at an acceptable threshold. In Phase 3, the engine handles volume while SMEs shift to edge cases, appeals, and rubric evolution.
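The Phase 2 exit gate can be made concrete with an agreement metric over the parallel-run data. A minimal sketch; the 5-point threshold is an assumed placeholder, and a real calibration would also examine correlation, per-layer deltas, and inter-rater reliability among the SMEs themselves:

```csharp
using System;
using System.Linq;

static class Calibration
{
    // Mean absolute difference between engine and SME composite scores (0-100 scale).
    public static double MeanAbsError(double[] engine, double[] sme) =>
        engine.Zip(sme, (e, s) => Math.Abs(e - s)).Average();

    // Phase 2 -> Phase 3 gate: automate only once the engine tracks expert
    // judgment within a tolerance the advisory board signs off on.
    public static bool ReadyToAutomate(double[] engine, double[] sme,
                                       double maxMeanAbsError = 5.0)  // assumed threshold
        => MeanAbsError(engine, sme) <= maxMeanAbsError;
}
```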

This also answers the Associate "build component" question. With SME grading, SPCE Associate could include a small modeling task from day one — without the CAD analysis engine being ready. That immediately differentiates from CSWA/CSWP, which are multiple-choice only. The performance task becomes the credibility signal employers are asking for, graded by the kind of engineers who would be hiring the candidate.

Appendix E: SPCE Associate — Detailed Implementation Breakdown

What Must Exist Before the First Candidate Sits for SPCE Associate

A concrete breakdown of every component required.

1. New Question Bank

No candidate gets the same test. Each exam pulls a randomized subset from a large item bank, constrained by a fixed blueprint (domain distribution + Bloom's distribution). Industry standard calls for a bank 3-5x the exam length — for a 60-question Associate exam, a validated bank of roughly 200-300 items, built down from 300-400 candidate items, each tagged by domain and Bloom's level.

AI-assisted authoring pipeline: AI generates 400-500 draft items from existing SP content → SME panel reviews and edits (~60% survive) → pilot with 200+ test-takers → psychometric item analysis cuts another ~20% → production bank of 200-250 validated items. This compresses SME authoring from 3-4 months to 3-4 weeks of review. Still the longest pole — content work, not engineering.
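A sketch of what blueprint-constrained selection looks like (types are hypothetical; real delivery would add difficulty balancing, item-exposure control, and enemy-item rules):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

enum Bloom { Remember = 1, Understand, Apply, Analyze, Evaluate }

record Item(string Id, string Domain, Bloom Level);

static class ExamAssembler
{
    // blueprint[(domain, bloom)] = number of items the exam must draw from that cell.
    public static List<Item> Assemble(IEnumerable<Item> bank,
        IReadOnlyDictionary<(string Domain, Bloom Level), int> blueprint, Random rng)
    {
        var exam = new List<Item>();
        foreach (var ((domain, level), count) in blueprint)
        {
            var cell = bank.Where(i => i.Domain == domain && i.Level == level)
                           .OrderBy(_ => rng.Next())   // shuffle the cell
                           .Take(count).ToList();
            if (cell.Count < count)
                throw new InvalidOperationException(
                    $"Bank too thin: {domain}/{level} has {cell.Count} of {count} items.");
            exam.AddRange(cell);
        }
        return exam;  // every candidate gets a different draw, same blueprint shape
    }
}
```

This is exactly the control Appendix F flags as missing: the Skills Analyzer ships a fixed set, and the Skills Assessment randomizes by category only, with no Bloom's dimension.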

2. Proctoring Integration

Third-party vendor (ProctorU/Examity). Laravel module: schedule exam → launch proctored session → receive completion webhook → record results. Identity verification flow. Candidate UX.

3. Exam Delivery

Existing Skills Analyzer infrastructure needs: timed sessions, question randomization, no retake without cooldown, score thresholds per Bloom's level, proctor session tied to exam attempt, audit trail.

4. Credential Issuance & Verification

Certificate generation on pass. Public verification portal (employer enters cert ID → sees candidate, tier, date, score summary). Badge standard integration (Open Badges / Credly or self-hosted).
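The verification surface itself is small. A minimal ASP.NET Core sketch with a hypothetical route and an in-memory stand-in for the credential store; a real portal adds rate limiting, logging, and badge-status checks:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.Create(args);

// Hypothetical in-memory stand-in for the credential store.
var certs = new Dictionary<string, object>
{
    ["SPCE-A-000123"] = new { Candidate = "J. Doe", Tier = "Associate",
                              Issued = "2026-06-01", Status = "Valid" },
};

// Employer enters a cert ID -> candidate, tier, date, status. No login required.
app.MapGet("/verify/{certId}", (string certId) =>
    certs.TryGetValue(certId, out var cert) ? Results.Ok(cert) : Results.NotFound());

app.Run();
```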

5. CAD Analysis Engine (if included)

Two-tier pipeline: Document Manager API for fast triage (metadata, mass properties) + .NET 8 wrapper for full COM API deep analysis (feature tree, constraints, rebuilds). Azure NV-series Windows VM with SolidWorks license + NVIDIA GRID drivers. File upload → queue → Tier 1 triage → Tier 2 analysis → score. Requires niche .NET/SolidWorks hire.

6. Admin & Operations

Admin dashboard: view exam results, flag reviews, manage question bank. Reporting: pass rates, score distributions, flagged proctoring incidents.

The bottleneck is the question bank, not the engineering. AI-assisted authoring significantly compresses the timeline, but SME review and psychometric validation still take time that engineering can't shortcut. Items 2-4 build on existing platform infrastructure. Item 5 (CAD engine) is the differentiator but could ship as a fast-follow rather than day-one requirement.

Appendix F: Assessment Infrastructure Audit

A code-level review of our two existing assessment systems reveals neither was designed with certification-grade controls.

Aspect Skills Analyzer (Voltron) Skill Assessment (Solid Career) SPCE Requirement
Question selection Fixed set, same for all users Random 30 per attempt Blueprint-driven: domain + Bloom's constraints
Bloom's tagging None None Every item tagged L1-L5
Difficulty control None None Psychometric item difficulty indices
Question distribution Sequential order Category-balanced (content areas only) Balanced by domain AND cognitive level
Exam security Same test every time Different per attempt, but unproctored Proctored, randomized, cooldown-gated
Psychometric tracking None None Item analysis after each pilot
Both systems were built to reinforce learning, not certify competency. This is expected — they serve their purpose well. But repurposing them as credentialing infrastructure without these controls would produce the exact credential employers told us they don't trust.

Appendix G: CSWA/CSWP — What Existing Certifications Actually Grade

A common assumption is that SOLIDWORKS' own certifications (CSWA/CSWP) already test CAD modeling capability. They do — but the grading method reveals a fundamental limitation.

What CSWP Does

Candidates model parts and assemblies in SOLIDWORKS under timed conditions (3 segments, ~200 minutes total). The exam then asks for mass properties, center of gravity coordinates, and dimensions. If the candidate's number matches within ~1% tolerance, the answer is correct.

What this validates: "Can you arrive at the correct geometry?"

What CSWP Cannot Detect

Two engineers can produce identical mass and center of gravity — one with a clean parametric tree that survives design changes, the other with hard-coded dimensions that break on the first modification. CSWP grades both as equally correct.

What this misses: Feature tree quality, parametric robustness, constraint health, design intent, rebuild stability.

Dimension CSWA / CSWP SPCE (Proposed)
What candidates do Model parts and assemblies in SOLIDWORKS Model parts and assemblies (initial platform: SOLIDWORKS)
How answers are graded Mass properties, CoG, dimensions — numeric match within ~1% 4-layer scoring: robustness, feature efficiency, constraint quality, design history
What "correct" means Correct geometry output Correct geometry + production-ready modeling approach
Feature tree analysis None — model internals are not inspected Full inspection via SOLIDWORKS API (feature order, sketch constraints, rebuild behavior)
Rebuild robustness Not tested Automated dimension perturbation — does the model survive design changes?
Proctoring Unproctored online exam Third-party proctored (identity verification, lockdown)
Vendor scope SOLIDWORKS only (Dassault product) Vendor-neutral framework; SOLIDWORKS as initial delivery platform
Psychometric validation None published Bloom's-mapped, JTA-driven blueprint, Angoff cut-scores
Accreditation None (vendor self-issued) ISO/IEC 17024 target
CSWP answers "did you get the right geometry in SOLIDWORKS?" SPCE answers "did you build it in a way that survives the next design change — in any tool?" The difference is the difference between checking that code compiles and actually reviewing it. Both matter — but only one tells you whether the person can maintain and extend what they built.
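For concreteness, the rebuild-robustness probe (the layer CSWP skips entirely) could look like the sketch below. Because ForceRebuild3 reports success even when features fail (see Appendix C), it compares mass before and after a dimension perturbation rather than trusting the return value. The dimension name is hypothetical; a real grader would walk the feature tree to select driving dimensions and would run inside the isolated worker process.

```csharp
using System;
using SolidWorks.Interop.sldworks;

static class RobustnessProbe
{
    // Perturbs one driving dimension, rebuilds, and checks the model responded.
    // `doc` is an open part; "D1@Sketch1" is a hypothetical dimension name.
    public static bool SurvivesChange(ModelDoc2 doc, string dimName = "D1@Sketch1",
                                      double scale = 1.10)
    {
        double before = ((MassProperty)doc.Extension.CreateMassProperty()).Mass;

        Dimension dim = (Dimension)doc.Parameter(dimName);
        double original = dim.SystemValue;          // system units (meters)
        dim.SystemValue = original * scale;         // +10% perturbation

        // ForceRebuild3 returns true even when features fail (known bug), so we
        // compare geometry instead of trusting the return value.
        doc.ForceRebuild3(false);
        double after = ((MassProperty)doc.Extension.CreateMassProperty()).Mass;

        dim.SystemValue = original;                 // restore the model
        doc.ForceRebuild3(false);

        // A robust parametric model changes mass smoothly; a brittle one either
        // fails to regenerate or leaves geometry (and mass) frozen.
        return Math.Abs(after - before) > 1e-9;
    }
}
```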

Sources: SOLIDWORKS CSWP Certification · CSWP Sample Exam (PDF) · Engineering.com Guide · GoEngineer CSWP Prep