This is a Foundry investigation into professional certification for mechanical engineers. It is not a product spec or a roadmap. It answers three questions: whether the opportunity is real, whether we can build it, and whether the bet is worth making.
Everything below is research, analysis, and a first-bet proposal. It is scoped to what The Foundry can validate — not what a product team would ship.
The problem isn't compensation — Ford's CEO can't fill 5,000 mechanic jobs at $120K/year. The problem is verification. Employers can't tell who is actually qualified, and qualified engineers can't prove what they know because their best work is locked behind NDAs. Existing credentials (CSWA, CSWP) test whether candidates arrive at correct geometry in a single vendor's tool — not whether their modeling approach is production-ready or transferable across platforms. The market lacks a credible signal.
Experienced engineers can't show their best work. Everything they've built is proprietary. This creates a credibility vacuum -- the people with the most skill have the least proof.
Certifications signal motivation, not capability. Every employer acknowledged them as "a plus" -- but not one said they trust certifications as proof that someone can do the job.
Organizations with training budgets and compliance requirements need external validation. They won't adopt a certification that exists only within SP's ecosystem -- no matter how good the content is.
The interviews revealed a clear split in how different personas evaluate candidates:
| Persona | What They Need |
|---|---|
| Recruiters: Julie (Micron), Stella (Clorox) | Screening signal -- quick filter for qualified candidates |
| Engineering Managers: Dave (Wagstaff), Gautam (Karman), Brent (Aristocrat) | Capability proof -- can this person actually engineer? |
Julie Snyder explicitly said she would pass structured assessment results to hiring managers for deeper evaluation.
Research observation: These two personas likely require different product surfaces. Recruiters need an ATS-visible, pass/fail screening signal. Engineering managers need granular capability data — domain-level scores, CAD analysis results, performance task breakdowns. A product team should investigate whether a single credential experience serves both, or whether different touchpoints are needed.
Today, SP adoption at a company depends on finding one internal champion willing to advocate up the chain. This is fragile.
If SolidProfessor has independent institutional credibility (ISO accreditation, standards body endorsement, employer advisory board), it doesn't need a champion at every company. The credential speaks for itself -- the way CCNA does for networking or PMP does for project management.
Every question in both banks (the general course-quiz bank and the full solid_career_skill_assessment_test_questions set) was classified using Bloom's Taxonomy. The Skills Assessment bank is marginally better -- but the gap to certification-grade remains massive.
Course quizzes. Designed to verify students watched the video.
Lower-order: 88.7% | Higher-order: 11.3%
Skills Assessment product. More applied -- but still recall-dominated.
Lower-order: 80.7% | Higher-order: 19.3% | +8% improvement
| Bloom's Level | General Bank | Assessment Bank | Delta | What It Means |
|---|---|---|---|---|
| L1 - Remember | 77.5% | 69.4% | -8.0% | Less pure recall, but still dominant |
| L2 - Understand | 11.3% | 11.3% | -- | No change |
| L3 - Apply | 7.5% | 14.3% | +6.8% | More "which tool for this task?" -- the biggest gain |
| L4 - Analyze | 2.9% | 2.4% | -0.5% | No meaningful change |
| L5 - Evaluate | 0.9% | 2.5% | +1.7% | Slightly more "best practice" questions |
| L6 - Create | 0% | 0% | -- | Still zero. Nobody asked to design anything. |
The improvement is real but insufficient. The assessment bank shifted ~8% from recall to application -- mostly "which tool/option would you use for this scenario?" That's better than "press the _____ key" but still tests software operation, not engineering judgment.
The best questions in the assessment bank hint at what SolidProfessor needs more of.
The vast majority still test recall of facts, T/F statements, and UI identification.
| Dimension | General Bank | Assessment Bank | Credible Certs (CCNA, AWS, PMP, CSWP) |
|---|---|---|---|
| Lower-order (L1+L2) | 88.7% | 80.7% | Minority of exam -- baseline screening only |
| Higher-order (L3-L6) | 11.3% | 19.3% | Majority of exam -- scenario-based, applied |
| Evaluate + Create (L5+L6) | 0.9% | 2.5% | Significant portion -- design, troubleshoot, justify |
| Performance-based tasks | None | None | Lab simulations / hands-on scenarios |
| Proctoring | None | None | Required (Pearson VUE, PSI, etc.) |
| What it proves | "Watched the content" | "Knows the tools" | "Can perform the job" |
The Skills Analyzer is a fixed-set, hard-coded assessment that SP customers already use today. Engineering managers have given it overwhelmingly positive feedback — and a Bloom's analysis reveals why. It's the best assessment content SP has, even though it still has vast room for improvement.
Higher-order: 46% — closest to certification-grade
Higher-order: 16% — weakest tier, mostly recall
Higher-order: 28% — strong L3 from applied scenarios
| Metric | General Bank (12,183 questions) | Skills Assessment (1,501 questions) | Skills Analyzer (73 questions) | Certification Target |
|---|---|---|---|---|
| Lower-order (L1+L2) | 88.7% | 80.7% | 71.2% | Minority of exam |
| Higher-order (L3-L6) | 11.3% | 19.3% | 28.8% | Majority of exam |
| L3 Apply | 7.5% | 14.3% | 21.9% | Significant portion |
| L5 Evaluate | 0.9% | 2.5% | 4.1% | Significant portion |
| Customer reception | Expected (quizzes) | Neutral | Overwhelmingly positive | — |
| Data Point | What It Tells Us | What It Does NOT Tell Us |
|---|---|---|
| Course completions | User watched the videos and clicked through | Whether they understood or can apply the content |
| Technical certificates | User passed quizzes that are 81-89% recall questions | Whether they can model a production-ready part |
| Video views / time on platform | User consumed content (or left a tab open) | Whether learning transferred to job performance |
| Skills Assessment scores | User can identify tools and recall procedures (69% L1). Randomized per attempt but without Bloom's or difficulty controls. | Whether they can make engineering judgments under constraints |
| Streaks / engagement metrics | User is consistently active on the platform | Anything about capability |
A code-level review of our two existing assessment systems confirms neither was designed with certification-grade controls (Bloom's tagging, psychometric tracking, proctored delivery, blueprint-driven selection). See Appendix F for the full infrastructure audit.
It is natural to reach for what we already have. We have 12,183 questions, 1,501 assessment items, and a working assessment engine. The instinct is: "Let's package this into something employers can see."
But that approach has a ceiling. The data was collected to measure learning engagement, not professional competency. Repackaging it — even with better UI, dashboards, or badges — does not change what it fundamentally measures.
This is the difference between a Fitbit and a medical exam. Both use data. One tracks activity. The other diagnoses capability. Employers are asking for the medical exam.
Across 6 employer interviews, the request was consistent and specific:
Every one of these demands requires data we do not currently produce. The gap isn't in presentation — it's in what we're measuring.
The SolidProfessor Certified Engineer (SPCE) is the credential we're evaluating — a vendor-neutral, tiered certification that proves an engineer can execute in modern CAD/CAM software. Existing certifications (CSWA/CSWP) verify that a candidate can arrive at correct geometry — but not how they built it (see Appendix G for a detailed grading comparison). Here's where SPCE fits relative to the credentials that already exist.
A state-issued, legally binding credential that grants an engineer authority to prepare, sign, seal, and submit engineering plans. The PE shoulders ultimate legal responsibility for safety, structural integrity, and public welfare.
A vendor-neutral credential that proves an engineer can build complex 3D parametric models, run finite element analysis (FEA), generate manufacturing toolpaths, and execute production-ready digital workflows in modern CAD/CAM software.
In the vocational and Career and Technical Education (CTE) market, the same complementary positioning applies — but with ASME and NIMS instead of the PE:
The universal mathematical language for engineering drawings — GD&T, tolerancing, design intent communication. Dictates how a part must be specified.
Validates that the student has the digital fluency to operate CAD/CAM software — the tool that bridges ASME design intent to NIMS physical execution.
Industry-recognized competency standards for CNC operation, machining, and CAM programming. Defines what a skilled manufacturer must be able to do on the shop floor.
A manufacturing firm needs both -- and SP integrates between them across the entire engineering department, bridging ASME design intent to NIMS physical execution.
We are not starting from scratch. The platform has production-ready assessment delivery (Skills Analyzer), event architecture, certification tracking, RBAC, LTI 1.3 integration, and a Vue 3 assessment UI. The engineering foundation is in place — what's missing is the content layer and exam controls.
See Appendix A for the full infrastructure inventory.
A hybrid proctoring model covers all tiers: third-party AI-recorded for high-volume Associate exams, Cloud VDI (Azure Virtual Desktop) for Expert/Master tiers where candidates interact with desktop CAD software. Standard lockdown browsers cannot support CAD applications — Cloud VDI solves this by streaming a locked-down VM to the candidate's browser.
See Appendix B for vendor comparison and architecture details.
Automated CAD model grading is technically feasible using the SOLIDWORKS API, with an existing commercial product (Graderworks) and published research (ASEE 2024) proving viability. A 4-layer scoring system (robustness, feature efficiency, constraint quality, design history) can objectively grade production-readiness — something no other certification does. CSWA/CSWP checks mass properties and dimensions; the CAD analysis engine checks feature tree quality, constraint health, and rebuild robustness (see Appendix G for the full comparison). Known risks include COM API instability and cold start latency, with identified mitigations. Estimated infrastructure cost: $18-37K/year.
See Appendix C for the scoring system, API methods, pipeline design, and cost estimates.
Human SME grading can replace the CAD engine at pilot scale, eliminating the riskiest technical dependency from the first bet. This is the insight that makes Bet 1 possible without solving the hardest technical problem first.
With SME grading, SPCE Associate could include a performance-based modeling task from day one — without the automated CAD analysis engine being ready. That immediately differentiates from CSWA/CSWP, which are multiple-choice only. The performance task becomes the credibility signal employers are asking for, graded by the kind of engineers who would be hiring the candidate.
"Graded by practicing engineers" is arguably more credible than "graded by algorithm" when the brand is unproven — this is how Cisco's CCIE lab exams work. SME scoring patterns become training data for the automated rubric in Phase 2, creating a smooth transition path: human grading validates the approach, then automation scales it.
See Appendix D for the full bridge strategy, phase diagram, and cost model.
If the problem is real -- and the interviews say it is -- what would a first bet look like? SPCE Associate: a single-tier, proctored assessment that answers "Can this person model?" Here's what's involved.
Five things must exist before the first candidate sits for the exam:
Conducted with the advisory board and 200+ practitioners. Defines what the exam tests — without it, there is no defensible blueprint. Requires a JTA facilitator (external hire or consultant).
~300-400 items mapped to JTA domains. AI-assisted authoring compresses SME work from months to weeks, but psychometric validation — including formal cut-score setting via the Angoff Method — requires a psychometrician (external hire). A worked sketch of the Angoff arithmetic follows this list.
Longest pole — content and certification science, not engineering.
Identity verification and exam security. Remote proctoring is acceptable under ISO 17024 as long as controls are documented and demonstrably equivalent to in-person testing.
Timed sessions, question randomization, cooldown periods, and audit trails — built on the existing Skills Analyzer or Skills Assessment infrastructure.
Employers validate certifications with one click. Open Badges 3.0 integration for LinkedIn, ATS systems, and digital portfolios.
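To make the cut-score step concrete: under the Angoff Method, each SME estimates the probability that a minimally competent candidate answers each item correctly; the sum of the per-item averages becomes the recommended cut score. The sketch below uses made-up ratings purely to show the arithmetic -- none of these values come from SPCE data.

```python
# Hypothetical Angoff ratings: for each item, each SME estimates the probability
# that a minimally competent candidate answers it correctly.
# All values below are illustrative, not real SPCE data.

angoff_ratings = {
    "item_01": [0.80, 0.70, 0.75],   # three SME judges per item
    "item_02": [0.55, 0.60, 0.50],
    "item_03": [0.90, 0.85, 0.95],
    # ... one entry per item on the exam form
}

def angoff_cut_score(ratings: dict[str, list[float]]) -> float:
    """Sum of per-item mean ratings = expected raw score of a minimally
    competent candidate, i.e. the recommended cut score."""
    return sum(sum(judges) / len(judges) for judges in ratings.values())

cut = angoff_cut_score(angoff_ratings)
print(f"Recommended cut score: {cut:.1f} of {len(angoff_ratings)} items")
# With the three items above: 0.75 + 0.55 + 0.90 = 2.2 of 3
```

In practice the psychometrician also reconciles judge disagreement and adjusts the cut after pilot item analysis; the sum-of-averages is only the starting point.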
See Appendix E for the detailed implementation breakdown.
Earn formal accreditation under ISO/IEC 17024 (the international standard for personnel certification bodies) and ANAB national accreditation. This proves SPCE is impartial, psychometrically valid, and governed by documented processes -- not just a marketing tool. Accreditation makes SPCE credible to employers, government workforce programs, and regulators worldwide.
Critical prerequisite: Because SP provides both training and certification, ISO 17024 requires a documented firewall policy — formal separation between training staff and certification/exam staff. The people who build SP courses cannot write exam questions or make certification decisions. ANAB also requires 6-12 months of operating data (real candidates, real exam results) before they will conduct an onsite assessment. This means Year 1 is build + launch + collect data; accreditation application happens in Year 2.
Assemble an advisory board of industry experts, academics, and professional associations (target: ASME, SME) to govern exam content. This board's first task: conduct a formal Job Task Analysis (JTA) — a structured study with 200+ practitioners that defines the actual tasks, knowledge, and skills required for the role. The JTA becomes the exam blueprint, with domains mapped to established competency frameworks: ASME Y14.5 (GD&T, tolerancing, design intent communication) and NIMS duty areas (CNC programming, machining execution, CAM proficiency). Starting from recognized standards rather than SP curriculum gives the advisory board a defensible foundation — and gives ISO 17024 auditors exactly what they want to see.
Partner with enterprise proctoring infrastructure like Pearson VUE for global test center access alongside our online proctoring. Include hands-on lab challenges -- not just multiple choice -- at every tier. The automated CAD quality analysis engine for Expert/Master gives SPCE genuine weight with hiring managers that no other certification can match.
Get approved as a PDH/CPC continuing education provider so licensed Professional Engineers (PEs) can count SolidProfessor courses toward their mandatory renewal hours. This instantly embeds SP into the professional engineering ecosystem -- engineers must take CE courses anyway, and SP becomes an approved source. Creates a recurring touchpoint with every licensed engineer.
Issue credentials using the Open Badges 3.0 standard -- the W3C-backed specification for verifiable digital credentials. Each SPCE badge is cryptographically signed, machine-readable, and one-click verifiable on LinkedIn, resumes, and employer ATS systems. Unlike a PDF certificate, an Open Badge can be independently verified by anyone without contacting SolidProfessor, creating a network effect: every badge shared is a trust signal that drives industry-wide recognition.
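For orientation, the sketch below shows the rough shape of an Open Badges 3.0 credential -- a W3C Verifiable Credential carrying an achievement payload. Field names approximate the 1EdTech vocabulary; the context URL, issuer domain, and IDs are placeholders, not SolidProfessor values.

```python
# Illustrative Open Badges 3.0 payload (a W3C Verifiable Credential).
# Context URL, issuer domain, and IDs are placeholders; consult the
# 1EdTech OB 3.0 spec for the exact vocabulary and required fields.

spce_badge = {
    "@context": [
        "https://www.w3.org/ns/credentials/v2",
        "https://purl.imsglobal.org/spec/ob/v3p0/context.json",  # placeholder
    ],
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "issuer": {
        "id": "https://certs.example-solidprofessor.com/issuer",  # hypothetical
        "type": "Profile",
        "name": "SolidProfessor Certification Body",
    },
    "validFrom": "2025-01-15T00:00:00Z",
    "credentialSubject": {
        "type": "AchievementSubject",
        "achievement": {
            "type": "Achievement",
            "name": "SPCE Associate",
            "description": "Proctored, performance-based CAD modeling credential.",
            "criteria": {"narrative": "Passed the SPCE Associate exam (blueprint v1)."},
        },
    },
    # A real badge also carries a cryptographic proof block so employers can
    # verify it independently, without contacting SolidProfessor.
}
```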
Five problems stand between the research findings and a credible certification. None are insurmountable, but all must be solved — and they are sequenced, not parallel.
No amount of infrastructure fixes a test where 81% of questions are recall. Building a Bloom's-balanced item bank requires SME authoring, psychometric validation, and pilot testing — content work that engineering can't shortcut.
Cut-score setting (Angoff Method), item analysis, and Bloom's blueprinting require a qualified psychometrician. Niche external hire, 3-6 month sourcing timeline.
ANAB needs 6-12 months of operating data — real candidates, real results — before an onsite assessment. We can't apply until we've launched.
.NET/SolidWorks COM API expertise is rare. SME grading bridges this for Bet 1, but the automated engine is the long-term differentiator.
Employer interviews say the need exists, but adoption is unproven until candidates sit for the exam and employers act on the results.
| Risk | Severity | Mitigation |
|---|---|---|
| SolidWorks COM crashes / memory leaks | HIGH | Process isolation per analysis, auto-restart, warm VM pool |
| Performance-based exam security | CRITICAL | Custom Electron lockdown is fundamentally bypassable. Pivot to Cloud VDI (Azure Virtual Desktop) — candidates receive only a video stream, zero local code execution. Lockdown enforced at cloud network level, not endpoint |
| Third-party vendor lock-in | MEDIUM | Strategy pattern for technical portability. But data/contractual lock-in is the bigger risk — MSA must require: non-exclusive data licensing, mandatory data portability (video/biometric export at no cost), no minimum volume commitments |
| SPCE credibility / adoption speed | HIGH | Risk-free pilots through VARs + enterprise customers are necessary but passive. Must also pursue: academic pipeline integration (subsidized university partnerships a la CompTIA), aggressive differentiation from CSWP ("production-ready modeling across platforms, not single-vendor output verification"), employer advisory board endorsements |
| AI proctoring false positives | HIGH | Post-exam human review is mandatory (industry best practice). But if AI flags 60% of exams, QA team is overwhelmed. Require demographic audits of vendor AI models before MSA signing. Build frictionless accommodation workflow for neurodivergent/disabled candidates. Implement transparent appeals process with guaranteed secondary human review |
| Vendor-neutral scope too broad | MEDIUM | Mechanical engineering is fragmented (mold design vs. structural vs. aerospace) unlike networking (standardized protocols). SPCE Associate starts narrow: SOLIDWORKS as the initial delivery platform (85% of SP users), parametric solid modeling for general mechanical design, industry-agnostic. Expansion to other CAD software (Fusion 360, Creo, NX) happens only as tiers mature and market feedback validates demand |
| Psychometrician & JTA facilitator sourcing | HIGH | Niche expertise with 3-6 month sourcing timeline. These roles are the longest pole — without them, there is no defensible exam blueprint or cut-score. Mitigation: begin sourcing in parallel with advisory board formation. Consider contract engagement for Phase 1 |
Across quantitative analysis and qualitative interviews, every data point converges on the same conclusion: there is real, unmet demand for a credible way to prove mechanical engineering competency — and what we have today does not satisfy it.
The problem exists.
Six employers confirmed that existing credentials — including ours — don't influence hiring decisions. Engineering managers evaluate candidates on capability, not certificates. Experienced engineers can't demonstrate their skills because their best work is locked behind NDAs. The market has no trusted, standardized way to prove "this person can model."
Our current data doesn't solve it.
81-89% of our questions test recall. Zero questions ask anyone to design, evaluate, or create anything. No proctoring. No performance tasks. No external validation. This content was built for learning reinforcement — and it does that job well. But repackaging it as competency proof will not change what it measures.
We can solve it.
The infrastructure exists. The assessment engine, event architecture, credential system, and Vue 3 UI are production-ready. What's missing is the content layer: Bloom's-balanced questions, proctored delivery, and eventually a CAD analysis engine that no other certification provider has. The technical feasibility is there — this is an investment decision, not a capability question.
The bet is whether it's worth the investment.
That's what The Foundry is here to answer. The research says the demand is real and the existing market (CSWA/CSWP) leaves a gap. The question isn't whether the problem exists — it's whether SolidProfessor is willing to build the thing that actually solves it.
Reference Material for Pod
Third-party proctoring for Associate/Professional (browser-based MCQ). Cloud VDI (Azure Virtual Desktop) for Expert/Master, where candidates interact with desktop CAD software.
| SPCE Tier | What It Proves | Proctoring | Why This Model |
|---|---|---|---|
| Associate | Can this person model? Core knowledge + fundamentals | Third-party (AI-recorded) | High-volume, browser-based MCQ + short tasks |
| Professional | Can they solve design problems? Applied engineering judgment | Third-party (hybrid AI + human) | Design challenges need real-time monitoring |
| Expert | Can they own a product? Complex assemblies, simulation, PDM | Cloud VDI (Azure Virtual Desktop) | CAD runs in locked-down cloud VM, streamed to candidate |
| Master | Can they lead engineering? Full-day lab, production-grade output | Cloud VDI + live proctor (8 hrs) | CCIE-equivalent prestige. Proves senior capability |
| Dimension | A: Third-Party Only | B: Build In-House | C: Hybrid (Recommended) |
|---|---|---|---|
| Relative speed | Fastest (integration only) | Slowest (full build) | Fast start (third-party first, build later) |
| CAD software support | Poor | Excellent | Excellent (Expert/Master) |
| 8-hour Master exam | Uncertain | Full support | Full support |
| Layer | Weight | What It Measures |
|---|---|---|
| Robustness | 30% | Does the model survive dimension changes without breaking? |
| Feature Efficiency | 25% | Feature count vs benchmark, pattern usage, reference geometry |
| Constraint Quality | 25% | Fully-defined sketches, under/over-constraint detection |
| Design History Clarity | 20% | Feature naming, folder organization, documentation |
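A minimal sketch of how the four layer scores could roll up into one production-readiness grade, using the weights from the table above; the per-layer scores are illustrative.

```python
# Weights from the 4-layer scoring table; layer scores (0-100) are illustrative.
LAYER_WEIGHTS = {
    "robustness": 0.30,
    "feature_efficiency": 0.25,
    "constraint_quality": 0.25,
    "design_history": 0.20,
}

def weighted_cad_score(layer_scores: dict[str, float]) -> float:
    """Roll per-layer scores (0-100) into one weighted production-readiness score."""
    return sum(LAYER_WEIGHTS[layer] * score for layer, score in layer_scores.items())

example = {
    "robustness": 60,          # model breaks on some dimension changes
    "feature_efficiency": 85,
    "constraint_quality": 90,
    "design_history": 70,
}
print(weighted_cad_score(example))  # 0.3*60 + 0.25*85 + 0.25*90 + 0.2*70 = 75.75
```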
Recommended: SOLIDWORKS API first -- the only option with 100% coverage of all 4 grading layers.
| Approach | 4-Layer Coverage | Timeline | Annual Infra Cost | Verdict |
|---|---|---|---|---|
| SOLIDWORKS API | 100% | 10-15 weeks | $18-37K | Recommended |
| Multi-CAD (SW+Autodesk+NX) | 100% SW, 40-60% others | 30-42 weeks | $29-73K | Year 2+ |
| STEP/Neutral format | ~5% | 8-12 weeks | $1-4K | Supplement only |
| Cloud/Onshape | ~65-75% | 15-21 weeks | $7-21K | Investigate Y2 |
Each grading layer maps to specific SolidWorks API methods. An existing commercial product (Graderworks, official SolidWorks Solution Partner) and published research (ASEE 2024) prove automated SolidWorks grading is viable.
| Layer | Weight | API Methods | Difficulty | Status |
|---|---|---|---|---|
| Feature Efficiency | 25% | GetFeatureTreeRootItem2, ITreeControlItem for tree traversal. Feature count, pattern detection, reference geometry all accessible. | MEDIUM | Proven |
| Constraint Quality | 25% | ISketch.GetConstrainedStatus returns fully-defined, under-defined, or over-defined per sketch. Direct API support. | EASY | Proven |
| Robustness | 30% | ForceRebuild3 / EditRebuild3 exist but have a known bug: they return true even when rebuild errors occur. Workaround: compare geometry before/after dimension changes, check for suppressed/failed features. | HARD | Needs workaround |
| Design History | 20% | Feature names, folder structure, comments all accessible via standard tree traversal APIs. | EASY | Proven |
MVP approach: Ship with Layers 1, 2, and 4 (70% of weighted score) first. Add robustness testing as an iteration once the rebuild detection workaround is validated. Even without Layer 3, this scores more dimensions than CSWP.
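To illustrate the Layer 3 workaround (perturb a dimension, rebuild, then check whether the geometry actually responded), here is a minimal sketch using Python and pywin32. The production pipeline is planned as a .NET 8 wrapper; this sketch only shows the call sequence. The dimension name and perturbation size are illustrative, and exact API signatures should be verified against the SolidWorks documentation.

```python
# Minimal sketch of the rebuild-robustness workaround (Layer 3), shown with
# pywin32 purely to illustrate the call sequence. Assumes a licensed SolidWorks
# install; the dimension name "D1@Sketch1" and the 5% change are illustrative.
import win32com.client

swApp = win32com.client.Dispatch("SldWorks.Application")
model = swApp.ActiveDoc  # candidate's part, already opened by the harness

def mass(doc) -> float:
    """Current mass from the model's mass properties."""
    return doc.Extension.CreateMassProperty().Mass

baseline = mass(model)

# Perturb a driving dimension, rebuild, and see whether the geometry responded.
dim = model.Parameter("D1@Sketch1")        # illustrative dimension name
dim.SystemValue = dim.SystemValue * 1.05   # +5% change (system units)
model.ForceRebuild3(False)                 # NOTE: return value is unreliable

perturbed = mass(model)
if abs(perturbed - baseline) < 1e-9:
    # Geometry did not respond to the change -- likely a failed or suppressed
    # feature, or hard-coded geometry. Flag for the robustness layer.
    print("Robustness flag: model did not update after dimension change")
else:
    print("Model rebuilt and geometry updated")
```

A production version would also walk the feature tree for error and suppression states rather than relying on a single mass comparison.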
The full COM API is unstable at scale. A bifurcated architecture balances speed with depth — most files can be triaged without ever launching SolidWorks.
| Tier | Technology | Function | Speed | Stability |
|---|---|---|---|---|
| Tier 1: Triage | Document Manager API | File integrity, mass properties, metadata, file references | Milliseconds | Very High (no GUI) |
| Tier 2: Deep Analysis | Out-of-Process COM API | Feature tree traversal, sketch constraints, model rebuilds | Seconds-Minutes | Low (forced termination per file) |
The DM API reads file data without launching SolidWorks or its GUI. It cannot rebuild models or check constraint status, but it handles metadata extraction and file validation with extreme reliability. Files that fail Tier 1 checks never reach the expensive Tier 2 pipeline.
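A structural sketch of the two-tier dispatch follows. The tier1_triage and tier2_deep_analysis functions stand in for hypothetical wrappers around the Document Manager API and the out-of-process COM engine -- their names and return shapes are assumptions for illustration, not existing SP code.

```python
# Structural sketch of the bifurcated grading pipeline: cheap triage first,
# expensive COM analysis only for files that pass. Wrapper names are assumed.
from dataclasses import dataclass

@dataclass
class TriageResult:
    file_ok: bool          # opens, correct type, not corrupt
    mass_plausible: bool   # mass properties within an expected range
    reason: str = ""

def tier1_triage(path: str) -> TriageResult:
    """Milliseconds-fast checks. The real version would call the Document
    Manager API (metadata, mass properties, references); this stand-in only
    checks the basics so the control flow is runnable."""
    if not path.lower().endswith(".sldprt"):
        return TriageResult(False, False, "not a SolidWorks part file")
    return TriageResult(True, True)

def tier2_deep_analysis(path: str) -> dict:
    """Seconds-to-minutes deep analysis in an isolated SolidWorks COM process:
    feature tree, sketch constraints, rebuild robustness. Placeholder scores."""
    return {"robustness": 0.0, "feature_efficiency": 0.0,
            "constraint_quality": 0.0, "design_history": 0.0}

def grade_submission(path: str) -> dict:
    triage = tier1_triage(path)
    if not (triage.file_ok and triage.mass_plausible):
        # Failed files never consume an expensive COM/VM slot.
        return {"status": "rejected_at_triage", "reason": triage.reason}
    scores = tier2_deep_analysis(path)  # own process, force-terminated after use
    return {"status": "graded", "scores": scores}
```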
| Component | Annual Cost | Notes |
|---|---|---|
| SolidWorks Professional licenses | $8K-12.5K | 2-3 seats × $4,150/yr subscription |
| Azure NV6ads A10 v5 VMs | $10K-24K | 2-3 VMs × $670/mo on-demand ($400/mo with 1-yr reserved) |
| Total infrastructure | $18K-37K | Scales with exam volume |
MVP volumes (Associate only) likely land at the low end ($18K with 2 reserved VMs). Scale trigger: >50 concurrent analyses requires additional VM capacity.
SME grading and automated analysis are not mutually exclusive. The smartest path uses one to build the other.
In Phase 1, SME scoring patterns become the training data for the automated rubric. You learn what matters before you encode it in software. In Phase 2, you run both in parallel — every automated score gets a human check — until the engine matches expert judgment at an acceptable threshold. In Phase 3, the engine handles volume while SMEs shift to edge cases, appeals, and rubric evolution.
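One way to operationalize the Phase 2 gate: score every submission both ways and track how often the engine lands within a tolerance of the SME score. The tolerance, agreement threshold, and pilot numbers below are illustrative choices, not figures from the research.

```python
# Phase 2 sketch: compare automated scores to SME scores on the same
# submissions and decide whether the engine is ready to grade solo.
# The 5-point tolerance and 90% agreement bar are illustrative policy values.

def agreement_report(sme: list[float], auto: list[float],
                     tolerance: float = 5.0, required_rate: float = 0.90) -> dict:
    diffs = [abs(s - a) for s, a in zip(sme, auto, strict=True)]
    within = sum(d <= tolerance for d in diffs) / len(diffs)
    return {
        "mean_abs_diff": sum(diffs) / len(diffs),
        "within_tolerance_rate": within,
        "engine_ready": within >= required_rate,
    }

# Hypothetical pilot data (0-100 scores on the same six submissions)
sme_scores  = [78, 64, 91, 55, 83, 70]
auto_scores = [75, 70, 90, 52, 80, 69]
print(agreement_report(sme_scores, auto_scores))
# diffs: 3, 6, 1, 3, 3, 1 -> mean 2.83; 5 of 6 within 5 pts -> not yet ready at 90%
```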
A concrete breakdown of every component required.
No candidate gets the same test. Each exam pulls a randomized subset from a large item bank, constrained by a fixed blueprint (domain distribution + Bloom's distribution). Industry standard calls for a bank of roughly 3-5x the exam length; for a 60-question Associate exam, that means authoring on the order of 300-400 candidate items before pilot attrition, each tagged by domain and Bloom's level.
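A minimal sketch of blueprint-driven form assembly, assuming a bank where every item is tagged with a domain and a Bloom's level. The domains, levels, and per-cell counts are illustrative, not the real SPCE blueprint.

```python
# Sketch of blueprint-driven form assembly: every candidate gets a different
# random form, but the domain x Bloom's distribution is fixed by the blueprint.
import random
from collections import defaultdict

# Each bank item is tagged with a domain and a Bloom's level (illustrative bank).
bank = [
    {"id": f"q{i}", "domain": d, "bloom": b}
    for i, (d, b) in enumerate(
        (d, b) for d in ("sketching", "part_modeling", "assemblies")
               for b in ("L1", "L3", "L5") for _ in range(20)
    )
]

# Blueprint: how many items each (domain, Bloom's) cell contributes to a form.
blueprint = {
    ("sketching", "L1"): 4, ("sketching", "L3"): 4,
    ("part_modeling", "L1"): 4, ("part_modeling", "L3"): 6, ("part_modeling", "L5"): 2,
    ("assemblies", "L3"): 4, ("assemblies", "L5"): 2,
}

def assemble_form(bank: list[dict], blueprint: dict, seed: int) -> list[dict]:
    rng = random.Random(seed)                # seed per candidate attempt
    by_cell = defaultdict(list)
    for item in bank:
        by_cell[(item["domain"], item["bloom"])].append(item)
    form = []
    for cell, count in blueprint.items():
        if len(by_cell[cell]) < count:
            raise ValueError(f"Bank too thin for cell {cell}")  # why 3-5x depth matters
        form.extend(rng.sample(by_cell[cell], count))
    rng.shuffle(form)
    return form

print(len(assemble_form(bank, blueprint, seed=42)))  # 26-item illustrative form
```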
AI-assisted authoring pipeline: AI generates 400-500 draft items from existing SP content → SME panel reviews and edits (~60% survive) → pilot with 200+ test-takers → psychometric item analysis cuts another ~20% → production bank of 200-250 validated items. This compresses SME authoring from 3-4 months to 3-4 weeks of review. Still the longest pole — content work, not engineering.
Third-party vendor (ProctorU/Examity). Laravel module: schedule exam → launch proctored session → receive completion webhook → record results. Identity verification flow. Candidate UX.
Existing Skills Analyzer infrastructure needs: timed sessions, question randomization, no retake without cooldown, score thresholds per Bloom's level, proctor session tied to exam attempt, audit trail. A minimal sketch of the cooldown and threshold checks appears after this component list.
Certificate generation on pass. Public verification portal (employer enters cert ID → sees candidate, tier, date, score summary). Badge standard integration (Open Badges / Credly or self-hosted).
Two-tier pipeline: Document Manager API for fast triage (metadata, mass properties) + .NET 8 wrapper for full COM API deep analysis (feature tree, constraints, rebuilds). Azure NV-series Windows VM with SolidWorks license + NVIDIA GRID drivers. File upload → queue → Tier 1 triage → Tier 2 analysis → score. Requires niche .NET/SolidWorks hire.
Admin dashboard: view exam results, flag reviews, manage question bank. Reporting: pass rates, score distributions, flagged proctoring incidents.
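As referenced in the exam-controls item above, here is a minimal sketch of two of those controls -- the retake cooldown and per-Bloom's-level score floors. The 14-day window, 70% overall bar, and per-level minimums are placeholder policy values, not decisions.

```python
# Illustrative exam-control checks: retake cooldown and per-Bloom's score floors.
# All threshold values are placeholders for discussion, not decided policy.
from datetime import datetime, timedelta

COOLDOWN = timedelta(days=14)
OVERALL_PASS = 0.70
MIN_PER_LEVEL = {"L3": 0.60, "L5": 0.50}   # higher-order minimums

def can_retake(last_attempt: datetime | None, now: datetime) -> bool:
    return last_attempt is None or (now - last_attempt) >= COOLDOWN

def passes(overall: float, per_level: dict[str, float]) -> bool:
    """Pass requires the overall bar AND minimums on higher-order levels,
    so a candidate cannot pass on recall questions alone."""
    if overall < OVERALL_PASS:
        return False
    return all(per_level.get(level, 0.0) >= floor
               for level, floor in MIN_PER_LEVEL.items())

print(passes(0.78, {"L1": 0.95, "L3": 0.65, "L5": 0.40}))  # False: L5 below floor
```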
The bottleneck is the question bank, not the engineering. AI-assisted authoring significantly compresses the timeline, but SME review and psychometric validation still take time that engineering can't shortcut. Items 2-4 build on existing platform infrastructure. Item 5 (CAD engine) is the differentiator but could ship as a fast-follow rather than day-one requirement.
A code-level review of our two existing assessment systems reveals neither was designed with certification-grade controls.
| Aspect | Skills Analyzer (Voltron) | Skill Assessment (Solid Career) | SPCE Requirement |
|---|---|---|---|
| Question selection | Fixed set, same for all users | Random 30 per attempt | Blueprint-driven: domain + Bloom's constraints |
| Bloom's tagging | None | None | Every item tagged L1-L5 |
| Difficulty control | None | None | Psychometric item difficulty indices |
| Question distribution | Sequential order | Category-balanced (content areas only) | Balanced by domain AND cognitive level |
| Exam security | Same test every time | Different per attempt, but unproctored | Proctored, randomized, cooldown-gated |
| Psychometric tracking | None | None | Item analysis after each pilot |
A common assumption is that SOLIDWORKS' own certifications (CSWA/CSWP) already test CAD modeling capability. They do — but the grading method reveals a fundamental limitation.
Candidates model parts and assemblies in SOLIDWORKS under timed conditions (3 segments, ~200 minutes total). The exam then asks for mass properties, center of gravity coordinates, and dimensions. If the candidate's number matches within ~1% tolerance, the answer is correct.
What this validates: "Can you arrive at the correct geometry?"
Two engineers can produce identical mass and center of gravity — one with a clean parametric tree that survives design changes, the other with hard-coded dimensions that break on the first modification. CSWP grades both as equally correct.
What this misses: Feature tree quality, parametric robustness, constraint health, design intent, rebuild stability.
| Dimension | CSWA / CSWP | SPCE (Proposed) |
|---|---|---|
| What candidates do | Model parts and assemblies in SOLIDWORKS | Model parts and assemblies (initial platform: SOLIDWORKS) |
| How answers are graded | Mass properties, CoG, dimensions — numeric match within ~1% | 4-layer scoring: robustness, feature efficiency, constraint quality, design history |
| What "correct" means | Correct geometry output | Correct geometry + production-ready modeling approach |
| Feature tree analysis | None — model internals are not inspected | Full inspection via SOLIDWORKS API (feature order, sketch constraints, rebuild behavior) |
| Rebuild robustness | Not tested | Automated dimension perturbation — does the model survive design changes? |
| Proctoring | Unproctored online exam | Third-party proctored (identity verification, lockdown) |
| Vendor scope | SOLIDWORKS only (Dassault product) | Vendor-neutral framework; SOLIDWORKS as initial delivery platform |
| Psychometric validation | None published | Bloom's-mapped, JTA-driven blueprint, Angoff cut-scores |
| Accreditation | None (vendor self-issued) | ISO/IEC 17024 target |
Sources: SOLIDWORKS CSWP Certification · CSWP Sample Exam (PDF) · Engineering.com Guide · GoEngineer CSWP Prep