Physical inference substrate
Computation can be embodied in material behavior.
CommonAccess builds hardware in which learned model behavior is physically embodied. Models are compiled into photonic structures, and inference emerges through propagation, measurement, and readout rather than repeated digital execution.
01
At a glance
Core ideas in one screen for quick orientation.
PiP devices encode model behavior in physical structure.
Inference combines propagation, sensing, and calibrated decode.
Training and optimization happen before fabrication release.
Deployment emphasizes stable latency, predictable energy use, and operational continuity.
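The inference path above (propagation through a fixed physical structure, sensing, then calibrated decode) can be sketched in simulation. This is a minimal illustrative model, not the CommonAccess implementation: the operator `W` and the functions `propagate`, `sense`, and `decode` are assumptions standing in for the physical device, its detectors, and its readout calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a compiled photonic structure: a fixed linear operator.
# In hardware this transform is frozen at fabrication, not executed digitally.
W = rng.normal(size=(4, 8))

def propagate(x):
    # Light propagates; the structure applies the learned transform.
    return W @ x

def sense(field):
    # Detectors measure intensity (magnitude squared), discarding phase.
    return np.abs(field) ** 2

def decode(readings, gain, offset):
    # Calibrated decode maps raw detector readings to model outputs.
    return gain * readings + offset

x = rng.normal(size=8)
readings = sense(propagate(x))
y = decode(readings, gain=0.5, offset=-0.1)
print(y.shape)  # (4,)
```

Note that training happens before this point: because `W` is fixed once fabricated, all optimization must finish before release, which is why the pipeline places learning ahead of fabrication.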
02
Start here
Choose the path that matches your objective. Each entry links to a dedicated page.
Technical Documentation
Deep technical explanations, architecture, and benchmark commitments.
Formal Specifications
Math-oriented specification language, notation, and validation gates.
Implementation Roadmap
Phase-by-phase execution, tools, artifacts, and release criteria.
Foundation
Vision, thesis, and boundaries
Computation model
PiP definition and hybrid layer model
ML primitive mapping
How weights, tokens, softmax, loss, and gradients map physically
Device architecture views
Cross-sections, top views, and readout planes
Optical mode rollout
Mode progression from first release to full production
Motives
Why this computing shift is necessary now
Training and compilation
Dataset to simulation to fabrication flow
Materials and fabrication
Encoded material classes and process limits
Verification and workloads
Validity gates, methods, workload fit
Benchmarks and promises
Comparative baselines and delivery commitments
Industries and use cases
Applied use cases and disruption vectors
Interoperability and stack
Compiler, SDK, runtime, framework contracts
Collaboration
Technical collaboration channels and expectations
03
System topology
High-level architecture remains visible on the landing page for orientation.
Model definitions are compiled into physical operators, then exposed through runtime interfaces that preserve standard ML integration.
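The compile-then-serve contract could look something like the sketch below. All names here (`PhysicalOperator`, `compile_model`, `Runtime`, `predict`) are hypothetical; no real CommonAccess SDK is referenced. The point is the shape of the contract: compilation fixes behavior into a device handle, and the runtime wraps that handle behind a familiar `predict` call.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhysicalOperator:
    """Immutable handle to a fabricated (or simulated) structure.

    Hypothetical type: stands in for whatever the compiler emits.
    """
    device_id: str
    input_dim: int
    output_dim: int

def compile_model(model_definition: dict) -> PhysicalOperator:
    # Compilation happens before fabrication release: learned parameters
    # are fixed into the device layout and cannot change afterwards.
    return PhysicalOperator(
        device_id="sim-0",
        input_dim=model_definition["in"],
        output_dim=model_definition["out"],
    )

class Runtime:
    """Runtime interface preserving a standard ML call shape."""

    def __init__(self, op: PhysicalOperator):
        self.op = op

    def predict(self, x: list[float]) -> list[float]:
        assert len(x) == self.op.input_dim
        # Placeholder for propagation + sensing + calibrated decode.
        return [sum(x) / len(x)] * self.op.output_dim

rt = Runtime(compile_model({"in": 8, "out": 4}))
out = rt.predict([0.0] * 8)
print(len(out))  # 4
```

Keeping the runtime surface identical to a conventional `predict` interface is what lets existing ML integrations remain unchanged even though execution moved into a physical substrate.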
Research and collaboration
The landing page stays intentionally compact. Use the linked resources for full technical depth, formal methods, and implementation detail.