Documentation

CommonAccess technical documentation

This page consolidates the full explanatory content for CommonAccess: thesis, computation model, training pipeline, materials, verification, interoperability, and collaboration pathways.

Foundation

CommonAccess builds a computing layer where machine learning inference is executed by physical dynamics rather than repeated digital instruction execution.

Core thesis: digital inference repeatedly reconstructs state; CommonAccess persists state physically and performs inference as response to perturbation.

Mechanism

Matter to response to result.

Boundary

Not a GPU FLOPS competitor, not quantum computing, not neuromorphic electronics, and not SaaS AI.
Model to compiler to fabricated PiP to runtime.

System topology: model definitions are compiled into physical operators, then exposed through runtime interfaces that preserve standard ML integration.


Computation model

A Photonic Inference Plate (PiP) is a fabricated operator that maps input tensors to output tensors by wave propagation through learned material structure.

Model parameters are encoded as refractive distribution, scattering coupling, and threshold behavior. Inference is physical transformation plus readout.

Mechanism

Tensor input to physical transform to measured state to tensor output.
Cross-section regions: index geometry, coupling structure, nonlinear threshold, readout region.

PiP cross-section: model parameters are represented as refractive distribution, coupling geometry, and threshold behavior. Weight storage and transform behavior are unified in structure.
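As a mental model only (not a published CommonAccess API), the tensor-in, physical-transform, readout path can be sketched with a complex transfer matrix standing in for propagation through the learned geometry. The names `P_theta` and `pip_forward` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned geometry, frozen at fabrication: a complex
# transfer operator approximating propagation through the plate.
P_theta = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))

def pip_forward(x: np.ndarray) -> np.ndarray:
    """Tensor input -> physical transform -> measured state -> tensor output."""
    field_in = x.astype(complex)        # encode: tensor as boundary field
    field_out = P_theta @ field_in      # propagate: linear wave transform
    intensity = np.abs(field_out) ** 2  # measure: photodetector-style readout
    return intensity                    # decoding happens digitally downstream

y = pip_forward(rng.standard_normal(16))
assert y.shape == (8,) and np.all(y >= 0)
```

The key property the sketch captures is that weight storage and transform behavior are unified in `P_theta`: there is no separate weight fetch at inference time.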


ML primitive mapping into PiP substrate

PiPs do not remove machine learning blocks; they relocate where those blocks are executed. Tokens and encoded activations enter as optical boundary conditions, structural equivalents of weights are embedded in material geometry, and heavy transforms are executed by propagation.

Forward and backward roles are separated by phase: forward inference is physical on fabricated PiPs, while backward optimization and gradient updates are handled in differentiable simulation before fabrication. Softmax, temperature control, and some normalization remain digital at the readout interface in early releases.

Mechanism

Encode input to propagate through geometry to sample readout to decode and normalize.
ML primitive to physical implementation map:

Digital ML primitive | PiP physical counterpart
Token embedding x_t | Input field amplitude/phase boundary E_alpha(x_t)
Weight matrix W | Refractive and coupling geometry parameters theta
Linear transform W x | Wave propagation through PiP transfer operator P_theta
Softmax(z / T) | Analog score capture to digital normalization with temperature T
Loss L(y_hat, y) | Objective over readout mismatch and continuity penalties
Gradient dL/dtheta | Adjoint simulation gradients before fabrication freeze

Training still uses gradients and loss optimization in differentiable simulation. At inference time, heavy transforms are executed by fabricated structure and only normalization, control, and system orchestration remain digital.
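A minimal sketch of the hybrid boundary, assuming analog scores have already been sampled from the readout taps; only the normalization shown here stays digital, and `digital_readout` is a hypothetical name:

```python
import numpy as np

def digital_readout(analog_scores: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Softmax(z / T) remains digital at the readout interface."""
    z = analog_scores / temperature
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative analog scores as captured from readout taps.
scores = np.array([2.0, 1.0, 0.5])
probs = digital_readout(scores, temperature=0.7)
assert abs(probs.sum() - 1.0) < 1e-9
```

Temperature control stays a runtime parameter precisely because it lives on the digital side of the boundary; no refabrication is needed to change it.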


Complete ML primitive coverage matrix

Beyond weights, tokens, softmax, gradients, forward/backward, loss, and temperature, the primitives below define full-stack coverage and hybrid boundaries for PiP deployments.

Primitive | Substrate mapping | Hybrid boundary | Primary validation metric
Attention (Q/K/V, masks) | Projection and score-forming transforms mapped to PiP propagation paths | Mask enforcement and final score normalization remain runtime-controlled | Attention quality parity and context-length stability
KV-cache semantics | Persistent optical-state and readout-assisted memory traces | Cache policy and compaction logic stay digital | Update latency and continuity error vs baseline caches
Positional encoding | Phase/amplitude modulation at encoder boundary conditions | Position scheme switching and runtime policy remain software-driven | Long-context degradation slope
Normalization (LayerNorm/RMSNorm) | Preconditioned physical response with calibrated readout statistics | Final normalization constants applied digitally in early releases | Distribution drift and output quality parity
Activations (GELU/SiLU/ReLU) | Nonlinear response regions and thresholded transfer bands in material design | Fallback nonlinear shaping available in runtime decode path | Approximation error and robustness under perturbations
Residual/skip connections | Interference-based merging channels and parallel propagation paths | Safety combine path at readout for controlled blending | Stability of deep-stack composition
MLP/FFN blocks | Expansion and projection transforms partitioned across PiP operator stages | Block orchestration and fallback routing remain in runtime | Throughput per token and quality retention
Optimizer states (Adam moments, clipping) | Design-time differentiable simulation and compiler parameter updates | Not in deployed PiP inference loop | Convergence speed and fabrication-ready candidate quality
Quantization and precision policy | Readout dynamic-range design with calibrated sampling points | ADC/DAC precision policy controlled in runtime stack | Accuracy vs power and thermal envelope
Sampling policy (top-k/top-p, penalties) | PiP provides logits and stateful score trajectories | Sampling and policy logic remain digital by design | Response quality and controllability metrics
Batching and sequence scheduling | Shared propagation windows and readout cadence planning | Scheduler remains software-controlled for QoS guarantees | Latency distribution under mixed-load scenarios
Sparse routing / MoE and specialty blocks | Route-specific operator banks and selective mode activation | Router policy and expert arbitration remain runtime-managed | Routing overhead and expert quality parity
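To make one row of the matrix concrete: the positional-encoding mapping can be pictured as a position-dependent phase ramp applied at the encoder boundary. This is an illustrative model only; `encode_with_position` and the linear phase scheme are assumptions, not a documented encoder:

```python
import numpy as np

def encode_with_position(token_vec: np.ndarray, position: int, omega: float = 0.1) -> np.ndarray:
    """Hypothetical boundary encoding: token amplitudes carry the content,
    while a per-channel phase ramp encodes sequence position."""
    channels = np.arange(token_vec.size)
    phase = np.exp(1j * omega * position * channels)  # unit-magnitude phase factors
    return token_vec.astype(complex) * phase

field = encode_with_position(np.ones(4), position=3)
assert field.shape == (4,)
assert np.allclose(np.abs(field), 1.0)  # phase-only modulation preserves amplitude
```

Because position enters only as phase, switching position schemes is a boundary-condition change handled in software, matching the table's hybrid boundary for this row.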

PiP device architecture views and detailed representation

Device-level understanding is managed through multiple synchronized views: top routing, cross-sectional material stack, and readout planes. Each view contributes independent constraints to fabrication readiness.

This section tracks how encoding and decoding hardware interfaces are physically realized, where mode coupling is intentional, and where energy and thermal controls are implemented.

PiP device detail (diagram): the top view shows mode routing and coupling islands (input couplers, scattering cells, output taps); the cross-section shows the material stack and thermal path (cladding and confinement, active refractive map, substrate and heat extraction path); the readout plane view shows multi-tap sensing and calibration channels (shared calibration bus, ADC and decode bridge).

Device-level representations include top routing geometry, material stack cross-section, and readout plane instrumentation. Each view is used to track fabrication readiness and runtime calibration reliability.


Encoding interface

Input encoders map digital vectors to optical excitation envelopes through calibrated couplers. Launch conditions are version-controlled to preserve reproducibility.

Decoding interface

Readout taps are sampled by sensing arrays, then passed through calibrated ADC and decode software to produce logits or task outputs.
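A toy model of that decode path, under the assumptions of uniform ADC quantization and a calibrated affine decode; `adc_sample` and `decode_to_logits` are hypothetical names, not a real driver interface:

```python
import numpy as np

def adc_sample(intensities: np.ndarray, bits: int = 8, full_scale: float = 1.0) -> np.ndarray:
    """Quantize sensed tap intensities to integer ADC codes (illustrative model)."""
    levels = 2 ** bits - 1
    clipped = np.clip(intensities / full_scale, 0.0, 1.0)  # saturate at full scale
    return np.round(clipped * levels)

def decode_to_logits(codes: np.ndarray, calib_gain: np.ndarray, calib_bias: np.ndarray) -> np.ndarray:
    """Calibrated affine decode from ADC codes to logits."""
    return calib_gain * codes + calib_bias

codes = adc_sample(np.array([0.2, 0.9, 1.4]))  # 1.4 saturates at full scale
logits = decode_to_logits(codes, calib_gain=np.ones(3), calib_bias=np.zeros(3))
assert codes[2] == 255.0
```

The calibration gain and bias are exactly the quantities the runtime must keep fresh; readout drift shows up as error in this affine map before it shows up as task-level quality loss.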

Energy and latency controls

Sparse readout cadence, bounded perturbation updates, and thermal-aware scheduling reduce peak energy spikes and protect latency stability.

Efficiency safeguards

Runtime guardrails detect drift, crosstalk growth, or calibration decay and automatically shift to conservative sampling profiles before quality is affected.
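One way such a guardrail could be structured; the thresholds, profile names, and `select_profile` function are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class GuardrailConfig:
    max_drift: float = 0.02          # illustrative relative drift ceiling
    max_crosstalk: float = 0.05      # illustrative crosstalk ceiling
    max_calib_age_s: float = 3600.0  # recalibrate at least hourly

def select_profile(drift: float, crosstalk: float, calib_age_s: float,
                   cfg: GuardrailConfig = GuardrailConfig()) -> str:
    """Shift to a conservative sampling profile before quality is affected."""
    if (drift > cfg.max_drift or crosstalk > cfg.max_crosstalk
            or calib_age_s > cfg.max_calib_age_s):
        return "conservative"  # denser sampling, tighter recalibration cadence
    return "standard"

assert select_profile(0.01, 0.01, 60.0) == "standard"
assert select_profile(0.03, 0.01, 60.0) == "conservative"
```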

Optical mode progression from first release to full-scale rollout

PiP rollout is staged by mode complexity and control maturity. Early releases prioritize deterministic single-mode behavior, while later releases expand to multi-mode fabrics only after crosstalk and thermal bounds are consistently controlled.

Full-scale production is defined by stable yield, calibration repeatability, and benchmark commitments under deployment envelopes, not by maximum mode count.

Optical mode progression from first release to production rollout:

R1 Single-mode pilot: TE0 dominant, fixed spectral band. Exit criterion: deterministic mapping and thermal stability.
R2 Constrained multimode: TE0/TE1 with bounded coupling. Exit criterion: throughput lift with controlled crosstalk.
R3 Programmable mode families: mode-banked operators with switching. Exit criterion: workload-specific deployment profiles.
R4 Full production mode fabric: dense multi-mode plus calibrated readout mesh. Exit criterion: high-volume reliability and yield.

Mode expansion is staged. Production-scale rollouts are gated by crosstalk bounds, calibration stability, yield, and thermal resilience rather than mode count alone.


Release gates by mode class

Each release class defines allowed mode families, coupling bounds, and crosstalk ceilings. Promotion requires all stability metrics to pass.
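A promotion check of this shape is straightforward to express; the metric names and ceilings below are illustrative, not actual release criteria:

```python
def gate_passes(metrics: dict, ceilings: dict) -> bool:
    """Promotion requires every tracked stability metric at or under its ceiling."""
    return all(metrics[name] <= limit for name, limit in ceilings.items())

# Illustrative R2 ceilings: bounded coupling error and crosstalk
# (crosstalk in dB, so more negative is better).
r2_ceilings = {"crosstalk_db": -30.0, "thermal_drift_pct": 1.0, "coupling_error_pct": 2.0}
measured = {"crosstalk_db": -34.0, "thermal_drift_pct": 0.6, "coupling_error_pct": 1.1}
assert gate_passes(measured, r2_ceilings)
```

All-of-them semantics matter here: a single failing metric blocks promotion, which mirrors the "all stability metrics must pass" rule above.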

Production specifications

Rollout specs include mode isolation error, calibration retention window, thermal drift tolerance, and minimum manufacturing yield threshold.

Latency and throughput at scale

Throughput scaling is accepted only when latency distribution remains bounded across context growth and multi-mode operation.

Energy stability requirements

Energy-per-update slope and peak transient limits are tracked per mode class; noncompliant profiles block release escalation.

Motives behind the shift

The objective is not novelty for its own sake. PiPs are pursued because several inference bottlenecks in conventional digital execution are structural and persist even with faster GPU or NPU generations.

Motive 01: Repeated state reconstruction is expensive

Many modern workloads repeatedly rebuild context at each step. This creates avoidable compute and memory traffic overhead in long-running inference paths.

Motive 02: Energy is dominated by movement, not only arithmetic

Even optimized digital stacks spend significant budget moving activations and parameters. PiPs target reduced movement by embedding transform behavior in physical structure.

Motive 03: Persistent workloads need persistent substrates

Robotics, edge sensing, and long-context agents operate continuously. A substrate with embodied state is better aligned with these always-on operating conditions.

Motive 04: Scaling requires new physics-aware paths

The next efficiency gains are unlikely to come from execution speed alone. This program explores a compute path where part of inference is delegated to material dynamics.

Training and compilation

Hardware is not trained directly. A differentiable physical simulation is trained and optimized, then transformed into fabrication-ready structure.

Training path: dataset to simulation to optimized material geometry to fabrication.

Mechanism

Training complexity at design time; inference cost reduced at runtime.
Dataset to differentiable simulation to material optimization to fabrication file to physical validation.

Training complexity remains at design time. Fabricated operators contain learned mappings without retaining datasets or digital weight files at inference time.
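As a toy illustration of design-time optimization, a purely linear surrogate can be trained by gradient descent until the simulated plate reproduces a target transform. Real adjoint simulation of wave propagation is far richer than this; every name and shape here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable surrogate: learn real-valued design parameters theta
# so the simulated plate reproduces a target linear map on a dataset.
X = rng.standard_normal((64, 8))        # dataset inputs
W_target = rng.standard_normal((4, 8))  # behavior to embed in geometry
Y = X @ W_target.T                      # dataset targets

theta = np.zeros((4, 8))                # design parameters, pre-fabrication
lr = 0.1
for _ in range(500):
    Y_hat = X @ theta.T                 # simulated forward propagation
    grad = (Y_hat - Y).T @ X / len(X)   # dL/dtheta for mean-squared loss
    theta -= lr * grad                  # design-time update; frozen at fabrication

assert np.allclose(theta, W_target, atol=1e-2)
```

After the loop, `theta` would be handed to material optimization and a fabrication file; neither the dataset nor the gradient machinery travels with the fabricated operator.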


Materials and fabrication

CommonAccess competes on manufacturing class and stable physical response, not transistor density or clock speed.

Material and geometry choices are advanced only when repeatability, separability, and drift tolerances pass validation gates.

Verification and workloads

Development proceeds only when validity gates are satisfied: deterministic mapping, high-rank dimensionality, linear separability, and temporal stability.

Workload fit: continuous perception, persistent agents, long-context reasoning, robotic cognition, and always-on inference systems.

Evidence

Input patterns to output patterns to SVD rank to classifier accuracy.

Boundary

Not general-purpose arbitrary compute for all kernel classes.

Deterministic mapping

Test method

Repeatability run set

Gate

Input class variance < 1.5% across 1000 cycles

High-rank dimensionality

Test method

SVD on output state matrix

Gate

Effective rank exceeds benchmark floor per task family

Linear separability

Test method

Readout classifier probe

Gate

Probe accuracy above digital baseline threshold

Temporal stability

Test method

Drift and hysteresis monitoring

Gate

Stability window remains within calibration band
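The first two gates reduce to simple statistics over run sets. The functions and synthetic data below are illustrative; the 1.5% threshold echoes the deterministic-mapping gate above:

```python
import numpy as np

rng = np.random.default_rng(0)

def repeatability_variance(runs: np.ndarray) -> float:
    """Mean relative per-channel variation across repeated runs of one input class."""
    return float(np.std(runs, axis=0).mean() / (np.abs(runs.mean(axis=0)).mean() + 1e-12))

def effective_rank(outputs: np.ndarray, energy: float = 0.99) -> int:
    """Smallest number of singular values capturing `energy` of the spectrum."""
    s = np.linalg.svd(outputs, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

runs = 1.0 + 0.001 * rng.standard_normal((1000, 16))  # synthetic 1000-cycle run set
assert repeatability_variance(runs) < 0.015           # passes the < 1.5% gate

outputs = rng.standard_normal((200, 16))              # synthetic output state matrix
assert effective_rank(outputs) >= 10
```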

Benchmarks and promise commitments

PiPs are positioned to outperform conventional inference pipelines under persistent, context-heavy workloads. Commitments below define how those claims are tested against GPU, NPU, and digital accelerator baselines.

Promise 01: Context-stable throughput

Claim: PiP token throughput degrades materially less with context growth than conventional GPU/NPU inference loops.

Benchmark: fixed model family, progressive context ladder, compare slope of tokens-per-second decay.

Promise 02: Lower update energy

Claim: PiP inference energy tracks perturbation magnitude, delivering lower cost than repeated full recomputation on digital accelerators.

Benchmark: joules per update across identical workload traces on PiP, GPU, and NPU paths.

Promise 03: Stateful continuity

Claim: PiPs maintain usable response continuity between inputs without full hidden-state rebuild at each step.

Benchmark: multi-step agent/perception traces, measure recovery cost and continuity error vs conventional pipelines.

Promise 04: Better edge deployment profile

Claim: PiP systems sustain useful inference under tighter power and thermal envelopes than comparable GPU/NPU deployments.

Benchmark: constrained edge environments, compare sustained performance at fixed thermal and power ceilings.

Benchmark protocol and reporting

Baselines: representative GPU, NPU, and digital inference configurations for the same task class.

Inputs: shared datasets and workload traces, identical output quality targets.

Metrics: tokens/sec, energy per update, latency distribution, continuity error, thermal stability.

Publication: each promise reported with reproducible setup and baseline-relative delta.
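For Promise 01 in particular, the comparison reduces to a decay slope over the context ladder. A minimal sketch, with made-up throughput traces; a slope closer to zero means more context-stable throughput:

```python
import numpy as np

def throughput_decay_slope(context_lengths, tokens_per_sec) -> float:
    """Least-squares slope of log throughput vs log context length."""
    x = np.log(np.asarray(context_lengths, dtype=float))
    y = np.log(np.asarray(tokens_per_sec, dtype=float))
    slope, _ = np.polyfit(x, y, 1)
    return float(slope)

# Illustrative traces: the baseline halves throughput per 4x context growth.
contexts = [1024, 4096, 16384]
baseline = [100.0, 50.0, 25.0]
candidate = [80.0, 72.0, 64.8]
assert throughput_decay_slope(contexts, candidate) > throughput_decay_slope(contexts, baseline)
```

Reporting the slope rather than raw tokens/sec keeps the claim baseline-relative, as the protocol above requires.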

Industries, applied use cases, and disruption vectors

CommonAccess is not aimed at all computing tasks. The strongest fit is in persistent, context-heavy systems where repeated recomputation dominates energy and latency budgets.

Robotics and autonomy

Applied use case: always-on perception and control loops with persistent scene context.

Disruption vector: shift from repeated frame-level context reconstruction to stateful response updates.

Long-context language systems

Applied use case: persistent multi-turn reasoning agents with large evolving context windows.

Disruption vector: reduce token-by-token full-context recomputation pressure and context-length sensitivity.

Industrial sensing and monitoring

Applied use case: continuous anomaly detection on streaming sensor fields.

Disruption vector: transition from batch-like periodic inferencing to continuous physical state tracking.

Edge intelligence infrastructure

Applied use case: local inference nodes under strict power and thermal limits.

Disruption vector: replace repeated digital heavy transforms with fabricated physical operators for persistent workloads.

Interoperability and stack

Developers should use a standard ML workflow: model to compile to run. CommonAccess is a new backend, not a new programming model.

Required software layers: model compiler, simulation SDK, hardware runtime driver, and framework integration adapters.
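A sketch of what the model-to-compile-to-run flow could look like from the developer side; none of these classes or functions are a published CommonAccess API, and the behavior here is a stub:

```python
from dataclasses import dataclass

@dataclass
class CompiledPiP:
    """Handle to a compiled physical operator (hypothetical)."""
    operator_id: str
    geometry_version: str

def compile_model(model_path: str, target: str = "pip-r1") -> CompiledPiP:
    """Compiler layer: map model structure to a physical-operator handle."""
    return CompiledPiP(operator_id=f"{model_path}:{target}", geometry_version="v0")

def run(compiled: CompiledPiP, inputs: list) -> list:
    """Runtime driver stub: encode, propagate on hardware, decode."""
    return [f"{compiled.operator_id}<-{x}" for x in inputs]

compiled = compile_model("resnet.onnx")
outputs = run(compiled, [1, 2])
assert len(outputs) == 2
```

The point of the sketch is the shape of the contract: the developer never touches geometry or calibration directly, only compile and run, which is what makes CommonAccess a backend rather than a new programming model.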

Collaboration

Open to research labs, robotics developers, model developers, and hardware partners.

Contact: agni@comac.network

Include research focus, interface requirements, and expected collaboration window.

Stage 1

Physical operator validation

Validate operator fidelity across benchmarked transforms and stability windows.

Exit: deterministic mapping and rank threshold confirmed

Status: in validation

Stage 2

Programmable fabrication

Establish repeatable fabrication pathways for tunable and task-specific structures.

Exit: fabrication variance remains within acceptable tolerance band

Status: planned

Stage 3

Developer toolchain

Expose compiler interfaces for mapping model structure to physical geometry.

Exit: compiler to runtime handoff validated across test suites

Status: planned

Stage 4

Ecosystem deployment

Integrate substrate, readout, and software interfaces into deployable systems.

Exit: end-to-end integration passes persistent workload trials

Status: planned