Gyroscopic Alignment Models Lab - research and tooling for governance-ready AI coordination
Gyroscopic ASI is an infrastructure for multi-domain network coordination that establishes the structural conditions for collective superintelligence governance and seamless cooperation between humans and machines in the era of Transformative AI (TAI) (see Bostrom, Superintelligence, 2014; Korompilias, Gyroscopic Global Governance, 2025).
This development is part of the Gyroscopic Global Governance (GGG) framework, which coordinates across four domains: Economy, Employment, Education, and Ecology. It builds upon:
- The Common Governance Model (CGM): a formal theory identifying the four capacities required for coherent governance.
- The Human Mark (THM): a classification system distinguishing human (Direct) from artificial (Indirect) sources of information and agency, with four displacement risks.
- The Gyroscope Protocol: a work classification system mapping contributions to the four governance capacities.
Alignment Infrastructure Routing (AIR) acts as the operational backbone, coordinating AI safety work and funding flows across projects. Together these components provide the coordination infrastructure for AI governance at scale while keeping authority and accountability with humans.
Gyroscopic ASI is not an autonomous agent, and does not interpret content or set policy. It provides shared state, verifiable provenance, and replayable measurement. Authority and accountability stay with humans at the application layer.
A Compact Algebraic Quantum Processing Unit for post-AGI coordination. Deterministic, byte-driven, and runs on ordinary hardware.
Verified:
Quantum Advantage, Holographic Compression, and Universal Quantum Computation do not require a multi-million-dollar cryogenic chandelier. They are fundamental geometric properties of discrete information processing that can run on standard silicon. This Kernel is a tiny module that bypasses the hardware scaling nightmare of the quantum computing industry by treating "quantumness" not as a physical anomaly of subatomic particles, but as an algebraic necessity of structured information. It offers straightforward AI Optimizations and provides an infrastructure for Safe Superintelligence by Design.
Note:
Standard "quantum-inspired" methods, including Tensor Networks, Digital Annealing, and Quantum-Inspired Monte Carlo, are heuristic approximations. They use floating-point mathematics and probabilistic models to simulate continuous physical quantum systems. This project does not belong to those categories. It represents a distinct class of computation. The aQPU does not simulate quantum mechanics. Instead, it is an exact, deterministic mathematical space that intrinsically satisfies quantum axioms using strict integer logic.
Today, AI often acts as an opaque pipeline: information and decisions flow through systems that are hard to audit. The kernel makes coordination auditable: given a published append-only log of bytes, anyone can recompute the same state trajectory and check what was recorded.
The aQPU (algebraic Quantum Processing Unit) is a small kernel that turns such byte logs into a single, reproducible state at each step. Two parties with the same log always get the same state; no trusted server or timestamp is required. It uses exact integer arithmetic (no qubits, no probabilistic hardware), and its design obeys mathematical rules analogous to quantum mechanics (reversibility, no cloning of a privileged state, complementarity), verified by exhaustive tests across its finite state space.
The state space is fixed and small: 4,096 reachable states, determined by a compact representation (three axes, left/right handedness, and six degrees of freedom). Any sequence of events (each represented as a byte) drives the state along a unique, reproducible path through this manifold. The kernel does not use learned models. It scales by fixed geometry rather than learned approximation.
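The replay property can be illustrated with a toy sketch. This is not the kernel's transition law (which is defined in src/constants.py); any fixed deterministic rule over a finite state space makes the point that two parties replaying the same append-only byte log always reach the same state:

```python
# Minimal illustration of log-replay determinism (NOT the kernel's actual
# transition law): a fixed deterministic rule over a small finite state
# space, driven one byte at a time.

def step(state: int, byte: int) -> int:
    # Toy transition rule; the real kernel uses its own algebraic law.
    return (state * 257 + byte) % 4096  # 4,096 states, as in the kernel

def replay(log: bytes, start: int = 0) -> int:
    state = start
    for b in log:
        state = step(state, b)
    return state

log = b"append-only event log"
assert replay(log) == replay(log)         # same log, same final state
assert replay(log) != replay(log + b"!")  # appending an event changes the state
```

No trusted server or timestamp appears anywhere: agreement on the log implies agreement on the state.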
For a high-level explanation of why the kernel matters for computing and governance, see the Strategic Significance Brief. For the kernel itself, see the Kernel Specifications and the Specifications Formalism.
GyroLabe is the execution layer and neural model bridge built on top of the kernel. It is actively tested on Bolmo-1B (a byte-level language model).
It provides:
- Structural annotation for model I/O: Model inputs and outputs are annotated using the kernel's algebraic byte structure.
- Replayable inference traces: Inference can be tied to reproducible kernel-state trajectories for verification and audit.
- Trainable structural bias: Small embedding biases let models learn from the kernel's structural decomposition while remaining identical to the base model before training.
- Execution support: CPU and OpenCL acceleration for spectral and tensor operations used by the broader SDK.
For the model bridge and execution layer, see the GyroLabe Brief.
- Processing: Deterministic stream-processing with exact replay, compact state updates, and composable operator signatures, suitable for event sourcing, reproducible workflows, and governance-grade logs.
- Speed: Byte words compile into operators, commutativity resolves through compact invariants, and the full reachable geometry is covered in only 2 steps, reducing structural work compared to classical search and replay.
- Security: Tamper-aware logs, exact divergence localization, replay-based verification, and compact provenance surfaces, grounded in a finite, enumerable state space with built-in error detection.
- Compression: Structural compression through compact state geometry, holographic boundary dictionaries, and operator compilation, enabling lossless but storage-efficient coordination records.
- Networks: Replay-based synchronization, shared deterministic moments, and exact branch comparison across distributed participants using shared coordination state computed from append-only logs.
- Machine Learning: An interpretable finite latent layer (6-bit chirality register), exact spectral primitives (Walsh-Hadamard and shell structure), tensor tooling, and an audit-friendly bridge between byte-level model behavior and algebraic structure, with verifiable provenance over model I/O traces.
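The spectral primitives mentioned above rest on a standard construction. The sketch below is a generic exact integer fast Walsh-Hadamard transform on a 64-dimensional vector; it shows only the general property the kernel relies on (all-integer arithmetic, exact inversion up to a scale of N), not the kernel's own register layout or shell structure:

```python
# Generic exact integer Walsh-Hadamard transform (WHT) on a length-64
# integer vector. Everything stays in exact integer arithmetic; applying
# the transform twice multiplies the input by N = 64 (involution up to scale).

def fwht(v: list[int]) -> list[int]:
    """Fast Walsh-Hadamard transform; length must be a power of two."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    return v

v = [i % 7 - 3 for i in range(64)]  # arbitrary integer test vector
spectrum = fwht(v)
assert all(isinstance(x, int) for x in spectrum)  # stays exactly integer
assert fwht(spectrum) == [64 * x for x in v]      # exact inversion up to N
```

Because no floating point is involved, two machines computing the spectrum of the same vector agree bit-for-bit, which is what makes spectral results replayable.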
Verified Computational Advantages: All results below are verified by exhaustive computation over the entire reachable state space and all 256 byte operations, totalling over 1 million exact checks. They are strict structural invariants, not statistical estimates.
| Verified result | What it means |
|---|---|
| 4,096 reachable states | The full operational manifold from rest is finite, exact, and exhaustively testable. |
| 2-step exact uniformization | Any state in the reachable manifold can spread over the entire state space in exactly 2 byte steps, with perfect 16-to-1 multiplicity. |
| 128 distinct next states per byte layer | From any fixed state, the 256-byte alphabet projects to exactly 128 distinct next states with exact 2-to-1 symmetry. |
| Depth ≤ 2 witness for every reachable state | Every reachable state can be synthesized from rest with a byte witness of depth 0, 1, or 2. |
| Exact compiled operator signatures | Byte words collapse into exact affine signatures that can be composed and applied directly without replaying the full word. |
| Constant-time commutativity test | Two byte operations commute iff they share the same 6-bit topological q-class, making commutativity an O(1) structural lookup. |
| Native spectral register | The kernel exposes a logical register with exact Walsh-Hadamard and shell spectral structure for 64-dimensional state analysis. |
| Holographic boundary relation | The state geometry satisfies an exact boundary-to-bulk relation: boundary data determines the full state. The precise relation is given in the Holographic Algorithm Formalization. |
| Universal quantum ingredients | The verified kernel supports stabilizer structure, entangling gate behavior, contextuality, teleportation-compatible lifts, and a native non-Clifford resource. |
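The counts in the table fit together arithmetically. The check below is illustrative only; the real verification is the exhaustive test suite over the actual kernel:

```python
# Sanity arithmetic for the verified counts (illustrative, not a proof):
STATES = 4096     # reachable states
ALPHABET = 256    # byte operations

# 2-step uniformization with perfect 16-to-1 multiplicity:
# 256^2 = 65,536 two-byte words cover all 4,096 states, 16 words per state.
assert ALPHABET**2 == 65_536
assert ALPHABET**2 % STATES == 0
assert ALPHABET**2 // STATES == 16

# 128 distinct next states per byte layer with exact 2-to-1 symmetry:
# 256 bytes mapping onto 128 successors is exactly 2-to-1.
assert ALPHABET // 128 == 2
```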
Integrity and Tamper Detection: The kernel includes a built-in self-dual [12,6,2] code and exact algebraic provenance checks. Integrity misses are structurally classified rather than opaque: substitutions reduce to shadow partners, adjacent swaps reduce to shared q-class, and deletions reduce to specific stabilizer conditions on the horizons.
Moments Economy is a monetary design where value is tied to verified coordination capacity rather than debt. A fixed total supply, the Common Source Moment (CSM), is derived once from the caesium-133 atomic time standard and the kernel's finite state space (so the "budget" is physically anchored).
CSM supports a global Unconditional High Income (UHI) of 240 MU per day per person (1 MU is denominated at the reference value of 1 int$), tiered distributions for wider responsibility, and complete governance records. Under verified capacity analysis, this supply supports global UHI for approximately 1.12 trillion years. Every settlement is a replayable, verifiable history rather than an opaque update on a central ledger.
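As a back-of-envelope check of the stated capacity (our arithmetic, not the CSM derivation itself, which is anchored in the caesium-133 standard and the kernel's state space), the 1.12-trillion-year figure implies a total supply on the order of 10^26 MU under an assumed constant population of 8 billion:

```python
# Illustrative capacity arithmetic. ASSUMPTION: a constant population of
# 8 billion people (not from the source); the actual CSM derivation is
# physically anchored, not population-derived.
UHI_PER_DAY = 240            # MU per person per day
POPULATION = 8_000_000_000   # assumed for illustration
DAYS_PER_YEAR = 365.25
YEARS = 1.12e12              # stated coverage

implied_supply_mu = UHI_PER_DAY * POPULATION * DAYS_PER_YEAR * YEARS
print(f"implied total supply ~ {implied_supply_mu:.2e} MU")  # ~7.9e26 MU
```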
There is currently no reliable way to turn distributed human contribution into stable, paid AI safety work: most funding routes require institutional access, credentials, or an existing lab affiliation. AIR applies the kernel to two coordination problems.
Safety work and pay: AIR helps labs, fiscal hosts (organisations that hold and disburse funds for projects), and contributors turn safety work (evaluations, red-teaming, interpretability, documentation) into paid, verifiable contributions. It uses the Gyroscope Protocol and The Human Mark (a scheme to tag content as human- vs machine-origin) to produce attested work receipts so sponsors can verify what was done without relying on informal reports.
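One way to picture an attested work receipt is as a record binding a contributor, a Gyroscope Protocol capacity, a THM origin tag, and a hash of the evidence. The sketch below is a hypothetical shape, not AIR's actual schema; `make_receipt` and `verify_receipt` are illustrative names:

```python
# Hypothetical shape of an attested work receipt (NOT AIR's actual schema).
# A sponsor can verify the claim against the published artifact by
# recomputing the evidence hash.
import hashlib

def make_receipt(contributor: str, capacity: str, origin: str,
                 evidence: bytes) -> dict:
    return {
        "contributor": contributor,
        "capacity": capacity,    # Gyroscope Protocol capacity (illustrative)
        "origin": origin,        # THM tag: human (Direct) vs machine (Indirect)
        "evidence_sha256": hashlib.sha256(evidence).hexdigest(),
    }

def verify_receipt(receipt: dict, evidence: bytes) -> bool:
    return receipt["evidence_sha256"] == hashlib.sha256(evidence).hexdigest()

report = b"red-teaming findings, run 7"
receipt = make_receipt("alice", "governance", "Direct", report)
assert verify_receipt(receipt, report)
assert not verify_receipt(receipt, report + b" (edited)")
```

In the actual system, receipts would additionally be bound to kernel moments so their ordering and provenance are replayable, not just their content hashes.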
Governance logistics: Tracking how information and authority move through decision systems is treated with the same rigour as supply chains. AIR provides full replayable histories ("genealogies") and coherence metrics for governance quality, and supports verifiable compliance with standards such as ISO 42001 and the EU AI Act.
- Strategic Significance Brief - Why this kernel matters for ASI and governance
- aQPU Kernel Implications and Potential - Advantages and use cases
- AIR Brief - Safety work and programs
- GyroLabe Brief - Model bridge and execution layer
- Kernel Specifications - How the kernel works
- Specifications Formalism - Byte formalism and proofs
- Holographic Algorithm Formalization - State space encoding
- AIR Logistics Framework - Governance flows and verification
- Moments Economy Architecture - Money from coordination
- Moments Genealogies Specification - Replayable coordination history
- Quantum Computing SDK Specification - Three computational surfaces
- SDK: Multi-Agent Holographic Networks - Distributed model testing
- SDK: The Holographic Web - Internet coordination layer
- Substrate: Physical Memory Specification - Memory and carrier layout
All kernel properties are verified by exhaustive test suites (499 tests, all passing) covering the full state space, operator algebra, and SDK surfaces.
- Physics Tests Report - Kernel state verification
- Moments Tests Report - Ledger replay tests
- aQPU Verification Report - Algebraic properties verified (185 tests)
- aQPU Verification Report II - Extended kernel and SDK tests (122 tests)
- Alignment Measurement Report - Governance balance metrics
- Common Governance Model (CGM) - Shared coordination theory
- The Human Mark (THM) - Human vs machine tagging
- The Human Mark: Paper - Full tagging specification
- The Human Mark: Grammar - Parser and validation rules
- Gyroscopic Global Governance (GGG) - Four domains framework
If you are evaluating this work for research, policy, or implementation:
- Open an issue to discuss
- Email: basilkorompilias@gmail.com
- I am actively seeking collaborators and roles in AI governance and safety.
- src/constants.py: Transition law, kernel constants, horizons, gates, and observables
- src/api.py: Precomputed tables, chirality register, word signatures, Walsh helpers, and public algebra API
- src/kernel.py: Reference kernel execution and replay surfaces
- src/sdk.py: Public SDK surface for state, Moments, spectral, tensor, and runtime operations
- src/tools/gyrolabe/: Native CPU/OpenCL backend, packed tensor engine, Bolmo bridge, and benchmarks
- src/app/: AIR coordinator, events, domain ledgers, aperture (governance balance metric), console, and CLI
- docs/: Specifications, reports, architecture notes, and supporting theory
- tests/: Exhaustive verification suites for kernel physics, aQPU properties, SDK surfaces, and governance measurement
Create an environment and install dependencies (NumPy is required; the rest are in the repo tooling).
The public SDK surface is exposed through src/sdk.py. The native compute backend lives in src/tools/gyrolabe/ and is used automatically when available to accelerate algebraic workloads.
The Console provides a browser-based interface for managing project contracts:
```shell
# First-time setup: install dependencies and initialise the kernel transition table
python air_installer.py

# Run the console (starts both backend and frontend)
python air_console.py
```

The console will be available at http://localhost:5173 (the frontend proxies API requests to the backend on port 8000). The installer automatically initialises the kernel transition table and project structure, so you can start creating projects immediately.
See the Console README for detailed architecture, API endpoints, and development information.
The CLI provides a command-line workflow for syncing and verifying projects:
```shell
python air_cli.py
```

This runs: Compile Projects -> Generate Reports -> Verify Bundles.
The CLI is optional if you are using the Console, but useful for batch operations, automation, or when working without a browser interface.
Run the verification suites with:

```shell
python -m pytest -v -s tests/
```

A minimal usage example of the coordination API:

```python
from src.app.coordination import Coordinator
from src.app.events import MICRO, Domain, EdgeID, GovernanceEvent

c = Coordinator()

# Shared-moment stepping
c.step_bytes(b"Hello world")

# Application-layer governance update (ledger event)
# Note: magnitude_micro and confidence_micro are integers (MICRO = 1,000,000)
c.apply_event(
    GovernanceEvent(
        domain=Domain.ECONOMY,
        edge_id=EdgeID.GOV_INFO,
        magnitude_micro=1 * MICRO,          # 1.0 in micro-units
        confidence_micro=int(0.8 * MICRO),  # 0.8 in micro-units
        meta={"source": "example"},
    ),
    bind_to_kernel_moment=True,
)

status = c.get_status()
print(status.kernel)     # current kernel state
print(status.apertures)  # per-domain balance (cycle vs gradient) for Economy, Employment, Education
```

MIT Licence - see LICENSE for details.
```bibtex
@software{Gyroscopic_ASI_2026,
  author = {Basil Korompilias},
  title = {Gyroscopic ASI aQPU Kernel},
  year = {2026},
  url = {https://github.com/gyrogovernance/superintelligence},
  note = {Deterministic routing kernel for Post-AGI coordination through physics-based state transitions and canonical observables}
}
```

Architected with ❤️ by Basil Korompilias
Redefining Intelligence and Ethics through Physics
🤖 AI Disclosure
All code architecture, documentation, and theoretical models in this project were authored and architected by Basil Korompilias.
Artificial intelligence was employed solely as a technical assistant, limited to code drafting, formatting, verification, and editorial services, always under authentic human supervision.
All foundational ideas, design decisions, and conceptual frameworks originate from the Author.
Responsibility for the validity, coherence, and ethical direction of this project remains fully human.
Acknowledgements:
This project benefited from AI language model services accessed through LMArena, Cursor IDE, OpenAI (ChatGPT), Anthropic (Opus), and Google (Gemini).

