Project NOVA Co.

Keep the future human, while reaching for the stars.

Building AI systems that augment human capability, not replace it.

About

Founded by George Sandoval

Our mission is intelligence through relationship, not replacement. We believe AI should close the gap between what you can do alone and what you're capable of with the right support.

Project NOVA Co. is built from the ground up by a solo founder—no faceless fund, no committee. Every product is designed to augment human decision-making and creativity, not substitute it.

Project NOVA is proudly built with QTPOC and QTBIPOC communities in mind. We believe the future of AI must reflect the full diversity of humanity—including those at the intersection of sexual orientation, gender identity, and racial identity.

Our technology is designed to serve all communities: QTPOC (Queer and/or Trans People of Color), QTBIPOC (Queer, Trans, Black, Indigenous, and People of Color), LGBTQIA+ communities and allies, and anyone who has been underserved by technology. “Keep the future human” means keeping the future inclusive.


Products

LIVE

NOVA Council

Your AI never disagrees with you. Ours does.

12 AI advisors that argue before they answer. Multi-perspective synthesis with disagreements surfaced. Persistent memory. Value calibration. Decision tracking.

IN DEVELOPMENT

Nova Relics

AI-powered gaming experience

Wield Relic Power across 12 Specialized Advisors. Every decision feeds Living Memory with confidence self-calibration.

COMING SOON

NOVA Tech

AI-powered security and bug bounty operations

Automated vulnerability scanning and security consulting

COMING SOON

NOVA Industrial

Intelligent systems consulting

Helping businesses integrate AI that augments their workforce

COMING SOON

Technology

Powered by GMAS — Geometric Multi-Agent System

The intelligence behind NOVA Council


GMAS 2.0 — Technical coordination

Vertical floor architecture — Each of the 12 Pillars contains 100–1000 processing floors. Queries traverse floors with adaptive routing — simple queries use fewer floors, complex queries use more. Creates genuine synthesis rather than single-perspective output.
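As a rough illustration of adaptive routing (a sketch only — the complexity heuristic, floor counts, and function names below are invented for this example, not GMAS internals):

```python
def select_floor_count(query: str, min_floors: int = 100, max_floors: int = 1000) -> int:
    """Map a rough query-complexity score to a number of processing floors.

    The heuristic (token count plus question marks) is a stand-in; the
    real router presumably uses richer signals.
    """
    tokens = query.split()
    complexity = min(1.0, len(tokens) / 200 + query.count("?") * 0.1)
    return int(min_floors + complexity * (max_floors - min_floors))

# A simple query traverses few floors; a long, multi-part one traverses more.
simple = select_floor_count("What time is it?")
complex_ = select_floor_count("Compare " + "option " * 150)
```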

Stateful processing units — Outcome-based lifecycle management: units process per query component with confidence-threshold survival. Successful patterns persist via evolutionary pattern caching; failed paths are discarded.

Confidence self-calibration — Per-entity SGD adjusts confidence multipliers based on acceptance feedback. The system learns without degradation.
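A minimal sketch of the idea — per-entity multipliers nudged by gradient steps on acceptance feedback. The entity names, learning rate, and squared-error objective are assumptions for illustration, not the GMAS implementation:

```python
class ConfidenceCalibrator:
    """Per-entity confidence multiplier tuned by SGD on acceptance feedback."""

    def __init__(self, lr: float = 0.05):
        self.lr = lr
        self.multiplier: dict[str, float] = {}

    def calibrated(self, entity: str, raw_confidence: float) -> float:
        return raw_confidence * self.multiplier.setdefault(entity, 1.0)

    def feedback(self, entity: str, raw_confidence: float, accepted: bool) -> None:
        # Gradient step on (calibrated - target)^2 w.r.t. the multiplier,
        # where target is 1.0 for accepted answers and 0.0 for rejected ones.
        target = 1.0 if accepted else 0.0
        error = self.calibrated(entity, raw_confidence) - target
        self.multiplier[entity] -= self.lr * 2 * error * raw_confidence

cal = ConfidenceCalibrator()
for _ in range(50):
    cal.feedback("analyst", 0.8, accepted=True)  # repeated acceptance
# The calibrated confidence for this entity drifts toward 1.0.
```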

Parallel processing — Consulted pillars run simultaneously, not sequentially. Multi-perspective queries complete 2–4× faster with no race conditions.
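The concurrency pattern can be sketched with the standard library (pillar names, the sleep-based stand-in for inference latency, and `consult_pillar` are illustrative assumptions):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def consult_pillar(name: str, query: str) -> str:
    time.sleep(0.1)  # stand-in for per-pillar inference latency
    return f"{name}: perspective on {query!r}"

def consult_all(pillars: list[str], query: str) -> list[str]:
    # Each pillar works on its own copy of the query, so there is no shared
    # mutable state to race on; map() returns results in pillar order.
    with ThreadPoolExecutor(max_workers=len(pillars)) as pool:
        return list(pool.map(lambda p: consult_pillar(p, query), pillars))

start = time.perf_counter()
answers = consult_all(["Analyst", "Innovator", "Sentinel", "Arbiter"], "expand to EU?")
elapsed = time.perf_counter() - start
# The four 0.1s consultations overlap, so wall time is ~0.1s, not ~0.4s.
```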

Parameterized cognitive processing — Each entity has tunable risk_tolerance, decision_speed, and language_patterns — e.g. The Analyst (precision-weighted, low risk tolerance), The Innovator (divergent exploration, high risk tolerance), Sentinel (5-factor risk evaluation), Arbiter (constraint validation).
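One plausible shape for such a profile, sketched as a dataclass (the field ranges and concrete values here are illustrative, not the shipped parameters):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EntityProfile:
    """Tunable cognitive parameters for one advisor (illustrative values)."""
    name: str
    risk_tolerance: float   # 0.0 = maximally cautious, 1.0 = maximally exploratory
    decision_speed: float   # relative weight on fast vs. deliberate responses
    language_patterns: str  # style tag consumed by the response generator

ANALYST = EntityProfile("The Analyst", risk_tolerance=0.2, decision_speed=0.4,
                        language_patterns="precision-weighted")
INNOVATOR = EntityProfile("The Innovator", risk_tolerance=0.9, decision_speed=0.7,
                          language_patterns="divergent-exploration")
```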

Built-in governance — Sentinel (5-factor weighted risk evaluation) and Arbiter (4-factor alignment scoring) are wired into the floor architecture — core to every query.

Built-in conflict detection — The system doesn't just average perspectives — it identifies genuine disagreements between advisors and presents them to the user. Knowing where experts differ is often more valuable than the final answer.
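A toy version of surfacing disagreements: score each advisor's stance on a scale and flag pairs that diverge past a threshold. The stance scale, threshold, and function name are assumptions for the sketch, not the GMAS conflict-detection algorithm:

```python
def find_disagreements(stances: dict[str, float], threshold: float = 0.5):
    """Return advisor pairs whose stances differ by more than `threshold`.

    Stances are scored on [-1, 1] (oppose .. support).
    """
    names = sorted(stances)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if abs(stances[a] - stances[b]) > threshold]

stances = {"Analyst": -0.6, "Innovator": 0.8, "Sentinel": -0.4}
conflicts = find_disagreements(stances)
# Surfaces Analyst vs Innovator and Innovator vs Sentinel; Analyst and
# Sentinel roughly agree, so that pair is not flagged.
```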

The GMAS Architecture — 6 Layers of Geometric Intelligence

Every decision begins at a single point. NOVA Core sits at the center of all computation and synthesis.

6 geometric layers

NOVA Nexus — 17 AI entities with parameterized cognitive processing

Parallel processing (2–4× faster, no race conditions)

~2× overhead vs single-model baseline

GPU-accelerated at scale (124× at 4096 entities)

Shape summary

Layer  Shape        Purpose                Entities
1      Point        Origin                 NOVA Core
2      Tetrahedron  Core coordination      NOVA Core, Sentinel, Arbiter, Living Memory
3      Cube         Structural stability   Bridges core ↔ Pillars
4      Icosahedron  Advisory network       12 Pillars (100–1000 floors each)
5      Sphere       Balance & containment  Parallel processing, fair weighting
6      Hypersphere  System boundary        The Orchestrator
  • Living Memory — context that grows with every interaction; GPU-accelerated geometric routing; distributed-ready architecture.

GPU-Accelerated Geometric Routing — Real Benchmarks

The GMAS geometric routing engine was benchmarked on an NVIDIA RTX 5070 Ti (Blackwell). The workload is entity similarity computation — the core operation behind multi-agent coordination and optimal routing path selection.
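Entity similarity of this kind typically reduces to a single dense matrix product. A minimal CPU sketch with NumPy (the embedding dimension, cosine-similarity choice, and nearest-peer routing step are assumptions; the benchmarks below use a CUDA implementation of the equivalent operation):

```python
import numpy as np

def entity_similarity(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between entity embeddings: one matmul."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    return unit @ unit.T  # shape: (n_entities, n_entities)

rng = np.random.default_rng(0)
sims = entity_similarity(rng.standard_normal((16, 64)))  # 16 entities, 64-dim

# Route each entity toward its most similar peer (exclude self-similarity).
np.fill_diagonal(sims, -np.inf)
best_peer = sims.argmax(axis=1)
```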

Matrix Operations (Foundation)

Size         CPU     GPU    Speedup
1000×1000    4.5ms   0.3ms  15×
5000×5000    255ms   25ms   10.2×
10000×10000  1.87s   195ms  9.6×

GMAS Entity Routing — Scaling

Entities  CPU     GPU     Result
16        0.01ms  0.03ms  CPU faster (GPU overhead)
64        0.02ms  0.05ms  CPU faster (GPU overhead)
256       0.30ms  0.05ms  6× speedup
1024      4.02ms  0.07ms  57× speedup
4096      69.7ms  0.56ms  124× speedup

At 16 entities, CPU handles GMAS routing efficiently. But as the system scales to thousands of coordinated agents — for distributed computing, space infrastructure, or enterprise deployment — GPU acceleration becomes essential. At 4,096 entities, CUDA delivers 124× faster routing.

Benchmarked on NVIDIA GeForce RTX 5070 Ti · 16GB GDDR7 · Blackwell Architecture · CUDA 12.6 · 8,960 CUDA Cores

Built for scale. Built for the future.

Research foundations →

For Investors

Why Project NOVA?

Key metrics

  • Live product deployed at novacouncil.ai
  • Solo-founded, capital efficient
  • Multi-product ecosystem in development
  • 6-layer GMAS architecture — fully built
  • 5-factor risk evaluation, 4-factor alignment scoring, and incentive alignment for multi-agent cooperation
  • CUDA-ready geometric computation engine

Market opportunity

  • AI advisory market projected to reach $XX billion by 2030
  • Multi-agent systems: fastest growing AI category
  • Strategic alignment with NVIDIA's distributed computing vision

For Partners

Partner with Project NOVA

  • Technology partners (NVIDIA, cloud providers)
  • Integration partners (API access)
  • Resellers and distributors
  • Research collaborations

GMAS creates new categories of GPU workloads, distributed computing applications, and AI coordination systems.