Axiomatic Logo
AXIOMATIC SYSTEMS

// SYSTEM STATUS: ONLINE

Prescriptive Intelligence.
Verified by Physics.

We tell you what's failing, why it's failing, and what to do next — backed by physics, not guesswork.

The operating system for the Software-Defined Asset.

One platform to monitor, diagnose, and maintain your entire fleet's health — from engine to drivetrain.

We are an engineering-first deep-tech startup building a Neuro-Symbolic Analytics Engine. We fuse Deep Learning with verification and policy to turn black-box guesses into evidence-backed decisions for critical infrastructure.

In plain terms: AI that doesn't just spot patterns — it checks every prediction against the laws of physics before it reaches you.

READ THESIS | PROJECT HORIZON | REQUEST ACCESS

THESIS — Our philosophy  |  HORIZON — Consumer car health app  |  ACCESS — B2B fleet pilot

// THE PROBLEM & THE FIX

The Pain

For Fleets & OEMs: Unplanned downtime costs millions. Traditional AI is a "Black Box" that guesses based on history. If a failure mode is new, the AI misses it.

For Consumers: "Check Engine" lights are vague. Mechanics upsell unnecessary repairs because you don't have the data to prove them wrong.

The Solution

Neuro-Symbolic AI: We don't just guess — we verify. Our “Glass Box” engine checks each hypothesis against constraints, context, sensor trust, and twin-based plausibility when available — and records PASS / FAIL / INCONCLUSIVE with reasons.

“Glass Box” = unlike a Black Box that hides its reasoning, our system shows you exactly why each alert was raised — and the evidence behind it.

The Result: Fewer false alarms, higher operator trust. Alerts can be approved, annotated, downgraded, or blocked based on verification outcomes — each one shipped with an Evidence Bundle for auditability. When telemetry contains detectable precursors, we estimate time-to-threshold with uncertainty bounds and recommended next actions.

Evidence Bundle = the complete record attached to every alert — signals, checks, confidence scores, and recommended actions. Think of it as a lab report for your vehicle.

Here's what powers the solution

Built on the shoulders of giants

Enterprise-grade infrastructure for real-time processing at scale — from vehicle to cloud.

NVIDIA CUDA | PyTorch | FastAPI | PostgreSQL + TimescaleDB | Apache Kafka | Redis | Docker + Kubernetes

How the platform actually works

The Vantage Platform

A unified control plane for Anomaly Detection, Root Cause Analysis support, and Prescriptive Maintenance. Architected for Zero Trust environments.

Vantage watches your fleet 24/7. It spots problems early, checks them against physics, watches for blind spots between its own systems, and tells you exactly what to fix — with evidence.

Available as Consumer subscriptions (Freemium / Pro / Pro+) via Project Horizon, and B2B fleet tiers (Fleet Starter / Fleet Pro / Enterprise) for commercial operators. Contact us for pricing details.

STEP 1

Propose

AI spots the anomaly

STEP 2

Verify

Physics confirms it

STEP 3

Observe

Blind spots are caught

STEP 4

Act

You get the fix + evidence

Tap any card below to see the technical details

01. System 1: Discovery (The "Brain")

A 7-model ensemble — custom transformer, VAE, GNN, digital twin lite, and baseline models — feeds a meta-learner stacker that proposes causal hypotheses from raw data.

Think of it as a panel of seven specialists, each examining your data from a different angle, then voting on what’s going wrong.

// TAP TO EXPAND

  • The "Propose" Layer: Identifies statistical anomalies and "unknown unknowns" through multi-model consensus and disagreement signals.
  • Graph Neural Networks: Traces fault propagation across component topologies.
  • Meta-Learner Stacker: Fuses outputs from all 7 models — weighting each by domain-specific confidence — to produce calibrated risk hypotheses with integrated Root Cause Analysis (a minimal sketch of the stacking idea follows this list).
  • Baseline Models: Isolation Forest, ARIMA, SVR, and LSTM provide statistical grounding alongside the deep learning models — catching well-known failure patterns before they reach the ensemble.
  • RCA Assistant Module: A dedicated Root Cause Analysis engine that produces hypothesis graphs with ranked causal chains — explicitly labeled as hypotheses to guide investigation, not black-box conclusions.
  • Virtual Sensing: Estimates internal vehicle states (e.g., bearing temperature, oil film pressure) that are not directly measurable from OBD-II — expanding diagnostic reach beyond installed sensors.
  • Multi-Horizon Predictions: Configurable prediction windows at 5s, 30s, 2m, 5m, and 15m — covering everything from imminent safety events to scheduled maintenance planning.
  • 7-Day Per-Vehicle Calibration: Each vehicle gets its own baseline during an initial learning period — eliminating the cold-start problem and accounting for individual driving patterns, wear history, and sensor variance.
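
As a minimal illustration of the stacking idea referenced above, the sketch below fuses scores from two stand-in detectors with a logistic-regression meta-learner. The model choices (an Isolation Forest plus a naive residual score) and the scikit-learn usage are illustrative assumptions, not the production 7-model ensemble.

```python
# Minimal sketch of score-level stacking, NOT the production 7-model ensemble.
# Model choices (IsolationForest, naive residual score) are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

def residual_score(window: np.ndarray) -> float:
    """Naive 'baseline model': deviation of the last sample from the window mean."""
    mu, sigma = window[:-1].mean(), window[:-1].std() + 1e-9
    return abs(window[-1] - mu) / sigma

class TinyStacker:
    """Fuses per-model anomaly scores into one calibrated risk score."""
    def __init__(self):
        self.iforest = IsolationForest(random_state=0)
        self.meta = LogisticRegression()

    def fit(self, windows: np.ndarray, labels: np.ndarray):
        # windows: (n_samples, window_len) sensor windows; labels: 1 = known fault
        self.iforest.fit(windows)
        base = self._base_scores(windows)
        self.meta.fit(base, labels)   # meta-learner learns how much to trust each model
        return self

    def _base_scores(self, windows: np.ndarray) -> np.ndarray:
        s1 = -self.iforest.score_samples(windows)           # higher = more anomalous
        s2 = np.array([residual_score(w) for w in windows])
        return np.column_stack([s1, s2])

    def propose(self, window: np.ndarray) -> float:
        """Calibrated probability that this window supports a fault hypothesis."""
        return float(self.meta.predict_proba(self._base_scores(window[None, :]))[0, 1])
```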

02. System 2: Verification (The "Physics")

The “Truth Layer.” High-fidelity engine and transmission physics models — thermodynamic simulation, torque-speed curves, gear ratio validation — stress-test every hypothesis before it becomes an alert.

Before any alert reaches your dashboard, we simulate the predicted failure against real physics — heat, torque, gear behaviour — to see if it’s physically possible.

// TAP TO EXPAND

  • The “Verify” Layer: Simulates the fault scenario against real physics — thermal behaviour, mechanical load paths, and drivetrain dynamics — comparing expected vs. observed behaviour. If constraints fail or mismatch thresholds are exceeded, the alert is annotated, downgraded, or blocked — and the reason is written to the Evidence Bundle.
  • Fewer False Positives: Match-score, residuals, and fidelity signals are attached to every alert so operators can act fast—without hiding uncertainty.
  • Four-Tier Decision Output: Every verification produces one of four classifications — PASS (prediction verified, action required), FAIL (prediction rejected by physics), INCONCLUSIVE (missing critical sensor data), or DEGRADED (System 1 and System 2 disagree — tracked by the Observation Layer). See the sketch after this list.
  • Context-Aware Validation: No hard-coded absolute thresholds. All checks consider geographic location, season, historical driving patterns, and multi-sensor correlation — so a −40°C reading in Ladakh is validated as legitimate, not flagged as sensor failure.
  • Current MVP Scope: Engine thermal model + transmission mechanical wear (lumped-parameter fidelity). Cooling, braking, and electrical subsystems are on the near-term roadmap.
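
The sketch below illustrates the verification-gate idea under stated assumptions: the signal names, thresholds, and the single "overheating" scenario are placeholders, not the actual engine thermal model. It shows how constraint and context checks can resolve to PASS, FAIL, or INCONCLUSIVE with recorded reasons.

```python
# Minimal sketch of a constraint-style verification gate. Signal names, thresholds,
# and the single "overheating" scenario are illustrative placeholders, not the
# production engine thermal model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    component: str            # e.g. "cooling_system" (hypothetical identifier)
    predicted_fault: str      # e.g. "overheating"
    confidence: float         # System 1 confidence in this hypothesis

def verify(hyp: Hypothesis,
           coolant_temp_c: Optional[float],
           temp_rate_c_per_min: Optional[float],
           ambient_temp_c: Optional[float]) -> dict:
    """Return PASS / FAIL / INCONCLUSIVE plus the reasons recorded in the Evidence Bundle."""
    # 1. Missing critical signals: cannot safely confirm or reject.
    if coolant_temp_c is None or temp_rate_c_per_min is None:
        return {"verdict": "INCONCLUSIVE", "reasons": ["missing coolant telemetry"]}

    # 2. Reading outside any physically plausible envelope: distrust the signal, not the asset.
    if not -50.0 <= coolant_temp_c <= 150.0:
        return {"verdict": "INCONCLUSIVE",
                "reasons": [f"coolant temp {coolant_temp_c} C outside plausible sensor envelope"]}

    mismatches = []
    if hyp.predicted_fault == "overheating":
        # 3. Context check: an overheating fault should show temperature well above ambient.
        if ambient_temp_c is not None and coolant_temp_c < ambient_temp_c + 40.0:
            mismatches.append("coolant temp not elevated relative to ambient context")
        # 4. Rate-of-change check: expect an abnormal heating trend, not normal warm-up.
        if temp_rate_c_per_min < 1.0:
            mismatches.append("no abnormal heating rate observed")

    verdict = "FAIL" if mismatches else "PASS"
    return {"verdict": verdict,
            "reasons": mismatches or ["hypothesis consistent with observed physics"]}
```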

03. Zero Trust Governance

We assume sensors can be faulty, drifting, or adversarial. The platform computes per-sensor trust scores driven by consensus/outlier behavior.

No sensor is trusted by default — every reading is cross-checked against its neighbours before it’s used for a decision.

// TAP TO EXPAND

  • Statistical Trust Scoring: Drift detection (z-score, CUSUM, step-change analysis), stuck-sensor detection, and cross-signal coherence validation — every sensor earns its trust score from behaviour, not assumption (see the sketch after this list).
  • Byzantine Fault Tolerance: Distributed sensor array protection against compromised or faulty nodes. In multi-sensor environments, the platform isolates compromised nodes before they can corrupt decisions.
  • Sybil-Style Threat Detection: Pattern-based detection of sensor spoofing attempts and anomalous sensor behaviour flagging — catches coordinated false readings that would fool simpler systems.
  • Game-Theoretic Reputation Scoring: Mathematical reputation models per sensor with adversarial game theory for Sybil attack isolation in fleet and swarm environments — sensors that consistently provide verified data build reputation; those that don't are progressively quarantined.
  • OPA Policy Engine: Context-aware rules (Open Policy Agent) gate high-impact operations — e.g., "block OTA updates if velocity > 0" or "require dual-sensor confirmation for shutdown alerts." Policy-as-code means rules are auditable, version-controlled, and deployment-configurable.
  • Dynamic Quarantine: Trust scores are continuously updated, and low-trust sensors are isolated so they cannot contribute to decisions; trust outcomes are recorded in every Evidence Bundle for full auditability.
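
A minimal sketch of behaviour-driven trust scoring, assuming a single scalar sensor: the window size, penalty weights, and quarantine threshold are illustrative values, and the z-score check stands in for the fuller drift-detection suite (CUSUM, step-change analysis) described above.

```python
# Minimal sketch of behaviour-driven per-sensor trust scoring.
# Window sizes, penalty weights, and the quarantine threshold are illustrative values.
from collections import deque
import statistics

class SensorTrust:
    def __init__(self, window: int = 200, quarantine_below: float = 0.3):
        self.history = deque(maxlen=window)
        self.trust = 0.5                  # sensors start at neutral trust and must earn more
        self.quarantine_below = quarantine_below

    def update(self, reading: float, peer_consensus: float) -> float:
        """Update trust from drift vs own history and deviation from peer consensus."""
        penalty = 0.0
        if len(self.history) >= 30:
            mu = statistics.fmean(self.history)
            sigma = statistics.pstdev(self.history)
            if sigma < 1e-6:
                penalty += 0.05           # stuck-sensor suspicion: no variance at all
            z = abs(reading - mu) / (sigma + 1e-9)
            if z > 4.0:
                penalty += 0.10           # step-change / drift suspicion
        if abs(reading - peer_consensus) > 0.2 * max(abs(peer_consensus), 1.0):
            penalty += 0.10               # disagrees with cross-sensor consensus
        # Reward consistent behaviour slowly; punish suspicious behaviour faster.
        self.trust = min(1.0, max(0.0, self.trust + (0.01 if penalty == 0.0 else -penalty)))
        self.history.append(reading)
        return self.trust

    @property
    def quarantined(self) -> bool:
        """Quarantined sensors are excluded from decisions until trust recovers."""
        return self.trust < self.quarantine_below
```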

04. Confidential Computing

Privacy-Native ML. Train on raw fleet telemetry without exposing IP or violating GDPR.

Your fleet data stays yours — it’s processed securely and never pooled with other customers.

// TAP TO EXPAND

  • TEE Architecture (Production-Ready): Complete infrastructure built and tested — aggregator (encrypted data handling), poison filter (VAE-based data integrity), trust store (key management), and client trainers (federated learning nodes). Intel SGX / ARM TrustZone ready.
  • Deployment Model: Standard deployment uses AES-256 encrypted transmission + secure cloud processing. OEM/Enterprise customers can activate full TEE (hardware-isolated secure enclaves) — configured per customer security requirements.
  • Data sovereignty: Customers retain control of their data. Data residency is deployment-configurable; customers can keep telemetry within required jurisdictions and access boundaries.
  • Retention minimization: Raw telemetry handling and retention are policy-driven and deployment-configurable, including ephemeral processing modes where required, in which data is processed in memory and destroyed immediately after processing.

05. Edge & Federated

Run low-latency inference near the asset (vehicle, gateway, controller, or edge server), while improving models via silo-level federated learning across sites/fleets—without pooling raw data (available as a deployment mode).

Analysis happens close to the vehicle, not in a distant data centre — so alerts arrive in real time, even with patchy connectivity.

// TAP TO EXPAND

  • Edge Inference (Optional, Not Default): Deploy lightweight VAE for latent vectors (optimized for ARM Cortex) and a pre-trained ensemble for real-time anomaly detection close to the asset. Heavy computation (full verification, physics twin simulation) runs in cloud. Edge is a capability, not the default — standard deployment uses IoT OBD-II devices with cloud verification.
  • Silo-Level Federated Learning: Train global or domain-specific models by aggregating anonymized gradient updates across customer silos / sites / fleets, keeping raw data local while sharing only model updates (policy-controlled). Supports both intra-domain and cross-domain federation; see the sketch after this list.
  • Poison Filter (VAE-Based): Detects adversarial data injection during federated training — a VAE-based integrity check validates incoming model updates before they can corrupt the global model.
  • Dynamic Model Gating: Switch between heavy (Transformer) and light (SVR/baseline) models based on available compute and network conditions. Critical safety alerts trigger even with intermittent connectivity.
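
A minimal sketch of a silo-level federated round under simplifying assumptions: the integrity check here is a plain norm-ratio filter standing in for the VAE-based poison filter, and the weight vectors are toy arrays rather than real model parameters.

```python
# Minimal sketch of silo-level federated averaging with a crude integrity filter.
# The production poison filter is described as VAE-based; a simple norm check stands in here.
from typing import List
import numpy as np

def filter_updates(updates: List[np.ndarray], max_norm_ratio: float = 3.0) -> List[np.ndarray]:
    """Drop client updates whose magnitude is wildly out of line with the median (possible poisoning)."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    median = np.median(norms) + 1e-12
    return [u for u, n in zip(updates, norms) if n <= max_norm_ratio * median]

def federated_round(global_weights: np.ndarray, client_updates: List[np.ndarray]) -> np.ndarray:
    """One FedAvg-style round: raw data never leaves the silo, only weight deltas are aggregated."""
    accepted = filter_updates(client_updates)
    if not accepted:
        return global_weights                 # nothing trustworthy this round
    return global_weights + np.mean(accepted, axis=0)

# Usage: each silo trains locally and ships only a delta (new_weights - global_weights).
rng = np.random.default_rng(0)
global_w = np.zeros(10)
deltas = [rng.normal(0, 0.01, 10) for _ in range(5)] + [np.ones(10) * 50.0]  # last one is poisoned
global_w = federated_round(global_w, deltas)  # poisoned delta is filtered out before averaging
```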

06. Prescriptive Reports

The "AI Mechanic." We translate complex vector mathematics into plain-English repair manuals.

// TAP TO EXPAND

  • Small Language Models (SLM): The platform can integrate a fine-tuned SLM trained on engineering manuals, service bulletins, and repair procedures to generate domain-specific guidance.
  • Managed inference pipeline: Versioned models, structured prompts, policy guardrails, audit logging, and deployment routing (CPU/GPU/on-prem) to run the SLM reliably in production.
  • Evidence Bundles: Every alert includes the signals, checks, and verification artifacts that triggered it—so recommendations stay reviewable and reproducible.
  • Actionable: Output is "Replace Intake Filter," not just a cryptic fault code.
  • Evidence Bundle Contents: System 1 prediction details (which models triggered, confidence scores), System 2 verification results (simulation outputs, constraint checks), sensor trust scores, decision timeline, and human-readable reasoning explanation — all in one auditable package (see the sketch after this list).
  • Hypotheses, Not Conclusions: All RCA outputs are explicitly labeled as hypotheses to guide investigation — never presented as definitive black-box conclusions. Operators remain in the decision loop.
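
To make the bundle concrete, here is a minimal sketch of how such a record might be structured; the field names and example values are illustrative, derived from the contents listed above rather than from the production schema.

```python
# Minimal sketch of an Evidence Bundle record; field names are illustrative,
# derived from the contents listed above rather than the production schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class EvidenceBundle:
    alert_id: str
    vehicle_id: str
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    system1_models_triggered: list = field(default_factory=list)   # e.g. ["gnn", "vae"]
    system1_confidence: float = 0.0
    system2_verdict: str = "INCONCLUSIVE"                          # PASS / FAIL / INCONCLUSIVE / DEGRADED
    system2_checks: list = field(default_factory=list)             # constraint / simulation results
    sensor_trust: dict = field(default_factory=dict)               # sensor_id -> trust score
    recommended_action: str = ""
    reasoning: str = ""                                            # human-readable explanation

    def to_json(self) -> str:
        """Serialize for audit storage; one record per alert."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example values for illustration only.
bundle = EvidenceBundle(
    alert_id="ALT-0042", vehicle_id="MH12-AB-1234",
    system1_models_triggered=["transformer", "isolation_forest"],
    system1_confidence=0.87, system2_verdict="PASS",
    system2_checks=[{"check": "thermal_rate_limit", "result": "ok"}],
    sensor_trust={"coolant_temp": 0.96}, recommended_action="Inspect coolant pump",
    reasoning="Coolant temperature trend exceeds thermal model envelope under current load.")
print(bundle.to_json())
```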

07. Observation Layer (Patent-Pending)

The referee between System 1 and System 2. Tracks disagreements between the AI ensemble and physics verification, promoting recurring conflicts to DEGRADED alerts.

DEGRADED = “we can’t confirm this is fine — get it checked.” It’s the system being honest about uncertainty rather than hiding it.

// TAP TO EXPAND

  • Disagreement Tracking: When System 1 predicts a risk that System 2 cannot verify (or vice versa), the Observation Layer logs the mismatch with context, frequency, and confidence delta.
  • Promotion to DEGRADED: If the same disagreement pattern recurs across multiple inference cycles, it is automatically promoted to a DEGRADED alert — surfacing potential issues that neither system alone would have flagged (sketched after this list).
  • Always Running: The Observation Layer operates in the backend regardless of subscription tier whenever real-time processing is active — ensuring no blind spots between systems.
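
A minimal sketch of the disagreement-tracking idea, assuming a simple recurrence count: the promotion threshold and the way conflicts are keyed (vehicle, component, fault) are illustrative choices, not the patent-pending implementation.

```python
# Minimal sketch of the disagreement-tracking idea behind the Observation Layer.
# The recurrence threshold and the (vehicle, component, fault) keying are illustrative
# choices, not the patent-pending implementation.
from collections import Counter
from typing import Optional

class ObservationLayer:
    def __init__(self, promote_after: int = 3):
        self.conflicts = Counter()          # (vehicle, component, fault) -> times seen
        self.promote_after = promote_after

    def observe(self, vehicle: str, component: str, fault: str,
                system1_flags_risk: bool, system2_verdict: str) -> Optional[str]:
        """Log a System 1 vs System 2 disagreement; promote recurring conflicts to DEGRADED."""
        disagrees = system1_flags_risk and system2_verdict in ("FAIL", "INCONCLUSIVE")
        if not disagrees:
            return None
        key = (vehicle, component, fault)
        self.conflicts[key] += 1
        if self.conflicts[key] >= self.promote_after:
            return "DEGRADED"               # a pattern neither system alone would have surfaced
        return None
```
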
Where we’re deploying first

Domain Applicability

The Vantage Platform is engineered as a Neuro-Symbolic Analytics Engine for high-stakes environments. While our current deployment focus is Automotive, the underlying neuro-symbolic logic is domain-agnostic.

In simple terms: the same AI that protects a truck fleet can be adapted to protect factories, robots, or power grids — because the verify-before-you-alert approach works everywhere.

// CURRENT STATUS:
Primary validation underway in ICE (petrol/diesel) Commercial & Passenger Vehicles.

We are actively adapting our "Physics Core" to expand into high-stakes infrastructure where our System 1 (Discovery) + System 2 (Verification) architecture provides a unique advantage.

// SECTOR ROADMAP

01. Automotive — ICE (Live Focus)


Current Deployment.

Application: Prescriptive maintenance for ICE Fleets (Trucking/Logistics) and Passenger Vehicles.

Capabilities: Detection of thermodynamic stress, piston ring flutter, turbocharger efficiency loss, and transmission slippage prior to failure code generation.

02. Automotive — EV


In Training & Validation.

Application: Battery health monitoring, thermal management, charging pattern analysis, and drivetrain diagnostics for electric vehicles.

Goal: Extending the Vantage verification architecture to EV-specific physics models — state-of-charge estimation, cell degradation tracking, and regenerative braking efficiency.

03. Industrial Robotics


Architectural Target (late 2026).

The Fit: Precision Assembly Robots require exact kinematic alignment, similar to engine timing.

Goal: Detecting micro-deviations (drift) in servo motors to prevent batch quality failures.

04. Coming Soon


Roadmap Expansion.

Note: Additional high-stakes sectors are in active research and partner-driven validation.

Status: "Coming Soon" domains are shared with qualified pilot partners and depend on signal coverage + constraints readiness.

What we believe

Our Axioms

"In a Software-Defined World, the 'Black Box' is a liability."

Axiomatic Systems was founded to solve the structural failure of modern Predictive Maintenance: Trust. Existing tools guess. In Automotive — and eventually Aerospace and Defense — guessing puts people at unnecessary risk. We believe you need Axiomatic-grade verification—grounded in physics, constraints, and evidence.

Today’s maintenance AI gives you a guess. We give you a verified answer with receipts.

01. Causality > Correlation

Standard AI finds patterns. Vantage adds verification—so critical decisions are explainable, evidence-backed, and audit-ready.

  • The Problem: Correlations (e.g., "Vibration = Failure") are often coincidental, leading to false alarms.
  • The Solution: A physics-constraint verifier runs alongside the ML stack to test plausibility against limits, consistency rules, and real-world operating envelopes.
  • Result: If it doesn't pass checks, it doesn't ship as a blind alarm—it's annotated, downgraded, or blocked, with the full reasoning captured in an Evidence Bundle.

02. Privacy by Architecture

Privacy and control are built into the system by design—deployment options, access boundaries, and cryptographic protections are part of the architecture, not an afterthought.

  • On-Prem / Air-Gapped Deployment: Run and train/fine-tune models inside your environment—private cloud, on-premise servers, or isolated networks.
  • Controlled Processing: In confidential-computing deployments (when enabled), TEEs reduce operational access to raw telemetry; otherwise, strict encryption, least-privilege authorization, and audit logging enforce controlled access.
  • Sovereign AI: Data residency is deployment-configurable; customers can keep telemetry within required jurisdictions and access boundaries.

03. Action > Observation

A dashboard that just shows "Red Lights" is a burden. A system that explains how to fix it is an asset.

  • Beyond Monitoring: We don't just log errors; we calculate an estimated time-to-threshold when precursors exist, with uncertainty/confidence indicators (see the sketch after this list).
  • Automated Triage: We prioritize incidents, cluster repeated patterns across vehicles, and recommend the next action (inspect / service soon / stop-now).
  • Closed Loop: The system verifies if the repair was successful by analyzing post-fix telemetry.
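
As a minimal illustration of a time-to-threshold estimate, the sketch below fits a linear trend and derives a crude uncertainty band from the slope's standard error; the production multi-horizon models are more involved, so treat the numbers and signal names as placeholders.

```python
# Minimal sketch of a time-to-threshold estimate from a linear trend fit.
# The +/- band from the slope standard error is a simple stand-in for the
# production system's calibrated uncertainty bounds.
import numpy as np

def time_to_threshold(t: np.ndarray, y: np.ndarray, threshold: float):
    """Estimate when signal y(t) crosses threshold, with a crude uncertainty band."""
    slope, intercept = np.polyfit(t, y, 1)
    if slope <= 0:
        return None                                   # no upward precursor trend detected
    residuals = y - (slope * t + intercept)
    slope_se = residuals.std(ddof=2) / (np.sqrt(len(t)) * t.std() + 1e-12)
    eta = (threshold - y[-1]) / slope                 # in the same time unit as t
    spread = (threshold - y[-1]) * slope_se / slope**2
    return {"eta": eta, "eta_low": max(0.0, eta - spread), "eta_high": eta + spread}

# Hypothetical example: coolant temperature creeping toward a 110 C limit over 12 hours.
rng = np.random.default_rng(0)
hours = np.arange(12.0)
temps = 90.0 + 0.8 * hours + rng.normal(0, 0.3, 12)
print(time_to_threshold(hours, temps, threshold=110.0))
```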

About Us

Engineering First. Physics Always.

Axiomatic Systems is a deep-tech infrastructure company operating at the intersection of Physics-Based AI, Control Theory, and Verification.

The company was founded on a core realization: The Silicon Valley mantra of "Move Fast and Break Things" is functionally incompatible with critical infrastructure. When you are managing a commercial vehicle fleet, maintaining engine health across thousands of kilometres, or — in the future — overseeing autonomous trucks and industrial turbines, "breaking things" is not an option — it is a catastrophe.

The Mission: To solve the "Verification Gap" in Artificial Intelligence. Current AI models are black boxes that produce probabilistic outputs without sufficient operational guardrails. We are building the verification layer that checks model outputs against constraints, context, sensor trust signals, and physics-based plausibility where applicable, allowing the industrial world to adopt AI with axiomatic rigor—enabling high-confidence, evidence-backed decisions rather than unauditable alerts.

We do not just build software; we architect the safety rails for the next generation of software-defined assets.

FOUNDED

Pune, India

FOCUS

Automotive Predictive Maintenance

STATUS

MVP Nearing Completion


Frequently Asked Questions

What is Vantage in one sentence?


Vantage is a verification-gated analytics control plane for mission-critical systems: models propose hypotheses, then Vantage verifies them using constraints, trust scoring, and twin-lite simulation before any alert or action is allowed.

How is this different from a standard Predictive Maintenance system?


Standard tools rely on correlation and output a score. Vantage adds a verification gate and a policy decision layer, so outputs become auditable decisions rather than probabilistic guesses.

What does “Propose → Verify → Decide” mean?


Propose: A 7-model ensemble generates hypotheses (risk, horizon, likely causes).

Verify: Constraints, operating context, sensor trust, and physics twin checks test whether the hypothesis is plausible.

Observe: The patent-pending Observation Layer tracks disagreements between Propose and Verify. Recurring conflicts are promoted to DEGRADED alerts.

Decide: Policies determine whether to permit, downgrade, annotate, or block the alert/action.
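
A minimal sketch of the "Decide" step in plain Python. The platform expresses such rules as OPA policy-as-code, so the specific conditions, trust threshold, and action names below are illustrative only.

```python
# Minimal sketch of the "Decide" step. The platform expresses such rules as
# OPA policy-as-code; the conditions and actions below are illustrative only.
def decide(verdict: str, severity: str, vehicle_speed_kph: float, min_trust: float) -> dict:
    """Map a verification outcome plus operating context to an alert decision."""
    if verdict == "PASS" and severity == "critical" and vehicle_speed_kph > 0:
        # High-impact situation while moving: permit the alert, require confirmation for intervention.
        return {"action": "permit", "note": "dual-sensor confirmation required before shutdown"}
    if verdict == "PASS":
        return {"action": "permit", "note": "verified by physics checks"}
    if verdict == "INCONCLUSIVE" or min_trust < 0.5:
        return {"action": "downgrade", "note": "insufficient or low-trust evidence; recommend inspection"}
    if verdict == "DEGRADED":
        return {"action": "annotate", "note": "System 1 / System 2 disagreement under review"}
    return {"action": "block", "note": "hypothesis rejected by verification"}  # verdict == "FAIL"
```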

What does the verifier actually check?


Typical checks include magnitude limits, rate-of-change, unit consistency, mode envelopes, sensor trust, and cross-signal coherence. Outcomes are labeled as PASS / FAIL / INCONCLUSIVE with reasons.

What does “INCONCLUSIVE” mean and why is it useful?


INCONCLUSIVE means the system cannot safely verify a hypothesis due to insufficient or conflicting evidence (e.g., low sensor trust, missing signals, contradictory patterns). Instead of guessing, the platform can downgrade and recommend inspection or additional data collection.

Do you guarantee “zero false positives”?


No system can guarantee that. Vantage is designed to significantly reduce false positives by suppressing or downgrading unverified alerts and making uncertainty explicit via PASS/FAIL/INCONCLUSIVE outcomes.

What is “twin-lite” vs a full digital twin?


Twin-lite is a fast, constrained simulation model used for short-horizon plausibility checks. It tests a single question: does this hypothesis reproduce the observed behavior under similar conditions? It is a verification tool, not a replacement for full high-fidelity simulation programs.
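
A minimal sketch of the residual-check idea, assuming a first-order lumped thermal model as the twin-lite stand-in; the heating and cooling coefficients, signal names, and tolerance are illustrative, not calibrated values.

```python
# Minimal sketch of a residual-based plausibility check. The first-order thermal
# model and its parameters are illustrative stand-ins for the actual twin-lite models.
import numpy as np

def simulate_coolant_temp(load: np.ndarray, ambient_c: float, dt_s: float = 1.0,
                          k_heat: float = 0.02, k_cool: float = 0.004) -> np.ndarray:
    """First-order lumped thermal model: heating from load, Newtonian cooling toward ambient."""
    temps = np.empty(len(load))
    t = ambient_c
    for i, u in enumerate(load):
        t = t + dt_s * (k_heat * u - k_cool * (t - ambient_c))
        temps[i] = t
    return temps

def twin_lite_check(observed: np.ndarray, load: np.ndarray, ambient_c: float,
                    tolerance_c: float = 8.0) -> dict:
    """Does the hypothesis-consistent simulation reproduce the observed behaviour?"""
    simulated = simulate_coolant_temp(load, ambient_c)
    residual = float(np.mean(np.abs(observed - simulated)))
    return {"mean_residual_c": residual, "plausible": residual <= tolerance_c}
```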

What are Evidence Bundles?


An Evidence Bundle is an auditable record produced for key events, including: model outputs, verifier checks, trust scores, twin residuals (when used), policy decisions, correlation IDs, and model versions—so decisions can be reviewed and reproduced.

What does “Zero Trust” mean for sensors?


Zero Trust means no sensor is assumed trustworthy by default. Signals are evaluated for behavioral consistency, drift, and cross-signal coherence, and can be downweighted or isolated when evidence suggests malfunction or compromise.

Do you use game theory for trust scoring?


The trust layer supports adversarial thinking (e.g., Sybil-style patterns and reputation updates). In deployments, trust scoring is implemented using practical consensus, drift detection, and configurable reputation updates governed by policy.

Is Confidential Computing required?


Not required for every deployment. Vantage is TEE-ready: in supported deployments, sensitive processing can occur inside protected execution boundaries. Otherwise, similar controls can be enforced using encryption, access policy, and audit logging.

Do you store raw telemetry data?


Retention is policy-driven and deployment-configurable. Many deployments minimize raw retention and store derived features, aggregates, and evidence artifacts. Requirements vary by customer and regulation.

How do you handle identity vs behavior data?


The architecture supports separation of an identity plane and a behavior plane. Identity access is restricted, while analytics can run on pseudonymous behavior streams. The link to identity is controlled by policy.

Can I deploy this on-premise or air-gapped?


Yes. The Vantage Platform is containerized (Docker/Kubernetes). We can deploy to your private cloud, on-premise servers, or onto edge environments depending on security and connectivity constraints.

Do I need to install new sensors?


Usually, no. Vantage is "sensor agnostic" and ingests from existing streams (e.g., OBD-II/CAN, gateways, historians). Additional sensors are only recommended if a critical blind spot is identified for your verification constraints.

Does this integrate with legacy SCADA/PLCs?


Yes—no rip-and-replace. We connect via MQTT streams today, with OPC UA and Modbus support via existing SCADA/edge gateway connectors on the roadmap, and we can also read from historians/data platforms for backfill. Deployment is containerized at the edge or on-prem, with governance and evidence layered on top.

What data sources do you support?


Vantage is currently operational for the automotive domain. The platform is designed to adapt to additional domains—such as flight data streams, SCADA/Industrial IoT gateways, and structured logs—through a schema + adapter layer, knowledge files and constraints, domain-specific model retraining and hyperparameter tuning, and OPA/policy rules that encode operating context and guardrails.

The key requirement is consistent timestamped signals with basic metadata. We are currently in training and validation for EV and Industrial IoT deployments.

How long is the pilot deployment timeline?


4-6 weeks (typical). We utilize a "Shadow Mode" deployment strategy.

Week 1-2: Historical data ingestion + knowledge pack alignment + twin-lite calibration (where applicable).

Week 3-6: The system runs alongside your current stack, producing verification-gated outputs without interfering with operations. We present a pilot report at the end of the window to validate ROI and operational fit.

What's needed to start a pilot?


A pilot typically needs: sample telemetry, a basic system topology (components/signals), operating modes, and a starter knowledge pack (thresholds/envelopes). We then run the propose+verify loop and produce evidence-backed outputs.

How do you measure success in pilots?


Common metrics include: false positive reduction, earlier detection lead time, improved RCA precision, fewer escalations, reduced mean-time-to-diagnosis, and operational adoption (alerts acted upon).

What are the limitations today?


Verification quality depends on sensor coverage, data quality, and the completeness of constraints/knowledge for the domain. Vantage is designed to communicate uncertainty explicitly rather than overclaim.

Deploy Pilot

B2B Pilot Criteria: Currently prioritizing commercial fleets with 50+ assets, initially in Maharashtra and NCR. For consumer early access, see Project Horizon.

Ready to validate?

We are currently accepting pilot partners in Automotive Fleets.

See Vantage run on your fleet data within 2 weeks. We handle setup — you see results.


Email: partnership@axiomaticsys.com

HQ: Pune, India

REQUEST FLEET PILOT