LLMs think. Pramana proves. Add deterministic constraint verification to any AI agent — sub-millisecond, zero hallucination risk, works alongside your existing stack.
LLMs are great at reasoning about context. But numeric constraints need a different kind of check.
Language models excel at reasoning, intent, and context. But constraint verification needs deterministic math, not probabilistic language.
LLMs approximate. An LLM might decide "850 mg looks reasonable for a 70 kg patient" — but whether 850 exceeds a 700 mg limit is a math problem, not a language problem.
Using a $0.01 LLM call to verify "is X > Y?" is like hiring a lawyer to check your arithmetic. Use the right tool for the job.
Your LLM owns intent, context, and nuance. Pramana owns boundaries, constraints, and numeric proof. Together: production-ready.
A lightweight verification layer that sits downstream of your LLM — three deterministic gates powered by mathematical constraint checking.
Gate 1 — Evidence: classifies evidence by type (direct observation, inference, analogy, testimony), detects logical fallacies, and computes a weighted reliability score.
Gate 2 — Verification: 16 mathematical verification methods — boundary checks, equilibrium, proportionality, completeness, gap analysis.
Gate 3 — Risk: risk assessment using strategic postures, resource evaluation, and cost-benefit analysis. Context-aware action gating.
Nyaya (Sanskrit: nyāya, "method of reasoning") is a 2,500-year-old system of formal logic that classifies knowledge claims by their pramana (means of valid knowledge): pratyaksha (direct observation), anumana (inference), upamana (analogy), and shabda (testimony). Gate 1 applies these classifications to weight evidence reliability. Gate 2 uses Vedic mathematical sutras as constraint-satisfaction patterns — each sutra maps to a verification method (boundary check, equilibrium, proportionality). Gate 3 draws from Kautilya's Arthashastra for strategic risk assessment: shadgunya (six postures) and shakti traya (three powers) determine whether an action's risk profile warrants execution. This isn't decoration — it's the actual logic the engine runs.
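A minimal sketch of how Gate 1's evidence weighting could work. The weights, the `reliability` function, and the 0.7 threshold below are illustrative assumptions, not Pramana's documented API or actual reliability model — each evidence item is tagged with its pramana, and the average weight gives a reliability score the gate can threshold.

```python
# Hypothetical reliability weights per pramana (evidence type).
# These numbers are illustrative, not Pramana's actual model.
PRAMANA_WEIGHTS = {
    "pratyaksha": 1.0,  # direct observation
    "anumana":    0.8,  # inference
    "upamana":    0.6,  # analogy
    "shabda":     0.5,  # testimony
}

def reliability(evidence: list[str]) -> float:
    """Weighted reliability of a set of classified evidence items."""
    if not evidence:
        return 0.0
    return sum(PRAMANA_WEIGHTS[kind] for kind in evidence) / len(evidence)

# One directly observed fact, one inference, one piece of testimony
score = reliability(["pratyaksha", "anumana", "shabda"])

# A gate threshold (hypothetical) decides whether evidence is strong
# enough to let the action proceed to the next gate.
passes_gate = score >= 0.7
```

The design point is that the score is reproducible: the same evidence set always yields the same number, so the gate's decision can be audited after the fact.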
Each sutra maps to a verification method that checks a different property of your data. The engine auto-detects which sutras apply based on your data shape, then computes an "energy score" (lower = better).
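To make the boundary-check idea concrete, here is a minimal sketch of one such verification with an energy score. The scoring formula (`boundary_energy`, relative overshoot summed per violated constraint) and the `verdict` helper are assumptions for illustration; Pramana's internal sutra implementations may compute energy differently.

```python
def boundary_energy(data: dict[str, float],
                    constraints: dict[str, float]) -> float:
    """Sum of relative constraint violations. 0.0 means all bounds hold;
    lower is better, matching the 'energy score' idea."""
    energy = 0.0
    for key, limit in constraints.items():
        value = data.get(key, 0.0)
        if value > limit:
            energy += (value - limit) / limit  # relative overshoot
    return energy

def verdict(data: dict[str, float],
            constraints: dict[str, float]) -> str:
    """Deterministic pass/fail: any positive energy halts the action."""
    return "VERIFIED" if boundary_energy(data, constraints) == 0.0 else "HALT"

# cpu 0.82 < 0.85 and memory 0.71 < 0.90, so this passes
print(verdict({"cpu": 0.82, "memory": 0.71},
              {"cpu": 0.85, "memory": 0.90}))
```

Because the check is pure arithmetic, the same inputs always produce the same verdict — there is nothing probabilistic to hallucinate.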
Add Pramana downstream of your LLM in three lines. Works with LangChain, CrewAI, AutoGen, or raw API calls.
Install from PyPI:

```shell
pip install pramana-engine
```

```python
from pramana import Pramana

engine = Pramana()

# Verify constraints before acting
result = engine.verify_quick(
    data={"cpu": 0.82, "memory": 0.71},
    constraints={"cpu": 0.85, "memory": 0.90},
)

if result.verdict == "VERIFIED":
    execute_action()
else:
    halt(result.reasoning)

# LangChain integration:
from pramana.integrations.langchain import PramanaGuard

chain = my_llm | PramanaGuard(
    banned_patterns=["password", "ssn"]
)
```
Paste JSON data and constraints. Hit Verify. See the result in real-time from our API.
LLMs and math-based verification each have strengths. The question isn't which one — it's how to combine them.
Pre-built verification profiles with industry-specific thresholds. Deploy in minutes, not months.
Deterministic constraint layer for every AI action. Your LLM reasons, Pramana enforces the boundaries.
Auditable verification for regulated environments. Every decision traced back to evidence and rules.
Zero-tolerance patient safety verification. Deterministic checks that regulators can audit line by line.
Sub-millisecond pre-action verification for systems where latency kills. Runs on-device, no cloud required.
Automated building code compliance. IBC, OSHA, and ADA checks encoded as verifiable constraints.
Evidence-based incident response gating. Classify, verify, and gate security actions with forensic-grade provenance.
No per-token pricing. No surprise bills. Predictable costs at any scale.
Open source, self-hosted
Hosted API for teams
Enterprise & regulated industries
We run a multi-agent AI system in production — autonomous agents handling content generation, data pipelines, and external communications. Our LLMs were great at reasoning about what to do. But when it came to numeric constraints — dosage limits, budget thresholds, resource caps — they got it wrong 40% of the time.
So we built Pramana as a complementary layer — grounded in mathematical constraint verification. The LLM still owns intent and context. Pramana owns the numbers. Together: 100% recall on constraint violations, sub-millisecond, and our LLMs are free to do what they're actually good at.
Now we're open-sourcing it so every team shipping AI agents can add a math co-pilot that never hallucinates.
We work with teams to build custom verification profiles for their domain — from healthcare dosage checks to financial compliance rules. If you have a use case where AI decisions need to be provably correct, let's talk.
Add your math co-pilot in under 5 minutes.