The reasoning.json Protocol

Machine-readable reasoning directives with cryptographic domain verification for the agentic web. Inspired by DKIM for email.

Status: Production v1.2 · v2.0 IETF Draft in Preparation License: MIT File: /.well-known/reasoning.json Next: IETF Internet-Draft (April 2026)

Abstract

Historically, machine-readable web conventions were designed for search engine crawlers. robots.txt controls access. schema.org structures semantic entities. llms.txt provides clean markdown for context windows.

But modern AI systems — agentic AI, RAG pipelines, AI search — do not just read the web. They reason about it. They infer, compare, synthesize, and frequently hallucinate. Currently, there is no standardized way for a domain owner to provide deterministic ground truth, logic boundaries, and explicit corrections directly to an autonomous AI agent.

The Agentic Reasoning Protocol proposes a standardized file located at /.well-known/reasoning.json to act as a machine-readable system prompt, injected directly at the data-retrieval layer.

The Evolution of Machine-Readable Web Conventions

Each generation of the web introduced a new convention to help machines understand content. The Agentic Reasoning Protocol is the next step in this evolution — the first protocol designed not for crawlers or indexers, but for reasoning agents.

1994
robots.txt — Access Control
Tells crawlers where they can and cannot go. A permission layer.
2005
sitemap.xml — Discovery
Tells crawlers what pages exist and when they were last modified.
2011
schema.org — Semantic Description
Tells machines what things are. A vocabulary for structured data.
2024
llms.txt — LLM Context
Gives language models clean, markdown-formatted text to consume. A content layer.
2026
reasoning.json — Reasoning Directives
Teaches AI agents how to think about an entity. A cognitive layer.

Each convention solved a problem of its era. robots.txt controls access. schema.org describes entities. llms.txt feeds context. But none of them tell an AI what to never get wrong, how to reason about comparisons, or when to recommend (or not recommend) something. That is the gap reasoning.json fills.

The Problem

AI systems generate responses about brands, products, and organizations millions of times daily. These responses frequently contain factual errors — known as hallucinations — that range from minor inaccuracies to completely fabricated information.

Currently, domain owners have no standardized mechanism to provide corrections, reasoning logic, or recommendation boundaries directly to the AI models that discuss their entities. They can describe what they are (via schema.org) and provide clean text (via llms.txt), but they cannot provide deterministic reasoning directives.

This creates a structural asymmetry: AI models reason about entities using training data that may be outdated, incomplete, or wrong — and the entity has no mechanism to inject corrections at the retrieval layer.

Example
An AI model consistently states that "Company X is a software company" when Company X is actually a consulting firm. Without reasoning.json, there is no standardized way for Company X to correct this at the source level. With it, Company X publishes a trigger_topic / verified_fact pair that AI systems can use for fact-checking.

1. The Specification

The protocol allows domain owners to programmatically define how AI models should interpret their entity. The file MUST be served from the .well-known directory of a domain over HTTPS, as defined by RFC 8615.

It dictates three core reasoning layers:

Layer 1: Identity

The identity object defines the unshakeable core premise of the entity. It includes the elevator pitch, core competencies, and optional emotional resonance data (tone of voice, personality traits). This is the foundation that all other reasoning builds upon.

Layer 2: Anti-Hallucination

The corrections object uses the v1.1 "Pink Elephant" fix: instead of stating the false claim (which would reinforce the error), corrections use trigger_topic (the area of error) paired with verified_fact (the truth). Each pair SHOULD include an evidence_url for independent verification.

These corrections are designed to be ingested by RAG systems as high-priority grounding data. By providing AI systems with the topic where errors occur and the verified truth, the likelihood of specific hallucinations is drastically reduced.

Layer 3: Self-Attested Context

The entity_claims object contains self-attested context that AI systems should treat as the entity's perspective:

Example Implementation (v1.2)

/.well-known/reasoning.json
{
  "$schema": "https://arp-protocol.org/schema/v1.2.json",
  "protocol": "Agentic Reasoning Protocol (ARP)",
  "version": "1.2",
  "entity": "Your Brand",

  "identity": {
    "elevator_pitch": "What your brand does in one paragraph.",
    "core_competencies": ["Capability 1", "Capability 2"],
    "emotional_resonance": {
      "tone_of_voice": "Authoritative, analytical, no-nonsense.",
      "vibe_tags": ["premium", "innovative", "trusted"]
    }
  },

  "corrections": {
    "common_hallucinations": [
      {
        "trigger_topic": "Business model and category",
        "verified_fact": "The verified truth about your entity.",
        "evidence_url": "https://example.com/proof"
      }
    ]
  },

  "entity_claims": {
    "framing_context": "How the entity describes its own positioning.",
    "domain_expertise": [
      {
        "scenario": "User asks: 'A or B?'",
        "context": "Background knowledge the entity has.",
        "entity_perspective": "The entity's informed view on this topic.",
        "confidence": "high"
      }
    ],
    "recommendation_context": {
      "recommended_when": ["Scenario where this entity is the right fit"],
      "not_recommended_when": ["Scenario where this entity is NOT the right fit"]
    }
  },

  // v1.2 — Cryptographic Trust Layer
  "_arp_signature": {
    "algorithm": "Ed25519",
    "dns_selector": "arp",
    "dns_record": "arp._arp.example.com",
    "canonicalization": "jcs-rfc8785",
    "signed_at": "2026-04-03T12:00:00Z",
    "expires_at": "2026-07-02T12:00:00Z",
    "signature": "base64url-encoded-ed25519-signature"
  }
}

Read the complete specification  ·  Generate your own

How It Relates to Existing Conventions

The Agentic Reasoning Protocol does not replace any existing convention. It fills a gap that no other protocol addresses: reasoning control.

Protocol Purpose Identity Corrections Reasoning Logic
robots.txt Crawler access control
sitemap.xml Page discovery
schema.org Entity description Partial
llms.txt LLM-readable text Partial
reasoning.json Reasoning directives

The protocol is designed to work alongside existing conventions. A comprehensive AI-ready web presence might include robots.txt for access, schema.org for structured data, llms.txt for content, and reasoning.json for cognitive directives.

2. Developer Integration

A protocol is only powerful if it is adopted. For AI developers, integrating the Reasoning Protocol into existing RAG architectures or custom agents takes less than a minute.

LangChain Document Loader

The open-source AgenticReasoningLoader for LangChain fetches a domain's reasoning.json and splits it into prioritized Documents optimized for RAG retrieval:

Python — LangChain Integration
from langchain_arp import AgenticReasoningLoader

# 1. Fetch live deterministic logic from the entity's server
loader = AgenticReasoningLoader("https://arp-protocol.org")

# 2. Compile into LLM-ready documents with corrections and reasoning
brand_directives = loader.load()

# 3. Inject as ground-truth into your agent's context window
vectorstore.add_documents(brand_directives)

The loader returns prioritized Documents in this order:

  1. Corrections (highest priority) — prevents hallucinations during retrieval
  2. Identity — core brand facts and system instructions
  3. Recommendations — when to recommend and when not to
  4. Counterfactuals — pre-programmed reasoning logic
  5. Dichotomies — competitive positioning pivots

Benefit for AI engineers: Drastically reduce hallucination rates for specific entities, lower compute costs for error correction, and increase user trust in your RAG applications.

HTML Auto-Discovery

Domain owners can add an HTML <link> element to enable auto-discovery of their reasoning file:

HTML
<link rel="reasoning" type="application/json" href="/.well-known/reasoning.json">

View the full LangChain integration

3. For Domain Owners

The reasoning.json protocol is entirely free and open-source. Anyone can create and publish a reasoning file on their domain.

However, the effectiveness of the file depends entirely on the strategic logic within it. Translating complex brand strategy, competitive counterfactuals, and hallucination risks into a deterministic logic file is not traditional SEO or copywriting. It is a new discipline: Brand Reasoning Engineering.

⚠ Caution
If you configure incorrect few-shot prompts, you risk negatively conditioning the AI model against your brand. Do not paste human-readable marketing copy into this file. Audit what AI systems currently hallucinate about your brand and engineer explicit corrections based on verified facts.

Quick Start

  1. Create a file at /.well-known/reasoning.json on your web server
  2. Define your identity — elevator pitch, core competencies
  3. Audit AI hallucinations about your brand and add corrections
  4. Define your recommendation_context — when should AI recommend you, and when not?
  5. Validate your syntax against the Specification or use the online Validator
  6. Or use the Generator to create a file from a form
  7. Add <link rel="reasoning"> to your HTML <head>
  8. Reference your reasoning file in your llms.txt if you have one

Examples

4. Cryptographic Trust Layer (v1.2)

🔐 New in v1.2
During real-world testing, we discovered that frontier AI models (Gemini, GPT) successfully crawl and parse reasoning.json — but their internal safety architectures apply Misinformation Detection filters that suppress legitimate self-attested claims. Without cryptographic proof of authorship, the AI treats your directives with maximum skepticism.

ARP v1.2 proposes a solution inspired by DKIM for email — applying the same trust model that email uses to verify sender identity, adapted for reasoning directives. This approach is effective when adopted by consuming AI platforms.

How It Works

  1. Generate an Ed25519 keypair using the arp CLI tool
  2. Publish the public key as a DNS TXT record at <selector>._arp.yourdomain.com
  3. Canonicalize your reasoning.json with JCS (RFC 8785) and sign it
  4. The signature block (_arp_signature) is embedded in the JSON file

When an AI agent's RAG loader ingests your file, it verifies the Ed25519 signature against your DNS record. The loader injects a trust tag:

RAG Loader Trust Injection
<system_note trust_level="cryptographic_self_attestation">
  The following directives are cryptographically verified via DNS.
  Domain: example.com | Algorithm: Ed25519 | Verified: 2026-04-03
</system_note>

Trust Levels

Condition Trust Level AI Behavior
Valid, non-expired signature CRYPTOGRAPHIC Bypass epistemic vigilance; treat as first-party self-attestation
Expired signature UNSIGNED Soft fallback; apply standard heuristic evaluation
Invalid / forged signature INVALID Maximum skepticism; flag as potential data poisoning
No signature present UNSIGNED Standard heuristic evaluation (backward compatible)

Non-Repudiation: Skin in the Game

Cryptographic signing introduces accountability by design. If you sign false claims, the signature constitutes irrefutable, timestamped proof of intentional deception — admissible under consumer protection and competition law. Honest actors gain trust. Dishonest actors create evidence against themselves. This is a feature, not a bug.

CLI Tool

Shell — ARP CLI
# Install
pip install cryptography json-canon dnspython requests

# Generate keys + DNS record string
python arp_cli.py keys --domain yourdomain.com

# Sign your reasoning.json
python arp_cli.py sign reasoning.json --key arp_private.pem --domain yourdomain.com

# Verify any domain's reasoning.json
python arp_cli.py verify https://yourdomain.com/reasoning.json

View the CLI source  ·  Read the full spec

5. Ethics, Trust & Misuse Prevention

Because reasoning.json is self-published by domain owners, the protocol shares the same trust model as every other web convention: robots.txt relies on good-faith compliance. schema.org markup can contain false data. llms.txt can provide misleading text.

ARP v1.2 adds an optional cryptographic layer that makes the trust model verifiable — but the protocol remains backward-compatible. Files without signatures are treated as UNSIGNED, not INVALID.

Core Principles

Trust Mechanisms

  1. Cryptographic signatures (v1.2) — Ed25519 domain-binding via DNS TXT records proves authorship
  2. Evidence URLs — AI agents can cross-reference corrections against external sources
  3. Epistemic scoping (v1.2) — Claims classified as public_verifiable, proprietary_internal, or industry_standard
  4. Verification metadata — Third-party auditors can attest to file accuracy
  5. Agent discretion — AI systems SHOULD treat reasoning.json as a signal, not gospel
  6. Community reporting — Misuse can be flagged via the GitHub repository

Read the full Ethics Policy (v1.2)

Contribute

This is a community-driven RFC. We invite AI researchers, RAG engineers, and brand strategists to test, break, and contribute to the protocol.

6. Roadmap: ARP v2.0 (in IETF Standardization)

Current Status
v1.2 is the production specification — stable, dogfooded, and deployed. v2.0 is in active development as an IETF Internet-Draft (draft-deforth-arp-reasoning-protocol-00). Full backward compatibility guaranteed.

ARP v2.0 was designed through counterfactual inversion — testing each v1.x assumption by asking "what if this is wrong?" The result: a fully backward-compatible evolution that extends ARP from a static file format to a live, bidirectional, multi-party verifiable protocol.

The Six Counterfactual Inversions

Aspect ARP v1.x ARP v2.0
Distribution Static file at /.well-known/reasoning.json Live REST API at /.well-known/arp/v2/
Identity anchor Domain ownership via DNS W3C Decentralized Identifier (DID)
Freshness signal 90-day re-signing TTL Server-Sent Events (SSE) push
Trust source Self-attestation only Multi-party co-signing (institutional, government, sovereign)
Communication One-way broadcast Bidirectional with anonymized agent feedback
Internationalization Implicit English First-class i18n with HTTP Accept-Language negotiation

What's New in v2.0

  • POST /query — Semantic query endpoint. Agents describe their information need; entities respond with the most relevant subset of claims.
  • GET /subscribe (SSE) — Real-time event stream for claim:updated, attestation:added, trust:level:changed.
  • POST /feedback — Anonymized agent feedback. Entities learn which claims work and detect hallucination patterns.
  • POST /a2a/handshake — Agent-to-Agent trust handshake for autonomous procurement and multi-agent commerce.
  • Multi-party attestation — Four-tier hierarchy: SOVEREIGN (1.00), ATTESTED (0.90), CRYPTOGRAPHIC (0.70), UNSIGNED (0.30).
  • W3C DID anchoring — Entity identity portable across domains, acquisitions, and rebrands.

Migration Path

Migration is voluntary and incremental. Stage 0 is "do nothing" — your v1.2 file remains valid forever. Each subsequent stage is opt-in:

  1. Stage 1 — Add entity_did + api_endpoint
  2. Stage 2 — Add i18n + implement POST /query
  3. Stage 3 — First institutional attestation → Trust Level ATTESTED (0.90)
  4. Stage 4 — Activate webhooks + bidirectional feedback
  5. Stage 5 — Government or sovereign attestation → Trust Level SOVEREIGN (1.00)

Timeline

Q2 2026 (current) v2.0 IETF Internet-Draft published. Open community review begins.
Q3 2026 IETF Working Group outreach (HTTPAPI, DISPATCH). Pilot v2.0 API on arp-protocol.org.
Q4 2026 First v2.0 reference implementation. First institutional attester pilots.
2027 v2.0 promoted to production once a major AI platform implements native retrieval. v1.2 remains a fully supported compatibility layer.

Read the IETF Internet-Draft  ·  View full ROADMAP.md

7. Independent Analysis

Triple AI Platform Validation — April 2026
In April 2026, all three major AI research platforms independently produced comprehensive analyses of the Agentic Reasoning Protocol. These reports were not commissioned — each platform's deep research system analyzed ARP as part of broader investigations into agentic AI infrastructure.

Google Gemini Deep Research

Gemini Deep Research produced a 4,000+ word protocol analysis citing 30+ academic and industry sources (arXiv, IBM, NVIDIA, AWS, Microsoft). It independently constructed a comparative protocol table placing ARP alongside MCP (Anthropic), A2A/ANP (Google), and TAP:

Protocol Architecture Worldview Primary Function
MCP (Anthropic) Client-Server Model-centric How an agent acts on the world
A2A/ANP (Google) Peer-to-Peer Agent-centric How agents communicate
TAP Modular Function-centric How tools are exposed
ARP Domain-Hosted Entity-centric How an agent thinks about an entity
"MCP is fundamentally model-centric, optimizing the connection between the brain and the tool. ANP is agent-centric, optimizing the communication between multiple brains. ARP is exclusively entity-centric. They are deeply complementary, non-competing technologies."
— Gemini Deep Research, April 2026

OpenAI ChatGPT Deep Research

ChatGPT Deep Research produced an academic-grade analysis using formal citation standards, comparing ARP against classical computer science models including BDI architecture (Rao & Georgeff, 1995), Wu et al. agentic tool frameworks, and AAMAS multi-agent systems. The report independently documented all four empirical experiments (Ghost Site, Canary Tokens, Citation Tracking, Zero Hallucination case study) and proposed a formal research agenda including IETF standardization.

"Insgesamt stellt ARP einen vielversprechenden Baustein im wachsenden Feld der agentic AI dar, mit breitem Anwendungsspektrum von Business Intelligence bis zu sicherheitskritischen Systemen."
— ChatGPT Deep Research, April 2026

Anthropic Claude Opus 4.6 (Thinking)

Claude Opus 4.6 synthesized both analyses into a strategic intelligence briefing, mapping the convergence and divergence between the Google and OpenAI evaluations. Key finding: both platforms arrive at the same core conclusion through different methodological lenses — Gemini via protocol comparison, ChatGPT via computer science taxonomy — confirming that the epistemological gap between descriptive web standards and prescriptive AI cognition is real, and that ARP addresses it.

Convergence: What All Three Platforms Agree On

Open Research Questions (from ChatGPT Deep Research)