CAI core

Contradictions

CAI treats contradiction as a first-class signal. This page defines the math, the data model, a validation schema, the data flow, and a minimal implementation so engineers can detect, localize, and resolve contradictions in text pipelines today.

What is contradiction engineering

Contradiction engineering finds, scores, and resolves conflicts created or amplified by compression. Instead of smoothing them away, CAI surfaces conflicts so systems can abstain, re-retrieve evidence, or present qualified answers with provenance.

  • Detect conflicts across candidate claims and sources
  • Map the earliest compression site that induces the conflict
  • Resolve with abstention, re-retrieval, or side-by-side presentation

Why it is unique

  • Compression-aware: each conflict links to a concrete compression step on the path
  • Actionable thresholds: conflicts drive abstain and re-check rules, not only scores
  • Provenance: every resolution cites sources and replayable steps
  • Human compatible: supports side-by-side answers when the world is genuinely ambiguous

See also ContradictionEngineering.com.

Data flow

Goal: produce contradiction objects with minimal scopes and actionable edits that reduce C on re-evaluation.

Definitions

Pairwise contradiction score

For claims $c_1, c_2$ with NLI entailment probability $p_E$ and contradiction probability $p_C$: $$ \mathrm{CS}(c_1,c_2) = p_C - p_E \in [-1,1] $$
  • CS > 0: net contradiction. CS < 0: net entailment
  • Attach evidence spans for both claims
  • Global conflict: max pairwise CS or graph aggregate over claims
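The score and its global aggregate can be sketched in a few lines, assuming per-pair NLI probabilities are already computed upstream (the function names here are illustrative):

```python
def contradiction_score(p_entail: float, p_contra: float) -> float:
    """CS = p_C - p_E, in [-1, 1]; positive means net contradiction."""
    return p_contra - p_entail

def max_pairwise_cs(pair_probs):
    """Global conflict as the max pairwise CS over (p_E, p_C) pairs."""
    return max(contradiction_score(pe, pc) for pe, pc in pair_probs)
```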

Contradiction magnitude and strain

Contradiction magnitude $$ C = a\,\overline{P_{\text{NLI}}(\text{contr})} + b\,U_{\text{SAT}} + c\,H_{\text{halluc}} \in [0,1] $$ where $\overline{P_{\text{NLI}}(\text{contr})}$ is the mean pairwise NLI contradiction probability, $U_{\text{SAT}}$ the unsatisfied-constraint fraction, and $H_{\text{halluc}}$ the hallucination risk; weights $a + b + c = 1$ keep $C$ in $[0,1]$.
Compression tension scales contradiction by the strain $S$ at a compression site: $$ \mathrm{CTS} = S \cdot C $$

Use CTS to rank where resolution work matters most. Common compression sites: retriever cutoffs, reranker pruning, prompt bias, sampler settings.
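These two quantities reduce to simple weighted products; a minimal sketch, with illustrative weight defaults chosen to sum to 1:

```python
def contradiction_magnitude(nli_contra_mean: float, u_sat: float,
                            h_halluc: float,
                            a: float = 0.5, b: float = 0.3,
                            c: float = 0.2) -> float:
    """C = a * mean NLI contradiction + b * UNSAT fraction + c * hallucination risk.

    The weights a, b, c are illustrative defaults; keeping a + b + c = 1
    keeps C inside [0, 1] when the inputs are probabilities.
    """
    return a * nli_contra_mean + b * u_sat + c * h_halluc

def compression_tension(strain: float, c_magnitude: float) -> float:
    """CTS = S * C: rank compression sites by strain times contradiction."""
    return strain * c_magnitude
```

Rank the path's compression sites by `compression_tension` descending to decide where resolution work pays off first.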

Workflow

  1. Detect: generate claims and run pairwise NLI to compute CS
  2. Map: attribute each conflict to the earliest path site whose removal drops the spike
  3. Resolve:
    • Abstain if entailment fails or scores exceed thresholds
    • Re-retrieve targeted evidence for high CS pairs
    • Present both with provenance when the world is split
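The resolve step above can be sketched as a small dispatcher; the threshold and action names are illustrative, not part of the CE contract:

```python
def resolve_pairs(scored_pairs, tau_cs: float = 0.3):
    """Map each (claim_id_1, claim_id_2, CS) triple to a workflow action.

    tau_cs is an illustrative threshold: above it, fetch targeted evidence;
    mildly positive CS presents both claims with provenance; net entailment
    lets the answer through.
    """
    actions = []
    for c1, c2, cs in scored_pairs:
        if cs > tau_cs:
            actions.append(("re_retrieve", c1, c2))
        elif cs > 0:
            actions.append(("present_both", c1, c2))
        else:
            actions.append(("emit", c1, c2))
    return actions
```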

CE contract

Emit only if all checks pass:

$$ \big(V(E,\hat{y})=1\big)\;\land\;\big(\mathrm{CS}_{\max}\le\tau_{\text{cs}}\big)\;\land\;\big(\mathrm{CTS}(\mathcal{P})\le\tau_{\text{cts}}\big)\;\land\;\big(\mathrm{Audit}(\mathcal{P})\ge\tau_{\text{audit}}\big) $$

Fail any check → abstain or present both claims with provenance and a short note on the implicated compression site.
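The emit gate translates directly into code; the default $\tau$ values below are placeholders, since real thresholds are deployment-specific:

```python
def ce_emit(verified: bool, cs_max: float, cts: float, audit: float,
            tau_cs: float = 0.3, tau_cts: float = 0.2,
            tau_audit: float = 0.8) -> bool:
    """Emit only if verification passes and every contradiction check clears.

    verified: V(E, y_hat) == 1; cs_max: max pairwise CS on the path;
    cts: CTS of the path; audit: path auditability score.
    Threshold defaults are illustrative, not from the spec.
    """
    return (verified
            and cs_max <= tau_cs
            and cts <= tau_cts
            and audit >= tau_audit)
```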

Data model

Claim

{
  "id": "c_102",
  "text": "The vaccine reduces transmission by 60 percent.",
  "spans": [[4, 10], [20, 33]],
  "supports": ["src_7#p2"],
  "modality": "text",
  "certainty": 0.72,
  "time": "2025-10-28T19:00:00Z"
}

Contradiction object

{
  "id": "co_9",
  "type": "nli",
  "scope_claims": ["c_102", "c_141"],
  "unsat_core": ["rate_def", "time_window"],
  "score": 0.81,
  "suggested_edits": [
    "disambiguate time window",
    "define transmission vs infection"
  ]
}
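For typed pipelines, the two objects above map naturally onto dataclasses; this is a sketch mirroring the JSON fields, not a published client library:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Claim:
    id: str
    text: str
    spans: List[Tuple[int, int]]   # character offsets of evidence spans
    supports: List[str]            # source anchors, e.g. "src_7#p2"
    modality: str = "text"
    certainty: float = 0.0
    time: str = ""                 # ISO 8601 timestamp

@dataclass
class ContradictionObject:
    id: str
    type: str                      # "nli" | "symbolic" | "grounding"
    scope_claims: List[str]
    score: float
    unsat_core: List[str] = field(default_factory=list)
    suggested_edits: List[str] = field(default_factory=list)
```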

JSON schema

Use this to validate contradiction objects across services.

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://compressionawareintelligence.com/schemas/contradiction-object.schema.json",
  "title": "ContradictionObject",
  "type": "object",
  "required": ["id", "type", "scope_claims", "score"],
  "properties": {
    "id": { "type": "string", "pattern": "^co_[A-Za-z0-9_-]+$" },
    "type": { "type": "string", "enum": ["nli", "symbolic", "grounding"] },
    "scope_claims": {
      "type": "array",
      "items": { "type": "string", "pattern": "^c_[A-Za-z0-9_-]+$" },
      "minItems": 2
    },
    "unsat_core": {
      "type": "array",
      "items": { "type": "string" }
    },
    "evidence": {
      "type": "array",
      "items": { "type": "string" }
    },
    "score": {
      "type": "number",
      "minimum": 0,
      "maximum": 1
    },
    "suggested_edits": {
      "type": "array",
      "items": { "type": "string" }
    },
    "meta": {
      "type": "object",
      "additionalProperties": true
    }
  },
  "additionalProperties": false
}
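A stdlib-only spot check can mirror the schema's required fields, patterns, and bounds; for full conformance, use a JSON Schema validator such as the jsonschema package instead:

```python
import re

def validate_contradiction_object(obj: dict) -> list:
    """Return a list of error strings; empty means the object passes.

    Minimal hand-rolled check mirroring the schema above, for environments
    without a JSON Schema library.
    """
    errors = []
    for key in ("id", "type", "scope_claims", "score"):
        if key not in obj:
            errors.append(f"missing required field: {key}")
    if "id" in obj and not re.fullmatch(r"co_[A-Za-z0-9_-]+", str(obj["id"])):
        errors.append("id must match ^co_[A-Za-z0-9_-]+$")
    if obj.get("type") not in ("nli", "symbolic", "grounding"):
        errors.append("type must be one of nli, symbolic, grounding")
    claims = obj.get("scope_claims", [])
    if len(claims) < 2 or not all(
            re.fullmatch(r"c_[A-Za-z0-9_-]+", str(c)) for c in claims):
        errors.append("scope_claims needs >= 2 ids matching ^c_[A-Za-z0-9_-]+$")
    score = obj.get("score")
    if not isinstance(score, (int, float)) or not 0 <= score <= 1:
        errors.append("score must be a number in [0, 1]")
    return errors
```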

Engineer API

  • POST /contradiction/score with { text, sources[], constraints[][] } returns { C, details }
  • POST /contradiction/extract with { text } returns claims[]
  • POST /contradiction/resolve with { claims[], co } returns an edit plan

The reference backend in cstc_service.py exposes compatible fields through /compute. Mirror these if you want a dedicated contradictions service.

Minimal code path

# 1) run the reference service
uvicorn cstc_service:app --reload

# 2) client request
import requests

payload = {
    "text": doc,
    "sources": [doc_a, doc_b],
    "constraints": [[1, -3], [2, 3]],  # optional CNF clauses
}
res = requests.post("http://127.0.0.1:8000/compute", json=payload).json()
C = res["contradiction"]["C"]  # alongside nli_contradiction, unsat_core, hallucination_risk

Browser demo

Heuristic only. The server gives stronger results. Paste a paragraph with conflicting statements and optionally add short source snippets separated by commas.
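In the spirit of the demo, a toy negation-overlap heuristic looks like this (the word set and overlap threshold are invented for illustration; the server-side NLI path is far stronger):

```python
NEGATORS = {"not", "no", "never", "n't", "cannot"}

def heuristic_conflict(a: str, b: str) -> bool:
    """Flag sentence pairs that share several content words but differ in
    negation. A crude browser-side stand-in for pairwise NLI scoring."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    shared = (wa & wb) - NEGATORS          # overlapping non-negator words
    neg_diff = bool((wa ^ wb) & NEGATORS)  # negation on one side only
    return len(shared) >= 3 and neg_diff
```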

Good patterns

  • Keep CS and CTS side by side in logs
  • Store the earliest site that resolves a conflict when removed
  • Show both claims when evidence is genuinely split, with sources

Anti-patterns

  • Silencing contradictions with heavier sampling filters
  • Cherry-picking evidence to force entailment
  • Emitting confident answers when audit is low

More on CE

For extended examples, visit ContradictionEngineering.com.