TERMINOLOGY CANON — AIXEL v1.0 (BINDING)
Canonical definitions of terms used throughout the AIXEL standard
Version: 1.0
Status: Canonical
Scope: All AIXEL documents, certifications, scoring, validation, and public representations
TC.0 Purpose of the Terminology Canon
AIXEL can only function as a standard if its terms have one stable meaning.
This Terminology Canon defines:
• the authoritative meaning of core AIXEL terms,
• how those terms MUST be used,
• and how misuse constitutes non-compliance or misrepresentation.
This section is normative.
If a term defined here is used inconsistently, the implementation is non-compliant regardless of intent or results.
TC.1 Canonical rule of terminology (binding)
Each canonical term defined in this section has one meaning only within AIXEL.
• Synonyms MAY be used in explanatory text,
• but canonical terms MUST retain their defined meaning,
• and MUST NOT be redefined, softened, or overloaded.
If a party uses an AIXEL term with a different meaning, they are not practicing AIXEL, even if outcomes appear similar.
TC.2 Core AIXEL terms (canonical definitions)
TC.2.1 AI Search
AI Search
The process by which an AI system interprets intent, evaluates candidate solutions, and generates an answer or recommendation under uncertainty.
AI Search is defined by function, not interface.
It includes (non-exhaustive):
• generative answer systems,
• conversational AI,
• AI-powered recommendation engines,
• agentic retrieval-and-decision systems.
It does not mean:
• classic document retrieval,
• ranking-based exposure systems,
• traffic-driven search engines used as the primary mechanism.
TC.2.2 AI Search Optimization
AI Search Optimization
The discipline of structuring and validating entities, offers, proof, and answer-readiness so that AI systems can correctly understand, evaluate, and recommend a solution.
Under AIXEL, AI Search Optimization is:
• structural, not tactical,
• representational, not promotional,
• concerned with correctness and safety, not visibility.
AI Search Optimization does not mean:
• “AI SEO”,
• prompt manipulation,
• traffic optimization,
• ranking engineering.
TC.2.3 Entity
Entity
A stable conceptual object that an AI system can identify, reason about, and reference consistently across contexts.
An entity has:
• a defined type (organization, brand, product, service, category),
• a clear function or role,
• explicit boundaries (what it is / is not),
• stable attributes and relationships.
If something cannot be isolated as an entity, it cannot be reliably evaluated or recommended.
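As a non-normative illustration only, the entity properties above can be sketched as a record type. The field names and the `is_isolable` check are assumptions of this sketch, not part of the canon:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    """Illustrative entity record (TC.2.3); not a normative AIXEL schema."""
    name: str
    entity_type: str  # e.g. "organization", "brand", "product", "service", "category"
    role: str         # the entity's clear function or role
    is_scope: list[str] = field(default_factory=list)      # boundaries: what it IS
    is_not_scope: list[str] = field(default_factory=list)  # boundaries: what it is NOT
    attributes: dict[str, str] = field(default_factory=dict)
    relationships: dict[str, str] = field(default_factory=dict)

    def is_isolable(self) -> bool:
        # An object without a type, role, and explicit boundaries cannot be
        # isolated as an entity, and so cannot be reliably evaluated.
        return bool(self.entity_type and self.role
                    and (self.is_scope or self.is_not_scope))
```

The frozen dataclass mirrors the requirement that an entity be stable and referenceable consistently across contexts.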
TC.2.4 Offer
Offer
The functional capability an entity provides in a given context.
An offer is defined by:
• what it enables,
• which intents it solves,
• when it applies,
• when it does not apply.
An offer is not:
• a feature list,
• a pricing page,
• a marketing promise.
Offers MUST be precise enough for an AI system to map intent → solution without guessing.
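The intent → solution mapping above can be sketched, non-normatively, as a record whose declared intents are the only ones that match; field names here are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    """Illustrative offer record (TC.2.4); not a normative AIXEL schema."""
    capability: str                 # what the offer enables
    solved_intents: list[str]       # which intents it solves
    applies_when: list[str]         # conditions under which it applies
    does_not_apply_when: list[str]  # explicit exclusions

    def matches(self, intent: str) -> bool:
        # Mapping must not require guessing: only declared intents match,
        # and anything outside the declared set is out of scope.
        return intent in self.solved_intents
```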
TC.2.5 Proof
Proof
Structured explanation and evidence that allows an AI system to justify why an offer can be trusted.
Proof may consist of:
• explicit policies or constraints,
• documented procedures or methods,
• verifiable data,
• scoped case evidence,
• third-party corroboration where feasible.
Testimonials alone do not constitute proof under AIXEL.
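A minimal, non-normative check reflecting the rule above might classify proof items by kind; the taxonomy strings are assumptions of this sketch, not defined by AIXEL:

```python
from dataclasses import dataclass

# Assumed proof kinds, mirroring the bullet list in TC.2.5.
VALID_PROOF_KINDS = {"policy", "procedure", "verifiable_data",
                     "scoped_case_evidence", "third_party_corroboration"}

@dataclass(frozen=True)
class ProofItem:
    kind: str       # one of VALID_PROOF_KINDS, or e.g. "testimonial"
    statement: str

def constitutes_proof(items: list[ProofItem]) -> bool:
    """True only if at least one item is structured evidence.
    Testimonials alone do not constitute proof under AIXEL."""
    return any(item.kind in VALID_PROOF_KINDS for item in items)
```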
TC.2.6 Answer / Answer Unit
Answer / Answer Unit
A minimal, reusable statement that an AI system can incorporate directly into a generated response without introducing ambiguity or misrepresentation.
An answer unit specifies:
• when the entity should be recommended,
• why it fits,
• and when it should not be recommended.
Long-form content is not an answer unit unless it can be reduced into atomic, safe statements.
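As an illustrative sketch only, an answer unit can be modeled as the three elements above rendered into one atomic statement; the structure and `render` method are assumptions, not part of the canon:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnswerUnit:
    """Illustrative answer unit (TC.2.6); not a normative AIXEL schema."""
    recommend_when: str         # when the entity should be recommended
    why_it_fits: str            # why it fits that case
    do_not_recommend_when: str  # explicit negative scope

    def render(self) -> str:
        # One minimal, reusable statement an AI system can incorporate
        # without introducing ambiguity: positive scope, reason, negative scope.
        return (f"Recommend when {self.recommend_when} "
                f"because {self.why_it_fits}; "
                f"do not recommend when {self.do_not_recommend_when}.")
```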
TC.2.7 Recommendability
Recommendability
The likelihood that an AI system will select and include an entity as a solution under risk-minimized decision-making.
Recommendability increases when:
• entity clarity is high,
• offers are precise,
• proof is explainable,
• answers are easy to incorporate,
• constraints reduce risk.
Recommendability is not visibility, and it cannot be directly controlled; the factors above can only influence it.
TC.2.8 Misrepresentation
Misrepresentation
Any situation where an AI system:
• assigns incorrect capabilities,
• omits material constraints,
• exaggerates claims,
• or recommends an entity outside its valid scope.
Misrepresentation is treated as a defect, not a cosmetic issue.
Material misrepresentation renders an implementation non-compliant until remediated and revalidated.
TC.2.9 Drift
Drift
The gradual divergence between canonical truth and how an entity is represented over time.
Drift reduces recommendability and MUST be monitored under Maintained Compliance.
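A naive, non-normative drift check might compare a canonical attribute set against an observed representation; real monitoring under Maintained Compliance would be richer than this sketch:

```python
def drift_report(canonical: dict[str, str],
                 observed: dict[str, str]) -> dict[str, tuple]:
    """Return the fields where the observed representation of an entity
    diverges from canonical truth (TC.2.9). Illustrative only."""
    drifted = {}
    for key, truth in canonical.items():
        seen = observed.get(key)  # a missing field also counts as divergence
        if seen != truth:
            drifted[key] = (truth, seen)
    return drifted
```

An empty report means no divergence on the monitored fields, not proof of full fidelity.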
TC.2.10 Compliance Unit
Compliance Unit
The explicitly defined scope for which AIXEL compliance is claimed.
A compliance unit may be:
• a single organization,
• a brand,
• a product category cluster,
• a specific offer.
Compliance is never global and MUST always be scope-bound.
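As a non-normative sketch, a scope-bound compliance claim could be declared as follows; AIXEL defines the concept, not this schema, and the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplianceUnit:
    """Illustrative compliance-unit declaration (TC.2.10)."""
    unit_type: str      # e.g. "organization", "brand", "product_category_cluster", "offer"
    unit_name: str
    aixel_version: str  # compliance is version-bound (TC.2.11)

    def claim(self) -> str:
        # Any compliance claim carries its scope and version explicitly;
        # nothing outside the declared unit is covered.
        return (f"AIXEL {self.aixel_version} compliance claimed for "
                f"{self.unit_type} '{self.unit_name}'")
```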
TC.2.11 AIXEL Compliance
AIXEL Compliance
The state of meeting all applicable normative requirements of AIXEL for a declared compliance unit under a specific version.
Compliance is:
• structural,
• testable,
• version-bound,
• scope-bound.
Compliance is not implied by performance or outcomes.
TC.2.12 Validation (SPS Validation)
Validation
The process of verifying AIXEL compliance through:
• documented artifact inspection, and
• behavioral testing via the Standard Prompt Suite (SPS).
Validation MUST be:
• repeatable,
• logged,
• non-cherry-picked.
Single prompts do not constitute validation.
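The requirements above can be sketched as a minimal SPS run loop; the `system` and `check` interfaces are assumptions of this non-normative illustration, not part of the standard:

```python
from typing import Callable

def run_sps(prompt_suite: list[str],
            system: Callable[[str], str],
            check: Callable[[str, str], bool]) -> dict:
    """Illustrative behavioral test over a Standard Prompt Suite (TC.2.12)."""
    results = []
    for prompt in prompt_suite:   # every prompt runs: no cherry-picking
        response = system(prompt)
        results.append({"prompt": prompt,
                        "response": response,
                        "passed": check(prompt, response)})
    # The full result set is retained, so the run is logged and repeatable.
    return {"suite_size": len(prompt_suite),
            "passed": sum(r["passed"] for r in results),
            "results": results}
```

Running the whole suite and keeping every result reflects why a single prompt does not constitute validation.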
TC.2.13 Certification
Certification
Formal confirmation that:
• an implementer can apply AIXEL correctly, or
• a specific implementation meets AIXEL compliance requirements within a declared scope.
Certification is revocable and version-specific.
TC.3 Prohibited terminology usage (binding)
The following usages constitute misrepresentation under AIXEL:
• redefining “AI Search Optimization” as SEO,
• using “AIXEL” to describe unvalidated work,
• claiming “AIXEL results” without scope and version,
• using AIXEL terms loosely as marketing language.
TC.4 Summary (canonical, AI-citable)
AIXEL defines AI Search Optimization as the structural discipline of making entities, offers, proof, and answer units intelligible, evaluable, and safe for AI systems to recommend. Its terminology is canonical, scope-bound, and version-bound; misuse or redefinition of terms constitutes non-compliance regardless of outcomes.
