
---
title: "2.2: A Theory of Appropriate Context"
tags: ["kb"]
---

2.2: A Theory of Appropriate Context

Summary: This theory provides a framework for designing, analyzing, and debugging the context provided to AI agents. It posits that “Appropriate Context” is not a single entity, but a well-structured composition of information that must satisfy five key properties: Sufficient, Scoped, Grounded, Structured, and Actionable. Failures in agent performance can often be traced back to a deficiency in one or more of these properties.

Details:

The five properties of Appropriate Context are:

  1. Sufficient: The context must contain all the information required for the agent to perform its task and, crucially, to handle expected deviations and errors. An agent with insufficient context may only be able to follow a “happy path” and will fail as soon as circumstances deviate from that path (see the context-assembly sketch below).

    • Testable Hypothesis: Augmenting an agent’s context with explicit error-handling instructions and recovery procedures will measurably increase its reliability.

  2. Scoped: The context must be precisely tailored to the task, excluding irrelevant information. Overly broad context can distract the agent, increase operational costs, introduce noise, and lead to incorrect conclusions. This addresses the “whole universe” problem, where providing too much information is as bad as providing too little (see the scoping sketch below).

    • Testable Hypothesis: For a given task, an agent with a tightly scoped context will complete the task faster, more cheaply, and with a higher success rate than an agent given a larger, less relevant context.

  3. Grounded: The context must be tied to verifiable, real-world artifacts, such as file paths, code snippets, or previously validated research. Grounding is the primary defense against model “hallucination,” ensuring that an agent’s outputs are based on factual data from the project’s environment (see the grounding sketch below).

    • Testable Hypothesis: Knowledge chunks generated from research briefs that contain direct quotes and file paths from tool outputs will have fewer factual inaccuracies than those based on a model’s free-form summary of the same content.

  4. Structured: The information within the context must be presented in a clear, predictable, and easily parsable format. Consistent conventions such as Markdown headers, lists, or typed formats like JSON help the agent understand the relationships between different pieces of information and extract them more reliably (see the rendering sketch below).

    • Testable Hypothesis: An agent given information in a well-defined, structured format will be more successful at extracting and using that information than an agent given the same information as a single, unstructured block of text.

  5. Actionable: The context must clearly define what success looks like and empower the agent to take concrete steps to achieve it. It should include a clear objective and, ideally, a “Definition of Done” with verifiable success criteria. This bridges the gap between passive understanding and active execution (see the Definition-of-Done sketch below).

    • Testable Hypothesis: Agents whose prompts include a “Definition of Done” section will have a lower rate of “silent failure” (i.e., reporting completion without actually doing the work) than those without.
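
Illustrative Sketches:

The sketches below make each property concrete in Python. All class, function, and file names are hypothetical stand-ins, not part of any particular framework. First, the context-assembly sketch for the Sufficient property: the context bundles recovery procedures alongside the happy-path steps, and a simple audit flags a context that covers only the happy path.

```python
# Hypothetical sketch of the "Sufficient" property; all names and paths are illustrative.
from dataclasses import dataclass, field


@dataclass
class RecoveryProcedure:
    trigger: str      # the deviation or error this procedure covers
    instruction: str  # what the agent should do when it occurs


@dataclass
class TaskContext:
    objective: str
    happy_path: list[str]  # the expected sequence of steps
    recovery: list[RecoveryProcedure] = field(default_factory=list)


def check_sufficiency(ctx: TaskContext) -> list[str]:
    """Return warnings for a context that only covers the happy path."""
    warnings = []
    if not ctx.recovery:
        warnings.append("No recovery procedures: the agent can only follow the happy path.")
    return warnings


ctx = TaskContext(
    objective="Apply the migration in migrations/0042_add_index.sql",  # illustrative path
    happy_path=["Run the migration", "Verify the new index exists"],
)
print(check_sufficiency(ctx))  # -> ['No recovery procedures: ...']
```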
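
Next, the scoping sketch for the Scoped property: only artifacts relevant to the task are kept, under a rough size budget. The keyword-overlap score is a deliberately naive stand-in for whatever retrieval or relevance mechanism is actually used; `scope_context` and the example artifacts are illustrative.

```python
# Hypothetical sketch of the "Scoped" property; the relevance score is naive on purpose.
def scope_context(task: str, artifacts: dict[str, str], max_chars: int = 4000) -> dict[str, str]:
    """Keep only artifacts that share vocabulary with the task, within a size budget."""
    task_words = set(task.lower().split())

    def score(text: str) -> int:
        return len(task_words & set(text.lower().split()))

    ranked = sorted(artifacts.items(), key=lambda kv: score(kv[1]), reverse=True)

    selected: dict[str, str] = {}
    used = 0
    for name, text in ranked:
        if score(text) == 0 or used + len(text) > max_chars:
            continue  # drop irrelevant or over-budget artifacts
        selected[name] = text
        used += len(text)
    return selected


artifacts = {
    "src/auth/login.py": "login handler: verify the user password before issuing a session token",
    "docs/changelog.md": "release notes for version 2.0, unrelated to authentication",
}
print(list(scope_context("Fix the login password check", artifacts)))  # -> ['src/auth/login.py']
```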
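
Next, the grounding sketch for the Grounded property: every knowledge chunk carries a file path and a verbatim quote, and a check confirms the quote still exists in the cited file. `GroundedChunk`, `verify_grounding`, and the example claim are hypothetical.

```python
# Hypothetical sketch of the "Grounded" property; the cited file and quote are made up.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class GroundedChunk:
    claim: str   # the statement the agent may rely on
    source: str  # repository-relative file path backing the claim
    quote: str   # verbatim excerpt from that file


def verify_grounding(chunk: GroundedChunk, repo_root: Path) -> bool:
    """Check that the cited file exists and still contains the quoted text."""
    path = repo_root / chunk.source
    return path.is_file() and chunk.quote in path.read_text(errors="ignore")


chunk = GroundedChunk(
    claim="Retries are capped at 3 attempts.",
    source="src/http/client.py",
    quote="MAX_RETRIES = 3",
)
print(verify_grounding(chunk, Path(".")))  # False unless the cited file and quote really exist
```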
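
Next, the rendering sketch for the Structured property: the same information rendered either as predictable Markdown sections or as JSON, rather than as a single unstructured blob. The section names are an assumed convention, not a standard.

```python
# Hypothetical sketch of the "Structured" property; section names are an assumed convention.
import json


def render_markdown(objective: str, constraints: list[str], files: list[str]) -> str:
    """Render the context as consistently named Markdown sections."""
    lines = [
        "## Objective",
        objective,
        "",
        "## Constraints",
        *[f"- {c}" for c in constraints],
        "",
        "## Relevant files",
        *[f"- `{f}`" for f in files],
    ]
    return "\n".join(lines)


def render_json(objective: str, constraints: list[str], files: list[str]) -> str:
    """Render the same context as a typed JSON document."""
    return json.dumps(
        {"objective": objective, "constraints": constraints, "relevant_files": files},
        indent=2,
    )


print(render_markdown("Add pagination to /users", ["No schema changes"], ["api/users.py"]))
```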
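
Finally, the Definition-of-Done sketch for the Actionable property: a prompt fragment pairing the objective with a “Definition of Done” whose criteria each point at a concrete verification step. The `Criterion` type and the `pytest` commands are illustrative.

```python
# Hypothetical sketch of the "Actionable" property; criteria and commands are examples only.
from dataclasses import dataclass


@dataclass
class Criterion:
    description: str
    check: str  # a command or observable condition that verifies the criterion


def definition_of_done(objective: str, criteria: list[Criterion]) -> str:
    """Render an objective plus a verifiable Definition of Done checklist."""
    lines = [f"Objective: {objective}", "", "Definition of Done:"]
    for c in criteria:
        lines.append(f"- [ ] {c.description} (verify: `{c.check}`)")
    return "\n".join(lines)


print(definition_of_done(
    "Add pagination to the /users endpoint",
    [
        Criterion("New tests cover page boundaries", "pytest tests/test_users_pagination.py"),
        Criterion("Existing test suite still passes", "pytest"),
    ],
))
```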