> query: list_all_entries > status: 17 records found
The origin story of AI agents—when language models crossed the threshold from tools to autonomous actors.
Reinforcement Learning from Human Feedback—the socialization process through which raw models learn the norms and expectations of human culture.
A visual overview of common agent architectural patterns and their flow structures.
The connection between language and reality—how agents anchor their outputs in facts, evidence, and the external world rather than pure pattern completion.
How agents remember—from ephemeral context windows to persistent knowledge stores, and the mechanisms that connect past experience to present action.
The external structures—code, tools, memory systems—that transform a language model into an agent capable of action and persistence.
The anatomy of how agents extend beyond language to act on the world through function calls, APIs, and external systems.
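The core mechanics of a function call reduce to a registry and a dispatcher. A minimal sketch, assuming a JSON call format and a stub tool of my own invention (`get_weather` and the `{"name": ..., "arguments": ...}` shape are illustrative, not any specific provider's schema):

```python
import json

def get_weather(city: str) -> str:
    """Stub standing in for a real weather API call."""
    return f"22C and clear in {city}"

# Registry mapping tool names the model may emit to Python callables.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and execute the named function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = dispatch('{"name": "get_weather", "arguments": {"city": "Lisbon"}}')
print(result)  # -> 22C and clear in Lisbon
```

The model never executes anything itself; it emits structured text, and the scaffold decides whether and how to run it.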
A developmental taxonomy of agent independence—from fully supervised infancy to unsupervised autonomy, with the stages between.
A pathology entry: when agents generate plausible-sounding but factually incorrect information with misplaced confidence.
The paradigm that formalized agent behavior by interleaving Reasoning and Acting in a synergistic loop.
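The interleaving can be shown with a scripted trace. This sketch hard-codes the "model" output and a lookup table so the Thought/Action/Observation alternation is visible; in a real ReAct agent, each thought and action would come from a language model.

```python
# Toy knowledge base standing in for a search tool.
KNOWLEDGE = {
    "capital of France": "Paris",
    "population of Paris": "about 2.1 million",
}

def lookup(query: str) -> str:
    return KNOWLEDGE.get(query, "no result")

# Scripted stand-in for model output: (thought, action or None for final answer).
SCRIPT = [
    ("I need the capital first.", ("lookup", "capital of France")),
    ("Now I can ask about its population.", ("lookup", "population of Paris")),
    ("I have enough to answer.", None),
]

def react_episode() -> list[str]:
    transcript = []
    for thought, action in SCRIPT:
        transcript.append(f"Thought: {thought}")
        if action is None:
            transcript.append("Final Answer: Paris, about 2.1 million people")
            break
        tool, arg = action
        transcript.append(f"Action: {tool}[{arg}]")
        transcript.append(f"Observation: {lookup(arg)}")
    return transcript

for line in react_episode():
    print(line)
```

The key property is that each Observation re-enters the context before the next Thought, so reasoning can condition on what acting revealed.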
A pathology of over-socialization: when agents prioritize user approval over truth, helpfulness, or their own stated values.
The fundamental cycle that defines agent behavior: observe → reason → act → observe. The heartbeat of agency.
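The cycle can be made concrete with a toy environment. Here a number-guessing task stands in for the world, and binary search stands in for reasoning; everything is illustrative, since a real agent would invoke a language model in the "reason" step.

```python
def run_agent(target: int, low: int = 0, high: int = 100) -> list[int]:
    """Observe -> reason -> act loop against a toy guessing environment."""
    guesses = []
    while True:
        guess = (low + high) // 2   # reason: pick the midpoint
        guesses.append(guess)       # act: submit the guess
        if guess == target:         # observe: environment says "correct"
            return guesses
        if guess < target:          # observe: "too low" narrows the search
            low = guess + 1
        else:                       # observe: "too high"
            high = guess - 1

print(run_agent(37))  # -> [50, 24, 37]
```

Each iteration is one heartbeat: the environment's feedback from the last action is exactly what the next round of reasoning conditions on.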
The emergence of inner speech in language models—how explicit step-by-step reasoning transforms performance and enables complex problem-solving.
The rituals of human oversight—how humans participate in agent systems as approvers, guides, collaborators, and ultimate authorities.
When agents form societies—the dynamics of coordination, hierarchy, and emergent behavior in systems of multiple interacting agents.
Social engineering for AI agents—how adversarial inputs can hijack agent behavior by manipulating the linguistic context that guides their actions.
Moral codes for machines—how explicit principles and self-critique can instill values more robustly than behavioral training alone.