IAF AI-Piloted Edition

Executive Summary

Intelligenism Agent Framework (IAF) AI-Piloted Edition is a multi-agent orchestration framework purpose-built for one radical capability: to be fully understood, modified, and managed by external AI systems. While other agent frameworks are designed for human developers to write code, IAF is designed so that AI agents can operate it through file system manipulation alone.

This edition builds on the original IAF architecture, adding a comprehensive self-description layer, validation toolchain, structured logging, and operational safety mechanisms. The result is a framework that any flagship LLM with file read/write and shell access can take over, configure, and run autonomously.

Core Positioning

| Dimension | Traditional Frameworks | IAF AI-Piloted Edition |
|---|---|---|
| Control Interface | Python API / Code | File system (JSON, Markdown, Python files) |
| Agent Creation | Write class definitions | Copy directory + edit config files |
| Behavior Change | Modify Python code | Edit Markdown and JSON files |
| Orchestration | Code graph/DAG definitions | Declarative JSON (tubes.json) |
| Validation | Unit tests / type checking | CLI script: python3 validate.py |
| Hot Reload | Restart required | All config changes auto-detected |
| Rollback | Git revert code changes | Git revert file changes (identical) |
| AI Comprehension | Must understand Python AST | Must understand file conventions |

Architecture Overview

IAF is organized into four independent layers. Each layer communicates through the file system and HTTP APIs, with no direct code coupling between layers.

| Layer | Responsibility | Control Surface |
|---|---|---|
| Agent | Independent intelligent units, each with its own engine, tools, identity, and skills | agents/{id}/ directory: SOUL.md, agent_config.json, tools/*.py |
| Dispatch | Multi-agent collaboration orchestration with pluggable strategies | dispatch/{strategy}/ directory: dispatch_config.json, rules/*.md |
| Tube | Signal topology engine for automated scheduling and chaining | tube/tubes.json (single declarative file, hot-reloaded) |
| UI | Self-contained HTML pages with auto-discovery | pages/*.html (drop a file to create a page) |

Design Principles


AI Operability Layer

The AI Operability Layer is the defining addition of this edition. It is a set of self-description files, validation tools, and operational safety mechanisms that enable external LLMs to fully manage IAF. This layer adds no runtime overhead to the framework. It exists purely as an interface between the external AI operator and the IAF file system.

System Map (MANIFEST.json)

A machine-readable snapshot of the entire system state, auto-generated by generate_manifest.py and refreshed on server startup. An external LLM reads this single file to understand what agents, tubes, dispatches, and tools exist.

The manifest includes: all agent IDs with their config paths, model assignments, tool lists, and log file locations; all dispatch strategies with participating agents; all tubes with their trigger types and step chains; and framework conventions for file naming patterns.
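The real schema is whatever generate_manifest.py emits; the sketch below assumes a plausible shape purely to show how an external operator might answer "what exists?" from this single file. All field names and the model identifier are illustrative assumptions, not the generator's actual output.

```python
import json

# Hypothetical MANIFEST.json excerpt; field names and values are
# assumptions for illustration, not the real generated schema.
manifest = json.loads("""
{
  "agents": {
    "writer": {
      "config": "agents/writer/agent_config.json",
      "model": "example-model",
      "tools": ["web_tools", "file_tools"],
      "log": "agents/writer/call_log.jsonl"
    }
  },
  "tubes": {"daily_report": {"trigger": "cron", "steps": ["writer"]}},
  "dispatches": {"roundtable": {"agents": ["writer"]}}
}
""")

# One read answers the operator's inventory questions.
agent_ids = sorted(manifest["agents"])
tube_triggers = {t: cfg["trigger"] for t, cfg in manifest["tubes"].items()}
```

Because the manifest is regenerated on server startup, the operator can treat it as a read-only index and make all changes through the underlying files it points to.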

Operation Playbook (PLAYBOOK.md)

A comprehensive text manual that maps every management intent to a concrete sequence of file operations. An external LLM reads this file to learn how to create agents, add tools, configure automation pipelines, modify behavior, and handle errors.

The playbook covers: agent lifecycle (create, configure, add tools, modify behavior), tube management (create, trigger, monitor), dispatch strategy creation, engine modification procedures with diff-based synchronization, and safety protocols.

Validation Script (validate.py)

| Command | Validates | Checks |
|---|---|---|
| python3 validate.py agent {id} | Single agent | Config JSON syntax, required fields, provider existence, SOUL.md presence, tool file loadability |
| python3 validate.py tool {path} | Single tool file | File naming, importability, TOOLS dict presence, handler callable, schema completeness |
| python3 validate.py tube | tubes.json | JSON syntax, tube IDs, step references to existing agents/dispatches, trigger module existence |
| python3 validate.py all | Entire system | All agents, all tools, all tubes in one pass |

Structured Logging (call_log.jsonl)

Each agent writes a structured JSONL log of every invocation. The external LLM reads these logs to understand what happened during agent execution, diagnose failures, and verify that automated pipelines are working correctly.

| Event | Key Fields | Purpose |
|---|---|---|
| call_started | model, mode, message_preview | Agent invocation begins |
| llm_call | loop, tokens_est, duration_ms | One LLM API round-trip |
| tool_call | tool_name, args_summary, result_length, is_error | One tool execution |
| call_completed | loops_used, reply_length, total_duration_ms | Agent returns result |
| call_failed | error | Agent invocation failed |
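Since the log is JSONL, diagnosing a run is a line-by-line parse. The sketch below uses fabricated log lines whose event names match the table above; the field values are illustrative, not real output.

```python
import json

# Hypothetical call_log.jsonl content; event names follow the table
# above, values are made up for illustration.
raw = """\
{"event": "call_started", "model": "example-model", "mode": "chat", "message_preview": "summarize..."}
{"event": "llm_call", "loop": 1, "tokens_est": 812, "duration_ms": 1430}
{"event": "tool_call", "tool_name": "web_search", "args_summary": "q=iaf", "result_length": 2048, "is_error": false}
{"event": "call_completed", "loops_used": 1, "reply_length": 512, "total_duration_ms": 2200}
"""

events = [json.loads(line) for line in raw.splitlines()]

# Surface anything that went wrong: failed calls or erroring tools.
failures = [e for e in events
            if e["event"] == "call_failed"
            or (e["event"] == "tool_call" and e["is_error"])]
```

A clean run is simply an empty `failures` list; a non-empty one tells the operator exactly which tool or invocation to investigate.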

Tool Contract (TOOL_CONTRACT.md)

A specification document that defines the exact format for writing new tool files. When an external LLM needs to give an agent a new capability, it reads this contract to produce a correctly structured *_tools.py file that will be automatically discovered and loaded.
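The authoritative format lives in TOOL_CONTRACT.md; the sketch below assumes one plausible shape (a module-level TOOLS dict mapping tool names to a handler plus a JSON-schema-style parameter description) to make the idea concrete. The exact required keys are an assumption.

```python
# weather_tools.py — hypothetical example of a *_tools.py file.
# The real required layout is defined by TOOL_CONTRACT.md; the
# "handler"/"schema" keys here are assumptions for illustration.

def get_weather(city: str) -> str:
    """Return a canned forecast (a real tool would call an API)."""
    return f"Forecast for {city}: sunny"

TOOLS = {
    "get_weather": {
        "handler": get_weather,
        "schema": {
            "description": "Get the weather forecast for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
}
```

Dropped into an agent's tools/ directory, a file like this would be picked up by auto-discovery on the next invocation, with `python3 validate.py tool` available to confirm the contract is met first.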

Safety Mechanisms


Agent Layer

Each agent is an independent directory under agents/ containing its complete runtime: engine code, identity, tools, skills, context strategy, and logs. Creating a new agent is a directory copy operation: cp -r template/ agents/new_agent/.

Agent Directory Structure

| File | Category | Purpose |
|---|---|---|
| agent_config.json | Adjustable | Model, provider, context files, skills, trim strategy |
| SOUL.md | Adjustable | Agent identity, personality, behavioral guidelines |
| tools/*_tools.py | Adjustable | Available tools (auto-discovered, hot-reloaded) |
| skills/*.md | Adjustable | Conditional task instructions (trigger-matched) |
| context/sliding_window.py | Adjustable | Context trimming strategy |
| core/direct_llm.py | Semi-infrastructure | Engine core: message assembly, LLM call loop, logging |
| core/tool_executor.py | Semi-infrastructure | Tool registry with auto-discovery and hot reload |
| history.jsonl | Runtime | Chat conversation history |
| call_log.jsonl | Runtime | Structured execution log |
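Creating an agent is therefore plain file I/O: copy the template, then write its config. The sketch below infers field names (model, provider, context_files, skills, trim strategy) from this document; the real agent_config.json schema may differ, and the model identifier is a placeholder.

```python
import json
import pathlib
import tempfile

# Hypothetical agent_config.json; field names are inferred from this
# document, not copied from the real schema.
config = {
    "model": "example-model",
    "provider": "openrouter",
    "context_files": ["SOUL.md"],
    "skills": [{"trigger": "report", "file": "skills/reporting.md"}],
    "trim_strategy": "sliding_window",
    "max_context": 32000,
}

# Stand-in for `cp -r template/ agents/new_agent/` plus config edit.
agent_dir = pathlib.Path(tempfile.mkdtemp()) / "new_agent"
agent_dir.mkdir()
(agent_dir / "agent_config.json").write_text(json.dumps(config, indent=2))

loaded = json.loads((agent_dir / "agent_config.json").read_text())
```

After writing the files, `python3 validate.py agent new_agent` would confirm the directory is well-formed before the agent is ever invoked.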

Five-Layer Message Assembly

| Layer | Source | Behavior |
|---|---|---|
| 1. System Prompt | context_files in agent_config.json | All listed files concatenated as the system message. Always loaded. |
| 2. Skill Injection | skills array in agent_config.json | Keyword-matched .md files injected. Loaded on trigger hit. |
| 3. History | history.jsonl | Recent conversation turns. Chat mode only; skipped in batch mode. |
| 4. Current Input | User message or tube payload | The actual request being processed. |
| 5. Context Trimming | context/sliding_window.py | Ensures total tokens stay within the max_context budget. |
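The five layers above can be sketched as a single assembly function. This is a minimal illustration of the ordering, not the engine's own code from core/direct_llm.py; the keyword-substring trigger match and all names are assumptions.

```python
# Minimal sketch of the five-layer assembly order; names and the
# trigger-matching rule are illustrative, not the engine's own.

def assemble_messages(context_files, skills, history, user_input,
                      trim, mode="chat"):
    system = "\n\n".join(context_files)                      # layer 1
    hits = [s["text"] for s in skills
            if s["trigger"] in user_input.lower()]           # layer 2
    if hits:
        system += "\n\n" + "\n\n".join(hits)
    messages = [{"role": "system", "content": system}]
    if mode == "chat":                                       # layer 3
        messages += history
    messages.append({"role": "user", "content": user_input}) # layer 4
    return trim(messages)                                    # layer 5

msgs = assemble_messages(
    context_files=["You are a monitoring agent."],
    skills=[{"trigger": "report", "text": "Use the report template."}],
    history=[{"role": "user", "content": "hi"},
             {"role": "assistant", "content": "hello"}],
    user_input="Write the weekly report",
    trim=lambda m: m,   # stand-in for context/sliding_window.py
    mode="batch",       # batch mode skips history (layer 3)
)
```

In batch mode the result is just system prompt plus current input, which is why tube-driven invocations stay cheap and deterministic.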

Tool Auto-Discovery and Hot Reload

The tool executor scans the agent's tools/ directory for files matching *_tools.py and imports their TOOLS dictionaries. On each agent invocation, the executor checks the directory modification time. If any file has been added, removed, or modified since the last scan, tools are re-discovered automatically. An external LLM can write a new tool file and it takes effect on the very next agent call, with zero restart required.
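The rediscovery mechanism can be sketched as a small registry keyed on modification times. The real logic lives in core/tool_executor.py and may differ in detail (e.g., which timestamp it compares); this version rescans whenever the newest *_tools.py mtime changes.

```python
import importlib.util
import pathlib
import tempfile

# Sketch of mtime-based tool rediscovery; the real executor in
# core/tool_executor.py may track changes differently.
class ToolRegistry:
    def __init__(self, tools_dir):
        self.tools_dir = pathlib.Path(tools_dir)
        self.last_scan = -1.0
        self.tools = {}

    def refresh(self):
        stamp = max((f.stat().st_mtime
                     for f in self.tools_dir.glob("*_tools.py")),
                    default=0.0)
        if stamp != self.last_scan:        # something was added/changed
            self.tools = {}
            for f in self.tools_dir.glob("*_tools.py"):
                spec = importlib.util.spec_from_file_location(f.stem, f)
                mod = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(mod)
                self.tools.update(getattr(mod, "TOOLS", {}))
            self.last_scan = stamp
        return self.tools

# Drop a tool file in, and the next refresh picks it up.
tools_dir = tempfile.mkdtemp()
pathlib.Path(tools_dir, "echo_tools.py").write_text(
    "TOOLS = {'echo': {'handler': lambda s: s}}")
registry = ToolRegistry(tools_dir)
names = sorted(registry.refresh())
```

The key property is that `refresh()` is called per invocation, so a file written by the external operator is live on the very next agent call.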

Three Capability Tiers

| Tier | Carrier | Loading | Use Case |
|---|---|---|---|
| Identity & Knowledge | .md files via context_files | Every call | Permanent information the agent always knows |
| Conditional Instructions | .md files via skills triggers | On keyword match | Scenario-specific task instructions |
| Execution Capability | *_tools.py files | Every call (hot-reloaded) | Actions the agent can perform |

Dispatch Layer

The Dispatch layer orchestrates multi-agent collaboration. Each collaboration strategy is a self-contained directory that can be independently developed and deployed. The layer does not call the agent engine directly. Instead, it treats agent directories as data sources and calls the LLM API through its own context assembly pipeline.

Key Design: In dispatch mode, an agent is not an autonomous entity making decisions. It is an engine driven by the dispatch orchestrator. The dispatch decides what the agent knows (via context_files), what role it plays (via rules/*.md), and when it speaks (via turn_order). The same agent can behave differently in different collaboration strategies without any modification to the agent itself.

Strategy Structure

| Component | File | Purpose |
|---|---|---|
| Orchestration Logic | dispatch.py | Round control, agent sequencing, termination conditions |
| Configuration | dispatch_config.json | Participating agents, turn order, rounds, context sources |
| Role Definitions | rules/*.md | Per-agent role instructions for this strategy |
| Infrastructure | dispatch_base.py, session_manager.py, context_injector.py | Tool loop, LLM parsing, session CRUD, context assembly (do not modify) |
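A dispatch strategy is thus declared, not coded. The fragment below is a hypothetical dispatch_config.json whose field names are inferred from the table above (agents, turn order, rounds, context sources); the real schema may differ.

```python
import json

# Hypothetical dispatch_config.json for a two-agent review strategy;
# field names are inferred from this document, not the real schema.
dispatch_config = json.loads("""
{
  "agents": ["researcher", "critic"],
  "turn_order": ["researcher", "critic", "researcher"],
  "rounds": 2,
  "context_sources": {
    "researcher": ["agents/researcher/SOUL.md", "rules/researcher.md"],
    "critic": ["agents/critic/SOUL.md", "rules/critic.md"]
  }
}
""")

# The orchestrator expands turn_order across rounds.
speaking_order = (dispatch_config["turn_order"]
                  * dispatch_config["rounds"])
```

Because role files in rules/*.md are per-strategy, the same two agents can join a different strategy with different roles just by pointing a new config at new rule files.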

Tube Layer (Signal Topology)

The Tube layer is the automation and scheduling dimension of IAF. It connects agents, dispatches, and other tubes through declarative signal pathways defined in a single file: tube/tubes.json. The tube runner polls this file every 15 seconds, checks trigger conditions, and executes step chains in isolated subprocesses.
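A tube declaration might look like the fragment below. Only the existence of trigger types and step chains is documented above, so every field name in this sketch is an assumption about tubes.json's shape.

```python
import json

# Hypothetical tubes.json; all field names are assumptions made for
# illustration, since only triggers and step chains are documented.
tubes = json.loads("""
{
  "tubes": [
    {
      "id": "daily_digest",
      "trigger": {"type": "cron", "expr": "0 8 * * *"},
      "steps": [
        {"type": "agent", "target": "collector"},
        {"type": "agent", "target": "summarizer"},
        {"type": "dispatch", "target": "roundtable"}
      ]
    }
  ]
}
""")

step_targets = [s["target"] for s in tubes["tubes"][0]["steps"]]
```

The point of the declarative form is that rewiring the pipeline (reordering steps, swapping a dispatch for an agent) is a JSON edit picked up on the next 15-second poll, with `python3 validate.py tube` catching dangling references first.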

Enhanced Execution (New in This Edition)

Trigger and Target Extensibility

| Extension Point | Interface | Location | Discovery |
|---|---|---|---|
| Trigger Source | check(config, state) → bool | tube/triggers/{type}.py | Auto-discovered by type name in tubes.json |
| Execution Target | build_command(step, project_root) → list | tube/targets/{type}.py | Auto-discovered by type name in tubes.json |
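A custom trigger module is just a file exposing the `check(config, state) → bool` interface from the table above. The sketch below is a hypothetical interval trigger (say, tube/triggers/interval.py); the signature is documented, but the config and state field names are assumptions.

```python
import time

# Hypothetical tube/triggers/interval.py. The check(config, state)
# -> bool signature comes from the extension table; the "seconds"
# and "last_fired" field names are assumptions.
def check(config, state):
    """Fire when at least config["seconds"] have elapsed since the
    last run recorded by the tube runner in state."""
    last = state.get("last_fired", 0.0)
    return time.time() - last >= config["seconds"]

# A run that is long overdue should fire.
fired = check({"seconds": 60}, {"last_fired": 0.0})
```

Dropping this file into tube/triggers/ and referencing `"type": "interval"` in tubes.json would be the whole integration: no registration code, discovery is by type name.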

External LLM Takeover Model

The defining feature of the AI-Piloted Edition is that the entire framework can be managed by an external LLM agent (Claude Code, GPT Agent, or any agent with file and shell access) without touching any internal API or using any IAF-internal tool.

The External Operator's Toolkit

| LLM Tool | IAF Operation |
|---|---|
| Read | Read MANIFEST.json, agent configs, SOUL.md, tubes.json, logs |
| Write | Create new agent directories, write tool files, write config files |
| Edit | Modify existing configs, SOUL.md, tubes.json |
| Bash | cp -r template/, python3 validate.py, bash auto_commit.sh, git operations, curl API triggers |

Onboarding Sequence

An external LLM takes over an IAF instance in three reads:

| Step | File | Knowledge Gained |
|---|---|---|
| 1 | MANIFEST.json | What agents, tubes, dispatches exist; their configs and file paths |
| 2 | PLAYBOOK.md | How to perform every management operation step by step |
| 3 | Architecture Map | How the system works internally; what to touch and what not to touch |

Operation Loop

The external LLM follows a consistent loop for every management task: snapshot (auto_commit.sh) → operate (Read/Write/Edit/Bash) → validate (validate.py all) → commit (auto_commit.sh). If validation fails, the LLM reads the error output, fixes the issue, and re-validates. If the fix introduces new problems, git revert provides instant rollback.
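The loop can be sketched as a small wrapper that shells out to the scripts named above. The command names come from this document; the exit-code semantics (0 = valid) and the rollback command are assumptions, and the demo at the bottom substitutes stand-in commands so the sketch runs outside an IAF checkout.

```python
import subprocess
import sys

def run(cmd):
    """Shell out; in IAF these are auto_commit.sh / validate.py calls."""
    return subprocess.run(cmd, capture_output=True, text=True)

def managed_change(operate,
                   snapshot=("bash", "auto_commit.sh"),
                   validate=("python3", "validate.py", "all"),
                   rollback=("git", "checkout", "--", ".")):
    """snapshot -> operate -> validate -> commit, reverting on failure.

    Exit code 0 == valid is an assumption about validate.py; the
    rollback command is illustrative.
    """
    run(snapshot)                      # snapshot before touching files
    operate()                          # Read/Write/Edit/Bash operations
    result = run(validate)
    if result.returncode != 0:
        run(rollback)                  # instant rollback to snapshot
        return False
    run(snapshot)                      # commit the validated change
    return True

# Demonstrated with stand-in commands so the sketch is self-contained.
ok = managed_change(
    operate=lambda: None,
    snapshot=(sys.executable, "-c", "pass"),
    validate=(sys.executable, "-c", "pass"),
    rollback=(sys.executable, "-c", "pass"),
)
```

In practice the operator would read `result.stdout` on failure, fix the reported issue, and re-run validation before falling back to a Git revert.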

Why File System, Not API

Traditional frameworks expose control through Python APIs, requiring the controlling LLM to generate syntactically correct code, manage imports, handle type systems, and reason about runtime state. IAF eliminates all of these failure modes by expressing control as file operations.

Editing a JSON field is far less likely to introduce a syntax error than modifying a Python class definition. Dropping a file into a directory is an atomic operation that either succeeds completely or fails obviously. And every change is automatically versioned through Git, providing a complete audit trail and rollback capability.


Competitive Landscape

| Framework | Control Surface | Agent Creation | Config Change | AI Controllability |
|---|---|---|---|---|
| IAF AI Edition | File system | Copy directory | Edit JSON/MD | Native: designed for AI control |
| CrewAI | Python API + YAML | Write Python class | Edit YAML + code | Partial: YAML editable, orchestration in code |
| LangGraph | Python API | Define graph nodes | Modify Python | Low: graph definition requires code changes |
| AutoGen | Python API + YAML | Configure in code | Edit YAML + code | Partial: YAML config, but in maintenance mode |
| OpenAI Agents SDK | Python API | Write Python | Modify Python | Low: all configuration in code |

The fundamental architectural difference is that IAF treats the file system as its primary control interface, while all other frameworks treat Python code as their primary control interface. This is not a feature that can be added to existing frameworks through a plugin or wrapper. It is a foundational design decision that shapes the entire architecture.


Potential Applications

AI-Managed Website Operations

An external LLM creates specialized agents (content monitor, SEO analyzer, uptime checker, report generator), wires them into scheduled tubes, and uses dispatch for multi-agent analysis of complex issues. The entire setup is created and maintained through file operations.

Autonomous Research Pipelines

A tube-driven pipeline where Agent A collects data, Agent B analyzes it, and a roundtable dispatch synthesizes conclusions. The external LLM monitors logs, adjusts agent prompts based on output quality, and adds new pipeline stages as research evolves.

Self-Evolving Agent Teams

The external LLM reviews agent performance through call_log.jsonl, identifies underperforming agents, modifies their SOUL.md or tool configurations, validates changes, and monitors improvement. Over time, the agent team evolves its capabilities without human intervention in the optimization loop.

Multi-Model Orchestration

Different agents can use different LLM providers and models. The external LLM can assign cost-effective models to simple monitoring tasks and premium models to complex analysis, optimizing the cost-capability tradeoff across the entire system through agent_config.json edits.


Technical Specifications

| Component | Technology |
|---|---|
| Language | Python 3 |
| Web Framework | Flask |
| LLM Integration | OpenRouter API (Gemini, Claude, GPT, Qwen, others) |
| Frontend | Vanilla HTML/JS/CSS (no build tools) |
| Data Storage | JSON (config) + JSONL (logs, history, sessions) |
| Scheduling | croniter |
| Dependencies | flask, requests, croniter (3 packages total) |
| Core Codebase | ~3,300 lines (after AI Operability additions) |
| AI Operability Code | ~300 lines (validate.py, generate_manifest.py, logging, hot reload) |
| License | Apache 2.0 |

File Inventory Summary

| Category | Count | Description |
|---|---|---|
| Python source files | ~55 | Engine, tools, infrastructure, utilities |
| Configuration files | ~8 | JSON configs across all layers |
| Markdown files | ~10+ | Documentation, agent identities, role definitions |
| HTML files | ~5 | Portal, chat, dashboard, dispatch UI, custom pages |
| AI Operability files | 5 | MANIFEST.json, PLAYBOOK.md, validate.py, generate_manifest.py, auto_commit.sh |

Links