IAF v1.0 — Four-Layer Architecture Complete
Agent. Dispatch. Tube. UI. All four layers shipped.
What changed since v0.9
v0.9 shipped the Agent Layer and UI Layer. Dispatch and Scheduler were architecture-only. v1.0 completes the entire four-layer stack:
- Dispatch Layer — Multi-agent collaboration is live. The Roundtable strategy ships as the first complete implementation.
- Tube Layer — The signal topology layer (previously called "Scheduler") has been redesigned from a simple cron trigger into a full declarative wiring system with pluggable triggers and targets.
- Agent Toolset — Four tool files now available: file_tools, shell_tools, tube_tools, dispatch_tools. Agents can trigger tubes, initiate collaboration, and execute shell commands.
- CLI Entry Points — Standardised run_agent.py and run_dispatch.py for subprocess execution.
- Tube Dashboard — Real-time monitoring panel with manual trigger buttons, execution logs, and per-tube status.
- context_files — Agent identity system upgraded from single SOUL.md to configurable multi-file loading.
Four-Layer Architecture
The framework now consists of four fully independent layers. Units within each layer are also isolated from each other. The four layers communicate through Flask APIs and the filesystem — no layer invades another.
| Layer | What It Does | Unit Granularity | Self-Containment |
|---|---|---|---|
| Agent | Complete runtime for a single intelligent agent | One folder = one agent | Copy the folder to deploy |
| Dispatch | Multi-agent collaboration orchestration | One folder = one collaboration strategy | Copy the folder to deploy |
| Tube | Signal topology — wiring between building blocks | One JSON entry = one signal pathway | Edit JSON to activate |
| UI | Human-machine interaction interfaces | One HTML file = one feature page | Drop the file to use |
Agent Layer
The core of IAF: the Fundamental Loop. Each agent is a self-contained runtime (~267 lines of engine code):
- LLM Communication — Message assembly, API calls via OpenRouter, structured response handling
- Tool Executor — Auto-discovery registry, whitelist-based execution, tool call loop
- Context Management — Sliding-window trimming strategy, chat history persistence
- Identity & Knowledge — Configurable context_files loads any .md files as the system prompt
- Skills — Trigger-based dynamic injection of task instructions
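As a concrete illustration of the context_files upgrade, a hypothetical agent config.json might list several identity and knowledge files. Key names and file paths here are assumptions for illustration, not the framework's verbatim schema:

```json
{
  "model": "anthropic/claude-sonnet-4",
  "context_files": [
    "SOUL.md",
    "knowledge/domain.md",
    "knowledge/routing.md"
  ]
}
```

Every listed file is concatenated into the system prompt, so splitting identity, domain knowledge, and routing rules into separate files costs nothing at runtime.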
Five-Layer Message Assembly
| Layer | Source | When Loaded | Purpose |
|---|---|---|---|
| 1 | All files in context_files | Every call | Agent's identity, knowledge, routing |
| 2 | Skill .md files matched by trigger | On keyword match | Scenario-specific task instructions |
| 3 | history.jsonl | Chat mode only | Conversation memory |
| 4 | Current user message | Every call | This turn's input |
| 5 | Trim pass | Every call | Ensure context fits the window |
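The five layers above can be sketched as a single assembly function. This is a simplified illustration under assumed file names (config.json, history.jsonl) and a character-count trim heuristic — not the engine's actual code:

```python
import json
from pathlib import Path

def assemble_messages(agent_dir, user_msg, skills, max_chars=24000):
    """Build the message list in the five documented layers (illustrative sketch)."""
    agent = Path(agent_dir)

    # Layer 1: identity and knowledge from every file listed in context_files
    config = json.loads((agent / "config.json").read_text())
    system = "\n\n".join((agent / f).read_text() for f in config["context_files"])

    # Layer 2: skill files whose trigger keyword appears in the user message
    for skill in skills:  # skills: list of {"trigger": ..., "file": ...}
        if skill["trigger"] in user_msg:
            system += "\n\n" + (agent / "skills" / skill["file"]).read_text()

    # Layer 3: chat history (chat mode only)
    history = []
    hist_file = agent / "history.jsonl"
    if hist_file.exists():
        history = [json.loads(line) for line in hist_file.read_text().splitlines()]

    # Layer 4: the current turn
    messages = [{"role": "system", "content": system}, *history,
                {"role": "user", "content": user_msg}]

    # Layer 5: sliding-window trim -- drop oldest history until it fits
    while sum(len(m["content"]) for m in messages) > max_chars and len(messages) > 2:
        del messages[1]  # never drop the system prompt or the current turn
    return messages
```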
Agent Toolset
| File | Tools | Capability | Default |
|---|---|---|---|
| file_tools.py | read_file, write_file, list_dir | Read/write files, list directories | Yes |
| shell_tools.py | run_shell | Execute terminal commands | No |
| tube_tools.py | trigger_tube, list_tubes, tube_log | Trigger tubes, check status, read logs | No |
| dispatch_tools.py | run_dispatch, list_dispatch_strategies | Initiate multi-agent collaboration | No |
Drop a tool file into agents/xxx/tools/ — auto-discovered, zero code changes.
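The auto-discovery mechanism can be sketched in a few lines of importlib: scan the tools folder, import each module, and register every public function. A hedged illustration of the idea, not the engine's actual registry code:

```python
import importlib.util
import inspect
from pathlib import Path

def discover_tools(tools_dir):
    """Scan a tools/ folder and register every public function found."""
    registry = {}
    for py_file in sorted(Path(tools_dir).glob("*.py")):
        # Import the file by path, without requiring it to be on sys.path
        spec = importlib.util.spec_from_file_location(py_file.stem, py_file)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, fn in inspect.getmembers(module, inspect.isfunction):
            if not name.startswith("_"):
                registry[name] = fn  # tool name -> callable
    return registry
```

Because registration is purely file-driven, dropping a new .py file into the folder is the entire deployment step.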
Dispatch Layer
The Dispatch Layer implements multi-agent collaboration. It calls lib/llm_client.call_llm() directly — it does not go through the Agent engine. Agent folders are treated as "asset libraries + model config", not as executors.
Analogy: Agents are actors, Dispatch is the director. The director has actors read their own persona (SOUL.md) but can also hand them an entirely new script. The performance records on set belong to the director (sessions/), not the actor's personal diary (history.jsonl).
Dispatch Folder Structure
| File | Role |
|---|---|
| dispatch.py | Orchestration logic: round control, agent call order, termination conditions |
| dispatch_base.py | Infrastructure: tool loop, LLM parsing, staging |
| context_injector.py | Context assembly: reads agent files per config |
| session_manager.py | Session CRUD |
| dispatch_config.json | Participating agents, context_files, round limits |
| rules/*.md | Agent role definitions within the collaboration |
| sessions/ | Complete records of collaboration processes |
| *.html | (Optional) Dedicated UI page |
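A minimal sketch of what the round control in dispatch.py might look like. The call_llm stub below stands in for lib/llm_client.call_llm(), and the agent dict shape is an assumption for illustration:

```python
def call_llm(model, messages):
    """Stand-in for lib/llm_client.call_llm(); returns a canned reply here."""
    return f"[{model}] reply to: {messages[-1]['content'][:40]}"

def run_roundtable(agents, topic, rounds=2):
    """Each round, every agent speaks once, seeing the transcript so far."""
    transcript = []
    for r in range(rounds):
        for agent in agents:  # agent: {"name": ..., "model": ..., "persona": ...}
            messages = [
                # Persona comes from the agent's own files (SOUL.md / rules/*.md)
                {"role": "system", "content": agent["persona"]},
                {"role": "user", "content": topic + "\n\n" + "\n".join(transcript)},
            ]
            reply = call_llm(agent["model"], messages)
            transcript.append(f"Round {r + 1} / {agent['name']}: {reply}")
    return transcript  # the director's record: persisted under sessions/
```

Note that the loop calls the LLM client directly, matching the design above: agent folders supply personas and model config, but the Agent engine itself is never invoked.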
Three-Layer Context Isolation
| Context Type | Owned By | Storage Location | Purpose |
|---|---|---|---|
| Agent chat history | Agent | agents/xxx/history.jsonl | User-agent interaction memory |
| Dispatch collaboration | Dispatch | dispatch/xxx/sessions/*.jsonl | Multi-agent collaboration process |
| Per-call context | context_injector | In memory (not persisted) | Assembly result for a single LLM call |
Agents are unaware they participated in a Dispatch. Dispatch never touches an agent's history.jsonl. Isolation is an architectural guarantee.
Tube Layer
Tube is the framework's third dimension — the signal topology layer. It describes signal pathways between building blocks: when a condition is met, trigger a target to execute.
With the same three agents and two dispatches, different Tube configurations produce entirely different system behaviour: serial pipelines, parallel fan-out, feedback loops. The blocks haven't changed; only the wiring has.
Tube Anatomy
| Element | Description |
|---|---|
| Triggers | What condition activates it (pluggable: cron, manual, API, file watch, etc.) |
| Steps | What to do sequentially after activation (Agent, Dispatch, another Tube) |
| Payload | Data passed from trigger source to target (prompt, file path, upstream output) |
tubes.json Example
```json
[
  {
    "id": "doc_analysis_pipeline",
    "enabled": true,
    "triggers": [
      { "type": "cron", "config": { "expr": "30 9 * * *" } },
      { "type": "manual" }
    ],
    "steps": [
      { "type": "agent", "id": "doc_processor", "mode": "batch",
        "payload": { "prompt": "Process and analyse documents" } },
      { "type": "dispatch", "id": "roundtable",
        "payload": { "message": "Brainstorm based on analysis results" } },
      { "type": "tube", "id": "send_report" }
    ]
  }
]
```
Execution Model
- Subprocess isolation — tube_runner only spawns subprocesses, never imports business code
- Serial steps — Next step runs only if previous exits with code 0
- Parallel tubes — Different tubes execute in independent threads, non-blocking
- Duplicate prevention — running_tubes dictionary prevents re-firing the same tube
- Hot reload — tubes.json is re-read every polling cycle, no restart needed
- Fire-and-forget — Triggering does not block the main loop
Tube API Endpoints
| Method | Path | Function |
|---|---|---|
| GET | /api/tubes | List all tubes + real-time status |
| GET | /api/tube/status | Lightweight status query |
| POST | /api/tube/trigger | Manual trigger |
| GET | /api/tube/log | Query logs (supports filtering) |
| GET | /api/tube/log/grouped | Logs grouped by tube |
| DELETE | /api/tube/log | Clear logs |
UI Layer
A browser-based interface following the yellow-pages architecture. Every HTML page is fully self-contained. Zero coupling between pages. No unified SPA shell, no shared router.
| Type | Location | Route |
|---|---|---|
| Yellow pages | index.html | GET / |
| Basic chat | chat.html | GET /chat |
| User pages | pages/*.html | GET /pages/<name> |
| Dispatch UI | dispatch/xxx/*.html | GET /dispatch/<name> |
| Tube dashboard | pages/tube-dashboard.html | GET /pages/tube-dashboard |
The Tube Dashboard provides: tube list with live status, manual trigger buttons, expandable config details and step payloads, real-time scrolling execution logs, and per-tube log clearing. Auto-polls every 10 seconds.
AI Operability
IAF is not just "an AI framework for humans" — it is an AI framework that AI can also operate. For an AI agent to take over and run a framework, it must pass four gates:
| Gate | IAF | Traditional Frameworks |
|---|---|---|
| Understand | ~2000 lines, fits in one context window | 10,000+ lines, requires modular comprehension |
| Modify | Edit JSON and Markdown files | Write Python/SDK code conforming to framework constraints |
| Deploy | python3 chat_server.py — one command | docker compose / kubernetes |
| Monitor | cat tube_log.jsonl — plain text | Dedicated dashboards / monitoring APIs |
Four Orthogonal Extension Dimensions
| Operation | How | Code Changes |
|---|---|---|
| Add new agent | cp -r template/ agents/xxx/ | 0 lines |
| Add tool to agent | Drop .py in agents/xxx/tools/ | 0 lines |
| Add knowledge | Add .md file, update context_files | 0 lines |
| Add skill | Drop .md in agents/xxx/skills/ | 0 lines |
| Change trimming | Swap .py in agents/xxx/context/ | 0 lines |
| Add collaboration | Add folder in dispatch/ | 0 lines |
| Add signal pathway | Add entry in tube/tubes.json | 0 lines |
| Add trigger type | Drop .py in tube/triggers/ | 0 lines |
| Add drive target | Drop .py in tube/targets/ | 0 lines |
| Add LLM provider | Add entry in config.json | 0 lines |
| Add UI page | Drop .html in pages/ | 0 lines |
Status Summary
- ✅ Agent Layer (Fundamental Loop) — complete
- ✅ Dispatch Layer (Roundtable strategy) — complete
- ✅ Tube Layer (Signal Topology) — complete
- ✅ UI Layer (Yellow Pages + Chat + Tube Dashboard) — complete
- ✅ Shared Infrastructure (llm_client + token_utils) — complete
- ✅ Agent Toolset (file / shell / tube / dispatch) — complete
- ✅ CLI Entry Points — standardised
- ✅ Architecture Validation Tests — complete
Tech Stack
Pure Python + HTML. No Node.js, no npm, no frontend build tools.
pip install flask requests croniter — that is the entire dependency list.
Get the Code
```bash
# Clone the repo
git clone https://github.com/IntelligenismCommercialDevelopment-LLC/intelligenism-agent-framework.git
cd intelligenism-agent-framework

# Set up virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install flask requests croniter

# Configure your LLM provider in config.json

# Start the server
python3 chat_server.py

# Open http://localhost:5000
```
Links
- GitHub — intelligenism-agent-framework
- Full Architecture Docs — IAF Architecture Overview
- v0.9 Release Notes — IAF v0.9 — First Public Release
- Website — intelligenism.club
- Theory — intelligenism.org