Intelligenism Agent Framework
A ~550-line Python agent framework built on Intelligenism theory. Each agent owns a complete copy of the engine. Additive complexity. True parallel execution. Folder = product.
Why IAF Exists
IAF is not just another multi-agent framework. It is the first practical implementation of Intelligenism, a theory of how intelligence emerges in organisations, whether human, artificial, or hybrid.
The AI industry has no shortage of agent frameworks (CrewAI, AutoGen, LangGraph, Google ADK, and many more). They all solve the same problem: "how to make multiple agents collaborate on tasks." But they all share the same blind spot: they hardcode a specific collaboration paradigm into the framework itself, then call it "flexible" because you can configure parameters within that paradigm.
IAF starts from a different premise — an epistemological one.
From single neurons to complex neural networks, intelligence exhibits a bottom-up construction pattern. Regardless of whether we look at complex intelligent systems or individual neurons, their boundaries are clear, and individuals maintain a high degree of independence from one another. This means the complexity and intelligence potential of a neural network comes from the connections between individuals, not from making the individuals internally more complex. Therefore, maintaining the independence of individuals or small units, and expressing complexity outside the individual, is the essence of connectionism. — Thinking, Conception and Construction of Intelligenism
This principle directly shapes IAF's architecture: every agent is a complete, independent unit. Complexity lives in the connections between agents (the Dispatch layer), not inside any single agent's code.
Possibility Management vs Certainty Management
Intelligenism begins with an epistemological judgment: humans cannot assert that any theory is absolute truth. Since we cannot assert absolute truth, the way to evaluate a theory should shift toward "fitness" — whether the theory can effectively guide action and produce positive value in a given scenario. — Thinking, Conception and Construction of Intelligenism
2026 is a period of profound transformation in AI. The only thing we can be certain of is that we are in the middle of a revolution; everything else is uncertain. This creates a critical risk: if we bet resources on a framework built on a certainty-based assumption, then when that assumption is falsified, all of the accumulated advantages collapse.
Most multi-agent frameworks practice certainty management. Their developers assume they know which collaboration paradigm works best, so they deeply integrate that paradigm into the framework's core. CrewAI's identity is role-playing sequential execution. AutoGen's identity is multi-round dialogue with consensus convergence. If you could swap these out, these frameworks would lose their reason to exist.
IAF practices possibility management. It assumes that more multi-agent collaboration modes will emerge in the future, and therefore refuses to assert from a position of absolute truth that any one mode is necessarily better.
What this means in practice:
- Certainty management — One shared engine serves all agents. Users are free within the range of parameters the framework designer foresaw. "We're flexible — you can configure many parameters."
- Possibility management — Each agent owns a complete copy of the entire engine. Users can modify any line of code, making any agent fundamentally different from any other — without risking impact on other agents. "You own the full code — you can change anything."
The core difference: traditional frameworks offer freedom inside a fence. IAF makes the fence 550 lines tall — low enough to step over at zero cost.
Core Concepts
Fundamental Loop
The Fundamental Loop is a complete, self-contained agent runtime. Every agent has its own independent copy containing:
- LLM Communication Engine — Assembles message arrays, sends API requests, processes responses
- Tool Executor — Auto-discovers and registers tools, executes LLM-requested tool calls
- Context Manager — History management, context trimming strategies
- Tool Library — Pluggable tool modules (file I/O, search, database, etc.)
- Identity (SOUL.md) — Defines the agent's persona, role, and behavioral rules
- Skills — Task instructions dynamically injected based on trigger rules
Current implementation: ~461 lines of Python for the core loop, plus 89 lines of shared infrastructure. Total: ~550 lines.
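The Tool Executor's auto-discovery could be sketched roughly as follows. This is an illustrative sketch only: the function name `discover_tools` and the "every public function in `tools/` is a tool" convention are assumptions, not the framework's actual API.

```python
# Hypothetical sketch of tool auto-discovery: scan a tools/ directory and
# register every public function found in each module. Names are illustrative.
import importlib.util
from pathlib import Path

def discover_tools(tools_dir: str) -> dict:
    """Scan tools_dir for *.py files and collect public functions as tools."""
    registry = {}
    for path in Path(tools_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name in dir(module):
            fn = getattr(module, name)
            # Skip private names and non-callables (constants, dunders, etc.)
            if callable(fn) and not name.startswith("_"):
                registry[name] = fn
    return registry
```

Under this kind of scheme, "adding capability = dropping a file": no registration code changes, because the registry is rebuilt by scanning the directory.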
Additive Complexity
Shared-engine frameworks have multiplicative complexity: 3 agents × 2 trimming strategies × 4 tool sets = 24 combinations, each with potential interaction bugs. Adding one agent multiplies total complexity rather than adding to it.
IAF has additive complexity: each agent's Fundamental Loop is always ~460 lines. Adding a 4th agent means one more 460-line copy. Each copy evolves independently, affecting nothing else.
True Parallel Execution
Because each Fundamental Loop runs as an independent process:
- No GIL limitation — Each agent is a separate Python process
- No shared memory — No locks, no race conditions, no state pollution
- Natural rate limit distribution — Different agents can call different LLM providers
- Fault isolation — One agent crashes, others keep running
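The process model above can be sketched in a few lines. The `run_agent` entry point is a hypothetical placeholder for an agent's Fundamental Loop, not the framework's real launcher:

```python
# Minimal sketch: one OS process per agent. No shared memory, no locks;
# a crash in one process leaves the others running. run_agent is a
# hypothetical stand-in for the agent's Fundamental Loop.
import multiprocessing as mp

def run_agent(agent_dir: str) -> None:
    # Placeholder: the real loop would load SOUL.md, tools, and context
    # from agent_dir and start serving.
    print(f"agent in {agent_dir} running")

if __name__ == "__main__":
    agents = ["agents/writer", "agents/critic", "agents/researcher"]
    procs = [mp.Process(target=run_agent, args=(a,)) for a in agents]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Because each `Process` is a full OS process, the GIL never serialises the agents' work, and one agent's crash cannot corrupt another's state.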
Four-Layer Architecture
IAF consists of four completely independent layers. Each can start, stop, and scale independently.
Layer 1: Agent Layer
The bottom execution unit. Each agent owns a complete Fundamental Loop copy (engine, tools, context, identity, skills). Agents don't know Dispatch or Scheduler exist.
Layer 2: Dispatch Layer
Multi-agent collaboration orchestration. Each collaboration strategy is an independent folder containing its own orchestration logic, context injector, config, and optional UI page. Dispatch treats agents as data sources (reads their SOUL.md, skills, model config) but calls lib/llm_client.call_llm() directly — zero coupling with agent engines.
Key design principle: In Dispatch mode, an agent is not a complete "self" — it's an engine. Dispatch decides who this agent is, what it knows, and how it works for this specific collaboration.
Context isolation between agent and Dispatch:
- Agent's chat history = its memory with the user. Agent doesn't know Dispatch exists.
- Dispatch's session = the collaboration record. Only Dispatch reads/writes it.
- The two never pollute each other.
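A sketch of how Dispatch might treat an agent as a data source, per the description above. The helper name `build_agent_message` and the exact file layout are assumptions; only the folder names (`SOUL.md`, `skills/`) come from this document:

```python
# Hypothetical dispatch helper: read identity and skills straight from an
# agent's folder, per the dispatch config, without ever invoking the
# agent's own engine. The function name is illustrative.
from pathlib import Path

def build_agent_message(agent_dir: str, read_soul: bool, skills: list[str]) -> str:
    """Assemble a system prompt from an agent folder as dispatch_config dictates."""
    parts = []
    base = Path(agent_dir)
    if read_soul:
        parts.append((base / "SOUL.md").read_text())
    for skill in skills:
        parts.append((base / "skills" / skill).read_text())
    return "\n\n".join(parts)
```

The orchestrator would then pass this assembled prompt to `lib/llm_client.call_llm()` directly, which is what keeps Dispatch at zero coupling with agent engines.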
Layer 3: Scheduler Layer
A unified timer trigger. Scans schedules.json in both agent and dispatch folders, fires tasks by cron rules. Doesn't care whether it's triggering a single-agent task or multi-agent collaboration — it just needs to know "when, call what, with what parameters."
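The Scheduler's scan step might look something like this. The `schedules.json` field names used here ("cron", "call", "args") are assumptions for illustration, not the documented schema:

```python
# Illustrative sketch of the Scheduler's scan: collect schedules.json from
# every agent and dispatch folder. Field names are assumed, not documented.
import json
from pathlib import Path

def load_schedules(root: str) -> list[dict]:
    """Gather every schedules.json under agents/*/ and dispatch/*/."""
    tasks = []
    # Matches e.g. agents/my_agent/schedules.json and dispatch/brainstorm-attack/schedules.json
    for path in Path(root).glob("*/*/schedules.json"):
        tasks.extend(json.loads(path.read_text()))
    return tasks
```

Each collected task would carry exactly the three things the Scheduler needs: when (the cron rule), call what, and with what parameters.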
Layer 4: UI Layer
Yellow-pages index + independent HTML pages. Each page is self-contained, communicates with the backend through APIs. No SPA, no shared UI engine — adding a page means dropping an HTML file into a directory.
Mapping to Intelligenism Theory
| Theory Concept | Architecture Implementation |
|---|---|
| Autonomous unit of an intelligent consortium | Each agent's independent Fundamental Loop |
| Carbon-silicon symbiosis, loose coupling | Process isolation + file system communication |
| Pluggable collaboration paradigms | Independent strategy folders under dispatch/ |
| Deterministic scheduling, no LLM orchestrator | Orchestration is deterministic Python code, not probabilistic LLM judgment |
| Possibility management over certainty management | Full code copies instead of parameterised configuration |
| Value lies in assembly, not in building blocks | The 550-line assembly knowledge is the competitive moat |
Directory Structure
intelligenism-agent-framework/
├── config.json # Global config (provider connections)
├── lib/ # Shared infrastructure (89 lines)
│ ├── llm_client.py # HTTP calls + retry + error classification
│ └── token_utils.py # Token estimation
├── template/ # Fundamental Loop template
│ ├── core/direct_llm.py # Engine core (266 lines)
│ ├── core/tool_executor.py # Tool auto-discovery registry (53 lines)
│ ├── tools/file_tools.py # Default tool set (78 lines)
│ ├── context/sliding_window.py # Default trimming strategy (47 lines)
│ ├── skills/ # Empty (to be filled)
│ ├── SOUL.md # Identity template
│ └── agent_config.json # Agent config template
├── agents/ # Agent instances
│ └── default/ # First agent (copied from template/)
├── dispatch/ # Collaboration strategies
│ └── brainstorm-attack/ # Example strategy
│ ├── dispatch.py # Orchestration logic
│ ├── context_injector.py # Context assembly
│ ├── dispatch_config.json # Config: participating agents, read rules
│ ├── context/ # Dispatch's own trimming strategy
│ ├── sessions/ # Collaboration records
│ └── brainstorm.html # Strategy-specific UI page
├── chat_server.py # Web server / router
├── scheduler.py # Scheduled task trigger
├── index.html # Yellow pages index
├── chat.html # Basic chat interface
├── pages/ # User-created pages
└── tests/test_template.py # Architecture validation tests
Three Orthogonal Extension Dimensions
Any change in one dimension does not affect the other two.
| Operation | Files Involved | Python Code Changes |
|---|---|---|
| Add new agent | cp -r template/ agents/xxx/ | 0 lines |
| Add tool to agent | Drop .py in agents/xxx/tools/ | 0 lines |
| Add skill to agent | Drop .md in agents/xxx/skills/ + trigger rule | 0 lines |
| Change trimming strategy | Swap .py in agents/xxx/context/ | 0 lines |
| Add new collaboration mode | Add folder in dispatch/ | 0 lines |
| Add new LLM provider | Add entry in config.json | 0 lines |
| Add new UI page | Drop .html in pages/ | 0 lines |
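To illustrate the "add tool to agent = drop a .py file" row, here is what such a drop-in module might look like. The file name and the "docstring doubles as the tool description" convention are assumptions:

```python
# agents/my_agent/tools/csv_tools.py -- a hypothetical drop-in tool module.
# Auto-discovery would register its public functions with zero engine changes;
# the conventions shown here are illustrative assumptions.

def read_csv_head(path: str, n: int = 5) -> str:
    """Return the first n lines of a CSV file."""
    with open(path, "r", encoding="utf-8") as f:
        return "".join(line for _, line in zip(range(n), f))
```

Deleting the file removes the capability just as cleanly, which is what makes the three dimensions orthogonal in practice.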
File System as Communication Protocol
Agents communicate through the file system (output files, status files, log files), not through shared memory or function calls.
- Simple implementation — No message queues, no RPC frameworks, no shared memory management
- Naturally debuggable — Intermediate files can be viewed with any text editor
- Future-proof for distribution — Replace the local directory with NFS, S3, or an HTTP API, and agents can run on different servers
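A minimal sketch of this style of communication, assuming a `status.json` per agent folder (the file name and fields are illustrative, not part of the framework's contract):

```python
# Sketch of file-system communication: write-then-rename gives an atomic
# handoff, so a reader never sees a half-written file. The status.json
# layout here is an illustrative assumption.
import json, os

def publish_status(agent_dir: str, status: dict) -> None:
    """Atomically write a status file that other agents or Dispatch can poll."""
    tmp = os.path.join(agent_dir, "status.json.tmp")
    final = os.path.join(agent_dir, "status.json")
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(status, f)
    os.replace(tmp, final)  # atomic rename on POSIX and Windows

def read_status(agent_dir: str) -> dict:
    with open(os.path.join(agent_dir, "status.json"), encoding="utf-8") as f:
        return json.load(f)
```

Every intermediate state is an ordinary file, so debugging is a matter of opening it in a text editor.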
Dispatch in Detail
Context Injector
Each Dispatch strategy has a context_injector.py that selectively reads from agent folders based on dispatch_config.json:
{
"agents": {
"agent_a": {
"read_soul": true,
"read_skills": ["critical_review.md"],
"provider": "from_agent",
"model": "from_agent"
},
"agent_b": {
"read_soul": false,
"read_skills": [],
"provider": "from_agent",
"model": "openrouter/claude-opus"
}
}
}
Setting provider or model to "from_agent" reads the value from the agent's own agent_config.json. You can also override either one directly: the same agent can use different models in different collaboration strategies.
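The "from_agent" fallback rule could be implemented along these lines. The field names follow the JSON example above; the structure of `agent_config.json` is assumed:

```python
# Sketch of the "from_agent" resolution rule: dispatch config values win
# unless they explicitly defer to the agent's own config. The
# agent_config.json field names are assumptions.
import json
from pathlib import Path

def resolve_model(agent_dir: str, cfg: dict) -> tuple:
    """Return (provider, model), reading the agent's own agent_config.json
    for any field the dispatch config sets to "from_agent"."""
    provider, model = cfg["provider"], cfg["model"]
    if "from_agent" in (provider, model):
        own = json.loads((Path(agent_dir) / "agent_config.json").read_text())
        if provider == "from_agent":
            provider = own["provider"]
        if model == "from_agent":
            model = own["model"]
    return provider, model
```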
Data Flow
dispatch.py
├── reads dispatch_config.json (which agents, what to read from each)
├── context_injector.py (selectively reads from agent folders per config)
│ ├── agents/agent_a/SOUL.md ← read or not, config decides
│ ├── agents/agent_a/skills/x.md ← which ones, config decides
│ └── agents/agent_a/agent_config.json ← get model/provider
├── assembles Dispatch's own context (session history, task instructions)
└── calls lib/llm_client.call_llm() directly
Folder = Product
| Product Unit | Delivery Form | User Operation |
|---|---|---|
| Agent | One folder (SOUL + skills + tools + config) | Copy to agents/, restart |
| Dispatch Strategy | One folder (orchestration + injector + config + UI) | Copy to dispatch/, configure participating agents |
| UI Page | One HTML file | Drop in pages/, refresh |
The core difference from traditional SaaS products: what users get is not a black-box service, but a fully disassemblable, learnable, and modifiable blueprint.
Key Design Principles
- Stateless API — The LLM has no memory of previous conversations. Every call sends the full context; the program's job is to build that context.
- LLM decides, code executes — The LLM returns structured JSON requesting tool calls, not executable code. A whitelist ensures safety.
- Auto-discovery — Tools, trimming strategies, agents, and pages are all discovered by scanning directories. Adding capability = dropping a file.
- Layered isolation — Each layer only knows its adjacent layer. chat.html doesn't know the LLM API exists. Flask doesn't know tools exist.
- Dual mode — Chat mode loads history; batch mode uses clean context; dispatch mode lets the collaboration strategy control context.
- Process isolation — Each agent runs in an independent process. No shared memory, no locks, no race conditions.
- Context ownership separation — Chat context belongs to the agent. Dispatch context belongs to the Dispatch strategy folder. The two never pollute each other.
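The "LLM decides, code executes" principle can be sketched as follows. The JSON shape and the `echo` whitelist entry are illustrative assumptions, not the framework's actual protocol:

```python
# Hedged sketch of "LLM decides, code executes": the LLM returns structured
# JSON naming a tool; only whitelisted Python functions ever run, so the
# model can never inject executable code. Names here are illustrative.
import json

TOOL_WHITELIST = {"echo": lambda text: text}  # real tools come from auto-discovery

def execute_tool_call(raw: str) -> str:
    """Parse an LLM tool request like '{"tool": "echo", "args": {"text": "hi"}}'."""
    call = json.loads(raw)
    name = call["tool"]
    if name not in TOOL_WHITELIST:
        raise ValueError(f"tool not whitelisted: {name}")
    return TOOL_WHITELIST[name](**call.get("args", {}))
```

Anything outside the whitelist is rejected before execution, which is what keeps tool use safe even when the model's output is wrong.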
Tech Stack
The entire framework depends only on Python + HTML. No Node.js, no npm, no frontend build tools.
pip install flask requests
That's it.
Current Status: v0.9
- ✅ Fundamental Loop (Agent Layer) — complete and functional
- ✅ UI Layer (Yellow Pages + Chat) — complete and functional
- 🔧 Dispatch Layer — architecture designed, implementation in progress
- 🔧 Scheduler Layer — architecture designed, implementation in progress
Comparison with Existing Frameworks
| Dimension | CrewAI / AutoGen / LangGraph | IAF |
|---|---|---|
| Code size | 10,000+ lines | ~550 lines |
| Engine model | One shared engine | Each agent gets independent engine |
| Complexity growth | Multiplicative | Additive |
| Parallelism | Async pseudo-parallel | Multi-process true parallel |
| Fault isolation | None (one crash = all crash) | Complete isolation |
| Customizability | Within config parameters | Any line of code |
| Dependencies | Many third-party libraries | flask + requests only |
| Agent communication | Shared memory / function calls | File system |
| Collaboration paradigm | Hardcoded in framework | Pluggable, independent folders |
| Distributed potential | Difficult | Natural support |
| Management philosophy | Certainty management | Possibility management |
| Paradigm switch risk | High (tightly coupled) | Low (swap a folder) |
Getting Started
# Clone the repo
git clone https://github.com/IntelligenismCommercialDevelopment-LLC/intelligenism-agent-framework.git
# Install dependencies
pip install flask requests
# Configure your LLM provider in config.json
# Create your first agent
cp -r template/ agents/my_agent/
# Edit agents/my_agent/SOUL.md to define your agent's identity
# Start the server
python chat_server.py
# Open http://localhost:5000
The Bigger Picture
IAF is one component of a larger knowledge production system rooted in Intelligenism theory:
| Layer | Component | What It Does |
|---|---|---|
| Knowledge Expression | v4 Protocol (open standard) | How knowledge is structured and communicated between agents |
| Knowledge Production | IAF (this framework) | How agents produce and collaborate on knowledge |
| Knowledge Retrieval | v4-optimised embedding model | How structured knowledge is searched and retrieved |
| Knowledge Visualisation | LanceDB + Dashboard | How knowledge is viewed and used |
Each layer is independently developed and independently valuable. Together, they form a complete pipeline from raw information to structured, retrievable, actionable knowledge.
Links
- Website — intelligenism.club
- Theory — intelligenism.org