Intelligenism Agent Framework

Why IAF Exists

IAF is not just another multi-agent framework. It is the first practical implementation of Intelligenism — a theoretical framework describing how intelligence emerges in organisations, whether human, artificial, or hybrid.

The AI industry has no shortage of agent frameworks (CrewAI, AutoGen, LangGraph, Google ADK, and many more). They all solve the same problem: "how to make multiple agents collaborate on tasks." But they all share the same blind spot: they hardcode a specific collaboration paradigm into the framework itself, then call it "flexible" because you can configure parameters within that paradigm.

IAF starts from a different premise — an epistemological one.

From single neurons to complex neural networks, intelligence exhibits a bottom-up construction pattern. Regardless of whether we look at complex intelligent systems or individual neurons, their boundaries are clear, and individuals maintain a high degree of independence from one another. This means the complexity and intelligence potential of a neural network comes from the connections between individuals, not from making the individuals internally more complex. Therefore, maintaining the independence of individuals or small units, and expressing complexity outside the individual, is the essence of connectionism. — Thinking, Conception and Construction of Intelligenism

This principle directly shapes IAF's architecture: every agent is a complete, independent unit. Complexity lives in the connections between agents (the Dispatch layer), not inside any single agent's code.


Possibility Management vs Certainty Management

Intelligenism begins with an epistemological judgment: humans cannot assert that any theory is absolute truth. Since we cannot assert absolute truth, the way to evaluate a theory should shift toward "fitness" — whether the theory can effectively guide action and produce positive value in a given scenario. — Thinking, Conception and Construction of Intelligenism

2026 is a period of profound transformation in AI. The only thing we can be certain of is that we are in the middle of a revolution — everything else is uncertain. This creates a critical risk: if we bet resources on a framework built on a certainty-based assumption, all accumulated advantages collapse the moment that assumption is falsified.

Most multi-agent frameworks practice certainty management. Their developers assume they know which collaboration paradigm works best, so they deeply integrate that paradigm into the framework's core. CrewAI's identity is role-playing sequential execution. AutoGen's identity is multi-round dialogue with consensus convergence. If you could swap these out, these frameworks would lose their reason to exist.

IAF practices possibility management. It assumes that more multi-agent collaboration modes will emerge in the future, and therefore refuses to assert from a position of absolute truth that any one mode is necessarily better.

What this means in practice: traditional frameworks offer freedom inside a fence, while IAF's entire fence is only ~550 lines tall — low enough to step over at zero cost.


Core Concepts

Fundamental Loop

The Fundamental Loop is a complete, self-contained agent runtime. Every agent has its own independent copy, containing its engine, its tools, its context-trimming strategy, its identity (SOUL.md), and its skills.

Current implementation: ~461 lines of Python for the core loop, plus 89 lines of shared infrastructure. Total: ~550 lines.
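As a concrete sketch, the shape of a Fundamental Loop can be reduced to a few lines. The helper names below (call_llm, parse_tool_call) are illustrative stubs standing in for IAF's real components, not its actual API; the real engine lives in template/core/direct_llm.py.

```python
# Minimal, self-contained sketch of a Fundamental Loop.
# call_llm and parse_tool_call are stand-ins, not IAF's real API.
from pathlib import Path

def call_llm(messages):
    """Stand-in for lib/llm_client.call_llm: echo the last user message."""
    return "echo: " + messages[-1]["content"]

def parse_tool_call(reply):
    """Stand-in tool-call detector: this stub model never requests tools."""
    return None

def fundamental_loop(agent_dir, user_message):
    # Identity: the agent's SOUL.md becomes the system prompt.
    soul_path = Path(agent_dir) / "SOUL.md"
    soul = soul_path.read_text(encoding="utf-8") if soul_path.exists() else "You are an agent."
    messages = [{"role": "system", "content": soul},
                {"role": "user", "content": user_message}]
    while True:
        reply = call_llm(messages)
        if parse_tool_call(reply) is None:
            return reply  # plain answer ends the loop
        # (tool execution round-trip omitted in this stub)

print(fundamental_loop("agents/default", "hello"))  # → echo: hello
```

Because the whole loop is this small, copying it per agent costs almost nothing — which is what makes full-copy independence practical.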

Additive Complexity

Shared-engine frameworks have multiplicative complexity: 3 agents × 2 trimming strategies × 4 tool sets = 24 combinations, each with potential interaction bugs. Adding one agent multiplies the complexity by a coefficient.

IAF has additive complexity: each agent's Fundamental Loop is always ~460 lines. Adding a 4th agent means one more 460-line copy. Each copy evolves independently, affecting nothing else.

True Parallel Execution

Because each Fundamental Loop runs as an independent process, agents execute in true parallel rather than interleaved async, and a crash in one agent cannot take down any other.


Four-Layer Architecture

IAF consists of four completely independent layers. Each can start, stop, and scale independently.

Layer 1: Agent Layer

The bottom execution unit. Each agent owns a complete Fundamental Loop copy (engine, tools, context, identity, skills). Agents don't know Dispatch or Scheduler exist.

Layer 2: Dispatch Layer

Multi-agent collaboration orchestration. Each collaboration strategy is an independent folder containing its own orchestration logic, context injector, config, and optional UI page. Dispatch treats agents as data sources (reads their SOUL.md, skills, model config) but calls lib/llm_client.call_llm() directly — zero coupling with agent engines.

Key design principle: In Dispatch mode, an agent is not a complete "self" — it's an engine. Dispatch decides who this agent is, what it knows, and how it works for this specific collaboration.

Three-layer context isolation:

Layer 3: Scheduler Layer

A unified timer trigger. Scans schedules.json in both agent and dispatch folders, fires tasks by cron rules. Doesn't care whether it's triggering a single-agent task or multi-agent collaboration — it just needs to know "when, call what, with what parameters."
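The scan-and-fire behaviour can be sketched as below. The schedules.json schema shown (cron/command pairs) and the function names are assumptions for illustration, not IAF's actual format.

```python
# Hedged sketch of the Scheduler layer: scan schedules.json files under
# agents/ and dispatch/, fire any task whose cron rule matches "now".
# The schedule schema here is an assumption, not IAF's real one.
import json
from datetime import datetime
from pathlib import Path

def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*' or a comma-separated list of integers."""
    return field == "*" or value in {int(x) for x in field.split(",")}

def cron_due(rule: str, now: datetime) -> bool:
    """Check a 5-field cron rule (minute hour day month weekday) against now."""
    minute, hour, day, month, weekday = rule.split()
    return (field_matches(minute, now.minute) and field_matches(hour, now.hour)
            and field_matches(day, now.day) and field_matches(month, now.month)
            and field_matches(weekday, now.weekday()))

def scan_and_fire(root: Path, now: datetime) -> list:
    """Collect due tasks from every schedules.json two levels down."""
    due = []
    for schedules in root.glob("*/*/schedules.json"):
        for task in json.loads(schedules.read_text(encoding="utf-8")):
            if cron_due(task["cron"], now):
                due.append(task)  # a real scheduler would invoke task["command"]
    return due
```

Note that the trigger itself is deterministic Python — no LLM decides when things run, which matches the "deterministic scheduling" principle below.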

Layer 4: UI Layer

Yellow-pages index + independent HTML pages. Each page is self-contained, communicates with the backend through APIs. No SPA, no shared UI engine — adding a page means dropping an HTML file into a directory.


Mapping to Intelligenism Theory

| Theory Concept | Architecture Implementation |
| --- | --- |
| Autonomous unit of an intelligent consortium | Each agent's independent Fundamental Loop |
| Carbon-silicon symbiosis, loose coupling | Process isolation + file system communication |
| Pluggable collaboration paradigms | Independent strategy folders under dispatch/ |
| Deterministic scheduling, no LLM orchestrator | Orchestrator is pure Python, not LLM probabilistic judgment |
| Possibility management over certainty management | Full code copies instead of parameterised configuration |
| Value lies in assembly, not in building blocks | The 550-line assembly knowledge is the competitive moat |

Directory Structure

```
intelligenism-agent-framework/
├── config.json                  # Global config (provider connections)
├── lib/                         # Shared infrastructure (89 lines)
│   ├── llm_client.py            # HTTP calls + retry + error classification
│   └── token_utils.py           # Token estimation
├── template/                    # Fundamental Loop template
│   ├── core/direct_llm.py       # Engine core (266 lines)
│   ├── core/tool_executor.py    # Tool auto-discovery registry (53 lines)
│   ├── tools/file_tools.py      # Default tool set (78 lines)
│   ├── context/sliding_window.py # Default trimming strategy (47 lines)
│   ├── skills/                  # Empty (to be filled)
│   ├── SOUL.md                  # Identity template
│   └── agent_config.json        # Agent config template
├── agents/                      # Agent instances
│   └── default/                 # First agent (copied from template/)
├── dispatch/                    # Collaboration strategies
│   └── brainstorm-attack/       # Example strategy
│       ├── dispatch.py          # Orchestration logic
│       ├── context_injector.py  # Context assembly
│       ├── dispatch_config.json # Config: participating agents, read rules
│       ├── context/             # Dispatch's own trimming strategy
│       ├── sessions/            # Collaboration records
│       └── brainstorm.html      # Strategy-specific UI page
├── chat_server.py               # Web server / router
├── scheduler.py                 # Scheduled task trigger
├── index.html                   # Yellow pages index
├── chat.html                    # Basic chat interface
├── pages/                       # User-created pages
└── tests/test_template.py       # Architecture validation tests
```

Three Orthogonal Extension Dimensions

Any change in one dimension does not affect the other two.

| Operation | Files Involved | Python Code Changes |
| --- | --- | --- |
| Add new agent | cp -r template/ agents/xxx/ | 0 lines |
| Add tool to agent | Drop .py in agents/xxx/tools/ | 0 lines |
| Add skill to agent | Drop .md in agents/xxx/skills/ + trigger rule | 0 lines |
| Change trimming strategy | Swap .py in agents/xxx/context/ | 0 lines |
| Add new collaboration mode | Add folder in dispatch/ | 0 lines |
| Add new LLM provider | Add entry in config.json | 0 lines |
| Add new UI page | Drop .html in pages/ | 0 lines |
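The "drop a .py file, zero code changes" row relies on tool auto-discovery. A hedged sketch of how such a registry can work is below; the real implementation is template/core/tool_executor.py, and the names here are illustrative, not IAF's actual code.

```python
# Sketch of tool auto-discovery: import every .py file in an agent's
# tools/ folder and register its public functions by name.
# Illustrative stand-in for template/core/tool_executor.py.
import importlib.util
from pathlib import Path

def discover_tools(tools_dir: str) -> dict:
    """Return a {tool_name: callable} registry built from tools_dir."""
    registry = {}
    for py_file in sorted(Path(tools_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(py_file.stem, py_file)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name in dir(module):
            obj = getattr(module, name)
            # Register public callables only; names starting with "_" are skipped.
            if callable(obj) and not name.startswith("_"):
                registry[name] = obj
    return registry
```

With a registry like this, adding a capability really is a file drop: the engine rescans the folder and the new functions appear with zero edits elsewhere.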

File System as Communication Protocol

Agents communicate through the file system — output files, monitor status files, log files. Not through shared memory or function calls.
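A minimal sketch of this protocol, assuming an illustrative status.json file name and schema (not IAF's actual contract): one process publishes its status atomically, any other process polls the same path.

```python
# File-system communication between agents: write-then-rename publishing
# so a reader never observes a half-written file. File name and schema
# are illustrative assumptions.
import json
import os
from pathlib import Path

def write_status(agent_dir: str, status: dict) -> None:
    """Atomically publish status.json for other processes to read."""
    target = Path(agent_dir) / "status.json"
    tmp = target.with_suffix(".json.tmp")
    tmp.write_text(json.dumps(status), encoding="utf-8")
    os.replace(tmp, target)  # atomic rename on POSIX and Windows

def read_status(agent_dir: str) -> dict:
    """Poll another agent's status; empty dict if nothing published yet."""
    path = Path(agent_dir) / "status.json"
    return json.loads(path.read_text(encoding="utf-8")) if path.exists() else {}
```

Because the medium is plain files, the same protocol works unchanged whether the two agents share a machine or a network mount — which is where the "natural distributed potential" claim comes from.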


Dispatch in Detail

Context Injector

Each Dispatch strategy has a context_injector.py that selectively reads from agent folders based on dispatch_config.json:

```json
{
  "agents": {
    "agent_a": {
      "read_soul": true,
      "read_skills": ["critical_review.md"],
      "provider": "from_agent",
      "model": "from_agent"
    },
    "agent_b": {
      "read_soul": false,
      "read_skills": [],
      "provider": "from_agent",
      "model": "openrouter/claude-opus"
    }
  }
}
```

Setting provider or model to "from_agent" reads the value from the agent's own agent_config.json; a literal value overrides it directly, so the same agent can use different models in different collaboration strategies.
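A hedged sketch of the injector logic, using the key names from the config example above; the function name and return shape are assumptions for illustration, not the real context_injector.py API.

```python
# Sketch of a context injector: selectively assemble one agent's identity
# from its folder according to its per-dispatch config entry.
# inject_context and its return shape are illustrative assumptions.
import json
from pathlib import Path

def inject_context(agents_root: str, agent_name: str, agent_cfg: dict) -> dict:
    agent_dir = Path(agents_root) / agent_name
    parts = []
    if agent_cfg.get("read_soul"):
        parts.append((agent_dir / "SOUL.md").read_text(encoding="utf-8"))
    for skill in agent_cfg.get("read_skills", []):
        parts.append((agent_dir / "skills" / skill).read_text(encoding="utf-8"))
    # "from_agent" defers to the agent's own config; anything else overrides.
    own = json.loads((agent_dir / "agent_config.json").read_text(encoding="utf-8"))
    def resolve(key):
        return own[key] if agent_cfg.get(key) == "from_agent" else agent_cfg[key]
    return {"system_prompt": "\n\n".join(parts),
            "provider": resolve("provider"),
            "model": resolve("model")}
```

The key point is what is absent: the injector never imports the agent's engine. It reads files, so Dispatch stays fully decoupled from agent code.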

Data Flow

```
dispatch.py
 ├── reads dispatch_config.json (which agents, what to read from each)
 ├── context_injector.py (selectively reads from agent folders per config)
 │   ├── agents/agent_a/SOUL.md ← read or not, config decides
 │   ├── agents/agent_a/skills/x.md ← which ones, config decides
 │   └── agents/agent_a/agent_config.json ← get model/provider
 ├── assembles Dispatch's own context (session history, task instructions)
 └── calls lib/llm_client.call_llm() directly
```
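The data flow above can be sketched as a turn-taking orchestration loop. The call_llm signature below is a stub standing in for lib/llm_client.call_llm, and run_round is an illustrative strategy, not the shipped brainstorm-attack code.

```python
# Self-contained sketch of a dispatch orchestration loop: each agent's
# pre-assembled context is used to call the LLM client directly, in turns,
# over a shared session. call_llm is a stub, not IAF's real client.
def call_llm(system_prompt: str, messages: list, model: str) -> str:
    """Stub: a real dispatch would POST to the configured provider."""
    return f"[{model}] reply to: {messages[-1]['content']}"

def run_round(contexts: dict, task: str, rounds: int = 2) -> list:
    """Alternate turns between agents; each sees the shared session history."""
    session = [{"role": "user", "content": task}]
    transcript = []
    for _ in range(rounds):
        for name, ctx in contexts.items():
            reply = call_llm(ctx["system_prompt"], session, ctx["model"])
            session.append({"role": "user", "content": f"{name}: {reply}"})
            transcript.append((name, reply))
    return transcript

# Two illustrative agent contexts, as a context injector might assemble them.
contexts = {"agent_a": {"system_prompt": "Generate ideas.", "model": "model-a"},
            "agent_b": {"system_prompt": "Attack ideas.", "model": "model-b"}}
log = run_round(contexts, "Design a logo", rounds=1)
```

The orchestrator is deterministic Python; the only probabilistic component is each LLM call, never the control flow itself.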

Folder = Product

| Product Unit | Delivery Form | User Operation |
| --- | --- | --- |
| Agent | One folder (SOUL + skills + tools + config) | Copy to agents/, restart |
| Dispatch Strategy | One folder (orchestration + injector + config + UI) | Copy to dispatch/, configure participating agents |
| UI Page | One HTML file | Drop in pages/, refresh |

The core difference from traditional SaaS products: what users get is not a black-box service, but a fully disassemblable, learnable, and modifiable blueprint.


Key Design Principles


Tech Stack

The entire framework depends only on Python + HTML. No Node.js, no npm, no frontend build tools.

```shell
pip install flask requests
```

That's it.


Current Status: v0.9


Comparison with Existing Frameworks

| Dimension | CrewAI / AutoGen / LangGraph | IAF |
| --- | --- | --- |
| Code size | 10,000+ lines | ~550 lines |
| Engine model | One shared engine | Each agent gets independent engine |
| Complexity growth | Multiplicative | Additive |
| Parallelism | Async pseudo-parallel | Multi-process true parallel |
| Fault isolation | None (one crash = all crash) | Complete isolation |
| Customizability | Within config parameters | Any line of code |
| Dependencies | Many third-party libraries | flask + requests only |
| Agent communication | Shared memory / function calls | File system |
| Collaboration paradigm | Hardcoded in framework | Pluggable, independent folders |
| Distributed potential | Difficult | Natural support |
| Management philosophy | Certainty management | Possibility management |
| Paradigm switch risk | High (tightly coupled) | Low (swap a folder) |

Getting Started

```shell
# Clone the repo
git clone https://github.com/IntelligenismCommercialDevelopment-LLC/intelligenism-agent-framework.git

# Install dependencies
pip install flask requests

# Configure your LLM provider in config.json

# Create your first agent
cp -r template/ agents/my_agent/

# Edit agents/my_agent/SOUL.md to define your agent's identity

# Start the server
python chat_server.py

# Open http://localhost:5000
```

The Bigger Picture

IAF is one component of a larger knowledge production system rooted in Intelligenism theory:

| Layer | Component | What It Does |
| --- | --- | --- |
| Knowledge Expression | v4 Protocol (open standard) | How knowledge is structured and communicated between agents |
| Knowledge Production | IAF (this framework) | How agents produce and collaborate on knowledge |
| Knowledge Retrieval | v4-optimised embedding model | How structured knowledge is searched and retrieved |
| Knowledge Visualisation | LanceDB + Dashboard | How knowledge is viewed and used |

Each layer is independently developed and independently valuable. Together, they form a complete pipeline from raw information to structured, retrievable, actionable knowledge.


Links