
A Tiny Multi-Agent Experiment That Explains Multi-Agent Systems

Framework used

  • Lightweight Python + LangChain setup
  • langchain-openai with a shared ChatOpenAI model (gpt-4o-mini)
  • Role-specific system prompts for each agent
  • A simple coordinator in app.py to run the pipeline
  • Repository: simplemultiagentsystem
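The setup above can be sketched roughly as follows. The prompt wording and helper names here are my own illustrations, not copied from the repository; the key idea is one shared model serving several role-specific system prompts.

```python
# Role-specific system prompts. One shared model instance serves every role;
# only the system prompt changes. Prompt text is illustrative.
ROLE_PROMPTS = {
    "researcher": "You are a research agent. Produce concise bullet notes on the question.",
    "writer": "You are a writer agent. Turn the notes into a clear, readable explanation.",
    "editor": "You are an editor agent. Critique the draft; suggest improvements but do not rewrite it.",
}

def build_messages(role: str, user_content: str) -> list[tuple[str, str]]:
    """Pair a role's system prompt with the content it should act on.

    The (role, content) tuple list is a message format LangChain chat
    models accept via .invoke().
    """
    return [("system", ROLE_PROMPTS[role]), ("human", user_content)]

# With langchain-openai installed and OPENAI_API_KEY set, the shared model
# would be created once and reused across all roles, e.g.:
#   from langchain_openai import ChatOpenAI
#   model = ChatOpenAI(model="gpt-4o-mini")
#   notes = model.invoke(build_messages("researcher", question)).content
```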

The system architecture
The system runs as a simple sequential pipeline:

Pipeline Execution Graph

  1. Input: User Question (artifact: raw query)
  2. Research Agent (artifact: concise notes)
  3. Writer Agent, Draft (artifact: initial explanation)
  4. Editor Agent (artifact: critique + revision requests)
  5. Writer Agent, Revision (artifact: improved explanation)
  6. Coordinator (artifact: final answer)

The Coordinator manages the workflow by:

  • Passing outputs between agents
  • Controlling execution order
  • Printing the final results

This keeps each agent simple, while the pipeline stays clear and easy to follow.
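In code, the coordinator boils down to a handful of sequential calls. Here is a minimal sketch with the agents as plain callables; the function name and signature are mine, and the actual app.py may differ:

```python
from typing import Callable

# An "agent" here is just a function from text to text. In the real system
# each one would wrap an LLM call with a role-specific system prompt.
Agent = Callable[[str], str]

def run_pipeline(question: str, researcher: Agent, writer: Agent, editor: Agent) -> str:
    """Coordinator: pass each agent's output to the next, in a fixed order."""
    notes = researcher(question)      # concise research notes
    draft = writer(notes)             # initial explanation
    feedback = editor(draft)          # critique + revision requests
    # The writer revises its own draft using the editor's feedback.
    revised = writer(f"{draft}\n\nEditor feedback:\n{feedback}")
    return revised                    # coordinator returns the final answer
```

Because the agents are just callables, the pipeline can be exercised with stubs before any LLM is wired in, which makes the orchestration itself easy to test.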

Why multi-agent systems matter
Single-agent AI can do a lot: answer questions, summarize, and write code. But when a task gets more complex, it often helps to split the work up.

That is the core idea behind multi-agent systems: instead of one "do everything" prompt, you assign roles. Each role focuses on one part of the problem, and the pieces are combined into a better final result.

In this post, we walk through a small, runnable experiment that shows how a simple group of agents can collaborate to create a clearer explanation than a single prompt alone.

This is the first post in a series. I will start here, build out a simple multi-agent system, and slowly add capabilities to make it more robust. Along the way I will introduce security considerations. The goal is both personal experimentation and a ride-along for those interested in multi-agent systems and security.

What is a multi-agent system?
A multi-agent system (MAS) is an AI setup in which multiple agents collaborate to solve a problem. Each agent has:

  • A specific role
  • A focused responsibility
  • A clear handoff to the next agent

Instead of one model trying to do everything at once, work is distributed across specialized roles. A quick analogy is a content team:

  • Researcher: gathers facts
  • Writer: produces a draft
  • Editor: reviews and improves it

A multi-agent system works the same way, only the team members are agents.

The goal of this experiment
Use a small multi-agent system to answer one question: "What is a multi-agent system?"

To keep it educational and focused, the experiment uses these constraints:

  • Minimal architecture
  • The same LLM powers all agents
  • No tools
  • No memory
  • No RAG
  • No web search

This isolates the real lesson: orchestration plus role specialization.

Agent roles (what each one does)

  • Research Agent: produces concise bullet notes about the question to give the writer useful context
  • Writer Agent: turns notes into a readable explanation (draft, then revision based on editor feedback)
  • Editor Agent: reviews the draft and suggests improvements; critiques only and does not rewrite
  • Coordinator: orchestrates the process and ensures execution order, visibility, and logging

Making collaboration visible
To make teamwork easy to understand, the program prints each stage:

  • USER QUESTION
  • RESEARCH NOTES
  • DRAFT EXPLANATION
  • EDITOR FEEDBACK
  • REVISED EXPLANATION
  • FINAL ANSWER

This turns the system into a learning tool. Instead of a black box, you can watch the explanation improve step-by-step.
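A tiny helper is enough to make each stage visible. This is one way to do it (the helper is my own sketch, not from the repository):

```python
def print_stage(title: str, content: str) -> None:
    """Print a banner for a pipeline stage, followed by its artifact."""
    banner = "=" * 60
    print(f"\n{banner}\n{title}\n{banner}\n{content}")

# The coordinator would call this after each step, e.g.:
#   print_stage("RESEARCH NOTES", notes)
#   print_stage("DRAFT EXPLANATION", draft)
```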

Logging and traceability
The experiment produces two helpful logs:

  • Detailed interaction log (.log): prompts, agent responses, and handoff messages
  • Execution trace (_trace.md): a clean, step-by-step record of the pipeline

These logs are useful for debugging, inspecting behavior, and visualizing how information moves through the system.
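The two outputs can be produced with the standard library alone. A minimal sketch, assuming a file-naming scheme of my own (the repository's actual scheme may differ):

```python
import logging
from pathlib import Path

def setup_logs(run_name: str, out_dir: str = ".") -> tuple[logging.Logger, Path]:
    """Create the detailed .log logger and the _trace.md file for one run."""
    out = Path(out_dir)
    logger = logging.getLogger(run_name)
    logger.setLevel(logging.DEBUG)
    handler = logging.FileHandler(out / f"{run_name}.log", mode="w")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

    trace = out / f"{run_name}_trace.md"
    trace.write_text(f"# Execution trace: {run_name}\n\n")
    return logger, trace

def record_step(logger: logging.Logger, trace: Path, stage: str, artifact: str) -> None:
    """Write the full artifact to the .log and append a clean step to the trace."""
    logger.debug("stage=%s artifact=%s", stage, artifact)
    with trace.open("a") as f:
        f.write(f"## {stage}\n\n{artifact}\n\n")
```

The coordinator would call record_step once per pipeline stage, giving a verbose .log for debugging and a readable _trace.md for review.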

Lessons from the experiment

  • Specialization improves clarity: roles create more structured output
  • Review loops improve quality: the editor-to-writer revision pass boosts the final explanation
  • Multi-agent value shows up fast: benefits appear without complicated infrastructure

Closing thoughts
Multi-agent systems are not about making AI more complicated. They are about structured collaboration. Even a small pipeline like this demonstrates the key advantage: divide the work, specialize roles, and improve results through review.

If you want to try it yourself, modify the code and ask new questions. You may be surprised how far a simple agent workflow can go.