Agent Development Experience Review: A Full Walkthrough from Creation to Deployment

Focus topic 1: automatic knowledge-base summarization + automatic prompt generation + agent development and debugging + MCP service integration + multi-agent collaboration


Contents

  1. Project Background and Goals
  2. Overall Architecture Design
  3. UML Modeling
  4. Multi-Agent Collaboration Flow
  5. Project File Structure
  6. Core Module Source Code
  7. Deployment and Debugging Demo
  8. Summary and Outlook

1. Project Background and Goals

In enterprise knowledge-management scenarios, users commonly face these pain points:

  • Massive document sets (PDF, Word, web pages) are hard to distill into their core content quickly;
  • Hand-written prompts are slow to produce and inconsistent in quality;
  • A single agent struggles to cover the full "understand, generate, verify" pipeline.

To address this, we build an MCP-driven multi-agent system that delivers:

  • Automatic knowledge-base summary generation
  • Semantics-based automatic prompt generation
  • Three-agent collaboration: Summarizer → Prompter → Validator
  • A pipeline that is debuggable, traceable, and context-consistent end to end


2. Overall Architecture Design

[Architecture diagram] Pipeline: User Uploads Docs → Knowledge Ingestion → Vector Store → Summarizer Agent → Prompter Agent → Validator Agent → Final Output. The agents exchange MCP request/response messages through the MCP Broker, share a ctx-id issued by the Context Manager, and announce their capabilities to the Capability Registry via capability_announce messages.

Core components:

| Component | Responsibility |
| --- | --- |
| Knowledge Ingestion | Parses PDF/HTML/TXT, extracts the text, and embeds it into vectors |
| Summarizer Agent | Calls an LLM to generate document summaries (e.g. "This paper proposes a novel quantum error-correction code…") |
| Prompter Agent | Auto-generates high-quality prompts from the summary (e.g. "Explain this quantum error-correction mechanism in a popular-science style") |
| Validator Agent | Checks that a prompt is clear, unambiguous, and executable |
| MCP Broker | WebSocket-based message relay (supports broadcast and point-to-point) |
| Context Manager | Ensures all three steps share the same context_id |

3. UML Modeling

3.1 Class Diagram

[Class diagram] Key classes and their public methods:

  • User: +upload_documents(files: List[str])
  • KnowledgeIngestor: +ingest(file_path: str) : str, +embed_and_store(text: str) : None
  • SummarizerAgent: +handle_task(task, args, ctx_id) : str
  • PrompterAgent: +handle_task(task, args, ctx_id) : str
  • ValidatorAgent: +handle_task(task, args, ctx_id) : dict
  • MCPBroker: +publish(topic, message), +subscribe(topic, callback)
  • ContextManager: +create_context(metadata) : str
  • VectorStore, MCPMessage: shown without method details

Relations: each of the three agents sends MCPMessage objects through the broker, and the ContextManager provides the ctx_id they share.
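The message.py envelope itself never appears in the source listings; a minimal sketch of what MCPMessage could look like, assuming a dataclass with JSON round-tripping and the field names the agent code uses (sender, recipient, msg_type, payload, context_id, parent_message_id, message_id), is:

```python
import json
import uuid
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional

@dataclass
class MCPMessage:
    """Hypothetical sketch of mcp/message.py; field names follow the agent code."""
    sender: str
    recipient: str
    msg_type: str                      # "request" | "response"
    payload: Dict[str, Any]
    context_id: str
    parent_message_id: Optional[str] = None
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_json(self) -> str:
        # Serialize msg_type under the wire key "type", matching the JSON examples.
        d = asdict(self)
        d["type"] = d.pop("msg_type")
        return json.dumps(d, ensure_ascii=False)

    @classmethod
    def from_json(cls, raw: str) -> "MCPMessage":
        d = json.loads(raw)
        d["msg_type"] = d.pop("type")
        return cls(**d)
```

A round trip (from_json(to_json(m))) preserves every field, so the broker can relay raw JSON without understanding the payload.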

3.2 Sequence Diagram (Collaboration Flow)

[Sequence diagram] Participants: User, Ingestor, Summarizer, Prompter, Validator, MCPBroker. Message order:

  1. User → Ingestor: upload("quantum.pdf")
  2. Ingestor: extract text + embed
  3. Ingestor → Summarizer: trigger summarization (ctx=CTX1)
  4. Summarizer → MCPBroker: send_request(Prompter, "generate_prompt", {summary: "..."}, ctx=CTX1)
  5. MCPBroker → Prompter: deliver message
  6. Prompter → MCPBroker: send_request(Validator, "validate_prompt", {prompt: "..."}, ctx=CTX1)
  7. MCPBroker → Validator: deliver message
  8. Validator → Prompter: response({valid: true, suggestion: null})
  9. MCPBroker forwards the response; final_output = {summary, prompt} is returned to the User

4. Multi-Agent Collaboration Flow (MCP Message Flow)

  1. Initialize the context

    ctx_id = context_manager.create_context({"doc": "quantum.pdf"})

  2. The Summarizer issues a request

    {
      "type": "request",
      "sender": "summarizer",
      "recipient": "prompter",
      "context_id": "CTX1",
      "payload": {
        "task": "generate_prompt",
        "args": {"summary": "This paper proposes..."}
      }
    }

  3. The Prompter delegates validation

    {
      "type": "request",
      "sender": "prompter",
      "recipient": "validator",
      "context_id": "CTX1",
      "payload": {
        "task": "validate",
        "args": {"prompt": "Please explain..."}
      }
    }

  4. The Validator returns the result

    {
      "type": "response",
      "sender": "validator",
      "recipient": "prompter",
      "context_id": "CTX1",
      "payload": {
        "result": {"valid": true, "score": 0.95}
      }
    }
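mcp/context.py is referenced throughout but never listed. Assuming it only needs to mint IDs and track per-context metadata, a minimal in-memory sketch of ContextManager (the create_context name comes from the class diagram; the storage layout here is invented for illustration) could be:

```python
import time
import uuid
from typing import Any, Dict

class ContextManager:
    """In-memory registry mapping context_id -> metadata (illustrative sketch)."""

    def __init__(self) -> None:
        self._contexts: Dict[str, Dict[str, Any]] = {}

    def create_context(self, metadata: Dict[str, Any]) -> str:
        # Mint a short unique id like "CTX-1a2b3c4d" and remember the metadata.
        ctx_id = f"CTX-{uuid.uuid4().hex[:8]}"
        self._contexts[ctx_id] = {
            "metadata": metadata,
            "created_at": time.time(),
            "messages": [],          # message_ids appended as the flow progresses
        }
        return ctx_id

    def append_message(self, ctx_id: str, message_id: str) -> None:
        self._contexts[ctx_id]["messages"].append(message_id)

    def get(self, ctx_id: str) -> Dict[str, Any]:
        return self._contexts[ctx_id]
```

Because every request and response carries this ctx_id, the full Summarizer → Prompter → Validator chain can be replayed from one context record.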
    

5. Project File Structure

mcp-knowledge-agent/
├── README.md
├── pyproject.toml
├── src/
│   └── mcp_knowledge/
│       ├── __init__.py
│       ├── ingestor.py             # document parsing and vectorization
│       ├── agents/
│       │   ├── __init__.py
│       │   ├── base_agent.py       # abstract base class for agents
│       │   ├── summarizer.py
│       │   ├── prompter.py
│       │   └── validator.py
│       ├── mcp/
│       │   ├── message.py
│       │   ├── client.py           # WebSocket client
│       │   ├── broker.py           # simple broker (for development)
│       │   └── context.py
│       └── main.py                 # entry point
├── examples/
│   └── quantum_computing.pdf
└── tests/
    └── test_agents.py
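The tree lists pyproject.toml without showing it. A minimal sketch consistent with the imports used in the sources below (the version pins and build backend are placeholder assumptions) might be:

```toml
[project]
name = "mcp-knowledge-agent"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "llama-index-core",
    "llama-index-llms-openai",
    "websockets",
]

[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"
```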

6. Core Module Source Code

For brevity, only the key logic is shown; a full version could add message signing, retries, logging, and so on.

src/mcp_knowledge/agents/base_agent.py

from abc import ABC, abstractmethod
from typing import Dict, Any
from ..mcp.message import MCPMessage
from ..mcp.client import MCPClient

class BaseAgent(ABC):
    def __init__(self, agent_id: str, client: MCPClient):
        self.agent_id = agent_id
        self.client = client
        self.client.subscribe(f"mcp/request/{self.agent_id}", self._on_request)

    def _on_request(self, raw_msg: str):
        msg = MCPMessage.from_json(raw_msg)
        result = self.handle_task(msg.payload["task"], msg.payload["args"], msg.context_id)
        response = MCPMessage(
            sender=self.agent_id,
            recipient=msg.sender,
            msg_type="response",
            payload={"result": result},
            context_id=msg.context_id,
            parent_message_id=msg.message_id
        )
        self.client.publish(f"mcp/response/{msg.sender}", response.to_json())

    @abstractmethod
    def handle_task(self, task: str, args: Dict[str, Any], context_id: str) -> Any:
        pass

src/mcp_knowledge/agents/summarizer.py

from typing import Any, Dict

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

from .base_agent import BaseAgent
from ..mcp.client import MCPClient
from ..mcp.message import MCPMessage

class SummarizerAgent(BaseAgent):
    def __init__(self, agent_id: str, client: MCPClient, llm):
        super().__init__(agent_id, client)
        self.llm = llm

    def handle_task(self, task: str, args: Dict[str, Any], context_id: str) -> str:
        if task == "summarize_document":
            file_path = args["file_path"]
            docs = SimpleDirectoryReader(input_files=[file_path]).load_data()
            index = VectorStoreIndex.from_documents(docs)
            query_engine = index.as_query_engine()
            summary = query_engine.query("Summarize the core contribution of this document in one paragraph.").response
            # Delegate prompt generation to the Prompter
            prompt_req = MCPMessage(
                sender=self.agent_id,
                recipient="prompter",
                msg_type="request",
                payload={"task": "generate_prompt", "args": {"summary": summary}},
                context_id=context_id
            )
            self.client.publish("mcp/request/prompter", prompt_req.to_json())
            return summary  # optional: return the intermediate result
        else:
            raise ValueError(f"Unknown task: {task}")

src/mcp_knowledge/agents/prompter.py

from typing import Any, Dict

from .base_agent import BaseAgent
from ..mcp.client import MCPClient
from ..mcp.message import MCPMessage

class PrompterAgent(BaseAgent):
    def __init__(self, agent_id: str, client: MCPClient, llm):
        super().__init__(agent_id, client)
        self.llm = llm

    def handle_task(self, task: str, args: Dict[str, Any], context_id: str) -> Dict[str, Any]:
        if task == "generate_prompt":
            summary = args["summary"]
            prompt_template = f"""
            Based on the technical summary below, generate a clear, specific, and
            executable prompt for guiding an LLM to produce an explanation aimed
            at a non-expert audience:
            Summary: {summary}
            Requirements: 1. include a role setup; 2. specify the output format; 3. avoid jargon.
            """
            generated_prompt = self.llm.complete(prompt_template).text.strip()

            # Delegate validation to the Validator
            validate_req = MCPMessage(
                sender=self.agent_id,
                recipient="validator",
                msg_type="request",
                payload={"task": "validate", "args": {"prompt": generated_prompt}},
                context_id=context_id
            )
            self.client.publish("mcp/request/validator", validate_req.to_json())
            return {"prompt": generated_prompt}
        else:
            raise ValueError(f"Unknown task: {task}")

src/mcp_knowledge/agents/validator.py

import json
from typing import Any, Dict

from .base_agent import BaseAgent
from ..mcp.client import MCPClient

class ValidatorAgent(BaseAgent):
    def __init__(self, agent_id: str, client: MCPClient, llm):
        super().__init__(agent_id, client)
        self.llm = llm

    def handle_task(self, task: str, args: Dict[str, Any], context_id: str) -> Dict[str, Any]:
        if task == "validate":
            prompt = args["prompt"]
            validation_prompt = f"""
            Evaluate whether the following prompt is clear, unambiguous, and executable.
            Prompt: {prompt}
            Answer in JSON format: {{"valid": true/false, "reason": "..."}}
            """
            resp = self.llm.complete(validation_prompt).text
            try:
                result = json.loads(resp)
                return result
            except:
                return {"valid": False, "reason": "解析失败"}
        else:
            raise ValueError(f"Unknown task: {task}")

src/mcp_knowledge/main.py

from mcp_knowledge.mcp.broker import run_broker_in_thread
from mcp_knowledge.mcp.client import MCPClient
from mcp_knowledge.agents.summarizer import SummarizerAgent
from mcp_knowledge.agents.prompter import PrompterAgent
from mcp_knowledge.agents.validator import ValidatorAgent
from llama_index.llms.openai import OpenAI

def main():
    # Start a local broker (development mode)
    broker_thread = run_broker_in_thread("localhost", 8765)

    llm = OpenAI(model="gpt-4o")

    client_sum = MCPClient("ws://localhost:8765")
    client_pro = MCPClient("ws://localhost:8765")
    client_val = MCPClient("ws://localhost:8765")

    summarizer = SummarizerAgent("summarizer", client_sum, llm)
    prompter = PrompterAgent("prompter", client_pro, llm)
    validator = ValidatorAgent("validator", client_val, llm)

    # Simulate a user upload
    from mcp_knowledge.ingestor import KnowledgeIngestor
    from mcp_knowledge.mcp.context import ContextManager
    ingestor = KnowledgeIngestor()
    ingestor.ingest("examples/quantum_computing.pdf")
    ctx_id = ContextManager().create_context({"doc": "examples/quantum_computing.pdf"})
    # Trigger the Summarizer
    summarizer.handle_task("summarize_document", {"file_path": "examples/quantum_computing.pdf"}, ctx_id)

    input("Press Enter to exit...")
    broker_thread.join()

if __name__ == "__main__":
    main()

Note: mcp/client.py and mcp/broker.py can be implemented as a simple pub/sub layer on top of the websockets library.
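The broker's core is just topic-based fan-out. Stripped of the WebSocket transport, the pub/sub logic that broker.py would wrap can be sketched in-process like this (LocalBroker is an illustrative name, not from the source):

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class LocalBroker:
    """In-process pub/sub core; a real broker.py would expose this over websockets."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        # Register a callback for a topic such as "mcp/request/prompter".
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        # Fan the raw JSON message out to every subscriber of the topic.
        for cb in list(self._subscribers[topic]):
            cb(message)
```

Point-to-point delivery falls out of the topic scheme: each agent subscribes only to mcp/request/<agent_id>, so publishing to that topic reaches exactly one agent, while a shared topic gives broadcast.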


7. Deployment and Debugging Demo

Debugging tips:

  • Context tracing: every log line carries the context_id, making cross-agent issues easy to trace.
  • MCP message log: the broker records every inbound and outbound message.
  • Unit tests: exercise each agent's handle_task in isolation with a mocked LLM.
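That mocking approach can be sketched as follows. The block redefines a trimmed-down ValidatorAgent inline so it runs standalone (a real tests/test_agents.py would import it from mcp_knowledge.agents.validator instead) and stubs both the MCP client and the LLM with unittest.mock.MagicMock:

```python
import json
from unittest.mock import MagicMock

class ValidatorAgent:
    """Trimmed copy of the article's ValidatorAgent, without the BaseAgent wiring."""

    def __init__(self, agent_id, client, llm):
        self.agent_id = agent_id
        self.client = client
        self.llm = llm

    def handle_task(self, task, args, context_id):
        if task == "validate":
            resp = self.llm.complete(f"Evaluate this prompt: {args['prompt']}").text
            try:
                return json.loads(resp)
            except json.JSONDecodeError:
                return {"valid": False, "reason": "failed to parse LLM output"}
        raise ValueError(f"Unknown task: {task}")

# Stub out the transport and the model: no broker process, no API calls.
fake_client = MagicMock()
fake_llm = MagicMock()
fake_llm.complete.return_value.text = '{"valid": true, "reason": "clear"}'

agent = ValidatorAgent("validator", fake_client, fake_llm)
result = agent.handle_task("validate", {"prompt": "Explain QEC simply."}, "CTX1")
assert result == {"valid": True, "reason": "clear"}
```

The same pattern covers the failure path: point `fake_llm.complete.return_value.text` at a non-JSON string and assert that the agent degrades to `{"valid": False, ...}` instead of raising.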

Deployment suggestions:

  • Production: replace the local broker with Redis Pub/Sub or RabbitMQ.
  • Security: enable MCP message signing plus TLS.
  • Observability: integrate OpenTelemetry to trace cross-agent call chains.

8. Summary and Outlook

This project demonstrates, end to end:

  • Automatic knowledge-base summary generation
  • Automatic prompt generation and validation
  • Three-agent collaboration over MCP
  • Guaranteed context consistency

Future directions:

  • Support MCP over gRPC for higher throughput;
  • Introduce an agent marketplace to discover external capabilities dynamically;
  • Build MCP DevTools: visualized message flows and context graphs.

✅ This walkthrough provides a full-stack implementation from theory to code that developers can run, extend, and deploy directly.
