Getting Started with the Agno Agent Framework
Agno is a Python framework for building multi-agent systems; it is high-performance and flexible, and supports five agent levels ranging from basic tool calling to complex workflows. This article introduces Agno's main features, including Tools, Agentic RAG, Teams, and Workflows, and provides code examples of simple tool calling and knowledge-base construction (with ChromaDB and LanceDB). It also surveys other open-source agent frameworks (such as OpenManus and MetaGPT) and common data-retrieval tools (such as Tavily, DuckDuckGo, and SearXNG).
1 Introduction
Since Manus released its agent platform, agent development has accelerated rapidly, and MCP has gained momentum as well. In response, many excellent teams have open-sourced agent projects, for example Agno, OpenManus, MetaGPT, OWL, and DeerFlow.
Agno is a Python framework with a high degree of flexibility, similar to LlamaIndex and LangChain. The difference is that LlamaIndex and LangChain lean toward RAG, while Agno leans toward agents and offers very high performance.
OpenManus, MetaGPT, and OWL are all multi-agent frameworks. OpenManus was open-sourced by a few experts only three hours after Manus launched, which is quite impressive. DeerFlow is ByteDance's open-source multi-agent project based on the Deep Research approach. These projects come pre-packaged, so building a system tailored to your own needs on top of them can be cumbersome.
The open-source projects above can pull internet data through engines such as Tavily, DuckDuckGo, and SearXNG. Tavily is a tool built specifically for agent retrieval and reasoning needs, and it also offers crawling, though it charges beyond a certain request quota. DuckDuckGo is free but carries ads. SearXNG is a free metasearch engine, but it comes with higher development costs.
Search engines
Tavily
https://www.tavily.com/
DuckDuckGo
https://github.com/duckduckgo/duckduckgo
SearXNG
https://docs.searxng.org/
AI crawling tools
Agents depend heavily on AI crawling tools; two excellent ones are introduced here.
Firecrawl is a crawler framework with advanced scraping, crawling, and data-extraction capabilities.
https://github.com/mendableai/firecrawl
Crawl4AI is a modern asynchronous web-crawling framework designed for AI applications.
https://github.com/unclecode/crawl4ai
Fetching arXiv data
https://github.com/lukasschwab/arxiv.py
Agent frameworks
Agno
# Agno documentation
https://docs.agno.com/introduction
# Agno on GitHub
https://github.com/agno-agi/agno
OpenManus
https://github.com/FoundationAgents/OpenManus
MetaGPT
https://github.com/FoundationAgents/MetaGPT
OWL
# OWL on GitHub
https://github.com/camel-ai/owl
# Eigent on GitHub; Eigent is a desktop agent worth a look
https://github.com/eigent-ai/eigent
DeerFlow
https://github.com/bytedance/deer-flow
Deep Research
https://github.com/zilliztech/deep-searcher
2 Agno Features
2.1 Agno Capabilities
Agno is a Python framework for building multi-agent systems with shared memory, knowledge, and reasoning. Agno distinguishes five levels of agents:
Level 1: agents with tools and instructions;
Level 2: agents with knowledge and storage;
Level 3: agents with memory and reasoning;
Level 4: agent teams with reasoning and collaboration;
Level 5: agentic workflows with state and decision-making;
2.2 Main Features
Agno's main features include Tools, Agentic RAG, Agents, Teams, and Workflows.
Tools let an agent interact with the outside world. If the model is served with vLLM, note that the server must be started with the following flags:
--enable-auto-tool-choice
--tool-call-parser hermes
Agentic RAG combines and refines ReAct (Reason + Act) and RAG.
Teams is a key feature: it focuses on multi-agent collaboration.
Workflows focuses on agentic pipelines.
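To make the Agentic RAG idea concrete before the Agno examples below, here is a minimal, framework-free sketch of the retrieve-then-reason loop. Everything in it (the toy document list, the keyword-overlap retriever, the stubbed prompt string) is illustrative only and is not Agno's API:

```python
from typing import List, Tuple

# Toy knowledge base: (id, text) pairs standing in for a vector store.
DOCS: List[Tuple[str, str]] = [
    ("1", "Agno is a Python framework for building multi-agent systems."),
    ("2", "RAG retrieves documents and feeds them to the model as context."),
]

def retrieve(query: str, docs: List[Tuple[str, str]], k: int = 1) -> List[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda doc: len(words & set(doc[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in ranked[:k]]

def agentic_rag(query: str) -> str:
    # Act: the agent first searches its knowledge base.
    context = retrieve(query, DOCS)
    # Reason: the retrieved context is combined with the query into a
    # prompt for the model (stubbed here as a formatted string).
    return f"Answer '{query}' using context: {context[0]}"

print(agentic_rag("What is the Agno framework?"))
```

In a real agentic setup the model itself decides when to call the retriever as a tool, possibly multiple times, instead of the fixed retrieve-then-answer order shown here.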
3 Code Examples
Agno is convenient to use: its interface is friendlier than LlamaIndex's and LangChain's, and its performance is high.
3.1 Simple Tool Calling
from agno.agent import Agent
from agno.models.openai import OpenAILike
from agno.tools.reasoning import ReasoningTools

# Call a self-hosted model endpoint
model = OpenAILike(
    id="XXX",
    api_key="XXX",
    base_url="XXX"
)

# Build the agent
agent = Agent(
    # Set the model
    model=model,
    # Set the tools the agent can call
    tools=[
        ReasoningTools(add_instructions=True)
    ],
    # Set the instructions
    instructions=[
        "Use tables to display data",
        "Only output the report, no other text",
    ],
    # Set the output format
    markdown=True
)

# Run
agent.print_response(
    "Write a report on NVDA",
    # Stream the output
    stream=True,
    # Show the full reasoning
    show_full_reasoning=True,
    # Stream intermediate reasoning steps
    stream_intermediate_steps=True
)
3.2 Using Agentic RAG
There are many ways to load data; note that an embedding model must be configured. Knowledge can also be initialized in several ways; see the official docs.
Custom embedding model
from typing import List, Tuple, Optional, Dict
from agno.embedder.base import Embedder
from sentence_transformers import SentenceTransformer

# Saved as custom_embedding.py; the examples below import it from there
class EmbeddingCustom(Embedder):
    def __init__(self):
        # Load a local sentence-transformers model
        self.embedding_model = SentenceTransformer(
            model_name_or_path="E:/model/all-MiniLM-L6-v2"
        )

    def get_embedding_and_usage(self, text: str) -> Tuple[List[float], Optional[Dict]]:
        # Agno expects an (embedding, usage) pair; no usage info is tracked here
        return self.get_embedding(text), None

    def get_embedding(self, text: str) -> List[float]:
        return self.embedding_model.encode(text).tolist()
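Since the class above depends on a locally downloaded model, a quick way to sanity-check code that consumes an embedder is a stand-in with the same two-method contract (get_embedding / get_embedding_and_usage). The stand-in below is purely hypothetical and deliberately tiny:

```python
from typing import Dict, List, Optional, Tuple

class FakeEmbedder:
    """Hypothetical stand-in mimicking the embedder interface used above,
    so downstream code can be exercised without loading a real model."""

    def get_embedding(self, text: str) -> List[float]:
        # Deterministic toy "embedding": normalized codes of the first 8 chars.
        return [ord(c) / 255.0 for c in text[:8]]

    def get_embedding_and_usage(self, text: str) -> Tuple[List[float], Optional[Dict]]:
        # Same (embedding, usage) shape as EmbeddingCustom; no usage tracked.
        return self.get_embedding(text), None

vector, usage = FakeEmbedder().get_embedding_and_usage("agno")
print(len(vector), usage)  # 4 None
```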
Using ChromaDB
from typing import List
from agno.agent import Agent
from agno.document import Document
from agno.knowledge import AgentKnowledge
from agno.models.openai import OpenAILike
from agno.tools.reasoning import ReasoningTools
from agno.vectordb.chroma import ChromaDb
from custom_embedding import EmbeddingCustom

# Build the knowledge base
knowledge_base = AgentKnowledge(
    # Number of documents returned per retrieval (default is 5)
    num_documents=10,
    # Set the vector store
    vector_db=ChromaDb(
        collection="recipes1",
        embedder=EmbeddingCustom()
    )
)

# Build the document list
# !! Note: document IDs in the list must be unique
documents: List[Document] = list()
documents.append(
    Document(
        content="data1",
        meta_data={"id": "1"},
        id="1"
    )
)
documents.append(
    Document(
        content="data2",
        meta_data={"id": "2"},
        id="2"
    )
)
knowledge_base.load_documents(documents)

# Build the model
model = OpenAILike(
    id="XXX",
    api_key="XXX",
    base_url="XXX"
)

# Build the agent
agent = Agent(
    # Set the model
    model=model,
    # Set the knowledge base
    knowledge=knowledge_base,
    # Let the agent search the knowledge base
    search_knowledge=True,
    # Add tools
    tools=[ReasoningTools(add_instructions=True)],
    # Add instructions
    instructions=[
        "Include sources in your response.",
        "Always search your knowledge before answering the question.",
        "Only include the output in your response. No other text.",
    ],
    markdown=True
)

# Run
if __name__ == "__main__":
    agent.print_response(
        "What are Agents?",
        stream=True,
        show_full_reasoning=True,
        stream_intermediate_steps=True,
    )
Using LanceDB
ChromaDB only supports vector search, while LanceDB supports vector, keyword, and hybrid search.
from typing import List
from agno.agent import Agent
from agno.document import Document
from agno.knowledge import AgentKnowledge
from agno.models.openai import OpenAILike
from agno.tools.reasoning import ReasoningTools
from agno.vectordb.lancedb import LanceDb, SearchType
from custom_embedding import EmbeddingCustom

# Build the knowledge base
knowledge_base = AgentKnowledge(
    # Number of documents returned per retrieval (default is 5)
    num_documents=10,
    # Set the vector store
    vector_db=LanceDb(
        # Set the table name
        table_name="recipes",
        uri="/tmp/lancedb",
        # Set the search type to hybrid (vector + keyword)
        search_type=SearchType.hybrid,
        embedder=EmbeddingCustom()
    )
)

# Build the document list
# !! Note: document IDs in the list must be unique
documents: List[Document] = list()
documents.append(
    Document(
        content="data1",
        meta_data={"id": "1"},
        id="1"
    )
)
documents.append(
    Document(
        content="data2",
        meta_data={"id": "2"},
        id="2"
    )
)
knowledge_base.load_documents(documents)

# Build the model
model = OpenAILike(
    id="XXX",
    api_key="XXX",
    base_url="XXX"
)

# Build the agent
agent = Agent(
    # Set the model
    model=model,
    # Set the knowledge base
    knowledge=knowledge_base,
    # Let the agent search the knowledge base
    search_knowledge=True,
    # Add tools
    tools=[ReasoningTools(add_instructions=True)],
    # Add instructions
    instructions=[
        "Include sources in your response.",
        "Always search your knowledge before answering the question.",
        "Only include the output in your response. No other text.",
    ],
    markdown=True
)

# Run
if __name__ == "__main__":
    agent.print_response(
        "What are Agents?",
        stream=True,
        show_full_reasoning=True,
        stream_intermediate_steps=True,
    )
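To illustrate what the hybrid search mode combines, here is a conceptual pure-Python sketch that blends a vector-similarity score with a keyword-overlap score. This is only the idea; LanceDB's real hybrid search uses its own scoring and reranking, not this formula:

```python
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity, with a guard for zero-norm vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, text: str) -> float:
    """Fraction of query words that appear in the document text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_rank(query: str, query_vec: List[float],
                docs: Dict[str, dict], alpha: float = 0.5) -> List[str]:
    """Blend vector and keyword scores; alpha weights the vector side."""
    scored = []
    for doc_id, doc in docs.items():
        score = (alpha * cosine(query_vec, doc["vec"])
                 + (1 - alpha) * keyword_score(query, doc["text"]))
        scored.append((score, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

docs = {
    "a": {"text": "vector search with embeddings", "vec": [1.0, 0.0]},
    "b": {"text": "keyword search with BM25", "vec": [0.0, 1.0]},
}
print(hybrid_rank("keyword search", [1.0, 0.0], docs))  # ['a', 'b']
```

With alpha=1.0 the ranking degenerates to pure vector search, and with alpha=0.0 to pure keyword search.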
3.3 Teams
from agno.models.openai import OpenAILike
from agno.team import Team
from agno.agent import Agent

model = OpenAILike(
    id="XXX",
    api_key="XXX",
    base_url="XXX"
)

team = Team(
    model=model,
    members=[
        Agent(model=model, name="Agent 1", role="You answer questions in English"),
        Agent(model=model, name="Agent 2", role="You answer questions in Chinese")
    ]
)

# Run
if __name__ == "__main__":
    team.print_response(
        "What are Agents?",
        stream=True,
        show_full_reasoning=True,
        stream_intermediate_steps=True,
    )
3.4 Workflows
from typing import Iterator
from agno.agent import Agent, RunResponse
from agno.models.openai import OpenAILike
from agno.utils.log import logger
from agno.utils.pprint import pprint_run_response
from agno.workflow import Workflow

class CacheWorkflow(Workflow):
    # Set the description
    description: str = "A workflow that caches previous outputs"
    # Build the model
    model = OpenAILike(
        id="moonshot-v1-8k",
        api_key="XXX",
        base_url="https://api.moonshot.cn/v1"
    )
    # Set the agent
    agent = Agent(model=model)

    # Execute the workflow's task
    def run(self, message: str) -> Iterator[RunResponse]:
        logger.info(f"Checking cache for '{message}'")
        # Check if the output is already cached
        if self.session_state.get(message):
            logger.info(f"Cache hit for '{message}'")
            yield RunResponse(run_id=self.run_id, content=self.session_state.get(message))
            return
        logger.info(f"Cache miss for '{message}'")
        # Run the agent and yield the response
        yield from self.agent.run(message, stream=True)
        # Cache the output after the response has been yielded
        self.session_state[message] = self.agent.run_response.content

if __name__ == "__main__":
    workflow = CacheWorkflow()
    # Run the workflow (takes ~1s)
    response: Iterator[RunResponse] = workflow.run(message="Tell me a joke.")
    # Print the response
    pprint_run_response(response, markdown=True, show_time=True)
    # Run the workflow again (immediate, thanks to caching)
    response = workflow.run(message="Tell me a joke.")
    # Print the cached response
    pprint_run_response(response, markdown=True, show_time=True)