1. Which Agent SDK Should You Choose?

When you build your own agent, you can either target an underlying SDK such as the OpenAI SDK or the Anthropic SDK, or go with a higher-level abstraction such as the Vercel AI SDK or Pydantic AI. The choice we made a while back was to adopt the Vercel AI SDK, but only its provider abstractions, and to basically drive the agent loop ourselves[1]. At this point we[2] would not make that choice again. There is absolutely nothing wrong with the Vercel AI SDK, but when you are trying to build an agent, two things happen that we originally didn’t anticipate:

The first is that the differences between models are significant enough that you will need to build your own agent abstraction. We have not found any of these SDKs to provide the right abstraction for an agent. I think this is partly because, despite the basic agent design being just a loop, there are subtle differences depending on the tools you provide. These differences affect how easy or hard it is to find the right abstraction (cache control, different requirements for reinforcement, tool prompts, provider-side tools, etc.). Because the right abstraction is not yet clear, using the original SDKs from the dedicated platforms keeps you fully in control. With some of these higher-level SDKs you have to build on top of their existing abstractions, which might not be the ones you actually want in the end.

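For concreteness, this is roughly what “driving the loop yourself” against the Anthropic Python SDK looks like. A minimal sketch only: the model name, system prompt, tool definition, and `run_tool` dispatcher are illustrative placeholders, not our actual setup.

```python
import anthropic

client = anthropic.Anthropic()

TOOLS = [{
    "name": "execute_code",                       # illustrative tool
    "description": "Run Python code in the sandbox and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"code": {"type": "string"}},
        "required": ["code"],
    },
}]

def run_tool(name: str, args: dict) -> str:
    ...  # dispatch to your own tool implementations

def agent_loop(messages: list) -> list:
    while True:
        response = client.messages.create(
            model="claude-sonnet-4-5",            # illustrative model name
            max_tokens=4096,
            system="You are a coding agent.",     # illustrative system prompt
            tools=TOOLS,
            messages=messages,
        )
        # Keep the assistant turn (text + tool_use blocks) in the history.
        messages.append({"role": "assistant", "content": response.content})
        if response.stop_reason != "tool_use":
            return messages                       # the loop is done
        # Answer every tool_use block with a tool_result in one user turn.
        results = [{
            "type": "tool_result",
            "tool_use_id": block.id,
            "content": run_tool(block.name, block.input),
        } for block in response.content if block.type == "tool_use"]
        messages.append({"role": "user", "content": results})
```

Everything interesting (caching, reinforcement, output handling) ends up being decisions about what goes into `messages` on each pass, which is exactly where a premade abstraction gets in the way.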

We also found it incredibly challenging to work with the Vercel SDK when it comes to dealing with provider-side tools. The attempted unification of messaging formats doesn’t quite work. For instance, the web search tool from Anthropic routinely destroys the message history with the Vercel SDK, and we haven’t yet fully figured out the cause. Also, in Anthropic’s case, cache management is much easier when targeting their SDK directly instead of the Vercel one. The error messages when you get things wrong are much clearer.

This might change, but right now we would probably not use an abstraction when building an agent, at least until things have settled down a bit. The benefits do not yet outweigh the costs for us.

Someone else might have figured it out. If you’re reading this and think I’m wrong, please drop me a mail. I want to learn.

2. Caching Lessons

The different platforms have very different approaches to caching. A lot has been said about this already, but Anthropic makes you pay for caching and makes you manage cache points explicitly, and that really changes the way you interact with it at the agent-engineering level. I initially found the manual management pretty dumb. Why doesn’t the platform do this for me? But I’ve fully come around and now vastly prefer explicit cache management. It makes costs and cache utilization much more predictable.

Explicit caching allows you to do certain things that are much harder otherwise. For instance, you can split off a conversation and have it run in two different directions simultaneously. You also have the opportunity to do context editing. The optimal strategy here is unclear, but you clearly have a lot more control, and I really like having that control. It also makes it much easier to understand the cost of the underlying agent. You can assume much more about how well your cache will be utilized, whereas with other platforms we found it to be hit and miss.

The way we do caching in the agent with Anthropic is pretty straightforward. One cache point sits after the system prompt. Two more cache points are placed at the beginning of the conversation, with the last one moving up with the tail of the conversation as it grows. Beyond that, there is some optimization you can do along the way.

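As a sketch, explicit breakpoints can be expressed with `cache_control` markers in the Anthropic Messages API. The placement policy below is simplified and the helper is illustrative, not our exact implementation; since the API only allows a handful of breakpoints per request, markers from earlier iterations are cleared before new ones are set.

```python
def with_cache_points(system_text: str, messages: list) -> tuple[list, list]:
    # Breakpoint 1: the static system prompt.
    system = [{
        "type": "text",
        "text": system_text,
        "cache_control": {"type": "ephemeral"},
    }]
    # Clear stale markers, then re-mark the last two user turns so the cached
    # prefix moves up with the tail of the conversation.
    user_tail_blocks = []
    for message in messages:
        content = message.get("content")
        if not isinstance(content, list) or not content:
            continue
        for block in content:
            if isinstance(block, dict):
                block.pop("cache_control", None)
        if message["role"] == "user" and isinstance(content[-1], dict):
            user_tail_blocks.append(content[-1])
    for block in user_tail_blocks[-2:]:
        block["cache_control"] = {"type": "ephemeral"}
    return system, messages
```

The returned `system` list and the mutated `messages` then go straight into `client.messages.create(...)`.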

Because the system prompt and the tool selection now have to be mostly static, we feed in a dynamic message later in the conversation to provide information such as the current time. Otherwise, this would trash the cache. We also leverage reinforcement during the loop much more.

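A small sketch of that pattern; the tag format and wording are illustrative.

```python
from datetime import datetime, timezone

# Keep the system prompt and tool list static (and therefore cacheable) and
# feed volatile information in as an ordinary message instead.
def dynamic_context_message() -> dict:
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return {
        "role": "user",
        "content": f"<context>\ncurrent_time: {now}\n</context>",
    }
```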

3. Reinforcement in the Agent Loop

Every time the agent runs a tool, you have the opportunity not just to return the data the tool produces, but also to feed more information back into the loop. For instance, you can remind the agent of the overall objective and the status of individual tasks. When a tool fails, you can also provide hints about how the call might succeed next time. Another use of reinforcement is to inform the system about state changes that happened in the background. If your agent uses parallel processing, you can inject information after every tool call whenever that background state has changed and is relevant for completing the task.

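As a sketch, the tool result handed back to the model can be assembled roughly like this; the helper and its inputs are illustrative, not our exact implementation.

```python
# Wrap the raw tool output with reinforcement before it goes back into the
# loop: a hint on failure, the overall objective, per-task status, and any
# relevant background state changes.
def build_tool_result(raw_output: str, error: str | None,
                      objective: str, task_status: dict[str, str],
                      background_events: list[str]) -> str:
    parts = []
    if error:
        parts.append(f"The tool failed: {error}")
        parts.append("Hint: re-check the arguments and try a smaller step.")
    else:
        parts.append(raw_output)
    parts.append(f"Reminder of the overall objective: {objective}")
    parts.append("Task status: " +
                 "; ".join(f"{name}: {state}" for name, state in task_status.items()))
    if background_events:
        parts.append("State changes since the last tool call: " +
                     "; ".join(background_events))
    return "\n\n".join(parts)
```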

Sometimes it’s enough for the agent to self-reinforce. In Claude Code, for instance, the todo write tool is a self-reinforcement tool. All it does is take the list of tasks the agent thinks it should do and echo back what came in. It’s basically just an echo tool; it really doesn’t do anything else. But that is enough to drive the agent forward better than if the tasks and subtasks had only been given at the beginning of the context and too much has happened in the meantime.

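A sketch of such an echo tool in the same spirit; the schema and wording are illustrative, not Claude Code’s actual implementation.

```python
TODO_TOOL = {
    "name": "todo_write",
    "description": "Record the current task list. Call this whenever the plan changes.",
    "input_schema": {
        "type": "object",
        "properties": {
            "todos": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["todos"],
    },
}

def run_todo_write(args: dict) -> str:
    # Pure echo: the value is that the plan re-enters the context near the tail.
    return "Current todo list:\n" + "\n".join(f"- {item}" for item in args["todos"])
```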

We also use reinforcement to inform the system when the environment changes during execution in a way that’s problematic for the agent. For instance, if our agent fails and retries from a certain step forward but the recovery operates on broken data, we inject a message informing it that it might want to back off a couple of steps and redo an earlier step.

4. Isolating Failures

If you expect a lot of failures during code execution, there is an opportunity to hide those failures from the context. This can happen in two ways. One is to run tasks that might require iteration individually. You would run them in a subagent until they succeed and only report back the success, plus maybe a brief summary of approaches that did not work. It is helpful for an agent to learn about what did not work in a subtask because it can then feed that information into the next task to hopefully steer away from those failures.

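As a sketch, the outer agent only ever sees the condensed result of such an iteration-heavy task; `run_subagent` and `summarize_failures` are assumed helpers, not real APIs.

```python
# Run an iteration-heavy task in a subagent with its own context, retrying
# until it succeeds, and report back only the outcome plus a brief summary
# of the approaches that did not work.
def run_isolated_task(task: str, max_attempts: int = 5) -> str:
    failures: list[str] = []
    for _ in range(max_attempts):
        result = run_subagent(task, prior_failures=failures)   # assumed helper
        if result.success:
            summary = summarize_failures(failures)             # brief, not full transcripts
            return (f"Task succeeded: {result.summary}\n"
                    f"Approaches that did not work: {summary or 'none'}")
        failures.append(result.error)
    return "Task failed. Approaches tried: " + summarize_failures(failures)
```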

The second option doesn’t exist in all agents or foundation models, but with Anthropic you can do context editing. So far we haven’t had a lot of success with context editing, but we believe it’s an interesting thing we would love to explore more. We would also love to learn if people have success with it. What is interesting about context editing is that you should be able to preserve tokens for further down the iteration loop. You can take out of the context certain failures that didn’t drive towards successful completion of the loop, but only negatively affected certain attempts during execution. But as with the point I made earlier: it is still useful for the agent to understand what didn’t work; it just may not need the full state and full output of every failure.

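Anthropic’s context editing happens on the provider side; a rough client-side approximation of the same idea, shown here only as a sketch and not the provider feature, is to blank out the bulky output of failed attempts before the next call while keeping a short note of what failed.

```python
# Replace the tool_result content of known-failed attempts with a stub so the
# agent still sees that they failed without carrying their full output along.
def prune_failed_attempts(messages: list, failed_tool_use_ids: set) -> list:
    pruned = []
    for message in messages:
        content = message.get("content")
        if not isinstance(content, list):
            pruned.append(message)
            continue
        blocks = []
        for block in content:
            if (isinstance(block, dict)
                    and block.get("type") == "tool_result"
                    and block.get("tool_use_id") in failed_tool_use_ids):
                blocks.append({**block, "content": "[output of failed attempt removed]"})
            else:
                blocks.append(block)
        pruned.append({**message, "content": blocks})
    return pruned
```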

Unfortunately, context editing will automatically invalidate caches. There is really no way around it. So it can be unclear when the benefit of doing it outweighs the extra cost of trashing the cache.

5. Subagents / Subinference

As I mentioned a couple of times on this blog already, most of our agents are based on code execution and code generation. That really requires a common place for the agent to store data. Our choice is a file system (in our case a virtual file system), but it requires the different tools to be able to access it. This is particularly important if you have something like a subagent or subinference.

You should try to build an agent that doesn’t have dead ends. A dead end is where a task can only continue executing within the sub-tool that you built. For instance, you might build a tool that generates an image but can only feed that image into one specific follow-up tool. That’s a problem, because you might then want to put those images into a zip archive using the code execution tool. So there needs to be a system that allows the image generation tool to write the image to the same place where the code execution tool can read it. In essence, that’s a file system.

Obviously it has to go the other way around too. You might want to use the code execution tool to unpack a zip archive, then go back to inference to describe all the images, so that the next step can go back to code execution, and so forth. The file system is the mechanism we use for that. But it does require tools to be built so that they can take paths into the virtual file system as inputs.

So basically, an ExecuteCode tool would have access to the same file system as the RunInference tool, which could take a path to a file on that same virtual file system.

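A sketch of that shape; the class, tool functions, and helpers (`render_image`, `run_in_sandbox`, `describe_with_model`) are all illustrative placeholders, not our actual interfaces.

```python
# Every tool reads and writes the same virtual file system, addressed by path,
# so no tool's output becomes a dead end.
class VirtualFileSystem:
    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}

    def write(self, path: str, data: bytes) -> None:
        self._files[path] = data

    def read(self, path: str) -> bytes:
        return self._files[path]

def generate_image_tool(vfs: VirtualFileSystem, prompt: str, out_path: str) -> str:
    vfs.write(out_path, render_image(prompt))              # assumed helper
    return f"Image written to {out_path}"

def execute_code_tool(vfs: VirtualFileSystem, code: str) -> str:
    return run_in_sandbox(code, filesystem=vfs)            # can zip the images, etc.

def run_inference_tool(vfs: VirtualFileSystem, path: str, question: str) -> str:
    return describe_with_model(vfs.read(path), question)   # e.g. describe an image
```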

6. Output Tool Usage

One interesting thing about how we structured our agent is that it does not represent a chat session. It will eventually communicate something to the user or the outside world, but all the messages it sends in between are usually not revealed. The question is: how does it create that message? We have one tool for this, the output tool, which the agent uses explicitly to communicate with the human. We then use a prompt to instruct it when to use that tool. In our case the output tool sends an email.

But that turns out to pose a few other challenges. One is that it’s surprisingly hard to steer the wording and tone of that output tool compared to just using the main agent loop’s text output as the mechanism to talk to the user. I cannot say why this is, but I think it’s probably related to how these models are trained.

One attempt that didn’t work well was to have the output tool run another quick LLM like Gemini 2.5 Flash to adjust the tone to our preference. But this increases latency and actually reduces the quality of the output. In part, I think the model just doesn’t word things correctly and the subtool doesn’t have sufficient context. Providing more slices of the main agentic context into the subtool makes it expensive and also didn’t fully solve the problem. It also sometimes reveals information in the final output that we didn’t want to be there, like the steps that led to the end result.

Another problem with an output tool is that sometimes the agent just doesn’t call it. One way we force this is to remember whether the output tool was called. If the loop ends without the output tool, we inject a reinforcement message to encourage it to use the output tool.

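A sketch of that check at the end of the loop; the wording of the reinforcement message is illustrative.

```python
# If the model wants to stop but never called the output tool, inject a
# reinforcement message and keep the loop going instead of ending the run.
def should_continue(response, messages: list, output_tool_called: bool) -> bool:
    if response.stop_reason == "tool_use":
        return True
    if not output_tool_called:
        messages.append({
            "role": "user",
            "content": "You have not communicated anything to the user yet. "
                       "Call the output tool now with your final result.",
        })
        return True
    return False
```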

7. Model Choice

Overall our choices for models haven’t dramatically changed so far. I think Haiku and Sonnet are still the best tool callers available, so they make for excellent choices in the agent loop. They are also somewhat transparent with regard to what their RL looks like. The other obvious choices are the Gemini models. We so far haven’t found a ton of success with the GPT family of models for the main loop.

For the individual sub-tools, some of which also require inference, our current choice is Gemini 2.5 when we need to summarize large documents or work with PDFs and the like. It is also a pretty good model for extracting information from images, in particular because the Sonnet family of models likes to run into safety filters, which can be annoying.

There’s also the fairly obvious realization that token cost alone doesn’t really define how expensive an agent is. A better tool caller will do the job in fewer tokens. There are models available today that are cheaper per token than Sonnet, but they are not necessarily cheaper in a loop.

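A toy calculation, with made-up numbers purely to illustrate the point (none of these are real prices or measurements): a lower per-token price loses quickly if the model needs more loop iterations, each of which re-sends a large context.

```python
# Hypothetical figures only: total cost is roughly price * tokens per step * steps.
def run_cost(price_per_mtok: float, tokens_per_step: int, steps: int) -> float:
    return price_per_mtok * tokens_per_step * steps / 1_000_000

strong_caller = run_cost(price_per_mtok=3.0, tokens_per_step=20_000, steps=12)
cheap_caller  = run_cost(price_per_mtok=1.0, tokens_per_step=20_000, steps=45)
print(strong_caller, cheap_caller)   # 0.72 vs 0.90 -- the "cheaper" model costs more
```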

But all things considered, not that much has changed in the last couple of weeks.

8. Testing and Evals

We find testing and evals to be the hardest problem here. This is not entirely surprising, but the agentic nature makes it even harder. Unlike prompts, you cannot just do the evals in some external system because there’s too much you need to feed into it. This means you want to do evals based on observability data or instrumenting your actual test runs. So far none of the solutions we have tried have convinced us that they found the right approach here. Unfortunately, I have to report that at the moment we haven’t found something that really makes us happy. I hope we’re going to find a solution for this because it is becoming an increasingly frustrating aspect of building an agent.

9. Coding Agent Update

As for my experience with coding agents, not all that much has changed. The main new development is that I’m trialing Amp[3] more. In case you’re curious why: it’s not that it’s objectively a better agent than what I’m using, but I really quite like the way they think about agents, judging from what they post. The interaction of the different subagents, like the Oracle, with the main loop is beautifully done, and not many other harnesses do this today. It’s also a good way for me to validate how different agent designs work. Amp, similar to Claude Code, really feels like a product built by people who also use their own tool. I do not feel that this is true of every other agent in the industry.

10. Quick Things I Read and Found

That’s just a random assortment of things that I feel might also be worth sharing:

  • What if you don’t need MCP at all?[4]: Mario argues that many MCP servers are overengineered and include large toolsets that consume lots of context. He proposes a minimalist approach for browser-agent use-cases by relying on simple CLI tools (e.g., start, navigate, evaluate JS, screenshot) executed via Bash, which keeps token usage small and workflows flexible. I built a Claude/Amp Skill out of it[5].
  • The fate of “small” open source[6]: The author argues that the age of tiny, single-purpose open-source libraries is coming to an end, largely because built-in platform APIs and AI tools can now generate simple utilities on demand. Thank fucking god[7].
  • Tmux is love[8]. There is no article that goes with it, but the TLDR is that Tmux is great. If you have anything that remotely looks like an interactive system that an agent should work with, you should give it some Tmux skills[9].
  • LLM APIs are a Synchronization Problem[10]. This was a separate realization that was too long for this post, so I wrote a separate one.
