Setting Up an MCP Service Development Environment on Windows with Cherry Studio
🎯 Overview
Cherry Studio (cherry-ai.com) is a desktop AI client that supports the Model Context Protocol (MCP). MCP is an open protocol introduced by Anthropic for connecting AI models to external tools and data sources.
I. Environment Requirements and Preparation
1. System Requirements
# Minimum requirements
- Windows 10/11, 64-bit
- 8 GB RAM
- 10 GB free disk space
- Node.js 18+
# Recommended configuration
- Windows 11 22H2
- 16 GB RAM
- SSD storage
- NVIDIA GPU (optional, for local models)
2. Install Required Tools
# 1. Install Node.js (required)
# Download: https://nodejs.org/dist/v18.17.1/node-v18.17.1-x64.msi
# Verify the installation
node --version # 18+
npm --version
# 2. Install Git
# Download: https://git-scm.com/download/win
# 3. Install Python (optional, needed by some tools)
# Download: https://www.python.org/downloads/
python --version # 3.8+
II. Installing and Configuring Cherry Studio
1. Download Cherry Studio
# Cherry Studio can currently be obtained as follows:
# 1. Visit https://cherry-ai.com and download the Windows installer
# 2. Or grab a release build from the project's GitHub page
# Fallback: use VSCode with MCP-related extensions
# Install VSCode: https://code.visualstudio.com/
2. VSCode Alternative Setup
# Install VSCode and configure an MCP development environment
# 1. Install VSCode extensions
code --install-extension saoudrizwan.claude-dev
code --install-extension github.copilot-chat
code --install-extension ms-python.python
code --install-extension redhat.java
code --install-extension ms-toolsai.jupyter
# 2. Create an MCP development workspace
mkdir cherry-mcp-dev
cd cherry-mcp-dev
code .
III. MCP (Model Context Protocol) Basics
1. Understanding the MCP Architecture
MCP architecture:
┌─────────────────────────────────┐
│ AI Model (Claude) │
├─────────────────────────────────┤
│ Model Context Protocol │
├─────────────────────────────────┤
│ MCP Server (your service) │
├─────────────────────────────────┤
│ Tools / Data Sources │
│ - Databases │
│ - APIs │
│ - File system │
│ - Custom tools │
└─────────────────────────────────┘
2. MCP Core Concepts
MCP components:
- MCP Client: the AI-model side (e.g. Claude Desktop)
- MCP Server: a service exposing tools and data access
- MCP Transport: the communication channel (stdio, SSE, HTTP)
Protocol features:
- Bidirectional communication
- Tool registration and invocation
- Resource access (files, databases, etc.)
- Streaming responses
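All of these transports carry the same JSON-RPC 2.0 messages; only the byte channel differs. The sketch below shows illustrative request/response shapes for the `tools/list` and `tools/call` methods (the `calculator_add` payload reuses the example tool built later in this guide; exact result fields may vary by SDK version):

```typescript
// Illustrative MCP wire messages (JSON-RPC 2.0).

// Client -> server: ask which tools the server offers
const listRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list',
  params: {},
};

// Client -> server: invoke a tool by name with JSON arguments
const callRequest = {
  jsonrpc: '2.0',
  id: 2,
  method: 'tools/call',
  params: {
    name: 'calculator_add',
    arguments: { a: 5, b: 3 },
  },
};

// Server -> client: the tool result delivered as content blocks
const callResponse = {
  jsonrpc: '2.0',
  id: 2,
  result: {
    content: [{ type: 'text', text: 'Result: 8' }],
  },
};

// Responses are correlated to requests by id
function matches(req: { id: number }, res: { id: number }): boolean {
  return req.id === res.id;
}

console.log(matches(callRequest, callResponse)); // true
```

The MCP SDK builds and parses these messages for you; seeing the raw shapes mainly helps when debugging a transport with logging enabled.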
IV. Creating an MCP Server Project
1. Initialize a TypeScript MCP Project
# Create the project directory
mkdir my-mcp-server
cd my-mcp-server
# Initialize an npm project
npm init -y
# Install TypeScript and the MCP SDK
npm install typescript @types/node ts-node --save-dev
npm install @modelcontextprotocol/sdk
# Initialize the TypeScript config
# (NodeNext matches the "type": "module" package.json used later)
npx tsc --init --target ES2020 --module NodeNext --outDir dist --rootDir src --strict
2. Project Structure
my-mcp-server/
├── src/
│ ├── server.ts # MCP server entry point
│ ├── tools/ # Tool implementations
│ │ ├── calculator.ts
│ │ ├── database.ts
│ │ └── filesystem.ts
│ └── resources/ # Resource management
├── package.json
├── tsconfig.json
├── .env # Environment variables
└── README.md
3. Basic MCP Server Implementation
// src/server.ts
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
CallToolRequestSchema,
ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
// Create the MCP server
const server = new Server(
{
name: 'my-mcp-server',
version: '1.0.0',
},
{
capabilities: {
tools: {},
resources: {},
},
}
);
// Register tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: 'calculator_add',
description: 'Add two numbers',
inputSchema: {
type: 'object',
properties: {
a: { type: 'number', description: 'First number' },
b: { type: 'number', description: 'Second number' },
},
required: ['a', 'b'],
},
},
{
name: 'get_weather',
description: 'Get current weather for a city',
inputSchema: {
type: 'object',
properties: {
city: { type: 'string', description: 'City name' },
},
required: ['city'],
},
},
],
};
});
// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
switch (name) {
case 'calculator_add':
const result = (args as any).a + (args as any).b;
return {
content: [
{
type: 'text',
text: `Result: ${result}`,
},
],
};
case 'get_weather':
// Simulate a weather lookup
return {
content: [
{
type: 'text',
text: `Weather in ${(args as any).city}: Sunny, 25°C`,
},
],
};
default:
throw new Error(`Unknown tool: ${name}`);
}
});
// Start the server
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error('MCP server running on stdio');
}
main().catch((error) => {
console.error('Server error:', error);
process.exit(1);
});
V. Connecting Claude Desktop
1. Install Claude Desktop
# 1. Download Claude Desktop
# https://claude.ai/download
# 2. Install it and sign in with your Anthropic account
# 3. Configure Claude to connect to your MCP server
2. Claude Desktop Configuration
// Claude Desktop configuration file location:
// C:\Users\<username>\AppData\Roaming\Claude\claude_desktop_config.json
{
"mcpServers": {
"my-mcp-server": {
"command": "node",
"args": [
"C:\\path\\to\\your\\mcp-server\\dist\\server.js"
],
"env": {
"API_KEY": "your-api-key"
}
}
}
}
3. Scripted Configuration
# Create a helper script that registers the server with Claude Desktop
@'
// start-mcp.js - merge this server into the Claude Desktop config
const fs = require('fs');
const path = require('path');
const configPath = path.join(
process.env.APPDATA,
'Claude',
'claude_desktop_config.json'
);
// Read the existing config (if any) so other registered servers survive
let config = { mcpServers: {} };
if (fs.existsSync(configPath)) {
config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
config.mcpServers = config.mcpServers || {};
}
config.mcpServers['my-mcp-server'] = {
command: 'node',
args: [path.join(__dirname, 'dist', 'server.js')],
env: {}
};
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
console.log('Claude MCP configuration updated');
'@ | Out-File -FilePath start-mcp.js -Encoding UTF8
VI. Developing MCP Tools
1. Database Tool Example
// src/tools/database.ts
import { Tool } from '@modelcontextprotocol/sdk/types.js';
export class DatabaseTools {
private connection: any;
constructor() {
// Initialize the database connection
this.connect();
}
private async connect() {
// Use SQLite here (any database driver would work)
// Note: sqlite3 is a CommonJS module, so grab its default export under ESM
const sqlite3 = (await import('sqlite3')).default;
this.connection = new sqlite3.Database(':memory:');
}
getTools(): Tool[] {
return [
{
name: 'db_query',
description: 'Execute SQL query on database',
inputSchema: {
type: 'object',
properties: {
query: {
type: 'string',
description: 'SQL query to execute'
}
},
required: ['query']
}
},
{
name: 'db_schema',
description: 'Get database schema information',
inputSchema: {
type: 'object',
properties: {
table: {
type: 'string',
description: 'Table name (optional)',
nullable: true
}
}
}
}
];
}
async executeTool(name: string, args: any) {
switch (name) {
case 'db_query':
return await this.executeQuery(args.query);
case 'db_schema':
return await this.getSchema(args.table);
default:
throw new Error(`Unknown tool: ${name}`);
}
}
private async executeQuery(query: string): Promise<string> {
return new Promise((resolve, reject) => {
this.connection.all(query, (err: any, rows: any[]) => {
if (err) reject(err);
else resolve(JSON.stringify(rows, null, 2));
});
});
}
private async getSchema(table?: string): Promise<string> {
if (table) {
return `Schema for table ${table}: ...`;
}
return 'Full database schema: ...';
}
}
2. File System Tools
// src/tools/filesystem.ts
import * as fs from 'fs/promises';
import * as path from 'path';
export class FileSystemTools {
getTools() {
return [
{
name: 'read_file',
description: 'Read content of a file',
inputSchema: {
type: 'object',
properties: {
path: { type: 'string', description: 'File path' }
},
required: ['path']
}
},
{
name: 'list_directory',
description: 'List files in a directory',
inputSchema: {
type: 'object',
properties: {
path: { type: 'string', description: 'Directory path', default: '.' },
recursive: { type: 'boolean', description: 'List recursively', default: false }
}
}
},
{
name: 'search_files',
description: 'Search for files by pattern',
inputSchema: {
type: 'object',
properties: {
pattern: { type: 'string', description: 'Search pattern (glob)' },
directory: { type: 'string', description: 'Search directory', default: '.' }
},
required: ['pattern']
}
}
];
}
async executeTool(name: string, args: any) {
switch (name) {
case 'read_file':
return await this.readFile(args.path);
case 'list_directory':
return await this.listDirectory(args.path, args.recursive);
case 'search_files':
return await this.searchFiles(args.pattern, args.directory);
default:
throw new Error(`Unknown tool: ${name}`);
}
}
private async readFile(filePath: string): Promise<string> {
const content = await fs.readFile(filePath, 'utf-8');
return `File: ${filePath}\n\n${content}`;
}
private async listDirectory(dirPath: string, recursive: boolean): Promise<string> {
const files = await fs.readdir(dirPath, { withFileTypes: true });
let result = `Directory: ${dirPath}\n`;
for (const file of files) {
result += ` ${file.name} (${file.isDirectory() ? 'DIR' : 'FILE'})\n`;
if (recursive && file.isDirectory()) {
const subDir = path.join(dirPath, file.name);
const subFiles = await this.listDirectory(subDir, true);
result += subFiles.split('\n').map(line => ' ' + line).join('\n');
}
}
return result;
}
private async searchFiles(pattern: string, directory: string): Promise<string> {
// glob v10 exports globSync as a named export
const { globSync } = await import('glob');
const files = globSync(pattern, { cwd: directory });
return `Found ${files.length} files:\n${files.map(f => ` - ${f}`).join('\n')}`;
}
}
3. API Integration Tools
// src/tools/api.ts
import axios from 'axios';
export class APITools {
private cache = new Map<string, { data: any; timestamp: number }>();
private CACHE_TTL = 5 * 60 * 1000; // 5 minutes
getTools() {
return [
{
name: 'fetch_api',
description: 'Make HTTP request to an API',
inputSchema: {
type: 'object',
properties: {
url: { type: 'string', description: 'API endpoint URL' },
method: {
type: 'string',
description: 'HTTP method',
enum: ['GET', 'POST', 'PUT', 'DELETE', 'PATCH'],
default: 'GET'
},
headers: {
type: 'object',
description: 'Request headers',
additionalProperties: { type: 'string' }
},
body: { type: 'object', description: 'Request body (for POST/PUT)' }
},
required: ['url']
}
},
{
name: 'get_weather',
description: 'Get weather information for a location',
inputSchema: {
type: 'object',
properties: {
city: { type: 'string', description: 'City name' },
country: { type: 'string', description: 'Country code (optional)' }
},
required: ['city']
}
},
{
name: 'search_web',
description: 'Search the web for information',
inputSchema: {
type: 'object',
properties: {
query: { type: 'string', description: 'Search query' },
limit: { type: 'number', description: 'Number of results', default: 5 }
},
required: ['query']
}
}
];
}
async executeTool(name: string, args: any) {
const cacheKey = `${name}:${JSON.stringify(args)}`;
const cached = this.cache.get(cacheKey);
if (cached && Date.now() - cached.timestamp < this.CACHE_TTL) {
return `[Cached] ${cached.data}`;
}
let result: string;
switch (name) {
case 'fetch_api':
result = await this.fetchAPI(args.url, args.method, args.headers, args.body);
break;
case 'get_weather':
result = await this.getWeather(args.city, args.country);
break;
case 'search_web':
result = await this.searchWeb(args.query, args.limit);
break;
default:
throw new Error(`Unknown tool: ${name}`);
}
this.cache.set(cacheKey, { data: result, timestamp: Date.now() });
return result;
}
private async fetchAPI(
url: string,
method: string = 'GET',
headers?: Record<string, string>,
body?: any
): Promise<string> {
try {
const response = await axios({
method,
url,
headers,
data: body,
timeout: 10000
});
return `Response from ${url} (${response.status}):\n${JSON.stringify(response.data, null, 2)}`;
} catch (error: any) {
return `Error fetching ${url}: ${error.message}`;
}
}
private async getWeather(city: string, country?: string): Promise<string> {
// Uses the OpenWeatherMap API (requires an API key)
const apiKey = process.env.OPENWEATHER_API_KEY;
const location = country ? `${city},${country}` : city;
try {
const response = await axios.get(
`https://api.openweathermap.org/data/2.5/weather`,
{
params: {
q: location,
appid: apiKey,
units: 'metric'
}
}
);
const data = response.data;
return `Weather in ${data.name}, ${data.sys.country}:
Temperature: ${data.main.temp}°C
Feels like: ${data.main.feels_like}°C
Humidity: ${data.main.humidity}%
Conditions: ${data.weather[0].description}
Wind: ${data.wind.speed} m/s`;
} catch (error) {
return `Unable to get weather for ${city}. Please check city name or try again later.`;
}
}
private async searchWeb(query: string, limit: number = 5): Promise<string> {
// Uses the DuckDuckGo Instant Answer API
try {
const response = await axios.get(
`https://api.duckduckgo.com/`,
{
params: {
q: query,
format: 'json',
no_html: 1,
skip_disambig: 1
}
}
);
const data = response.data;
let result = `Search results for "${query}":\n`;
if (data.AbstractText) {
result += `\nSummary: ${data.AbstractText}\n`;
}
if (data.RelatedTopics && data.RelatedTopics.length > 0) {
result += `\nRelated topics:\n`;
data.RelatedTopics.slice(0, limit).forEach((topic: any, index: number) => {
if (topic.Text) {
result += `${index + 1}. ${topic.Text}\n`;
}
});
}
return result;
} catch (error) {
return `Search failed: ${error}`;
}
}
}
VII. Integrating AI Model Tools
1. Local Model Integration
// src/tools/ai.ts
import { Ollama } from 'ollama';
export class AITools {
private ollama: Ollama | null = null;
private localModels: string[] = [];
constructor() {
this.initializeLocalAI();
}
private async initializeLocalAI() {
try {
this.ollama = new Ollama({ host: 'http://localhost:11434' });
const models = await this.ollama.list();
this.localModels = models.models.map((m: any) => m.name);
} catch (error) {
console.warn('Ollama not available, local AI tools disabled');
}
}
getTools() {
const tools = [
{
name: 'summarize_text',
description: 'Summarize text content',
inputSchema: {
type: 'object',
properties: {
text: { type: 'string', description: 'Text to summarize' },
max_length: { type: 'number', description: 'Maximum summary length', default: 200 }
},
required: ['text']
}
},
{
name: 'translate_text',
description: 'Translate text between languages',
inputSchema: {
type: 'object',
properties: {
text: { type: 'string', description: 'Text to translate' },
target_lang: {
type: 'string',
description: 'Target language',
enum: ['en', 'zh', 'es', 'fr', 'de', 'ja', 'ko', 'ru'],
default: 'en'
},
source_lang: {
type: 'string',
description: 'Source language (auto-detected if not specified)',
nullable: true
}
},
required: ['text']
}
}
];
// Add local-model tools if Ollama is available
if (this.localModels.length > 0) {
tools.push({
name: 'ask_local_ai',
description: 'Ask a question to local AI model',
inputSchema: {
type: 'object',
properties: {
question: { type: 'string', description: 'Question to ask' },
model: {
type: 'string',
description: 'Model to use',
enum: this.localModels,
default: this.localModels[0]
}
},
required: ['question']
}
});
}
return tools;
}
async executeTool(name: string, args: any) {
switch (name) {
case 'summarize_text':
return await this.summarizeText(args.text, args.max_length);
case 'translate_text':
return await this.translateText(args.text, args.target_lang, args.source_lang);
case 'ask_local_ai':
return await this.askLocalAI(args.question, args.model);
default:
throw new Error(`Unknown tool: ${name}`);
}
}
private async summarizeText(text: string, maxLength: number): Promise<string> {
// A real implementation could call a summarization model or API
if (text.length <= maxLength) return text;
// Naive fallback: truncate to maxLength characters
return text.substring(0, maxLength) + '...';
}
private async translateText(
text: string,
targetLang: string,
sourceLang?: string
): Promise<string> {
// Placeholder: a real implementation would call a translation API
const languages: Record<string, string> = {
en: 'English',
zh: 'Chinese',
es: 'Spanish',
fr: 'French',
de: 'German',
ja: 'Japanese',
ko: 'Korean',
ru: 'Russian'
};
return `Translated to ${languages[targetLang] || targetLang}: [Translation placeholder]\nOriginal: ${text}`;
}
private async askLocalAI(question: string, model: string): Promise<string> {
if (!this.ollama) {
throw new Error('Local AI not available. Install Ollama first.');
}
try {
const response = await this.ollama.chat({
model,
messages: [
{
role: 'user',
content: question
}
],
stream: false
});
return response.message.content;
} catch (error: any) {
return `Error from local AI: ${error.message}`;
}
}
}
2. Install Ollama (Optional)
# 1. Download Ollama for Windows
# https://ollama.com/download/windows
# 2. Start the service after installation
ollama serve
# 3. Pull models (in a new terminal)
ollama pull llama2
ollama pull mistral
# 4. Test
ollama run llama2 "Hello, how are you?"
VIII. Advanced MCP Server Configuration
1. Complete Server Implementation
// src/server.ts - full version
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
CallToolRequestSchema,
ListToolsRequestSchema,
ListResourcesRequestSchema,
ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
import { DatabaseTools } from './tools/database.js';
import { FileSystemTools } from './tools/filesystem.js';
import { APITools } from './tools/api.js';
import { AITools } from './tools/ai.js';
class MCPServer {
private server: Server;
private tools: Map<string, any> = new Map();
constructor() {
this.server = new Server(
{
name: 'cherry-mcp-server',
version: '1.0.0',
},
{
capabilities: {
tools: {},
resources: {
subscribe: false,
listChanged: false,
},
},
}
);
this.initializeTools();
this.setupHandlers();
}
private initializeTools() {
const tools = [
new DatabaseTools(),
new FileSystemTools(),
new APITools(),
new AITools(),
];
for (const toolSet of tools) {
const toolList = toolSet.getTools();
for (const tool of toolList) {
this.tools.set(tool.name, toolSet);
}
}
}
private setupHandlers() {
// List all available tools
this.server.setRequestHandler(ListToolsRequestSchema, async () => {
const tools = [];
for (const [name, toolSet] of this.tools) {
const toolDefs = toolSet.getTools();
const toolDef = toolDefs.find((t: any) => t.name === name);
if (toolDef) {
tools.push(toolDef);
}
}
return { tools };
});
// Handle tool calls
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
const toolSet = this.tools.get(name);
if (!toolSet) {
throw new Error(`Tool not found: ${name}`);
}
try {
const result = await toolSet.executeTool(name, args);
return {
content: [
{
type: 'text',
text: result,
},
],
};
} catch (error: any) {
return {
content: [
{
type: 'text',
text: `Error executing tool ${name}: ${error.message}`,
},
],
isError: true,
};
}
});
// Resource handlers
this.server.setRequestHandler(ListResourcesRequestSchema, async () => {
return {
resources: [
{
uri: 'file:///tmp/example.txt',
name: 'Example File',
description: 'An example text file',
mimeType: 'text/plain',
},
{
uri: 'db://users',
name: 'Users Table',
description: 'Database users table',
mimeType: 'application/json',
},
],
};
});
this.server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
const { uri } = request.params;
// Return resource content based on the URI
if (uri.startsWith('file://')) {
const fs = await import('fs/promises');
const content = await fs.readFile(uri.replace('file://', ''), 'utf-8');
return {
contents: [
{
uri,
mimeType: 'text/plain',
text: content,
},
],
};
}
throw new Error(`Resource not found: ${uri}`);
});
}
async start() {
const transport = new StdioServerTransport();
await this.server.connect(transport);
console.error('Cherry MCP Server started');
}
}
// Start the server
const server = new MCPServer();
server.start().catch((error) => {
console.error('Failed to start server:', error);
process.exit(1);
});
2. Hot Reload Configuration
// package.json
{
"name": "cherry-mcp-server",
"version": "1.0.0",
"type": "module",
"scripts": {
"build": "tsc",
"start": "node dist/server.js",
"dev": "nodemon --watch src --ext ts --exec \"npm run build && npm start\"",
"test": "echo \"No tests specified\" && exit 0"
},
"dependencies": {
"@modelcontextprotocol/sdk": "^0.1.0",
"axios": "^1.5.0",
"glob": "^10.3.3",
"sqlite3": "^5.1.6"
},
"devDependencies": {
"@types/node": "^20.4.5",
"nodemon": "^3.0.1",
"typescript": "^5.1.6",
"@types/sqlite3": "^3.1.8",
"@types/glob": "^8.1.0"
}
}
IX. Testing and Debugging
1. Manual Test Script
// test/test-client.ts
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
async function testMCP() {
// StdioClientTransport spawns the server process itself,
// so no separate child_process spawn is needed.
const transport = new StdioClientTransport({
command: 'node',
args: ['dist/server.js'],
});
const client = new Client(
{
name: 'test-client',
version: '1.0.0',
},
{
capabilities: {},
}
);
await client.connect(transport);
try {
// List the available tools
const tools = await client.listTools();
console.log('Available tools:', tools.tools);
// Call a tool
const result = await client.callTool({
name: 'calculator_add',
arguments: { a: 5, b: 3 },
});
console.log('Tool result:', result);
} finally {
await client.close();
}
}
testMCP().catch(console.error);
2. Debug Configuration
// .vscode/launch.json
{
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "launch",
"name": "Debug MCP Server",
"program": "${workspaceFolder}/src/server.ts",
"runtimeArgs": [
"--loader",
"ts-node/esm"
],
"outFiles": [
"${workspaceFolder}/dist/**/*.js"
],
"env": {
"NODE_ENV": "development"
}
},
{
"type": "node",
"request": "launch",
"name": "Test MCP Client",
"program": "${workspaceFolder}/test/test-client.ts",
"runtimeArgs": [
"--loader",
"ts-node/esm"
]
}
]
}
X. Packaging and Deployment
1. Docker Containerization
# Dockerfile
FROM node:18-alpine
WORKDIR /app
# Copy dependency manifests
COPY package*.json ./
COPY tsconfig.json ./
# Install all dependencies (dev dependencies are needed for the TypeScript build)
RUN npm ci
# Copy sources and build
COPY src ./src
RUN npm run build
# Remove dev dependencies from the final image
RUN npm prune --omit=dev && npm cache clean --force
# Environment variables
ENV NODE_ENV=production
ENV PORT=3000
# Expose the port
EXPOSE 3000
# Startup command
CMD ["node", "dist/server.js"]
2. Docker Compose Configuration
# docker-compose.yml
version: '3.8'
services:
mcp-server:
build: .
ports:
- "3000:3000"
environment:
- NODE_ENV=production
- OPENWEATHER_API_KEY=${OPENWEATHER_API_KEY}
volumes:
- ./data:/app/data
restart: unless-stopped
ollama:
image: ollama/ollama:latest
ports:
- "11434:11434"
volumes:
- ollama_data:/root/.ollama
restart: unless-stopped
volumes:
ollama_data:
3. One-Click Deployment Script
# deploy.ps1
param(
[string]$Environment = "development"
)
Write-Host "Deploying Cherry MCP Server..." -ForegroundColor Green
# 1. Build
npm run build
# 2. Choose the deployment mode by environment
switch ($Environment) {
"development" {
Write-Host "Starting development server..." -ForegroundColor Yellow
# Hot reload via nodemon
nodemon --watch src --ext ts --exec "npm run build && npm start"
}
"production" {
Write-Host "Building Docker image..." -ForegroundColor Yellow
# Build the Docker image
docker build -t cherry-mcp-server:latest .
# Start the container (PowerShell continues lines with backticks, not backslashes)
docker run -d `
--name cherry-mcp-server `
-p 3000:3000 `
--env-file .env `
cherry-mcp-server:latest
Write-Host "Production server started on port 3000" -ForegroundColor Green
}
default {
Write-Host "Unknown environment: $Environment" -ForegroundColor Red
}
}
XI. Cherry Studio Integration
1. Configuring Cherry Studio
// Assuming Cherry Studio uses a configuration format similar to Claude Desktop's
// .cherry/config.json
{
"mcpServers": {
"my-custom-tools": {
"command": "node",
"args": ["${workspaceFolder}/dist/server.js"],
"env": {
"CHERRY_API_KEY": "${env:CHERRY_API_KEY}",
"OPENAI_API_KEY": "${env:OPENAI_API_KEY}"
}
}
},
"features": {
"autoConnectMCP": true,
"toolDiscovery": true,
"resourceManagement": true
}
}
2. Development Workflow Integration
# Development helper script
# dev-tools.ps1
# Quickly scaffold a new tool template
function New-MCPTool {
param(
[string]$Name,
[string]$Description,
[string]$OutputPath = "src/tools/"
)
$template = @"
import { Tool } from '@modelcontextprotocol/sdk/types.js';
export class ${Name}Tools {
getTools(): Tool[] {
return [
{
name: '$($Name.ToLower())_operation',
description: '$Description',
inputSchema: {
type: 'object',
properties: {
// Define input parameters
input: { type: 'string', description: 'Input parameter' }
},
required: ['input']
}
}
];
}
async executeTool(name: string, args: any) {
switch (name) {
case '$($Name.ToLower())_operation':
return await this.processOperation(args.input);
default:
throw new Error(``Unknown tool: `${name}``);
}
}
private async processOperation(input: string): Promise<string> {
// Implement the tool logic here
return ``Processed: `${input}``;
}
}
"@
$outputFile = Join-Path $OutputPath "$($Name.ToLower()).ts"
$template | Out-File -FilePath $outputFile -Encoding UTF8
Write-Host "Created tool template: $outputFile" -ForegroundColor Green
}
# Export the function (when this file is imported as a PowerShell module)
Export-ModuleMember -Function New-MCPTool
XII. Best Practices
1. Tool Design Principles
Tool design guidelines:
1. Single responsibility: each tool does exactly one thing
2. Clear documentation: provide detailed tool descriptions and parameter docs
3. Error handling: fail gracefully and return useful messages
4. Security: validate inputs and guard against injection attacks
5. Performance: cache results and avoid repeated computation
6. Idempotency: repeated calls produce the same result
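Principle 4 (validate inputs) can be enforced mechanically by checking incoming arguments against the tool's own declared `inputSchema` before running it. Below is a minimal hand-rolled sketch; a production server would more likely use a full JSON Schema validator such as `ajv` (an assumption on my part, not something the MCP SDK bundles):

```typescript
// Minimal validation of tool arguments against a declared inputSchema.
// Only handles flat object schemas with typed properties - a deliberate
// simplification for illustration.

type Schema = {
  type: 'object';
  properties: Record<string, { type: string }>;
  required?: string[];
};

// The same schema shape a server advertises via tools/list
const addSchema: Schema = {
  type: 'object',
  properties: {
    a: { type: 'number' },
    b: { type: 'number' },
  },
  required: ['a', 'b'],
};

// Returns a list of validation errors; empty means the args are acceptable
function validateArgs(schema: Schema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required property '${key}'`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) errors.push(`unexpected property '${key}'`);
    else if (typeof value !== prop.type) {
      errors.push(`property '${key}' should be ${prop.type}`);
    }
  }
  return errors;
}

console.log(validateArgs(addSchema, { a: 1, b: 2 }));   // []
console.log(validateArgs(addSchema, { a: 1 }));         // ["missing required property 'b'"]
```

Rejecting malformed arguments at this boundary keeps each tool's own code free of defensive checks.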
2. Security Considerations
// Secure tool wrapper
class SecureToolWrapper {
constructor(private toolSet: any) {}
async executeTool(name: string, args: any) {
// 1. Validate input
this.validateInput(args);
// 2. Permission check
await this.checkPermissions(name);
// 3. Rate limiting
await this.checkRateLimit(name);
// 4. Execute the tool
const result = await this.toolSet.executeTool(name, args);
// 5. Sanitize output
return this.sanitizeOutput(result);
}
private validateInput(args: any) {
// Validate the input arguments
if (args && typeof args === 'object') {
// Check for obviously dangerous content
const dangerousPatterns = ['<script>', 'javascript:', 'onerror='];
const argsString = JSON.stringify(args).toLowerCase();
for (const pattern of dangerousPatterns) {
if (argsString.includes(pattern)) {
throw new Error('Potential security risk detected in input');
}
}
}
}
// Other security methods (checkPermissions, checkRateLimit, sanitizeOutput)...
}
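The `checkRateLimit` step left unimplemented above can be sketched as a fixed-window counter per tool. The limit, window size, and the injectable clock below are illustrative choices, not part of any MCP API:

```typescript
// Fixed-window rate limiter: at most `limit` calls per tool per window.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limit: number = 10,
    private windowMs: number = 60_000,
    private now: () => number = Date.now, // injectable clock, handy for tests
  ) {}

  // Throws when the current window's budget for this tool is exhausted
  check(toolName: string): void {
    const t = this.now();
    const entry = this.counts.get(toolName);
    if (!entry || t - entry.windowStart >= this.windowMs) {
      // Start a fresh window for this tool
      this.counts.set(toolName, { windowStart: t, count: 1 });
      return;
    }
    if (entry.count >= this.limit) {
      throw new Error(`Rate limit exceeded for ${toolName}`);
    }
    entry.count += 1;
  }
}

// Allow 2 calls per minute per tool for demonstration:
const limiter = new RateLimiter(2, 60_000);
limiter.check('db_query'); // ok
limiter.check('db_query'); // ok
// limiter.check('db_query'); // would throw: Rate limit exceeded
```

A fixed window is the simplest policy; a token bucket smooths bursts better, at the cost of slightly more state per tool.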
3. Performance Optimization
// Caching layer for tool execution
class CachedToolExecution {
private cache = new Map<string, {
result: any;
timestamp: number;
ttl: number;
}>();
constructor(private ttl: number = 5 * 60 * 1000) {}
async executeWithCache(
toolName: string,
args: any,
executor: Function
) {
const cacheKey = this.getCacheKey(toolName, args);
const cached = this.cache.get(cacheKey);
if (cached && Date.now() - cached.timestamp < cached.ttl) {
console.log(`Cache hit for ${toolName}`);
return cached.result;
}
console.log(`Cache miss for ${toolName}, executing...`);
const result = await executor();
this.cache.set(cacheKey, {
result,
timestamp: Date.now(),
ttl: this.ttl
});
return result;
}
private getCacheKey(toolName: string, args: any): string {
return `${toolName}:${JSON.stringify(args)}`;
}
}
Summary
Environment Checklist
✅ Node.js 18+ installed
✅ TypeScript configured
✅ MCP SDK installed
✅ Basic tools implemented
✅ Claude Desktop configured
✅ Local tests passing
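Parts of this checklist can be verified automatically. A sketch, assuming the `dist/server.js` build output and the default Windows Claude Desktop config path used earlier in this guide:

```typescript
// check-env.ts - quick verification of the environment checklist.
// The checked paths are assumptions matching the layout used in this guide.
import { existsSync } from 'fs';
import * as path from 'path';

// Parse the major version out of a Node version string like "v18.17.1"
function nodeMajor(version: string): number {
  return parseInt(version.replace(/^v/, '').split('.')[0], 10);
}

const checks: Array<[string, boolean]> = [
  ['Node.js 18+', nodeMajor(process.version) >= 18],
  ['Build output present', existsSync(path.join('dist', 'server.js'))],
  [
    'Claude Desktop config present',
    existsSync(
      path.join(process.env.APPDATA ?? '', 'Claude', 'claude_desktop_config.json')
    ),
  ],
];

for (const [label, ok] of checks) {
  console.log(`${ok ? 'PASS' : 'FAIL'} ${label}`);
}
```

Run it from the project root (e.g. `npx ts-node check-env.ts`) before wiring the server into Claude Desktop.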
Development Workflow
- Design the tool: define its function and parameters
- Implement the tool: write the TypeScript code
- Test the tool: verify it with the test client
- Integration test: connect it to Claude Desktop
- Deploy: run locally or in a container
Quick Start Commands
# 1. Clone the example project
git clone https://github.com/cherry-ai/mcp-examples.git
# 2. Install dependencies
npm install
# 3. Build the project
npm run build
# 4. Start the server
npm start
# 5. Configure Claude Desktop
# Edit the Claude config file to add the MCP server
Resource Links
Tip: MCP is still evolving quickly, so check the official documentation regularly for the latest information. Cherry Studio's MCP integration may also change over time; watch the official announcements.