A Detailed Walkthrough of the Qwen2 Model Architecture Code
This file implements the Qwen2 large language model adapted to the vLLM inference framework. Its core responsibilities are the forward pass of the decoder stack, weight loading, and quantization support for inference. Below we walk through the design logic and meaning of each module.
1. Basic Imports and License Header
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
# Adapted from
# https://github.com/huggingface/transformers/blob/v4.28.0/src/transformers/models/qwen2/modeling_qwen2.py
# License and copyright notice: the code is released under Apache 2.0 and adapted from the
# HuggingFace Qwen2 implementation, retaining attributions for vLLM, the Qwen team, EleutherAI, and others
"""Inference-only Qwen2 model compatible with HuggingFace weights."""
# Module docstring: an inference-only Qwen2 implementation compatible with weights published on HuggingFace
from collections.abc import Iterable
from typing import Any, Optional, Union
import torch
from torch import nn
from transformers import Qwen2Config
from vllm.attention import Attention, AttentionType
from vllm.compilation.decorators import support_torch_compile
from vllm.config import CacheConfig, VllmConfig
from vllm.distributed import get_pp_group, get_tensor_model_parallel_world_size
from vllm.model_executor.layers.activation import SiluAndMul
from vllm.model_executor.layers.layernorm import RMSNorm
from vllm.model_executor.layers.linear import (MergedColumnParallelLinear,
QKVParallelLinear,
RowParallelLinear)
from vllm.model_executor.layers.logits_processor import LogitsProcessor
from vllm.model_executor.layers.quantization import QuantizationConfig
from vllm.model_executor.layers.rotary_embedding import get_rope
from vllm.model_executor.layers.vocab_parallel_embedding import (
ParallelLMHead, VocabParallelEmbedding)
from vllm.model_executor.model_loader.weight_utils import (
default_weight_loader, maybe_remap_kv_scale_name)
from vllm.model_executor.sampling_metadata import SamplingMetadata
from vllm.sequence import IntermediateTensors
from .interfaces import SupportsLoRA, SupportsPP
from .utils import (AutoWeightsLoader, PPMissingLayer, extract_layer_index,
is_pp_missing_parameter,
make_empty_intermediate_tensors_factory, make_layers,
maybe_prefix)
- Base imports: torch/torch.nn are the PyTorch core used to build the network; Qwen2Config is HuggingFace's configuration class for Qwen2 (hidden size, number of layers, and so on).
- vLLM-specific imports:
  - Attention/AttentionType: vLLM's optimized attention implementation (tensor parallelism, KV-cache optimizations);
  - CacheConfig/VllmConfig: vLLM's cache and global configuration classes, controlling KV caching, quantization, and parallelism strategies;
  - Parallel linear layers (MergedColumnParallelLinear, etc.): tensor-parallel (TP) linear layers that shard large weight matrices across multiple GPUs;
  - RMSNorm: the normalization layer Qwen2 uses in place of LayerNorm;
  - QuantizationConfig: quantization settings, supporting schemes such as FP8 and GPTQ;
  - get_rope: the rotary position embedding (RoPE) implementation, Qwen2's position-encoding mechanism;
  - Parallel embedding (VocabParallelEmbedding): an embedding layer whose vocabulary is sharded across GPUs;
  - Utilities (AutoWeightsLoader, etc.): weight loading and pipeline-parallel (PP) helpers such as detecting parameters missing on the current PP rank.
2. Qwen2MLP: The Feed-Forward Module
class Qwen2MLP(nn.Module):
def __init__(
self,
hidden_size: int,
intermediate_size: int,
hidden_act: str,
quant_config: Optional[QuantizationConfig] = None,
prefix: str = "",
) -> None:
super().__init__()
- Class definition: Qwen2MLP inherits from nn.Module (the base class of all PyTorch modules) and implements Qwen2's feed-forward network (FFN).
- Constructor parameters:
  - hidden_size: the model's hidden dimension (3584 for Qwen2-7B, for example);
  - intermediate_size: the FFN's intermediate dimension (typically a few times hidden_size);
  - hidden_act: the activation function name;
  - quant_config: optional quantization settings for quantized inference;
  - prefix: parameter-name prefix used during weight loading.
self.gate_up_proj = MergedColumnParallelLinear(
hidden_size,
[intermediate_size] * 2,
bias=False,
quant_config=quant_config,
prefix=f"{prefix}.gate_up_proj",
)
- gate_up_proj: a parallel linear layer that fuses the "gate" projection and the "up" projection.
  - A plain MLP is x → up → act → down; Qwen2 uses a GLU variant: x → (gate, up) → silu(gate) * up → down;
  - MergedColumnParallelLinear merges the gate and up weights into one large matrix that is column-sharded across GPUs (tensor parallelism), reducing kernel launches and communication;
  - [intermediate_size] * 2: the output dimension is 2 × intermediate_size, covering gate and up.
self.down_proj = RowParallelLinear(
intermediate_size,
hidden_size,
bias=False,
quant_config=quant_config,
prefix=f"{prefix}.down_proj",
)
- down_proj: the down projection that maps the intermediate dimension back to hidden_size. RowParallelLinear is row-sharded, pairing with the column-sharded gate_up_proj to complete the tensor-parallel FFN computation.
if hidden_act != "silu":
raise ValueError(f"Unsupported activation: {hidden_act}. "
"Only silu is supported for now.")
self.act_fn = SiluAndMul()
- Activation check: Qwen2 only supports SiLU (Sigmoid Linear Unit);
- SiluAndMul: vLLM's fused SiLU-plus-multiply op that implements the GLU computation silu(gate) * up.
def forward(self, x):
gate_up, _ = self.gate_up_proj(x)
x = self.act_fn(gate_up)
x, _ = self.down_proj(x)
return x
- Forward pass (a plain-PyTorch reference sketch follows below):
  - gate_up_proj(x): x goes through the fused gate+up projection, producing a tensor of shape [batch, seq_len, 2*intermediate_size];
  - act_fn(gate_up): splits gate_up into its gate and up halves and computes silu(gate) * up;
  - down_proj(x): maps the intermediate dimension back to hidden_size, completing the MLP;
  - Return value: the MLP output, with the same shape as the input x ([batch, seq_len, hidden_size]).
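To make the data flow concrete, here is a minimal single-GPU sketch of the same GLU-style MLP in plain PyTorch. It is not vLLM's tensor-parallel implementation; the class name ReferenceQwen2MLP and the toy sizes are made up for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReferenceQwen2MLP(nn.Module):
    """Single-GPU sketch of the GLU-style MLP (no tensor parallelism, no quantization)."""

    def __init__(self, hidden_size: int, intermediate_size: int) -> None:
        super().__init__()
        # Fused gate+up projection: one matmul producing 2 * intermediate_size features.
        self.gate_up_proj = nn.Linear(hidden_size, 2 * intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate, up = self.gate_up_proj(x).chunk(2, dim=-1)
        # SiluAndMul semantics: silu(gate) * up, then project back down.
        return self.down_proj(F.silu(gate) * up)

mlp = ReferenceQwen2MLP(hidden_size=64, intermediate_size=128)
out = mlp(torch.randn(2, 5, 64))   # -> shape [2, 5, 64], same as the input

The fused weight layout ([gate; up] concatenated along the output dimension) mirrors how the merged parameter is assembled during weight loading (gate_proj is shard 0, up_proj is shard 1).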
3. Qwen2Attention: The Attention Module
class Qwen2Attention(nn.Module):
def __init__(
self,
hidden_size: int,
num_heads: int,
num_kv_heads: int,
max_position: int = 4096 * 32,
rope_theta: float = 10000,
cache_config: Optional[CacheConfig] = None,
quant_config: Optional[QuantizationConfig] = None,
rope_scaling: Optional[tuple] = None,
prefix: str = "",
attn_type: str = AttentionType.DECODER,
dual_chunk_attention_config: Optional[dict[str, Any]] = None,
) -> None:
super().__init__()
- Class definition: implements Qwen2's multi-head attention (MHA), with support for multi-query attention (MQA) and grouped-query attention (GQA).
- Key parameters:
  - num_heads: total number of query heads;
  - num_kv_heads: total number of KV heads (1 for MQA; a divisor of num_heads for GQA);
  - max_position: maximum sequence length (Qwen2 supports very long contexts; the default here is 4096*32);
  - rope_theta: the RoPE base (controls the period of the position encoding);
  - cache_config: KV-cache configuration (a core vLLM optimization that caches past K/V to avoid recomputation);
  - rope_scaling: RoPE scaling configuration for extending the context length;
  - attn_type: attention type (DECODER means causal attention, ENCODER_ONLY means bidirectional attention);
  - dual_chunk_attention_config: dual chunk attention configuration (an optimized attention scheme used by some Qwen2 variants).
self.hidden_size = hidden_size
tp_size = get_tensor_model_parallel_world_size()
self.total_num_heads = num_heads
assert self.total_num_heads % tp_size == 0
self.num_heads = self.total_num_heads // tp_size
- Tensor-parallel (TP) head splitting:
  - tp_size: the number of GPUs in the tensor-parallel group (2 GPUs → tp_size=2);
  - total_num_heads is the global head count; num_heads is the number of heads handled by this GPU (total / tp_size), and the assert guarantees the split is even.
self.total_num_kv_heads = num_kv_heads
if self.total_num_kv_heads >= tp_size:
# Number of KV heads is greater than TP size, so we partition
# the KV heads across multiple tensor parallel GPUs.
assert self.total_num_kv_heads % tp_size == 0
else:
# Number of KV heads is less than TP size, so we replicate
# the KV heads across multiple tensor parallel GPUs.
assert tp_size % self.total_num_kv_heads == 0
self.num_kv_heads = max(1, self.total_num_kv_heads // tp_size)
- TP handling of KV heads:
  - If the total number of KV heads ≥ tp_size: the KV heads are partitioned across the TP GPUs (each GPU owns a subset);
  - If the total number of KV heads < tp_size: KV heads are replicated across GPUs (with TP=4 and 2 KV heads, each KV head lives on 2 GPUs);
  - num_kv_heads: the number of KV heads handled by this GPU.
self.head_dim = hidden_size // self.total_num_heads
self.q_size = self.num_heads * self.head_dim
self.kv_size = self.num_kv_heads * self.head_dim
self.scaling = self.head_dim**-0.5
self.rope_theta = rope_theta
self.dual_chunk_attention_config = dual_chunk_attention_config
- Dimension bookkeeping (worked numbers below):
  - head_dim: per-head dimension (hidden size / total head count);
  - q_size: the Q width on this GPU (local heads × head_dim);
  - kv_size: the KV width on this GPU (local KV heads × head_dim);
  - scaling: attention-score scale (1/√head_dim), keeping the logits numerically stable.
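As a quick sanity check of the arithmetic above, the snippet below reproduces the head/dimension bookkeeping for an assumed Qwen2-7B-like configuration (hidden_size=3584, 28 query heads, 4 KV heads) split across 2 tensor-parallel GPUs. The helper and the concrete numbers are illustrative assumptions, not vLLM code.

def tp_attention_dims(hidden_size: int, num_heads: int, num_kv_heads: int, tp_size: int):
    """Mirror the head/dimension bookkeeping done in Qwen2Attention.__init__."""
    assert num_heads % tp_size == 0
    local_heads = num_heads // tp_size
    # KV heads are either partitioned (>= tp_size) or replicated (< tp_size).
    if num_kv_heads >= tp_size:
        assert num_kv_heads % tp_size == 0
    else:
        assert tp_size % num_kv_heads == 0
    local_kv_heads = max(1, num_kv_heads // tp_size)
    head_dim = hidden_size // num_heads
    return {
        "head_dim": head_dim,
        "q_size": local_heads * head_dim,
        "kv_size": local_kv_heads * head_dim,
        "scaling": head_dim ** -0.5,
    }

# Assumed Qwen2-7B-like numbers, tensor parallel across 2 GPUs:
print(tp_attention_dims(hidden_size=3584, num_heads=28, num_kv_heads=4, tp_size=2))
# {'head_dim': 128, 'q_size': 1792, 'kv_size': 256, 'scaling': 0.0883...}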
self.qkv_proj = QKVParallelLinear(
hidden_size,
self.head_dim,
self.total_num_heads,
self.total_num_kv_heads,
bias=True,
quant_config=quant_config,
prefix=f"{prefix}.qkv_proj",
)
- qkv_proj: a parallel linear layer that fuses the Q/K/V projections.
  - A plain implementation uses three separate linear layers (q_proj/k_proj/v_proj); merging them into one reduces memory traffic and kernel launches;
  - QKVParallelLinear is the TP-aware QKV projection, column-sharded across GPUs.
self.o_proj = RowParallelLinear(
self.total_num_heads * self.head_dim,
hidden_size,
bias=False,
quant_config=quant_config,
prefix=f"{prefix}.o_proj",
)
- o_proj: the attention output projection, merging the per-head outputs back to hidden_size; RowParallelLinear provides the row-sharded half of the tensor-parallel pair.
self.rotary_emb = get_rope(
self.head_dim,
rotary_dim=self.head_dim,
max_position=max_position,
base=self.rope_theta,
rope_scaling=rope_scaling,
dual_chunk_attention_config=dual_chunk_attention_config,
)
- Rotary position embedding setup: get_rope returns vLLM's optimized RoPE implementation, with support for long contexts and scaling variants; rotary_dim=self.head_dim means RoPE is applied to every dimension of each head.
self.attn = Attention(
self.num_heads,
self.head_dim,
self.scaling,
num_kv_heads=self.num_kv_heads,
cache_config=cache_config,
quant_config=quant_config,
attn_type=attn_type,
prefix=f"{prefix}.attn",
**{
"layer_idx": extract_layer_index(prefix),
"dual_chunk_attention_config": dual_chunk_attention_config,
} if dual_chunk_attention_config else {})
- The attention core: Attention is vLLM's optimized attention layer, supporting:
  - KV caching (avoids recomputing past keys/values);
  - quantization (including a quantized KV cache);
  - causal (decoder) or bidirectional (encoder-only) attention;
  - dual chunk attention;
  - layer_idx: the index of the current layer (used for cache management).
Attention forward pass
def forward(
self,
positions: torch.Tensor,
hidden_states: torch.Tensor,
) -> torch.Tensor:
qkv, _ = self.qkv_proj(hidden_states)
q, k, v = qkv.split([self.q_size, self.kv_size, self.kv_size], dim=-1)
- Inputs: positions holds the token positions and hidden_states the input hidden states of shape [batch, seq_len, hidden_size] (in vLLM the batch and sequence dimensions are typically flattened into a single token dimension);
- qkv_proj(hidden_states): projects the input to QKV, producing shape [batch, seq_len, q_size + 2*kv_size];
- split: slices the projection into Q, K, and V tensors of widths q_size, kv_size, and kv_size respectively.
q, k = self.rotary_emb(positions, q, k)
- Apply RoPE: rotate Q and K according to the token positions (V is left untouched) so the model can perceive token order; a simplified sketch follows.
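The snippet below is a simplified, non-fused rotate-half RoPE in plain PyTorch, just to show what "rotating Q and K by position" means. vLLM's get_rope returns a fused kernel with scaling variants; this is only a conceptual sketch, and the simple_rope name and the [num_tokens, num_heads, head_dim] layout are assumptions for illustration.

import torch

def simple_rope(x: torch.Tensor, positions: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotate-half RoPE to x of shape [num_tokens, num_heads, head_dim]."""
    head_dim = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    angles = positions.float()[:, None] * inv_freq[None, :]   # [num_tokens, head_dim/2]
    cos = angles.cos()[:, None, :]                             # broadcast over heads
    sin = angles.sin()[:, None, :]
    x1, x2 = x[..., : head_dim // 2], x[..., head_dim // 2:]
    return torch.cat([x1 * cos - x2 * sin, x2 * cos + x1 * sin], dim=-1)

q = torch.randn(6, 4, 64)                  # 6 tokens, 4 heads, head_dim 64
q_rot = simple_rope(q, torch.arange(6))    # same shape, now position-aware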
attn_output = self.attn(q, k, v)
- Attention computation: the vLLM Attention layer runs the full QKV attention (including KV caching, causal masking, and TP awareness) and returns an output of shape [batch, seq_len, num_heads*head_dim]; a non-cached reference sketch of the math follows.
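For intuition only, here is a non-cached, single-sequence reference of grouped-query attention in plain PyTorch. The real Attention layer additionally manages the paged KV cache, fused causal-mask kernels, and quantization; the gqa_reference helper below is an assumption-laden sketch, not vLLM code.

import torch
import torch.nn.functional as F

def gqa_reference(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                  num_heads: int, num_kv_heads: int, scaling: float) -> torch.Tensor:
    """q: [seq, num_heads*head_dim]; k/v: [seq, num_kv_heads*head_dim]."""
    seq_len = q.shape[0]
    head_dim = q.shape[-1] // num_heads
    q = q.view(seq_len, num_heads, head_dim).transpose(0, 1)        # [H, S, D]
    k = k.view(seq_len, num_kv_heads, head_dim).transpose(0, 1)     # [H_kv, S, D]
    v = v.view(seq_len, num_kv_heads, head_dim).transpose(0, 1)
    # GQA: each KV head serves num_heads // num_kv_heads query heads.
    k = k.repeat_interleave(num_heads // num_kv_heads, dim=0)
    v = v.repeat_interleave(num_heads // num_kv_heads, dim=0)
    scores = (q @ k.transpose(-1, -2)) * scaling                    # [H, S, S]
    causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
    out = F.softmax(scores + causal, dim=-1) @ v                    # [H, S, D]
    return out.transpose(0, 1).reshape(seq_len, num_heads * head_dim)

out = gqa_reference(torch.randn(5, 8 * 16), torch.randn(5, 2 * 16), torch.randn(5, 2 * 16),
                    num_heads=8, num_kv_heads=2, scaling=16 ** -0.5)   # -> [5, 128]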
output, _ = self.o_proj(attn_output)
return output
- Output projection: maps the attention output back to the hidden_size dimension and returns the final result (shape [batch, seq_len, hidden_size]).
4. Qwen2DecoderLayer: A Single Decoder Layer
class Qwen2DecoderLayer(nn.Module):
def __init__(
self,
config: Qwen2Config,
cache_config: Optional[CacheConfig] = None,
quant_config: Optional[QuantizationConfig] = None,
prefix: str = "",
) -> None:
super().__init__()
self.hidden_size = config.hidden_size
# Requires transformers > 4.32.0
rope_theta = getattr(config, "rope_theta", 1000000)
rope_scaling = getattr(config, "rope_scaling", None)
dual_chunk_attention_config = getattr(config,
"dual_chunk_attention_config",
None)
- Initialization: reads model hyperparameters from Qwen2Config, using getattr defaults for compatibility across transformers versions (e.g. the rope_theta fallback).
# By default, Qwen2 uses causal attention as it is a decoder-only model.
# You can override the HF config with `is_causal=False` to enable
# bidirectional attention, which is used in some embedding models
# (e.g. Alibaba-NLP/gte-Qwen2-7B-instruct)
if getattr(config, "is_causal", True):
attn_type = AttentionType.DECODER
else:
attn_type = AttentionType.ENCODER_ONLY
- Choosing the attention type (see the small mask illustration below):
  - The default is causal (decoder) attention: each token attends only to earlier tokens, matching autoregressive generation;
  - Setting is_causal=False switches to bidirectional attention, used by embedding models such as gte-Qwen2.
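The two AttentionType choices differ only in which positions are masked out. A tiny illustration (True means "blocked"); this just shows the semantics, not how vLLM's kernels implement the masks.

import torch

seq_len = 4
# AttentionType.DECODER (causal): position i may only attend to positions <= i.
causal_blocked = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
# AttentionType.ENCODER_ONLY (bidirectional): nothing is blocked.
bidirectional_blocked = torch.zeros(seq_len, seq_len, dtype=torch.bool)
print(causal_blocked)
# tensor([[False,  True,  True,  True],
#         [False, False,  True,  True],
#         [False, False, False,  True],
#         [False, False, False, False]])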
self.self_attn = Qwen2Attention(
hidden_size=self.hidden_size,
num_heads=config.num_attention_heads,
max_position=config.max_position_embeddings,
num_kv_heads=config.num_key_value_heads,
rope_theta=rope_theta,
cache_config=cache_config,
quant_config=quant_config,
rope_scaling=rope_scaling,
prefix=f"{prefix}.self_attn",
attn_type=attn_type,
dual_chunk_attention_config=dual_chunk_attention_config,
)
- Self-attention layer: forwards the configuration into Qwen2Attention to build this layer's attention module.
self.mlp = Qwen2MLP(
hidden_size=self.hidden_size,
intermediate_size=config.intermediate_size,
hidden_act=config.hidden_act,
quant_config=quant_config,
prefix=f"{prefix}.mlp",
)
- Feed-forward network: builds the MLP module from hidden_size, intermediate_size, and the activation setting.
self.input_layernorm = RMSNorm(config.hidden_size,
eps=config.rms_norm_eps)
self.post_attention_layernorm = RMSNorm(config.hidden_size,
eps=config.rms_norm_eps)
- Normalization layers: Qwen2 uses RMSNorm (a LayerNorm variant without mean centering or bias), one before the attention block (input_layernorm) and one between attention and the MLP (post_attention_layernorm); a minimal sketch follows.
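For reference, RMSNorm divides each vector by its root-mean-square and rescales with a learned weight; there is no mean subtraction and no bias. A minimal PyTorch sketch (not vLLM's fused kernel; the SimpleRMSNorm name is made up):

import torch
import torch.nn as nn

class SimpleRMSNorm(nn.Module):
    """Minimal RMSNorm: x / rms(x) * weight, with no mean subtraction or bias."""

    def __init__(self, hidden_size: int, eps: float = 1e-6) -> None:
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight

norm = SimpleRMSNorm(64)
y = norm(torch.randn(2, 5, 64))   # same shape as the input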
Decoder layer forward pass
def forward(
self,
positions: torch.Tensor,
hidden_states: torch.Tensor,
residual: Optional[torch.Tensor],
) -> tuple[torch.Tensor, torch.Tensor]:
# Self Attention
if residual is None:
residual = hidden_states
hidden_states = self.input_layernorm(hidden_states)
else:
hidden_states, residual = self.input_layernorm(
hidden_states, residual)
- Residual connection + normalization (Pre-LN structure):
  - Qwen2 uses Pre-LN: normalize first, then run attention/MLP;
  - If residual is None (the first layer, or the first layer on a pipeline-parallel rank): the residual is initialized to the input and the input is normalized;
  - Otherwise: the norm layer processes the input and the residual together (vLLM's fused add+norm path); a conceptual sketch of this two-argument call follows below.
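Conceptually, the two-argument call input_layernorm(hidden_states, residual) fuses the residual add with the normalization. A rough, unfused equivalent of what it returns, inferred from how this file uses it (the add_rmsnorm helper is an illustrative assumption, not vLLM's kernel):

import torch

def add_rmsnorm(norm, hidden_states, residual):
    """Sketch of the fused 'add residual, then normalize' step in the Pre-LN block.

    Returns (normalized, new_residual): new_residual is the un-normalized sum,
    which the next sub-layer's output will be added to.
    """
    if residual is None:
        return norm(hidden_states), hidden_states
    summed = hidden_states + residual
    return norm(summed), summed

# Usage with any RMSNorm-like callable (e.g. the SimpleRMSNorm sketch above):
rms = lambda x: x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + 1e-6)
h = torch.randn(5, 64)
h_norm, resid = add_rmsnorm(rms, h, None)                        # first layer
h_norm2, resid = add_rmsnorm(rms, torch.randn(5, 64), resid)     # later layers fuse add + norm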
hidden_states = self.self_attn(
positions=positions,
hidden_states=hidden_states,
)
- Self-attention: the normalized input goes through the attention layer, producing the attention output.
# Fully Connected
hidden_states, residual = self.post_attention_layernorm(
hidden_states, residual)
hidden_states = self.mlp(hidden_states)
return hidden_states, residual
- Feed-forward computation:
  - The attention output goes through post_attention_layernorm (which also folds in the residual);
  - The normalized result is fed into the MLP;
  - The layer returns the MLP output and the updated residual for the next layer.
5. Qwen2Model: The Full Decoder Stack
@support_torch_compile(
dynamic_arg_dims={
"input_ids": 0,
# positions is of shape (3, seq_len) if mrope is enabled for qwen2-vl,
# otherwise (seq_len, ).
"positions": -1,
"intermediate_tensors": 0,
"inputs_embeds": 0,
})
class Qwen2Model(nn.Module):
- @support_torch_compile: a vLLM decorator that enables PyTorch 2.x torch.compile optimization and declares which argument dimensions are dynamic (e.g. the batch dimension of input_ids and the sequence dimension of positions).
def __init__(self,
*,
vllm_config: VllmConfig,
prefix: str = "",
decoder_layer_type: type[nn.Module] = Qwen2DecoderLayer):
super().__init__()
config = vllm_config.model_config.hf_config
cache_config = vllm_config.cache_config
quant_config = vllm_config.quant_config
- Initialization: unpacks the HuggingFace config, the cache config, and the quantization config from VllmConfig.
# TODO (@robertgshaw2): see if this can be moved out
if (cache_config.sliding_window is not None
and hasattr(config, "max_window_layers")):
assert config.max_window_layers == config.num_hidden_layers, (
"Sliding window for some but all layers is not supported. "
"This model uses sliding window but `max_window_layers` = {} "
"is less than `num_hidden_layers` = {}. Please open an issue "
"to discuss this feature.".format(
config.max_window_layers,
config.num_hidden_layers,
))
- Sliding-window check: if sliding-window attention is enabled, it must be enabled for all layers; otherwise the assertion fails.
self.config = config
self.quant_config = quant_config
self.vocab_size = config.vocab_size
- Stores the configuration for later use.
if get_pp_group().is_first_rank or (config.tie_word_embeddings
and get_pp_group().is_last_rank):
self.embed_tokens = VocabParallelEmbedding(
config.vocab_size,
config.hidden_size,
quant_config=quant_config,
prefix=f"{prefix}.embed_tokens",
)
else:
self.embed_tokens = PPMissingLayer()
- Token embedding initialization (pipeline-parallel aware):
  - The first PP rank owns the token embedding;
  - If the embedding and the LM head share weights (tie_word_embeddings=True), the last PP rank also keeps the embedding so the weights can be reused;
  - All other ranks use a PPMissingLayer placeholder (no parameters, no redundancy).
# Use the provided decoder layer type or default to Qwen2DecoderLayer
decoder_layer_type = decoder_layer_type or Qwen2DecoderLayer
self.start_layer, self.end_layer, self.layers = make_layers(
config.num_hidden_layers,
lambda prefix: decoder_layer_type(config=config,
cache_config=cache_config,
quant_config=quant_config,
prefix=prefix),
prefix=f"{prefix}.layers",
)
- Building the decoder layers (pipeline-parallel aware; see the sketch below):
  - make_layers splits the decoder layers across PP ranks (with 32 layers and PP=2, rank 0 gets layers 0-15 and rank 1 gets layers 16-31);
  - start_layer/end_layer: the layer range owned by this PP rank;
  - layers: the list of decoder layers instantiated on this rank.
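To illustrate the idea, the hypothetical helper below does a simple even split of layer indices across pipeline ranks. It is only a sketch of the concept; vLLM's make_layers also supports uneven and user-specified partitions.

def pp_layer_range(num_hidden_layers: int, pp_rank: int, pp_size: int) -> tuple[int, int]:
    """Evenly split decoder layers across pipeline ranks (illustrative only)."""
    per_rank = num_hidden_layers // pp_size
    start = pp_rank * per_rank
    end = num_hidden_layers if pp_rank == pp_size - 1 else start + per_rank
    return start, end

# 32 layers on 2 pipeline ranks:
print(pp_layer_range(32, pp_rank=0, pp_size=2))   # (0, 16)  -> layers 0..15
print(pp_layer_range(32, pp_rank=1, pp_size=2))   # (16, 32) -> layers 16..31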
self.make_empty_intermediate_tensors = (
make_empty_intermediate_tensors_factory(
["hidden_states", "residual"], config.hidden_size))
- Intermediate-tensor factory: creates empty hidden_states/residual tensors used to pass activations between PP ranks without wasting memory.
if get_pp_group().is_last_rank:
self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
else:
self.norm = PPMissingLayer()
- Final normalization (PP aware): only the last PP rank needs the final RMSNorm (right before computing logits); other ranks use a placeholder.
Fetching input embeddings
def get_input_embeddings(self, input_ids: torch.Tensor) -> torch.Tensor:
return self.embed_tokens(input_ids)
- The input token IDs (input_ids) are looked up in the embedding layer, producing embedding vectors of shape [batch, seq_len, hidden_size].
Full model forward pass
def forward(
self,
input_ids: torch.Tensor,
positions: torch.Tensor,
intermediate_tensors: Optional[IntermediateTensors] = None,
inputs_embeds: Optional[torch.Tensor] = None,
) -> Union[torch.Tensor, IntermediateTensors]:
if get_pp_group().is_first_rank:
if inputs_embeds is not None:
hidden_states = inputs_embeds
else:
hidden_states = self.get_input_embeddings(input_ids)
residual = None
else:
assert intermediate_tensors is not None
hidden_states = intermediate_tensors["hidden_states"]
residual = intermediate_tensors["residual"]
- Input handling (PP aware):
  - The first PP rank obtains the initial hidden_states from input_ids or inputs_embeds and sets the residual to None;
  - Other ranks read hidden_states and residual from the intermediate_tensors passed on by the previous rank.
for layer in self.layers[self.start_layer:self.end_layer]:
hidden_states, residual = layer(
positions,
hidden_states,
residual,
)
- Layer loop: iterates over the decoder layers owned by this rank, updating hidden_states and residual at each step.
if not get_pp_group().is_last_rank:
return IntermediateTensors({
"hidden_states": hidden_states,
"residual": residual
})
hidden_states, _ = self.norm(hidden_states, residual)
return hidden_states
- Output handling (PP aware):
  - Non-last PP ranks return IntermediateTensors, which are forwarded to the next rank;
  - The last rank applies the final normalization and returns the final hidden_states used for computing logits.
Weight loading
def load_weights(self, weights: Iterable[tuple[str,
torch.Tensor]]) -> set[str]:
stacked_params_mapping = [
# (param_name, shard_name, shard_id)
("qkv_proj", "q_proj", "q"),
("qkv_proj", "k_proj", "k"),
("qkv_proj", "v_proj", "v"),
("gate_up_proj", "gate_proj", 0),
("gate_up_proj", "up_proj", 1),
]
- Weight-name mapping: vLLM fuses Q/K/V and gate/up into single parameters, while HuggingFace checkpoints store them separately; this table maps each checkpoint name to the fused parameter plus a shard ID (e.g. q_proj maps to the q shard of qkv_proj), as shown below.
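A tiny standalone illustration of the remapping: given a checkpoint parameter name, rewrite it to the fused vLLM parameter name and remember which shard the tensor belongs to. The checkpoint names follow the real Qwen2 layout; the remap helper itself is just for illustration.

stacked_params_mapping = [
    # (fused param name, checkpoint shard name, shard id)
    ("qkv_proj", "q_proj", "q"),
    ("qkv_proj", "k_proj", "k"),
    ("qkv_proj", "v_proj", "v"),
    ("gate_up_proj", "gate_proj", 0),
    ("gate_up_proj", "up_proj", 1),
]

def remap(checkpoint_name: str):
    """Return (vllm_param_name, shard_id); shard_id is None for unfused params."""
    for param_name, weight_name, shard_id in stacked_params_mapping:
        if weight_name in checkpoint_name:
            return checkpoint_name.replace(weight_name, param_name), shard_id
    return checkpoint_name, None

print(remap("model.layers.0.self_attn.q_proj.weight"))
# ('model.layers.0.self_attn.qkv_proj.weight', 'q')
print(remap("model.layers.0.mlp.up_proj.weight"))
# ('model.layers.0.mlp.gate_up_proj.weight', 1)
print(remap("model.layers.0.input_layernorm.weight"))
# ('model.layers.0.input_layernorm.weight', None)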
params_dict = dict(self.named_parameters(remove_duplicate=False))
loaded_params: set[str] = set()
for name, loaded_weight in weights:
if "rotary_emb.inv_freq" in name:
continue
- Builds the parameter dictionary and the set of loaded parameter names; rotary_emb.inv_freq entries are skipped because the inverse frequencies are computed on the fly.
if (self.quant_config is not None and
(scale_name := self.quant_config.get_cache_scale(name))):
# Loading kv cache quantization scales
param = params_dict[scale_name]
weight_loader = getattr(param, "weight_loader",
default_weight_loader)
loaded_weight = (loaded_weight if loaded_weight.dim() == 0 else
loaded_weight[0])
weight_loader(param, loaded_weight)
loaded_params.add(scale_name)
continue
- Loading quantization scales: when quantization is enabled, KV-cache scale parameters are loaded here (for FP8, GPTQ, and similar schemes).
for (param_name, weight_name, shard_id) in stacked_params_mapping:
if weight_name not in name:
continue
name = name.replace(weight_name, param_name)
# Skip loading extra bias for GPTQ models.
if name.endswith(".bias") and name not in params_dict:
continue
if is_pp_missing_parameter(name, self):
continue
param = params_dict[name]
weight_loader = param.weight_loader
weight_loader(param, loaded_weight, shard_id)
break
- Loading fused parameters:
  - Walk the mapping table and redirect each split HuggingFace weight (e.g. q_proj) to the fused vLLM parameter (qkv_proj);
  - Skip redundant GPTQ biases and parameters that do not exist on this PP rank;
  - Call the parameter's weight_loader, which also handles TP sharding.
else:
# Skip loading extra bias for GPTQ models.
if name.endswith(".bias") and name not in params_dict:
continue
# Remapping the name of FP8 kv-scale.
name = maybe_remap_kv_scale_name(name, params_dict)
if name is None:
continue
if is_pp_missing_parameter(name, self):
continue
param = params_dict[name]
weight_loader = getattr(param, "weight_loader",
default_weight_loader)
weight_loader(param, loaded_weight)
loaded_params.add(name)
return loaded_params
- Loading non-fused parameters:
  - Skip redundant GPTQ biases;
  - Remap FP8 KV-scale parameter names when needed;
  - Load each parameter and record its name;
  - Return the set of loaded parameter names.
6. Qwen2ForCausalLM: The Causal Language Model Wrapper
class Qwen2ForCausalLM(nn.Module, SupportsLoRA, SupportsPP):
packed_modules_mapping = {
"qkv_proj": [
"q_proj",
"k_proj",
"v_proj",
],
"gate_up_proj": [
"gate_proj",
"up_proj",
],
}
- Class definition: inherits from nn.Module and implements SupportsLoRA (LoRA adapters) and SupportsPP (pipeline parallelism);
- packed_modules_mapping: declares which checkpoint modules are packed into fused modules, so LoRA weights can be loaded correctly.
def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
super().__init__()
config = vllm_config.model_config.hf_config
quant_config = vllm_config.quant_config
lora_config = vllm_config.lora_config
self.config = config
self.lora_config = lora_config
self.quant_config = quant_config
self.model = Qwen2Model(vllm_config=vllm_config,
prefix=maybe_prefix(prefix, "model"))
- Initialization: builds the underlying Qwen2Model and passes the configuration through.
if get_pp_group().is_last_rank:
if config.tie_word_embeddings:
self.lm_head = self.model.embed_tokens
else:
self.lm_head = ParallelLMHead(config.vocab_size,
config.hidden_size,
quant_config=quant_config,
prefix=maybe_prefix(
prefix, "lm_head"))
else:
self.lm_head = PPMissingLayer()
- LM head (produces logits):
  - Only the last PP rank needs the LM head;
  - If the embedding and LM head are tied: the embedding layer is reused (saving memory);
  - Otherwise: a ParallelLMHead is created (an output layer whose vocabulary is sharded across GPUs).
self.logits_processor = LogitsProcessor(config.vocab_size)
self.make_empty_intermediate_tensors = (
self.model.make_empty_intermediate_tensors)
- LogitsProcessor: vLLM's logits post-processing layer (feeding sampling, beam search, and so on);
- The model's intermediate-tensor factory is reused.
Fetching input embeddings
def get_input_embeddings(self, input_ids: torch.Tensor) -> torch.Tensor:
return self.model.get_input_embeddings(input_ids)
- Delegates to Qwen2Model's embedding lookup.
Forward pass
def forward(
self,
input_ids: torch.Tensor,
positions: torch.Tensor,
intermediate_tensors: Optional[IntermediateTensors] = None,
inputs_embeds: Optional[torch.Tensor] = None,
) -> Union[torch.Tensor, IntermediateTensors]:
hidden_states = self.model(input_ids, positions, intermediate_tensors,
inputs_embeds)
return hidden_states
- Calls Qwen2Model.forward and returns its result (either the final hidden_states or the PP intermediate tensors).
Computing logits
def compute_logits(
self,
hidden_states: torch.Tensor,
sampling_metadata: SamplingMetadata,
) -> Optional[torch.Tensor]:
logits = self.logits_processor(self.lm_head, hidden_states,
sampling_metadata)
return logits
- Maps the final hidden_states through the LM head to the vocabulary dimension and applies the LogitsProcessor (temperature scaling and other sampling preparation), returning logits of shape [batch, seq_len, vocab_size]; a conceptual sketch follows.
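Conceptually, the LM head is just a projection from hidden_size to vocab_size, and with tie_word_embeddings the same matrix serves as both the embedding table and the output projection. A minimal sketch, ignoring vLLM's vocabulary parallelism and its selection of sampling positions (the toy sizes are assumptions):

import torch

vocab_size, hidden_size = 1000, 64
embed = torch.nn.Embedding(vocab_size, hidden_size)

hidden_states = torch.randn(5, hidden_size)      # 5 token positions
# Tied LM head: reuse the embedding matrix as the output projection.
logits = hidden_states @ embed.weight.T           # -> [5, vocab_size]
next_token = logits[-1].argmax()                  # greedy pick for the last position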
Weight loading
def load_weights(self, weights: Iterable[tuple[str,
torch.Tensor]]) -> set[str]:
loader = AutoWeightsLoader(
self,
skip_prefixes=(["lm_head."]
if self.config.tie_word_embeddings else None),
)
return loader.load_weights(weights)
- Automatic weight loading:
  - When the embedding and LM head are tied, lm_head.* weights are skipped (the embedding is reused);
  - AutoWeightsLoader is vLLM's generic weight loader, aware of LoRA, PP, and TP.
Key Design Takeaways
- Parallelism: tensor parallelism (TP) shards linear layers and attention heads, and pipeline parallelism (PP) shards decoder layers, enabling multi-GPU inference for large models;
- Efficiency: fused QKV and gate-up projections, KV caching, and optimized RoPE reduce computation and communication;
- Compatibility: loads HuggingFace Qwen2 weights and supports quantization, LoRA, and very long contexts;
- Inference-only: there is no training code; everything (Pre-LN, residual handling, cache management) is tuned for inference.
If you are new to this kind of code, the one thing to remember is: this is the Qwen2 inference implementation for the vLLM framework, structured as "token embedding → a stack of decoder layers (attention + MLP) → LM head"; all of the extra machinery exists to make the model run faster and use less memory across multiple GPUs.