[Deep Learning] Time2Vec, a Handy Tool for Using Time-Series Data with Transformers (PyTorch Implementation)
Directly applying a Transformer to time-series problems is difficult. As usual, I'm impatient, so let's go straight to the code. This is my revised version; the original author's code has no batch_size dimension and cannot be used directly in training. Original source: https://github.com/ojus1/Time2Vec-PyTorch
```python
import torch
from torch import nn


class Time2Vec(nn.Module):
    def __init__(self, activation, in_features, hidden_dim):
        '''
        :param activation: periodic (non-linear) activation function, 'sin' or 'cos'
        :param in_features: number of input features per time step
        :param hidden_dim: size of the Time2Vec embedding (configurable)
        '''
        super(Time2Vec, self).__init__()
        if activation == 'sin':
            self.activation = torch.sin
        else:
            self.activation = torch.cos
        self.out_features = hidden_dim
        # Register the weights and biases once in __init__ so they are
        # actually trained (creating them in forward would re-randomize
        # them on every call); broadcasting handles the batch dimension.
        # Linear (non-periodic) component: 1 output dimension.
        self.w0 = nn.Parameter(torch.randn(in_features, 1))
        self.b0 = nn.Parameter(torch.randn(1))
        # Periodic component: hidden_dim - 1 output dimensions.
        self.w = nn.Parameter(torch.randn(in_features, hidden_dim - 1))
        self.b = nn.Parameter(torch.randn(hidden_dim - 1))
        self.fc1 = nn.Linear(hidden_dim, 2)

    def forward(self, x):
        # x: (batch_size, sentence_len, in_features)
        v1 = self.activation(torch.matmul(x, self.w) + self.b)  # periodic part
        v2 = torch.matmul(x, self.w0) + self.b0                 # linear part
        v3 = torch.cat([v1, v2], -1)                            # (batch, len, hidden_dim)
        x = self.fc1(v3)
        return x
```

An example:

```python
if __name__ == '__main__':
    time2vec = Time2Vec("sin", 5, 5)
    # batch_size x sentence_len x num_features
    n = torch.randn((3, 32, 5))
    m = time2vec(n)  # shape: (3, 32, 2)
```
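Once the time steps are embedded this way, the result can go into a standard Transformer encoder, which was the motivation stated at the top. A minimal sketch, assuming the embedding has already been projected to the model dimension `d_model` (all sizes below are illustrative, not from the post):

```python
import torch
from torch import nn

# Hypothetical sizes for illustration.
batch_size, seq_len, d_model = 3, 32, 16

# Stand-in for a Time2Vec embedding already projected to d_model.
embedded = torch.randn(batch_size, seq_len, d_model)

# Standard Transformer encoder; batch_first=True matches the
# (batch, seq_len, feature) layout used throughout this post.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=4, dim_feedforward=64, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

out = encoder(embedded)
print(out.shape)  # torch.Size([3, 32, 16])
```

The encoder preserves the (batch, seq_len, d_model) shape, so a task head (e.g. a final `nn.Linear`) can be attached on top.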