I've recently been using DataLoader to build datasets and ran into some problems when combining it with tqdm. After digging through a lot of material, here is a simple approach that works.

First, set up the network's input and output, assuming both are already tensors:

  • Input: tensor_x
  • Output: tensor_y
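
If the raw data lives in NumPy arrays rather than tensors, one common way to get `tensor_x` and `tensor_y` is shown below. This is a minimal sketch: the array names, shapes, and dtypes here are invented for illustration.

```python
import numpy as np
import torch

# Hypothetical raw data (shapes made up for illustration):
# 1000 samples, 8 features each, and a single regression target.
x_np = np.random.rand(1000, 8).astype(np.float32)
y_np = np.random.rand(1000, 1).astype(np.float32)

# torch.from_numpy creates tensors that share memory with the arrays
tensor_x = torch.from_numpy(x_np)
tensor_y = torch.from_numpy(y_np)
```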

Then import the required classes and build the loader:

from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(tensor_x, tensor_y)
loader = DataLoader(dataset=dataset, batch_size=32, shuffle=False,
                    num_workers=2, pin_memory=True)
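
For reference, `len(loader)` — the value passed to `tqdm(total=...)` in the loop below — is the number of batches, i.e. `ceil(len(dataset) / batch_size)`. A small self-contained sketch with toy stand-in tensors (`num_workers` and `pin_memory` are dropped here for portability):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the real tensor_x / tensor_y
tensor_x = torch.randn(100, 8)
tensor_y = torch.randn(100, 1)

dataset = TensorDataset(tensor_x, tensor_y)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

# Number of batches: ceil(100 / 32) = 4
print(len(loader))

# Each iteration yields one (batch_x, batch_y) pair
first_x, first_y = next(iter(loader))
print(first_x.shape)  # the last batch would only have 100 - 3*32 = 4 samples
```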

Next, write the training loop. Note that the loss function and optimizer are constructed once, outside the loop, rather than on every step:

from tqdm import tqdm
import torch.nn as nn
import torch.optim as optim

criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    # total=len(loader) tells tqdm how many batches make up one epoch
    with tqdm(total=len(loader)) as t:
        for step, (batch_x, batch_y) in enumerate(loader):
            batch_x = batch_x.cuda()
            batch_y = batch_y.cuda()

            pre_y = model(batch_x)
            loss = criterion(pre_y, batch_y)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # Update the progress bar once per batch
            t.set_description("Epoch %i" % epoch)
            t.set_postfix(steps=step, loss=loss.item())
            t.update(1)
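
As an aside, tqdm can also wrap the loader directly; it then reads `total` from `len(loader)` automatically, so no manual `t.update(1)` is needed. A minimal sketch of this variant, using a toy iterable and a placeholder loss in place of the real DataLoader and model:

```python
from tqdm import tqdm

# Toy stand-in for the DataLoader: any sized iterable of (x, y) pairs works
loader = [([1, 2], [3]), ([4, 5], [6])]

losses = []
for epoch in range(2):
    pbar = tqdm(loader, desc="Epoch %i" % epoch)
    for step, (batch_x, batch_y) in enumerate(pbar):
        loss = sum(batch_x)  # placeholder for the real forward pass + loss
        losses.append(loss)
        pbar.set_postfix(steps=step, loss=loss)
```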

The output then looks like this:

Epoch 0: 100%|██████████| 232/232 [00:51<00:00,  4.54it/s, loss=0.0431, steps=231]
Epoch 1: 100%|██████████| 232/232 [00:35<00:00,  6.62it/s, loss=0.131, steps=231]
Epoch 2: 100%|██████████| 232/232 [00:35<00:00,  6.46it/s, loss=0.172, steps=231]
Epoch 3: 100%|██████████| 232/232 [00:35<00:00,  6.49it/s, loss=0.123, steps=231]
Epoch 4: 100%|██████████| 232/232 [00:36<00:00,  6.42it/s, loss=0.0824, steps=231]
Epoch 5: 100%|██████████| 232/232 [00:35<00:00,  6.46it/s, loss=0.0824, steps=231]
Epoch 6: 100%|██████████| 232/232 [00:35<00:00,  6.54it/s, loss=0.118, steps=231]
Epoch 7: 100%|██████████| 232/232 [00:34<00:00,  6.67it/s, loss=0.0883, steps=231]
Epoch 8: 100%|██████████| 232/232 [00:34<00:00,  6.74it/s, loss=0.143, steps=231]
Epoch 9: 100%|██████████| 232/232 [00:34<00:00,  6.71it/s, loss=0.123, steps=231]