Efficiency differences between CPU and GPU in deep learning training
Below is a simple deep learning model, together with Python code that trains it on the CPU and on the GPU and records the training time in each case. The example uses the PyTorch framework.
import torch
import time
# Define the model
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = torch.nn.Linear(784, 256)
        self.linear2 = torch.nn.Linear(256, 128)
        self.linear3 = torch.nn.Linear(128, 10)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.linear1(x)
        x = self.relu(x)
        x = self.linear2(x)
        x = self.relu(x)
        x = self.linear3(x)
        # Return raw logits: CrossEntropyLoss applies log-softmax internally,
        # so an explicit Softmax layer here would be redundant and hurt training.
        return x
# Create a random (synthetic) training dataset and labels
train_data = torch.randn(50000, 784)
train_label = torch.randint(0, 10, (50000,))
# CPU training
start_time = time.time()
model_cpu = Model()
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model_cpu.parameters(), lr=0.001)
for epoch in range(10):
    optimizer.zero_grad()
    outputs = model_cpu(train_data)      # full-batch forward pass
    loss = loss_fn(outputs, train_label)
    loss.backward()
    optimizer.step()
print("CPU training time:", time.time() - start_time, "seconds")
# GPU training (only if a CUDA device is available)
if torch.cuda.is_available():
    start_time = time.time()
    # Move the data and the model to the GPU; the host-to-device transfer
    # is deliberately included in the measured time.
    train_data = train_data.cuda()
    train_label = train_label.cuda()
    model_gpu = Model().cuda()
    loss_fn = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model_gpu.parameters(), lr=0.001)
    for epoch in range(10):
        optimizer.zero_grad()
        outputs = model_gpu(train_data)
        loss = loss_fn(outputs, train_label)
        loss.backward()
        optimizer.step()
    # CUDA kernels run asynchronously; wait for them to finish before
    # stopping the timer, otherwise the GPU time is under-reported.
    torch.cuda.synchronize()
    print("GPU training time:", time.time() - start_time, "seconds")
else:
    print("GPU is not available")
In this example, we first define a simple neural network with three linear layers and ReLU activations; the last layer returns raw logits because CrossEntropyLoss applies the softmax internally. We also create a random training dataset with matching labels. We then train the model, first on the CPU and then on the GPU (if one is available), and finally record and print the training time for each case.
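The comparison above pushes the whole 50,000-sample dataset through the network in a single batch, which flatters the GPU and does not resemble real training. As a follow-up, here is a minimal sketch that times the same Model class on either device using mini-batches from a DataLoader; the helper name time_training and the batch size of 512 are illustrative choices, not part of the original example.

import time
import torch
from torch.utils.data import DataLoader, TensorDataset

def time_training(device, epochs=10, batch_size=512):
    # Reuses the Model class defined above; trains it on `device`
    # and returns the elapsed wall-clock time in seconds.
    data = torch.randn(50000, 784)
    labels = torch.randint(0, 10, (50000,))
    loader = DataLoader(TensorDataset(data, labels), batch_size=batch_size, shuffle=True)
    model = Model().to(device)
    loss_fn = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    start = time.time()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)   # per-batch host-to-device copy
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()                # flush queued CUDA kernels before timing
    return time.time() - start

print("CPU training time:", time_training(torch.device("cpu")), "seconds")
if torch.cuda.is_available():
    print("GPU training time:", time_training(torch.device("cuda")), "seconds")

With small batches the GPU's advantage shrinks, because kernel-launch and data-transfer overhead start to dominate; with larger batches or larger layers its parallel matrix multiplications pay off, which is exactly the efficiency gap this section is about.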