This article walks through how to set a random seed in PyTorch. In practice many people get stuck on this, so let's work through a concrete example together. Read carefully, and hopefully you'll come away having learned something!

import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter
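
# NOTE: the original listing does `from tools import set_seed`, where `tools` is
# a helper module shipped with the tutorial's course files, not part of PyTorch.
# The fallback below is a minimal sketch of what such a set_seed helper
# typically does (assumed implementation), so the listing runs without it:
try:
    from tools import set_seed
except ImportError:
    import random
    import numpy as np

    def set_seed(seed=1):
        random.seed(seed)                 # Python built-in RNG
        np.random.seed(seed)              # NumPy RNG
        torch.manual_seed(seed)           # PyTorch CPU RNG
        torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs (no-op without CUDA)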
set_seed(1)  # set the random seed so the run is reproducible

n_hidden = 200       # hidden-layer width
max_iter = 2000      # number of training iterations
disp_interval = 200  # log/plot every disp_interval iterations
lr_init = 0.01       # initial learning rate

def gen_data(num_data=10, x_range=(-1, 1)):
    # toy regression data: y = 1.5x plus Gaussian noise
    w = 1.5
    train_x = torch.linspace(*x_range, num_data).unsqueeze_(1)
    train_y = w * train_x + torch.normal(0, 0.5, size=train_x.size())
    test_x = torch.linspace(*x_range, num_data).unsqueeze_(1)
    test_y = w * test_x + torch.normal(0, 0.3, size=test_x.size())
    return train_x, train_y, test_x, test_y

train_x, train_y, test_x, test_y = gen_data(num_data=10, x_range=(-1, 1))

class MLP(nn.Module):
    def __init__(self, neural_num):
        super(MLP, self).__init__()
        self.linears = nn.Sequential(
            nn.Linear(1, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, neural_num),
            nn.ReLU(inplace=True),
            nn.Linear(neural_num, 1),
        )

    def forward(self, x):
        return self.linears(x)

net_n = MLP(neural_num=n_hidden)             # baseline network, no weight decay
net_weight_decay = MLP(neural_num=n_hidden)  # identical network, trained with weight decay

optim_n = torch.optim.SGD(net_n.parameters(), lr=lr_init, momentum=0.9)
optim_wdecay = torch.optim.SGD(net_weight_decay.parameters(), lr=lr_init, momentum=0.9, weight_decay=1e-2)
loss_fun = torch.nn.MSELoss()  # mean squared error loss
writer = SummaryWriter(comment='test', filename_suffix='test')

for epoch in range(max_iter):
    pred_normal, pred_wdecay = net_n(train_x), net_weight_decay(train_x)
    loss_n, loss_wdecay = loss_fun(pred_normal, train_y), loss_fun(pred_wdecay, train_y)

    optim_n.zero_grad()
    optim_wdecay.zero_grad()
    loss_n.backward()
    loss_wdecay.backward()
    optim_n.step()  # parameter update
    optim_wdecay.step()

    if (epoch + 1) % disp_interval == 0:
        # log weight and gradient histograms for both networks to TensorBoard
        for name, layer in net_n.named_parameters():
            writer.add_histogram(name + '_grad_normal', layer.grad, epoch)
            writer.add_histogram(name + '_data_normal', layer, epoch)
        for name, layer in net_weight_decay.named_parameters():
            writer.add_histogram(name + '_grad_weight_decay', layer.grad, epoch)
            writer.add_histogram(name + '_data_weight_decay', layer, epoch)
        # compare the two networks' fits on the test data
        test_pred_normal, test_pred_wdecay = net_n(test_x), net_weight_decay(test_x)

        plt.scatter(train_x.data.numpy(), train_y.data.numpy(), c='blue', s=50, alpha=0.3, label='train')
        plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='red', s=50, alpha=0.3, label='test')
        plt.plot(test_x.data.numpy(), test_pred_normal.data.numpy(), 'r-', lw=3, label='no weight decay')
        plt.plot(test_x.data.numpy(), test_pred_wdecay.data.numpy(), 'b--', lw=3, label='weight decay')
        plt.text(-0.25, -1.5, 'no weight decay loss={:.6f}'.format(loss_n.item()),
                 fontdict={'size': 15, 'color': 'red'})
        plt.text(-0.25, -2, 'weight decay loss={:.6f}'.format(loss_wdecay.item()),
                 fontdict={'size': 15, 'color': 'red'})
        plt.ylim(-2.5, 2.5)
        plt.legend()
        plt.title('Epoch: {}'.format(epoch + 1))
        plt.show()
        plt.close()

1. Which line of code implements weight decay in PyTorch's SGD, and what is the corresponding mathematical formula?
2. In PyTorch, what does Dropout do to the scale of the weights during training?

Answer 1: Weight decay is switched on via the weight_decay argument when the optimizer is constructed, and it is applied inside optim_wdecay.step():

optim_wdecay = torch.optim.SGD(net_weight_decay.parameters(), lr=lr_init, momentum=0.9, weight_decay=1e-2)
optim_wdecay.step()

Inside step(), the weight-decay term is added to the gradient before the update (in the PyTorch source this is, roughly, d_p = d_p.add(p, alpha=weight_decay)). Ignoring momentum, the update is therefore

    w_{t+1} = w_t - lr * (dLoss/dw_t + λ * w_t),  with λ = weight_decay,

which is exactly L2 regularization.
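
As a quick sanity check, here is a minimal sketch (not from the original article) showing that SGD really adds weight_decay * w to the gradient before updating:

import torch

# single scalar parameter; momentum is left at 0 so the update is easy to verify by hand
w = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1, weight_decay=0.01)

loss = (w ** 2).sum()  # dLoss/dw = 2w = 2.0
loss.backward()
opt.step()             # w <- 1.0 - 0.1 * (2.0 + 0.01 * 1.0) = 0.799

print(w.item())        # ~0.799, matching w - lr * (grad + weight_decay * w)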

Answer 2: Dropout randomly deactivates units: each hidden unit is dropped with probability p and kept with probability 1 - p. During training, PyTorch stretches the surviving activations by dividing them by 1 - p, so the expected activation stays the same and the output computation does not depend on the dropped units. Because of this "inverted" scaling, no rescaling is needed at test time.
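
A small sketch (again not from the original article) makes this scaling visible:

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(2, 5)

drop.train()    # training mode: drop units and rescale the survivors
print(drop(x))  # surviving entries become 1 / (1 - 0.5) = 2.0, dropped ones are 0

drop.eval()     # eval mode: dropout is the identity
print(drop(x))  # all entries stay 1.0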

That concludes "how to set a random seed in PyTorch". Thanks for reading!