
PyTorch (Part 1)

2024/10/24 · Source: https://blog.csdn.net/zxt_tong/article/details/140926062

Linear models, gradient descent, and backpropagation were already covered in the deep-learning coursework, so this post goes straight to implementing those pieces with PyTorch.
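In symbols (a standard recap added here for reference, not taken from the original post), the three sections implement the model, the per-sample loss, the mean-squared-error cost, and the gradient-descent update:

\hat{y} = w x, \qquad \mathrm{loss}(x, y) = (\hat{y} - y)^2, \qquad \mathrm{cost}(w) = \frac{1}{N} \sum_{n=1}^{N} (w x_n - y_n)^2, \qquad w \leftarrow w - \alpha \, \frac{\partial \mathrm{cost}}{\partial w}

with learning rate \alpha = 0.01 in the code below.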

1. Implementing the Linear Model

import numpy as np
import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

def forward(x):
    return x * w

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

# Exhaustive search: sweep w over [0.0, 4.0] and record the MSE for each value
w_list = []
mse_list = []
for w in np.arange(0.0, 4.1, 0.1):
    print("w=", w)
    l_sum = 0
    for x_val, y_val in zip(x_data, y_data):
        y_pred_val = forward(x_val)
        loss_val = loss(x_val, y_val)
        l_sum += loss_val
        print('\t', x_val, y_val, y_pred_val, loss_val)
    print('MSE=', l_sum / 3)
    w_list.append(w)
    mse_list.append(l_sum / 3)

plt.plot(w_list, mse_list)
plt.ylabel('Loss')
plt.xlabel('w')
plt.show()
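The sweep above evaluates one candidate w at a time. As a side note (an addition, not part of the original post), the same MSE curve can be computed in one shot with NumPy broadcasting:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

ws = np.arange(0.0, 4.1, 0.1)          # candidate weights, shape (41,)
preds = ws[:, None] * x[None, :]       # predictions for every (w, x) pair, shape (41, 3)
mse = ((preds - y[None, :]) ** 2).mean(axis=1)

print('best w:', ws[mse.argmin()])     # the minimum sits at w close to 2.0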

2. Implementing Gradient Descent

import matplotlib.pyplot as plt

# prepare the training set
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

# initial guess of the weight
w = 1.0

# define the model: linear model y = w * x
def forward(x):
    return x * w

# define the cost function: MSE
def cost(xs, ys):
    cost = 0
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        cost += (y_pred - y) ** 2
    return cost / len(xs)

# define the gradient function: d(cost)/dw averaged over the training set
def gradient(xs, ys):
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)

epoch_list = []
cost_list = []
print('predict (before training)', 4, forward(4))
for epoch in range(100):
    cost_val = cost(x_data, y_data)
    grad_val = gradient(x_data, y_data)
    w -= 0.01 * grad_val  # 0.01 is the learning rate
    print('epoch:', epoch, 'w=', w, 'loss=', cost_val)
    epoch_list.append(epoch)
    cost_list.append(cost_val)
print('predict (after training)', 4, forward(4))

plt.plot(epoch_list, cost_list)
plt.ylabel('cost')
plt.xlabel('epoch')
plt.show()
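The loop above is batch gradient descent: one weight update per epoch, using the gradient averaged over the whole training set. A minimal stochastic variant, sketched here as an addition to the original post, instead updates w after every single sample; the updates are noisier but arrive once per sample rather than once per epoch:

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 1.0

def forward(x):
    return x * w

def loss(x, y):
    return (forward(x) - y) ** 2

def gradient(x, y):              # d(loss)/dw for a single sample
    return 2 * x * (x * w - y)

for epoch in range(100):
    for x, y in zip(x_data, y_data):
        w -= 0.01 * gradient(x, y)   # update immediately after each sample
    print('epoch:', epoch, 'w=', w, 'loss=', loss(x, y))

print('predict (after training)', 4, forward(4))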

3. Backpropagation

import torch

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = torch.tensor([1.0])   # initial value of w is 1.0
w.requires_grad = True    # gradients must be computed for w

def forward(x):
    return x * w          # w is a Tensor, so this builds a computational graph

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

print("predict (before training)", 4, forward(4).item())

for epoch in range(100):
    for x, y in zip(x_data, y_data):
        l = loss(x, y)    # forward: l is a tensor, and building it constructs the graph
        l.backward()      # backward: computes grad for every Tensor with requires_grad=True
        print('\tgrad:', x, y, w.grad.item())
        w.data = w.data - 0.01 * w.grad.data   # update the weight; note grad is also a tensor
        w.grad.data.zero_()                    # remember to zero the grad after each update
    print('progress:', epoch, l.item())        # read the loss with l.item(); using l directly would keep extending the graph
print("predict (after training)", 4, forward(4).item())
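For reference, the same model can also be written in the more idiomatic torch.nn / torch.optim style. This is a sketch added here, not code from the original post; note that torch.nn.Linear initializes its weight randomly rather than at 1.0:

import torch

x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

model = torch.nn.Linear(1, 1, bias=False)   # y = w * x with a single learnable weight
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    y_pred = model(x_data)           # forward pass
    l = criterion(y_pred, y_data)    # compute the MSE loss
    optimizer.zero_grad()            # clear gradients from the previous step
    l.backward()                     # backward pass
    optimizer.step()                 # update the weight

print('predict (after training)', 4, model(torch.tensor([[4.0]])).item())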
