
Deep Learning Note 5 (Machine Learning 2)

2025/4/2 Source: https://blog.csdn.net/2301_79626091/article/details/146656233

Polynomial Regression

1. Relation to linear regression

Polynomial regression is largely similar to linear regression, and much of the code can be reused. The difference is that the formula uses powers of x as features, which may need to be normalized.
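To make the connection concrete, here is a minimal NumPy sketch (not the Paddle code used later in this note): polynomial regression is just ordinary linear least squares run on expanded features [1, x, x²].

```python
import numpy as np

# Noiseless target y = 1 + 2x + 3x^2, for illustration only.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=20)
y = 1.0 + 2.0 * x + 3.0 * x ** 2

# Expand x into polynomial features [1, x, x^2], then solve the
# ordinary linear least-squares problem on the expanded matrix.
X = np.stack([np.ones_like(x), x, x ** 2], axis=1)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 4))  # recovers [1, 2, 3]
```

Everything after the feature expansion is plain linear regression, which is why the code below reuses the linear-regression machinery.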

2. Paddle APIs

paddle.sin(x, name=None)
Purpose: computes the element-wise sine of the input.
Input: a Tensor
Output: Tensor, the sine of the input

paddle.ones(shape, dtype=None)
Purpose: creates a Tensor of the given shape filled with 1s.
Input: the shape of the output
Output: an all-ones Tensor

paddle.multiply(x, y, name=None)
Purpose: element-wise multiplication.
Input: two Tensors
Output: Tensor, the element-wise product

paddle.concat(x, axis=0, name=None)
Purpose: concatenates the inputs along the given axis.
Input: a list or tuple of Tensors to concatenate, and the axis
Output: the concatenated Tensor
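For readers without Paddle installed, the NumPy calls below illustrate the same element-wise and shape semantics as the four APIs above (these are assumed NumPy analogues, not Paddle itself):

```python
import numpy as np

x = np.array([0.0, np.pi / 2])

s = np.sin(x)                    # like paddle.sin: element-wise sine
ones = np.ones((2, 2))           # like paddle.ones: all-ones array of a given shape
prod = np.multiply(ones, 3.0)    # like paddle.multiply: element-wise product
cat = np.concatenate([ones, prod], axis=0)  # like paddle.concat: join along an axis

print(s)          # [0. 1.]
print(cat.shape)  # (4, 2)
```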

3. Model: sin(2 * pi * x)

3.1 Dataset Construction

import math
import paddle

# sin function: sin(2 * pi * x)
def sin(x):
    y = paddle.sin(2 * math.pi * x)
    return y

Use the create_toy_data function defined earlier to build the training and test data:

import math
import paddle
import numpy as np
from matplotlib import pyplot as plt

def sin(x):
    y = paddle.sin(2 * math.pi * x)
    return y

def create_toy_data(func, interval, sample_num, noise=0.0, add_outlier=False, outlier_ratio=0.01):
    X = paddle.rand(shape=[sample_num]) * (interval[1] - interval[0]) + interval[0]
    y = func(X)
    epsilon = paddle.normal(0, noise, shape=[y.shape[0]])
    y += epsilon
    y_np = y.numpy()  # convert to a NumPy array
    if add_outlier:
        outlier_num = max(1, int(len(y_np) * outlier_ratio))
        outlier_idx = paddle.randint(len(y_np), shape=[outlier_num]).numpy()
        y_np[outlier_idx] *= 5  # modify the NumPy array directly
    return X.numpy(), y_np

# generate the data
func = sin
interval = (0, 1)
train_num = 15
test_num = 10
noise = 0.5

X_train, y_train = create_toy_data(func=func, interval=interval, sample_num=train_num, noise=noise)
X_test, y_test = create_toy_data(func=func, interval=interval, sample_num=test_num, noise=noise)

# reference data for plotting (converted to NumPy)
X_underlying = paddle.linspace(interval[0], interval[1], num=100).numpy()
y_underlying = sin(paddle.to_tensor(X_underlying)).numpy()

# plot
plt.rcParams['figure.figsize'] = (8.0, 6.0)
plt.scatter(X_train, y_train, facecolor="none", edgecolor='#e4007f', s=50, label="train data")
plt.plot(X_underlying, y_underlying, c='#000000', label=r"$\sin(2\pi x)$")
plt.legend(fontsize='x-large')
plt.savefig('ml-vis2.pdf')
plt.show()

Result: the training samples scatter around the underlying sin(2πx) curve (figure saved as ml-vis2.pdf).

3.2 Model Construction

def polynomial_basis_function(x, degree=2):
    """
    Inputs:
        - x: Tensor, input data, shape=[N, 1]
        - degree: int, order of the polynomial
    Example input: [[2], [3], [4]], degree=2
    Example output: [[2^1, 2^2], [3^1, 3^2], [4^1, 4^2]]
    Note: in this example, no all-ones column is generated when degree >= 1;
          when degree == 0, an all-ones Tensor with the same shape as the input is returned.
    Output:
        - x_result: Tensor
    """
    if degree == 0:
        return paddle.ones(shape=x.shape, dtype='float32')
    x_tmp = x
    x_result = x_tmp
    for i in range(2, degree + 1):
        x_tmp = paddle.multiply(x_tmp, x)  # element-wise multiplication
        x_result = paddle.concat((x_result, x_tmp), axis=-1)
    return x_result

# quick test
data = [[2], [3], [4]]
X = paddle.to_tensor(data=data, dtype='float32')
degree = 3
transformed_X = polynomial_basis_function(X, degree=degree)
print("before:", X)
print("degree", degree, "after:", transformed_X)

before: Tensor(shape=[3, 1], dtype=float32, place=CPUPlace, stop_gradient=True,
       [[2.],
        [3.],
        [4.]])
degree 3 after: Tensor(shape=[3, 3], dtype=float32, place=CPUPlace, stop_gradient=True,
       [[2. , 4. , 8. ],
        [3. , 9. , 27.],
        [4. , 16., 64.]])
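The same expansion can be checked independently of Paddle; a NumPy sketch using broadcasting reproduces the degree-3 table above:

```python
import numpy as np

# Reproduce the degree-3 basis expansion [x, x^2, x^3] with NumPy.
x = np.array([[2.0], [3.0], [4.0]])  # shape [3, 1], as in the quick test
powers = np.arange(1, 4)             # exponents 1..degree (no all-ones column)
x_poly = x ** powers                 # broadcasting [3, 1] ** [3] -> shape [3, 3]
print(x_poly)  # [[2, 4, 8], [3, 9, 27], [4, 16, 64]]
```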

3.3 Model Training

plt.rcParams['figure.figsize'] = (12.0, 8.0)

for i, degree in enumerate([0, 1, 3, 8]):  # polynomial orders to compare
    model = Linear(degree)
    X_train_transformed = polynomial_basis_function(X_train.reshape([-1, 1]), degree)
    X_underlying_transformed = polynomial_basis_function(X_underlying.reshape([-1, 1]), degree)

    model = optimizer_lsm(model, X_train_transformed, y_train.reshape([-1, 1]))  # fit the parameters
    y_underlying_pred = model(X_underlying_transformed).squeeze()
    print(model.params)

    # plot
    plt.subplot(2, 2, i + 1)
    plt.scatter(X_train, y_train, facecolor="none", edgecolor='#e4007f', s=50, label="train data")
    plt.plot(X_underlying, y_underlying, c='#000000', label=r"$\sin(2\pi x)$")
    plt.plot(X_underlying, y_underlying_pred, c='#f19ec2', label="predicted function")
    plt.ylim(-2, 1.5)
    plt.annotate("M={}".format(degree), xy=(0.95, -1.4))
    # plt.legend(bbox_to_anchor=(1.05, 0.64), loc=2, borderaxespad=0.)
    plt.legend(loc='lower left', fontsize='x-large')

plt.savefig('ml-vis3.pdf')
plt.show()

Analysis: when the order is too low, the fitted curve is too simple and the model underfits.

When the order is too high, the fitted curve is overly complex and the model overfits.

3.4 Model Evaluation

# training and test errors
training_errors = []
test_errors = []
distribution_errors = []

# iterate over polynomial orders
for i in range(9):
    model = Linear(i)
    X_train_transformed = polynomial_basis_function(X_train.reshape([-1, 1]), i)
    X_test_transformed = polynomial_basis_function(X_test.reshape([-1, 1]), i)
    X_underlying_transformed = polynomial_basis_function(X_underlying.reshape([-1, 1]), i)

    optimizer_lsm(model, X_train_transformed, y_train.reshape([-1, 1]))

    y_train_pred = model(X_train_transformed).squeeze()
    y_test_pred = model(X_test_transformed).squeeze()
    y_underlying_pred = model(X_underlying_transformed).squeeze()

    train_mse = mean_squared_error(y_true=y_train, y_pred=y_train_pred).item()
    training_errors.append(train_mse)

    test_mse = mean_squared_error(y_true=y_test, y_pred=y_test_pred).item()
    test_errors.append(test_mse)

    # distribution_mse = mean_squared_error(y_true=y_underlying, y_pred=y_underlying_pred).item()
    # distribution_errors.append(distribution_mse)

print("train errors: \n", training_errors)
print("test errors: \n", test_errors)
# print("distribution errors: \n", distribution_errors)

# plot
plt.rcParams['figure.figsize'] = (8.0, 6.0)
plt.plot(training_errors, '-.', mfc="none", mec='#e4007f', ms=10, c='#e4007f', label="Training")
plt.plot(test_errors, '--', mfc="none", mec='#f19ec2', ms=10, c='#f19ec2', label="Test")
# plt.plot(distribution_errors, '-', mfc="none", mec="#3D3D3F", ms=10, c="#3D3D3F", label="Distribution")
plt.legend(fontsize='x-large')
plt.xlabel("degree")
plt.ylabel("MSE")
plt.savefig('ml-mse-error.pdf')
plt.show()

When the order is low, the model's representational capacity is limited, and both training and test errors are high: the model underfits.

When the order is high, the model's capacity is strong, but it also learns the noise in the training data as if it were a feature. Typically the training error keeps decreasing while the test error rises sharply: the model overfits.
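This trend can be reproduced without the helper classes used above. A self-contained NumPy sketch, with np.polyfit standing in for Linear/optimizer_lsm and sample sizes chosen to mirror the toy data (an assumption, not the note's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of sin(2*pi*x).
x_tr = rng.uniform(0, 1, 15)
y_tr = np.sin(2 * np.pi * x_tr) + rng.normal(0, 0.5, 15)
x_te = rng.uniform(0, 1, 100)
y_te = np.sin(2 * np.pi * x_te) + rng.normal(0, 0.5, 100)

train_errors, test_errors = [], []
for deg in range(9):
    coefs = np.polyfit(x_tr, y_tr, deg)  # least-squares polynomial fit
    train_errors.append(np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2))
    test_errors.append(np.mean((np.polyval(coefs, x_te) - y_te) ** 2))

# Training error keeps shrinking as the degree grows; test error stops
# improving once the model starts fitting the noise.
print([round(e, 3) for e in train_errors])
print([round(e, 3) for e in test_errors])
```

The exact numbers depend on the random seed, but the qualitative gap between training and test error at high degrees is the overfitting signature described above.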

How can this be addressed?

Introduce regularization: add a penalty term to the error function so that the coefficients are discouraged from taking large values.
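With an l2 penalty λ‖w‖², the closed-form least-squares solution changes from w = (XᵀX)⁻¹Xᵀy to w = (XᵀX + λI)⁻¹Xᵀy. A minimal NumPy sketch of this (a stand-in for optimizer_lsm, using degree 5 rather than the note's 8 to keep the normal equations well-conditioned):

```python
import numpy as np

def lsm_ridge(X, y, reg_lambda=0.0):
    # Closed-form least squares with an optional l2 penalty:
    # w = (X^T X + reg_lambda * I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg_lambda * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 15)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.5, 15)

# Polynomial features [x, x^2, ..., x^5] (no all-ones column, as in the basis above).
X = x[:, None] ** np.arange(1, 6)

w_plain = lsm_ridge(X, y)                 # unregularized least squares
w_reg = lsm_ridge(X, y, reg_lambda=1e-4)  # l2-regularized least squares

# The penalty shrinks the weight vector: ||w_reg|| < ||w_plain||.
print(np.linalg.norm(w_plain), np.linalg.norm(w_reg))
```

Shrinking the coefficients is exactly what tames the wild oscillations of the unregularized high-degree fit in the plot below.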

degree = 8  # polynomial order
reg_lambda = 0.0001  # regularization coefficient

X_train_transformed = polynomial_basis_function(X_train.reshape([-1, 1]), degree)
X_test_transformed = polynomial_basis_function(X_test.reshape([-1, 1]), degree)
X_underlying_transformed = polynomial_basis_function(X_underlying.reshape([-1, 1]), degree)

model = Linear(degree)
optimizer_lsm(model, X_train_transformed, y_train.reshape([-1, 1]))
y_test_pred = model(X_test_transformed).squeeze()
y_underlying_pred = model(X_underlying_transformed).squeeze()

model_reg = Linear(degree)
optimizer_lsm(model_reg, X_train_transformed, y_train.reshape([-1, 1]), reg_lambda=reg_lambda)
y_test_pred_reg = model_reg(X_test_transformed).squeeze()
y_underlying_pred_reg = model_reg(X_underlying_transformed).squeeze()

mse = mean_squared_error(y_true=y_test, y_pred=y_test_pred).item()
print("mse:", mse)
mse_reg = mean_squared_error(y_true=y_test, y_pred=y_test_pred_reg).item()
print("mse_with_l2_reg:", mse_reg)

# plot
plt.scatter(X_train, y_train, facecolor="none", edgecolor="#e4007f", s=50, label="train data")
plt.plot(X_underlying, y_underlying, c='#000000', label=r"$\sin(2\pi x)$")
plt.plot(X_underlying, y_underlying_pred, c='#e4007f', linestyle="--", label="$deg. = 8$")
plt.plot(X_underlying, y_underlying_pred_reg, c='#f19ec2', linestyle="-.", label="$deg. = 8, \ell_2 reg$")
plt.ylim(-1.5, 1.5)
plt.annotate("lambda={}".format(reg_lambda), xy=(0.82, -1.4))
plt.legend(fontsize='large')
plt.savefig('ml-vis4.pdf')
plt.show()
