
Concise Implementation of Linear Regression

Building a single-layer neural network

For the detailed from-scratch implementation of linear regression, click here.

Import the packages:

import torch
import numpy as np
import torch.utils.data as Data  # data loading utilities
import torch.nn as nn
from torch.nn import init       # parameter initialization
import torch.optim as optim     # optimizers

Generating the dataset

We synthesize 1000 samples from the ground-truth model y = 2x₁ - 3.4x₂ + 4.2, then add Gaussian noise with standard deviation 0.01 to the labels:

num_inputs = 2       # two input features
num_examples = 1000  # 1000 samples
# ground-truth parameters
true_w = [2, -3.4]
true_b = 4.2
features = torch.tensor(np.random.normal(0, 1, (num_examples, num_inputs)), dtype=torch.float)
label = true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b
label += torch.tensor(np.random.normal(0, 0.01, size=label.size()), dtype=torch.float)
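
As a quick sanity check (my addition, assuming the code above has run), you can inspect the tensor shapes and the first generated sample:

print(features.shape)         # torch.Size([1000, 2])
print(label.shape)            # torch.Size([1000])
print(features[0], label[0])  # the first sample and its noisy label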

Reading the data

batch_size = 30
dataset = Data.TensorDataset(features, label)
data_it = Data.DataLoader(dataset, batch_size, shuffle=True)
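
To verify the loader works, here is a minimal sketch that pulls a single minibatch and stops:

for X, y in data_it:
    print(X.shape, y.shape)  # torch.Size([30, 2]) torch.Size([30])
    break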

Defining the model

There are several ways to define the model:
Method 1: subclass nn.Module

class LinearNet(nn.Module):
    def __init__(self, n_feature):
        super(LinearNet, self).__init__()
        self.linear = nn.Linear(n_feature, 1)

    def forward(self, x):
        y = self.linear(x)
        return y

net = LinearNet(num_inputs)
for param in net.parameters():
    print(param)

Method 2: nn.Sequential

net = nn.Sequential(
    nn.Linear(num_inputs, 1)
)

Method 3: nn.Sequential() + add_module

net = nn.Sequential()
net.add_module('linear', nn.Linear(num_inputs, 1))

Method 4: pass an OrderedDict

from collections import OrderedDict
net = nn.Sequential(OrderedDict([
    ('linear', nn.Linear(num_inputs, 1))
]))
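
Whichever construction you pick, printing the network shows its structure. For the named-module builds (Methods 3 and 4) the output looks like this:

print(net)
# Sequential(
#   (linear): Linear(in_features=2, out_features=1, bias=True)
# )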

Initializing the model parameters

init.normal_(net[0].weight, mean=0, std=0.01)
init.constant_(net[0].bias, val=2)
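
Note that indexing with net[0] only works when net was built with nn.Sequential. If you used the LinearNet subclass from Method 1, a sketch of the equivalent initialization addresses the layer by attribute name instead:

# equivalent for the Method 1 (subclass) version
init.normal_(net.linear.weight, mean=0, std=0.01)
init.constant_(net.linear.bias, val=2)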

The MSE loss function

loss = nn.MSELoss()
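
One caveat worth illustrating (a minimal sketch, not from the original post): nn.MSELoss expects the prediction and the target to have the same shape, which is why the training loop below reshapes y with view(-1, 1):

pred = torch.zeros(4, 1)  # the network outputs shape (batch, 1)
target = torch.zeros(4)   # the DataLoader yields labels of shape (batch,)
l = loss(pred, target.view(-1, 1))  # reshape the target to (batch, 1) before comparing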

Defining the optimization algorithm

optimizer = optim.SGD(net.parameters(), lr=0.03)
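
As a side note (a common PyTorch pattern, not shown in the original post), the learning rate can be adjusted later through optimizer.param_groups, for example to decay it between epochs:

for param_group in optimizer.param_groups:
    param_group['lr'] *= 0.1  # decay the learning rate to one tenth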

Training the model

num_epochs = 3
for epoch in range(1, num_epochs + 1):
    for X, y in data_it:
        output = net(X)
        l = loss(output, y.view(-1, 1))
        optimizer.zero_grad()  # clear the gradients; equivalent to net.zero_grad()
        l.backward()
        optimizer.step()
    print(epoch, l.item())

Finally, the epoch and loss are printed:

1 0.3572668433189392
2 0.005662666633725166
3 0.00011592111695790663
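
As a final check (my addition, following the same setup), compare the learned parameters with the ground-truth values used to generate the data; they should be close:

dense = net[0]  # the single linear layer (for the nn.Sequential builds)
print(true_w, dense.weight.data)  # should be close to [2, -3.4]
print(true_b, dense.bias.data)    # should be close to 4.2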

If you enjoyed this article, please like and bookmark it. Follow for more, and feel free to point out any mistakes!