Simple Linear Regression (Gradient Descent) in Python
Simple Linear Regression (Gradient Descent)¶
0. Import dependencies¶
In [1]:
import numpy as np
import matplotlib.pyplot as plt
1. Load the data¶
In [34]:
points = np.genfromtxt("data.csv", delimiter=",")
# points
# Extract the two columns of points as x and y
x = points[:, 0]
y = points[:, 1]
# Draw a scatter plot of the data with plt
plt.scatter(x, y)
plt.show()
2. Define the loss function¶
In [35]:
# The loss function is a function of the coefficients w and b; the data points are passed in as well
def compute_cost(w, b, points):
    total_cost = 0
    M = len(points)
    # Accumulate the squared error over every point
    for i in range(M):
        x = points[i, 0]
        y = points[i, 1]
        total_cost += (y - w * x - b) ** 2
    return total_cost / M  # a single / is float division; // would be floor (integer) division, e.g. 3 // 4
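For reference, the quantity returned by compute_cost is the mean squared error over the M data points:

$$L(w, b) = \frac{1}{M}\sum_{i=1}^{M}\left(y_i - w x_i - b\right)^2$$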
3. Define the model hyperparameters¶
In [52]:
alpha = 0.0000001   # learning rate
initial_w = 0       # initial slope
initial_b = 0       # initial intercept
num_iter = 20       # number of gradient descent iterations
4. Define the core gradient descent functions¶
In [37]:
def grad_desc(points, initial_w, initial_b, alpha, num_iter):
    w = initial_w
    b = initial_b
    # Keep a list of all loss values to visualize the descent process
    cost_list = []
    for i in range(num_iter):
        cost_list.append(compute_cost(w, b, points))
        w, b = step_grad_desc(w, b, alpha, points)
    return [w, b, cost_list]

def step_grad_desc(current_w, current_b, alpha, points):
    sum_grad_w = 0
    sum_grad_b = 0
    M = len(points)
    # Plug each point into the formula and accumulate the sums
    for i in range(M):
        x = points[i, 0]
        y = points[i, 1]
        sum_grad_w += (current_w * x + current_b - y) * x
        sum_grad_b += current_w * x + current_b - y
    # Compute the current gradient from the sums
    grad_w = 2 / M * sum_grad_w
    grad_b = 2 / M * sum_grad_b
    # Gradient descent update of the current w and b
    updated_w = current_w - alpha * grad_w
    updated_b = current_b - alpha * grad_b
    return updated_w, updated_b
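The two partial derivatives accumulated in step_grad_desc come from differentiating the loss above with respect to w and b:

$$\frac{\partial L}{\partial w} = \frac{2}{M}\sum_{i=1}^{M}\left(w x_i + b - y_i\right)x_i, \qquad \frac{\partial L}{\partial b} = \frac{2}{M}\sum_{i=1}^{M}\left(w x_i + b - y_i\right)$$

Each iteration then moves against the gradient with step size alpha, exactly as in updated_w and updated_b above.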
5. Test: run the gradient descent algorithm¶
In [54]:
w, b, cost_list = grad_desc(points, initial_w, initial_b, alpha, num_iter)
print("w is:", w)
print("b is:", b)
cost = compute_cost(w, b, points)
print("cost_list:", cost_list)
print("cost is:", cost)
plt.plot(cost_list)
Out[54]:
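With a learning rate as small as 1e-7 and only 20 iterations, the result is sensitive to alpha. The following is a minimal sketch for comparing the cost curves of a few learning rates using the grad_desc function above; the specific rates 1e-8, 1e-7, 1e-6 are illustrative choices, not part of the original notebook.

# Hypothetical experiment: compare cost curves for a few learning rates
for lr in (1e-8, 1e-7, 1e-6):
    _, _, costs = grad_desc(points, initial_w, initial_b, lr, num_iter)
    plt.plot(costs, label="alpha = %g" % lr)
plt.xlabel("iteration")
plt.ylabel("cost")
plt.legend()
plt.show()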
In [55]:
plt.scatter(x, y)
pred_y = w * x + b
plt.plot(x, pred_y, c='r')
Out[55]:
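As a sanity check on the gradient descent result, the closed-form ordinary least squares solution can be computed directly from x and y and compared with the w and b found above. This is a minimal sketch; w_ols and b_ols are names introduced here for illustration.

# Closed-form least squares fit (illustrative check, not part of the original notebook)
x_mean, y_mean = np.mean(x), np.mean(y)
w_ols = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
b_ols = y_mean - w_ols * x_mean
print("closed-form w:", w_ols, "b:", b_ols)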