Deep Learning -- Revisiting Gradient Descent + Linear Regression

Gradient Descent

Gradient descent operates on the model's parameters, namely the weight w and the bias term b: it searches for parameter values that minimize the model's loss.

The loss function depends on the input, the output, the weight, and the bias: loss = (y - (wx + b))^2. When the loss is at its minimum, wx + b approximates y.
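
Concretely, averaging the loss over N samples and differentiating with respect to each parameter gives the gradients used in the update (a standard derivation; it is where the -(2/N) factors in the code below come from):

\[
L = \frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - (w x_i + b)\bigr)^2,\qquad
\frac{\partial L}{\partial b} = -\frac{2}{N}\sum_{i=1}^{N}\bigl(y_i - (w x_i + b)\bigr),\qquad
\frac{\partial L}{\partial w} = -\frac{2}{N}\sum_{i=1}^{N} x_i\bigl(y_i - (w x_i + b)\bigr)
\]

Each step then moves the parameters against the gradient: b ← b - lr * ∂L/∂b and w ← w - lr * ∂L/∂w, where lr is the learning rate.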

A personal question: as the amount of training data grows, does recognition accuracy increase or decrease?

Linear Regression

The regression line is fitted by running gradient descent on L = (wx + b - y)^2.

Linear_Optimize.py: the loss function and the gradient update:

import numpy as np

# Compute the mean squared error of the line y = w*x + b over all points
def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        totalError += (y - (w * x + b)) ** 2
    return totalError / float(len(points))

# One gradient descent step: compute the gradients of the MSE and update b and w
def step_gradient(b_current, w_current, points, learningRate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # Gradients of the mean squared error with respect to b and w
        b_gradient += -(2 / N) * (y - ((w_current * x) + b_current))
        w_gradient += -(2 / N) * x * (y - ((w_current * x) + b_current))
    new_b = b_current - (learningRate * b_gradient)
    new_w = w_current - (learningRate * w_gradient)
    return [new_b, new_w]

# Run gradient descent for num_iterations steps from the given starting point
def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]
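
Since the post's data.csv is not included, the module can be sanity-checked with synthetic data. The following is a minimal, hypothetical smoke test: the line y = 2x + 1, the noise level, and the hyperparameters are made up for illustration.

import numpy as np
import Linear_Optimize as LO

# Hypothetical check: sample 100 noisy points around y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)
points = np.column_stack([x, y])

# Fit from b = w = 0; the recovered parameters should land near b = 1, w = 2
b, w = LO.gradient_descent_runner(points, 0, 0, 0.01, 5000)
print(b, w)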

Main script that calls and runs the optimizer:

import numpy as np
import Linear_Optimize as LO

def run():
    # Load (x, y) pairs from a two-column, comma-separated CSV file
    points = np.genfromtxt("data.csv", delimiter=",")
    learning_rate = 0.0001
    initial_b = 0
    initial_w = 0
    num_iterations = 1000
    print("Starting gradient descent at b={0}, w={1}, error={2}".format(
        initial_b, initial_w,
        LO.compute_error_for_line_given_points(initial_b, initial_w, points)))
    print("Running...")
    [b, w] = LO.gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
    print("After {0} iterations b={1}, w={2}, error={3}".format(
        num_iterations, b, w,
        LO.compute_error_for_line_given_points(b, w, points)))

if __name__ == '__main__':
    run()
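
As a side note, the per-sample loop in step_gradient can be expressed with NumPy array operations instead. The sketch below is an equivalent, vectorized single step; the name step_gradient_vectorized is mine, not from the original module.

import numpy as np

# Same math as step_gradient, written with array operations
def step_gradient_vectorized(b, w, points, lr):
    x, y = points[:, 0], points[:, 1]
    err = y - (w * x + b)               # per-sample residuals
    b_grad = -2.0 * err.mean()          # d(MSE)/db
    w_grad = -2.0 * (x * err).mean()    # d(MSE)/dw
    return [b - lr * b_grad, w - lr * w_grad]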