Logistic Regression
1. Core idea
Given a data set, determine the boundary that separates the classes; this decision boundary is the regression function we are after.
Here "regression" simply means best fit: fitting the regression function amounts to finding the best regression coefficients, i.e. assigning a suitable weight to each feature.
2. Foundations
(1) The mapping function used is the sigmoid. It is preferable to the 0-1 step function (a square wave) because, viewed up close, the sigmoid is smooth and differentiable everywhere, while viewed from afar it still looks like a jump from 0 to 1; the step function jumps abruptly and is not differentiable at the jump, which makes it much harder to optimize.
(2) Evaluate the regression function on a sample, feed the result into the sigmoid, and you get a value between 0 and 1; the class is decided by its magnitude. For a binary problem, a value above 0.5 means class 1, otherwise class 0.
(3) The optimal regression coefficients are found by gradient ascent:
a. Gradient ascent finds a function's maximum; the more commonly mentioned gradient descent finds a minimum.
b. The gradient is just the (multivariate) derivative: it points in the direction in which the function increases fastest. It is usually written with the nabla symbol ∇.
c. The update rule is w := w + α∇f(w), where α is the step size. The rule is iterated until the iteration count hits a preset limit or the change falls within an error tolerance; a minimal sketch follows below.
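To make the update rule concrete, here is a minimal sketch (the function f and the numbers are illustrative, not from the book): gradient ascent on f(w) = -(w - 3)^2, whose gradient is -2(w - 3), climbs to the maximizer w = 3.

# toy gradient ascent on f(w) = -(w - 3)**2
def toy_gradascent(alpha=0.1, maxloop=100):
    w = 0.0
    for _ in range(maxloop):
        w += alpha * (-2.0 * (w - 3.0))   # w := w + alpha * grad f(w)
    return w

print(toy_gradascent())   # converges to ~3.0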
3. Strengths and weaknesses
Strengths: computationally cheap, easy to understand and to explain.
Weaknesses: prone to underfitting.
Suitable data: nominal and numeric data.
4. Python implementation
(1) Load a simple data set
from numpy import *
import matplotlib.pyplot as plt
# note: 'from math import *' is dropped on purpose; it would shadow numpy's vectorized exp

# load the data: each line holds two feature values and a class label
def createdata(filename):
    fr = open(filename, 'r')
    lines = fr.readlines()
    dataset = []
    labelset = []
    for each in lines:
        current_data = each.strip().split()
        # prepend a constant 1.0 so that w0 acts as the intercept term
        dataset.append([1.0, float(current_data[0]), float(current_data[1])])
        labelset.append(int(current_data[2]))
    return dataset, labelset
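createdata expects whitespace-separated lines of the form "x1 x2 label". A quick check (using the same file path that plotregression below assumes):

dataset, labelset = createdata("F:data/machinelearninginaction/Ch05/testSet.txt")
print(len(dataset), dataset[0], labelset[0])   # sample count, first sample [1.0, x1, x2], its label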
(2) Define the sigmoid function
# the sigmoid function maps any real number into (0, 1)
def sigmoid(x):
    return 1.0 / (1 + exp(-x))   # numpy's exp, so this also works elementwise on matrices
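A side note of mine, not from the book: for large negative x, exp(-x) overflows a float. If that ever bites, a numerically stable scalar variant splits on the sign of x:

# numerically stable sigmoid for scalars (assumption: inputs may be large in magnitude)
def sigmoid_stable(x):
    if x >= 0:
        return 1.0 / (1.0 + exp(-x))
    z = exp(x)                 # safe: x < 0 implies exp(x) < 1
    return z / (1.0 + z)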
(3) Define the gradient ascent algorithm
# batch gradient ascent: every update uses the whole data set
def gradascent(dataset, labelset):
    datamatrix = mat(dataset)            # m x n matrix of samples
    y = mat(labelset).transpose()        # m x 1 column of labels
    m, n = shape(datamatrix)
    a = 0.001                            # step size
    maxloop = 500                        # number of iterations
    w = ones((n, 1))                     # initial weights
    for i in range(maxloop):
        h = sigmoid(datamatrix * w)      # m x 1 vector of predictions
        error = y - h
        w = w + a * datamatrix.transpose() * error   # w := w + a * X^T (y - h)
    return w
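A quick way to exercise it (same assumed path as before):

dataset, labelset = createdata("F:data/machinelearninginaction/Ch05/testSet.txt")
w = gradascent(dataset, labelset)
print(w)    # a 3 x 1 column of fitted weights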
(4) Define stochastic gradient ascent. This is an improved version because it saves computation: each update uses one randomly drawn sample rather than the whole data set, so good weights are reached with far less arithmetic per update.
# stochastic gradient ascent: update with one random sample at a time
def gradimprove(dataset, datalabel, times=150):
    datamatrix = array(dataset)
    m, n = shape(datamatrix)
    weights = ones(n)
    for i in range(times):
        dataindex = list(range(m))       # indices of samples not yet used this pass
        for j in range(m):
            # step size decays over time but never reaches 0
            a = 4 / (i + j + 10.0) + 0.01
            randindex = int(random.uniform(0, len(dataindex)))
            sample = datamatrix[dataindex[randindex]]
            h = sigmoid(sum(sample * weights))
            error = datalabel[dataindex[randindex]] - h
            weights = weights + a * sample * error
            del(dataindex[randindex])    # sample without replacement within a pass
    return weights
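Two design choices worth noting: the decaying step size damps oscillations as training proceeds, and deleting each used index makes the sampling within a pass run without replacement, so every sample gets used. Usage mirrors gradascent:

dataset, labelset = createdata("F:data/machinelearninginaction/Ch05/testSet.txt")
weights = gradimprove(dataset, labelset, times=150)
print(weights)    # a length-3 array of fitted weights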
(5) Plot the data and the fitted decision boundary. The boundary is where sigmoid(w^T x) = 0.5, i.e. where w0 + w1*x1 + w2*x2 = 0; solving for x2 gives the straight line drawn below.
# plot the samples and the fitted decision boundary
def plotregression(weights):
    weights = array(weights).flatten()   # accept a matrix (gradascent) or an array (gradimprove)
    datamat, datalabel = createdata("F:data/machinelearninginaction/Ch05/testSet.txt")
    datastr = array(datamat)
    n = shape(datastr)[0]
    x1 = []
    y1 = []
    x2 = []
    y2 = []
    for i in range(n):
        if datalabel[i] == 1:
            x1.append(datastr[i, 1])
            y1.append(datastr[i, 2])
        else:
            x2.append(datastr[i, 1])
            y2.append(datastr[i, 2])
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.scatter(x1, y1, s=30, c='red', marker='s')   # class 1
    ax.scatter(x2, y2, s=30, c='green')             # class 0
    x = arange(-3.0, 3.0, 0.1)
    # boundary: w0 + w1*x + w2*y = 0  =>  y = (-w0 - w1*x) / w2
    y = (-weights[0] - weights[1] * x) / weights[2]
    ax.plot(x, y)
    plt.show()
(6) Classify a test vector
# classify a feature vector using the fitted weights
def classify(testdata, weights):
    weights = array(weights).flatten()         # works with the output of either trainer
    testsum = sum(array(testdata) * weights)   # w^T x
    classnum = sigmoid(testsum)
    if classnum < 0.5:
        return 0
    else:
        return 1
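For example, continuing from the snippet above (the feature values here are made up purely for illustration):

w = gradimprove(dataset, labelset)
print(classify([1.0, 0.5, 7.0], w))   # 0 or 1, depending on which side of the boundary the point falls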
(7) Run the horse-colic experiment (defined in step (8) below) several times and average the error rate. Note this is repeated random testing rather than true ten-fold cross-validation: it simply reruns the same train/test split.
# rerun the horse-colic experiment and average the error rate
def multitest(times):
    errorall = 0.0
    for i in range(times):
        error = horse()
        errorall += error
    errorrate = errorall / float(times)
    print("after %d runs the average error rate is %f" % (times, errorrate))
    return errorrate
(8) A concrete application: predicting whether a horse with colic will survive
# train on the horse-colic training set and report the error rate on the test set
def horse():
    fr1 = open("F:data/machinelearninginaction/Ch05/horseColicTraining.txt")
    fr2 = open("F:data/machinelearninginaction/Ch05/horseColicTest.txt")
    lines = fr1.readlines()
    dataset = []
    labelset = []
    for each in lines:
        current_data = each.strip().split('\t')
        vector = []
        for i in range(21):                          # 21 features per sample
            vector.append(float(current_data[i]))
        dataset.append(vector)
        labelset.append(float(current_data[21]))     # the last column is the label
    weights = gradimprove(dataset, labelset, 500)
    test_lines = fr2.readlines()
    testdata = []
    testlabel = []
    for each in test_lines:
        current_data = each.strip().split('\t')
        vector = []
        for i in range(21):
            vector.append(float(current_data[i]))
        testdata.append(vector)
        testlabel.append(float(current_data[21]))
    error = 0.0
    for i in range(len(testdata)):
        label = classify(testdata[i], weights)
        if label != testlabel[i]:
            error += 1.0
    errorrate = error / float(len(testdata))
    print("the error rate is %f" % errorrate)
    return errorrate
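To run the whole experiment end to end (ten repetitions, as in the book's multiTest):

if __name__ == "__main__":
    multitest(10)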
5. Analysis and summary
1. The book uses gradient ascent, but it is really just gradient descent with the sign flipped, since w := w + α·Xᵀ(y − h) = w − α·Xᵀ(h − y): maximizing the log-likelihood is the same as minimizing its negative.
2. For why the book writes down this weight update without deriving the gradient explicitly, see http://blog.csdn.net/dongtingzhizi/article/details/15962797; a sketch of the derivation is also given below.
3. Logistic regression, at its core, finds the boundary function that best separates the two classes.
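For completeness, here is the standard derivation (my addition; it matches the update used in gradascent). With predictions h_i = σ(wᵀx_i) and labels y_i ∈ {0, 1}, the log-likelihood is

\ell(w) = \sum_{i=1}^{m} \left[ y_i \log h_i + (1 - y_i) \log(1 - h_i) \right]

Using the identity σ'(z) = σ(z)(1 − σ(z)), its gradient collapses to

\nabla_w \ell(w) = \sum_{i=1}^{m} (y_i - h_i)\, x_i = X^\top (y - h)

so one step of gradient ascent is exactly w := w + α·Xᵀ(y − h), which is the line w = w + a * datamatrix.transpose() * error in the code.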