Naive Bayes Classifier and a Python Implementation

Bayes' Theorem

Bayes' theorem, which occupies an important place in probability theory, describes how a subjective judgment about a probability distribution (the prior probability) is revised in light of observed evidence.

A prior probability distribution (marginal probability) is a distribution based on subjective judgment rather than on observed samples; a posterior probability (conditional probability) is the conditional distribution obtained from the sample distribution together with the prior distribution of the unknown parameters.

Bayes' formula:

P(A∩B) = P(A)*P(B|A) = P(B)*P(A|B)

Rearranging gives:

P(A|B) = P(B|A)*P(A)/P(B)

where

  • P(A) is the prior (or marginal) probability of A. It is called the "prior" because it does not take B into account.

  • P(A|B) is the conditional probability of A given that B has occurred, also called the posterior probability of A.

  • P(B|A) is the conditional probability of B given that A has occurred, also called the posterior probability of B; here it serves as the likelihood.

  • P(B) is the prior (or marginal) probability of B; here it serves as the normalizing constant.

  • P(B|A)/P(B) is called the standardized likelihood.
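As a quick numeric check of the formula, here is a minimal sketch; the probabilities are made up purely for illustration:

# Bayes' formula with illustrative (made-up) numbers.
p_a = 0.4            # prior P(A)
p_b_given_a = 0.75   # likelihood P(B|A)
p_b = 0.5            # normalizing constant P(B)

p_a_given_b = p_b_given_a * p_a / p_b   # P(A|B) = P(B|A) * P(A) / P(B)
print(p_a_given_b)                      # 0.6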

Naive Bayes Classification

When estimating class-conditional probabilities, the naive Bayes classifier assumes that the attributes are conditionally independent given the class.

First, define:

  • x = {a1, a2, ...} is a sample vector, where each a is a feature attribute

  • div = {d1 = [l1, u1], ...} is a partition of a feature attribute's range into intervals

  • class = {y1, y2, ...} is the set of classes a sample may belong to

Algorithm flow (a code sketch follows the steps below):

(1) From the distribution of classes in the training set, compute the prior probability p(y[i]) of each class.

(2) For each class, compute the frequency of each feature-attribute interval: p(a[j] in d[k] | y[i]).

(3) For each sample, compute p(x|y[i]):

p(x|y[i]) = p(a[1] in d | y[i]) * p(a[2] in d | y[i]) * ...

All feature attributes of a sample are known, so the interval d into which each attribute falls is also known.

The values p(a[k] in d | y[i]) can therefore be read off from step (2), which yields p(x|y[i]).

(4) By Bayes' theorem:

p(y[i]|x) = ( p(x|y[i]) * p(y[i]) ) / p(x)

Since the denominator p(x) is the same for every class, only the numerator needs to be computed.

p(y[i]|x) is the probability that the observed sample x belongs to class y[i]; the class with the largest posterior is taken as the classification result.
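The steps above can be condensed into a short sketch for discrete features; the function and variable names below are my own, not from any library:

from collections import Counter, defaultdict

def train(samples, labels):
    # Step (1): class priors p(y[i]); step (2): per-class feature value counts.
    class_count = Counter(labels)
    priors = {y: c / len(labels) for y, c in class_count.items()}
    cond = defaultdict(Counter)   # (class, feature index) -> counts of values
    for x, y in zip(samples, labels):
        for j, a in enumerate(x):
            cond[(y, j)][a] += 1
    return priors, cond, class_count

def classify(x, priors, cond, class_count):
    # Steps (3) and (4): maximize p(x|y[i]) * p(y[i]); p(x) is a common factor.
    def score(y):
        p = priors[y]
        for j, a in enumerate(x):
            p *= cond[(y, j)][a] / class_count[y]   # p(a[j] in d | y)
        return p
    return max(priors, key=score)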

Example:

The training data set (left column: class C = 0; right column: class C = 1):

{a1 = 0, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}

{a1 = 0, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}

{a1 = 0, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}

{a1 = 1, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}

{a1 = 1, a2 = 0, C = 0} {a1 = 0, a2 = 0, C = 1}

{a1 = 1, a2 = 0, C = 0} {a1 = 1, a2 = 0, C = 1}

{a1 = 1, a2 = 1, C = 0} {a1 = 1, a2 = 0, C = 1}

{a1 = 1, a2 = 1, C = 0} {a1 = 1, a2 = 1, C = 1}

{a1 = 1, a2 = 1, C = 0} {a1 = 1, a2 = 1, C = 1}

{a1 = 1, a2 = 1, C = 0} {a1 = 1, a2 = 1, C = 1}

Compute the class priors:

P(C = 0) = 0.5

P(C = 1) = 0.5

Compute the conditional probability of each feature attribute:

P(a1 = 0 | C = 0) = 0.3

P(a1 = 1 | C = 0) = 0.7

P(a2 = 0 | C = 0) = 0.6

P(a2 = 1 | C = 0) = 0.4

P(a1 = 0 | C = 1) = 0.5

P(a1 = 1 | C = 1) = 0.5

P(a2 = 0 | C = 1) = 0.7

P(a2 = 1 | C = 1) = 0.3

Test sample:

x = {a1 = 1, a2 = 1}

p(x | C = 0) = p(a1 = 1 | C = 0) * p(a2 = 1 | C = 0) = 0.7 * 0.4 = 0.28

p(x | C = 1) = p(a1 = 1 | C = 1) * p(a2 = 1 | C = 1) = 0.5 * 0.3 = 0.15

Compute P(C | x) * p(x):

P(C = 0) * p(x | C = 0) = 0.5 * 0.28 = 0.14

P(C = 1) * p(x | C = 1) = 0.5 * 0.15 = 0.075

Since 0.14 > 0.075, the test sample is classified as C = 0.
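The hand computation can be reproduced with the sketch from the algorithm section:

# 10 samples per class, matching the data set above.
samples = [(0, 0)] * 3 + [(1, 0)] * 3 + [(1, 1)] * 4 \
        + [(0, 0)] * 5 + [(1, 0)] * 2 + [(1, 1)] * 3
labels = [0] * 10 + [1] * 10
priors, cond, class_count = train(samples, labels)
print(classify((1, 1), priors, cond, class_count))   # 0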

Python Implementation

Training a naive Bayes classifier means computing the probability tables of steps (1) and (2); applying it means computing steps (3) and (4) and picking the class with the maximum value.

As before, the classifier is wrapped in a class exposing the same interface:

import numpy as np

class NaiveBayesClassifier(object):

    def __init__(self):
        self.dataMat = list()
        self.labelMat = list()
        self.pLabel1 = 0      # prior p(C = 1); p(C = 0) is 1 - pLabel1
        self.p0Vec = list()   # per-feature conditional frequencies for class 0
        self.p1Vec = list()   # per-feature conditional frequencies for class 1

    def loadDataSet(self, filename):
        # Each line holds whitespace-separated feature values; the last column is the label.
        with open(filename) as fr:
            for line in fr.readlines():
                dataLine = [float(i) for i in line.strip().split()]
                label = dataLine.pop()  # pop the last column, which is the label
                self.dataMat.append(dataLine)
                self.labelMat.append(int(label))

    def train(self):
        # Steps (1) and (2): class prior and per-class feature frequency tables.
        dataNum = len(self.dataMat)
        featureNum = len(self.dataMat[0])
        self.pLabel1 = sum(self.labelMat) / float(dataNum)
        p0Num = np.zeros(featureNum)
        p1Num = np.zeros(featureNum)
        p0Denom = 1.0
        p1Denom = 1.0
        for i in range(dataNum):
            if self.labelMat[i] == 1:
                p1Num += self.dataMat[i]
                p1Denom += sum(self.dataMat[i])
            else:
                p0Num += self.dataMat[i]
                p0Denom += sum(self.dataMat[i])
        # Note: initializing the counts with ones and the denominators with 2.0
        # (Laplace smoothing) would keep a zero frequency from zeroing out the
        # whole product in classify().
        self.p0Vec = p0Num / p0Denom
        self.p1Vec = p1Num / p1Denom

    def classify(self, data):
        # Steps (3) and (4): compare p(x|C) * p(C); p(x) is a common factor and is omitted.
        p1 = np.prod(np.array(data) * self.p1Vec) * self.pLabel1
        p0 = np.prod(np.array(data) * self.p0Vec) * (1.0 - self.pLabel1)
        return 1 if p1 > p0 else 0

    def test(self):
        self.loadDataSet('testNB.txt')
        self.train()
        print(self.classify([1, 2]))

if __name__ == '__main__':
    NB = NaiveBayesClassifier()
    NB.test()
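The file testNB.txt is not reproduced here. From loadDataSet, each line is expected to contain whitespace-separated feature values followed by the class label in the last column; a hypothetical example (the numbers are invented purely to show the format):

1 0 0
2 1 0
0 2 1
1 3 1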

Matlab

Matlab's standard toolbox provides support for the naive Bayes classifier:

trainData = [0 1; -1 0; 2 2; 3 3; -2 -1;-4.5 -4; 2 -1; -1 -3];
group = [1 1 -1 -1 1 1 -1 -1]';
model = fitcnb(trainData, group)
testData = [5 2;3 1;-4 -3];
predict(model, testData)

fitcnb trains the model and predict makes predictions.
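For comparison, scikit-learn offers similar off-the-shelf support in Python. This is a sketch with the same toy data, assuming scikit-learn is installed; fitcnb models continuous predictors with Gaussians by default, so GaussianNB is the closest counterpart:

# scikit-learn counterpart of the Matlab example above
from sklearn.naive_bayes import GaussianNB

trainData = [[0, 1], [-1, 0], [2, 2], [3, 3], [-2, -1], [-4.5, -4], [2, -1], [-1, -3]]
group = [1, 1, -1, -1, 1, 1, -1, -1]
model = GaussianNB().fit(trainData, group)
print(model.predict([[5, 2], [3, 1], [-4, -3]]))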
