
The ID3 Algorithm

ID3 is one of the classic decision tree algorithms; for a general tutorial on decision trees, see http://blog.csdn.net/ritchiewang/article/details/46008643

The general idea of ID3: starting from the root node, compute the information gain of every candidate feature over the node's data, choose the feature with the largest information gain as the node's splitting feature, and create one child node for each distinct value of that feature; then recursively apply the same procedure to the child nodes, building the tree until every remaining feature's information gain is very small or no features are left to choose from, at which point the decision tree is complete.
Note that a decision tree built with ID3 works only on categorical attributes.
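
For reference, the two quantities involved are the entropy of a data set D with classes C1, ..., CK,

H(D) = -Σk (|Ck| / |D|) * log2(|Ck| / |D|)

and the information gain of a feature A that partitions D into subsets D1, ..., Dn,

g(D, A) = H(D) - Σi (|Di| / |D|) * H(Di)

i.e. the reduction in entropy achieved by splitting on A. These correspond directly to calShannonEnt and InfoGain in the code below.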

Algorithm steps:

Input: training data set D, feature set A, threshold e
Output: decision tree T
1. If all instances in D belong to the same class C, let T be a single-node tree, mark the node with class C, and return T.
2. If A = ∅, let T be a single-node tree, mark the node with the class C that has the most instances in D, and return T.
3. Otherwise, compute the information gain of each feature in A with respect to D, and select the feature Ak with the largest information gain.
4. If the information gain of Ak is smaller than the threshold e, let T be a single-node tree, mark the node with the class C that has the most instances in D, and return T.
5. Otherwise, for each possible value ai of Ak, partition D into non-empty subsets Di according to Ak = ai; make Ak a node, with one branch per value ai leading to a child node; the node and its children form the tree T; return T.
6. For the i-th child node, recursively call steps (1)–(5) with Di as the training set and A − {Ak} as the feature set, obtaining subtree Ti; return Ti.

Code Implementation

import math

# Load the data set: one sample per line, with space-separated values
def loadDataSet(dataPath):  
    dataset = []  
    with open(dataPath) as file:  
        lines = file.readlines()  
        for line in lines:  
            values = line.strip().split(' ')  
            dataset.append(values)  
    return dataset  
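
As a hedged illustration of the input this loader expects (the attribute names and values here are hypothetical), each line holds one sample with space-separated attribute values, and the convention in the rest of the code is that the last column is the class label:

sunny hot high weak no
sunny hot high strong no
overcast hot high weak yes
rainy mild high weak yes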

# Split the data set: return the subset whose value for feature `axis` equals `value`, with that column removed
def splitDataSet(dataset, axis, value):  
    retDataSet = []  
    for featVec in dataset:  
        if featVec[axis] == value:  
            reducedFeatVec = featVec[:axis]  
            reducedFeatVec.extend(featVec[axis+1:])  
            retDataSet.append(reducedFeatVec)  
    return retDataSet  
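
A quick sanity check of the splitter on a toy list (values chosen only for illustration): rows whose column 0 equals 'sunny' are kept, with that column removed.

splitDataSet([['sunny', 'hot', 'no'], ['rainy', 'mild', 'yes']], 0, 'sunny')
# -> [['hot', 'no']]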

# Compute the Shannon entropy of the data set (class label is the last column)
def calShannonEnt(dataset):
    numEntries = len(dataset) * 1.0
    labelCounts = dict()
    for featVec in dataset:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = labelCounts[key] / numEntries
        shannonEnt -= prob * math.log(prob, 2)
    return shannonEnt
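
For example, a perfectly mixed two-class set has an entropy of 1 bit, while a pure set has an entropy of 0:

calShannonEnt([['a', 'yes'], ['b', 'no']])   # -> 1.0
calShannonEnt([['a', 'yes'], ['b', 'yes']])  # -> 0.0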

# Compute the information gain from splitting the data set on feature `axis`
def InfoGain(dataset, axis, baseShannonEnt):  
    featList = [example[axis] for example in dataset]  
    uniqueVals = set(featList)  
    newShannonEnt = 0.0  
    numEntries = len(dataset) * 1.0  
    for value in uniqueVals:  
        subDataSet = splitDataSet(dataset, axis, value)  
        ent = calShannonEnt(subDataSet)  
        prob = len(subDataSet) / numEntries  
        newShannonEnt += prob * ent  
    infoGain = baseShannonEnt - newShannonEnt  
    return infoGain
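
Continuing the toy example, splitting on column 0 below separates the classes completely, so the gain equals the base entropy and no uncertainty remains:

data = [['a', 'yes'], ['a', 'yes'], ['b', 'no']]
base = calShannonEnt(data)   # ~0.918
InfoGain(data, 0, base)      # ~0.918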

# Choose the best splitting feature: the one with the largest information gain
def ChooseBestFeatureByInfoGain(dataset):  
    numFeature = len(dataset[0]) - 1  
    baseShannonEnt = calShannonEnt(dataset)  
    bestInfoGain = 0.0  
    bestFeature = -1  
    for i in range(numFeature):  
        infoGain = InfoGain(dataset, i, baseShannonEnt)  
        if infoGain > bestInfoGain:  
            bestInfoGain = infoGain  
            bestFeature = i  
    return bestFeature
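
createTree below also calls a majorityCnt helper that the post does not define; a minimal sketch of what it must do (return the most frequent class label) is:

# Return the class label that occurs most often in classList
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        classCount[vote] = classCount.get(vote, 0) + 1
    return max(classCount, key=classCount.get)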

# Recursively build the decision tree; `labels` holds the feature names
def createTree(dataset, labels):
    classList = [example[-1] for example in dataset]
    # Base case 1: all samples belong to the same class
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    # Base case 2: no features left to split on, fall back to a majority vote
    if len(dataset[0]) == 1:
        return majorityCnt(classList)
    bestFeature = ChooseBestFeatureByInfoGain(dataset)
    bestFeatureLabel = labels[bestFeature]
    myTree = {bestFeatureLabel: {}}
    del(labels[bestFeature])  # note: this mutates the caller's labels list
    featValues = [example[bestFeature] for example in dataset]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]  # copy so sibling branches don't interfere
        myTree[bestFeatureLabel][value] = \
            createTree(splitDataSet(dataset, bestFeature, value), subLabels)
    return myTree
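
Putting it all together, a hedged end-to-end run (the file name 'tennis.txt' and the attribute names are hypothetical; any file in the format shown earlier works):

dataset = loadDataSet('tennis.txt')  # hypothetical file, last column is the class
labels = ['outlook', 'temperature', 'humidity', 'wind']  # hypothetical feature names
tree = createTree(dataset, labels)
print(tree)  # nested dicts, e.g. {'outlook': {'overcast': 'yes', ...}}

Note that createTree mutates labels via del, so pass in a copy if you need the list afterwards.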
