Data Scientist — Implementing Common Machine Learning Algorithms in Java (learning ML from a senior classmate at Baidu's Institute of Deep Learning)

Starting 2016-05-02, I decided to properly record everything about my journey toward becoming a data scientist. These are my study notes.

-------------------------------------------------------------------------------------------------

Part 1: In 2014 I spent some time learning machine learning fundamentals from a senior classmate (a T7 at Baidu). Here I implement the basic algorithms in Java and review those fundamentals.

-------------------------------------------------------------------------------------------------

 

 

-------------------------------------------------------------------------------------------------

Lesson 6: Logistic Regression

-------------------------------------------------------------------------------------------------

http://www.cnblogs.com/keedor/p/4459196.html has the formula derivations.

A loss function is typically composed of two parts: a loss term and a regularization term. A good introduction:

http://www.ics.uci.edu/~dramanan/teaching/ics273a_winter08/lectures/lecture14.pdf ("Loss functions; a unifying view").
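In symbols, the objective usually takes the generic form below, with λ controlling the strength of the regularization:

J(θ) = (1/m) Σ_{i=1..m} loss(h_θ(x^(i)), y^(i)) + λ * R(θ)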
 
1. Loss term
  • For regression: squared loss (for linear regression) and absolute-value loss are common.
  • For classification: hinge loss (for soft-margin SVM) and log loss (for logistic regression) are common.
  • Hinge loss further splits into plain hinge loss (often called L1 loss) and squared hinge loss (often called L2 loss). Liblinear, released by Chih-Jen Lin of National Taiwan University, implements both. Note that L1/L2 loss is different from the L1/L2 regularization below; keep them apart.
2. Regularization term
  • The common choices are L1 regularization and L2 regularization. The reference above has a detailed summary of both.

The logistic regression here uses log loss. The derivation: write out the likelihood, take its log to get ℓ(θ), and define the loss function as -ℓ(θ)/m. Each partial derivative ∂J/∂θ_j then follows. Because of the factor of -1, the problem is solved with gradient descent (maximum likelihood seeks a maximum; negating it turns that into a minimization).
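Spelled out, with m samples and (x^(i), y^(i)) the i-th sample:

L(θ) = Π_{i=1..m} h_θ(x^(i))^(y^(i)) * (1 - h_θ(x^(i)))^(1 - y^(i))
ℓ(θ) = log L(θ) = Σ_{i=1..m} [ y^(i) log h_θ(x^(i)) + (1 - y^(i)) log(1 - h_θ(x^(i))) ]
J(θ) = -(1/m) ℓ(θ)
∂J/∂θ_j = (1/m) Σ_{i=1..m} (h_θ(x^(i)) - y^(i)) x_j^(i)

The last line is exactly what computeGradient implements in the code below.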

The implementation is below.

Problem:

Problem: Logistic Regression
Description:
Logistic regression is one of the most basic and most widely used classification algorithms in machine learning. It builds on linear regression by adding the sigmoid (logistic) function, which makes 0/1 classification straightforward. In this problem you need to implement a logistic regression trainer using gradient descent.
Details:
The logistic regression hypothesis is
h_θ(x) = g(θ^T x) = g(θ_0 + θ_1 x_1 + θ_2 x_2 + ... + θ_n x_n)
where n is the number of features, θ_0 through θ_n are the model parameters to learn, and x_1 through x_n are the features.
g is the sigmoid function:
g(z) = 1 / (1 + e^(-z))
The cost function to optimize during training is
J(θ) = -(1/m) Σ_{i=1..m} [ y^(i) log h_θ(x^(i)) + (1 - y^(i)) log(1 - h_θ(x^(i))) ]
where m is the number of training samples and y is the sample's label, 0 or 1.
Gradient descent iterates (α is the learning rate)
θ_j := θ_j - (α/m) Σ_{i=1..m} (h_θ(x^(i)) - y^(i)) x_j^(i)
over θ_0, θ_1, θ_2, ..., θ_n until J(θ_0, θ_1, θ_2, ..., θ_n) reaches the global minimum.
The resulting θ_0, θ_1, θ_2, ..., θ_n are the trained logistic regression model.
Problem input/output:
Input:
First the positive integers n, m, α, t: the number of features, the number of training samples, the learning rate, and the number of iterations, with 1 < n <= 1000, 1 < m <= 1000, 1 <= t <= 1000.
Then m lines, each holding one sample's n feature values (x_1, x_2, ..., x_n) and its observed label y.
Output:
First t lines with the J value computed after each iteration, then one line with n+1 floating-point numbers: θ_0, θ_1, θ_2, ..., θ_n.
Sample Input1:
2 12 0.001 10
34.6 78.0 0
30.2 43.8 0
35.8 72.9 0
60.1 86.3 1
79.0 75.3 1
45.0 56.3 0
61.1 96.5 1
75.0 46.5 1
76.0 87.4 1
84.4 43.5 1
95.8 38.2 0
75.0 30.6 0
Sample Output1:
0.693542
0.692222
0.692593
0.691437
0.691784
0.690761
0.691085
0.690169
0.690472
0.689647
-0.000744 0.000890 -0.000123

Code:

import java.util.Scanner;


public class LogisticRegression {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int featureNumber = in.nextInt();
        int sampleNumber = in.nextInt();
        double alpha = in.nextDouble();
        int iterateNumber = in.nextInt();
        double[][] x = new double[sampleNumber][featureNumber + 1];
        double[] theta = new double[featureNumber + 1];
        int[] y = new int[sampleNumber];
        for (int i = 0; i < sampleNumber; ++i) {
            x[i][0] = 1;
            for (int j = 1; j <= featureNumber; ++j) {
                x[i][j] = in.nextDouble();
            }
            y[i] = in.nextInt();
        }
        double[] J = new double[iterateNumber];
        gradientDescent(x, y, theta, featureNumber, sampleNumber, alpha, iterateNumber, J);
        for (int i = 0; i < iterateNumber; ++i) {
            System.out.println("第i次迭代" + J[i]);
        }
        System.out.println();
        for (int i = 0; i <= featureNumber; ++i) {
            System.out.print(theta[i] + "  ");
        }
        System.out.println();
    }
    
    // sigmoid (logistic) function
    public static double sigmoid(double z) {
        return 1 / (1 + Math.exp(-z));
    }
    // compute the hypothesis h_theta(x)
    public static double hypothesis(double[] x, double[] theta, int featureNumber) {
        double h = 0;
        for (int i = 0; i <= featureNumber; ++i) {
            h += x[i] * theta[i];
        }
        return sigmoid(h);
    }
    // compute the partial derivative of J with respect to theta[featurePos]
    public static double computeGradient(double[][] x, int[] y, double[] theta, int featureNumber, int featurePos, int sampleNumber) {
        // featurePos is the index j of the theta being differentiated
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            double h = hypothesis(x[i], theta, featureNumber);
            sum += (h - y[i]) * x[i][featurePos];
        }
        return sum / sampleNumber;
    }
    // train the parameters with gradient descent
    public static void gradientDescent(double[][] x, int[] y, double[] theta, int featureNumber, int sampleNumber, double alpha, int iterateNumber, double[] J) {
        for (int i = 0; i < iterateNumber; ++i) {
            // one full pass per iteration
            double[] temp = new double[featureNumber + 1];
            for (int j = 0; j <= featureNumber; ++j) {
                temp[j] = theta[j] - alpha * computeGradient(x, y, theta, featureNumber, j, sampleNumber);
            }
            // temp holds the new values so every gradient in this iteration is computed against the same old theta; updating theta in place would corrupt the remaining gradients
            for (int j = 0; j <= featureNumber; ++j) {
                theta[j] = temp[j];
            }
            J[i] = computeCost(x, y, theta, featureNumber, sampleNumber);
        }
    }
    // compute the cost function J(theta)
    public static double computeCost(double[][] x, int[] y, double[] theta, int featureNumber, int sampleNumber) {
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            double h = hypothesis(x[i], theta, featureNumber);
            sum += -y[i] * Math.log(h) - (1 - y[i]) * Math.log(1 - h);
        }
        return sum / sampleNumber;
    }
}
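The trainer above uses the bare log loss; the regularization term discussed at the top of this lesson is omitted. As a minimal sketch (not part of the original exercise), L2 regularization could be added to the gradient like this, assuming a hypothetical lambda hyperparameter and reusing the hypothesis method from the class above; by convention the bias term theta[0] is left unregularized:

    // Hypothetical L2-regularized variant of computeGradient (lambda is an assumed parameter).
    public static double computeGradientL2(double[][] x, int[] y, double[] theta, int featureNumber, int featurePos, int sampleNumber, double lambda) {
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            double h = hypothesis(x[i], theta, featureNumber);
            sum += (h - y[i]) * x[i][featurePos];
        }
        double gradient = sum / sampleNumber;
        if (featurePos > 0) { // do not regularize the bias term theta[0]
            gradient += (lambda / sampleNumber) * theta[featurePos];
        }
        return gradient;
    }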

 

-------------------------------------------------------------------------------------------------

Lesson 5: k-Nearest Neighbors (KNN)

-------------------------------------------------------------------------------------------------

 

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;


class KNN {
    double distance; // distance from the test point to a training sample
    int label; // that training sample's label
}
public class KNearestNeighbors {
    
    static int MAX_CLASS_NUMBER = 2;
    
    public static void main(String[] args) {
        double[][] train = {{34.6, 70.8}, {30.2, 43.8}, {60.1, 86.3}, {45.0, 56.3}, {61.1, 96.5}, {75.0, 46.5}, {76.0, 87.4}, {84.4, 43.5}, {55.8, 38.2}, {45.0, 30.6}};
        int[] label = {0, 0, 1, 0, 1, 1, 1, 1, 0, 0};
        double[] test1 = {35.8, 72.9};
        double[] test2 = {79.0, 75.3};
        int k = 4;
        int sampleNumber = 10;
        int featureNumber = 2;
        System.out.println("test1 是" + kNearestNeighborsPredict(train, label, sampleNumber, k, test1, featureNumber));
        System.out.println("test2是" + kNearestNeighborsPredict(train, label, sampleNumber, k, test2, featureNumber));
    }
    public static int kNearestNeighborsPredict(double[][] train, int[] label, int sampleNumber, int K, double[] test, int featureNumber) {
        ArrayList<KNN> knnList = new ArrayList();
        for (int i = 0; i < sampleNumber; ++i) {
            KNN temp = new KNN();
            temp.distance = computeDistance(train[i], test, featureNumber);
            temp.label = label[i];
            knnList.add(temp);
        }
        // sort knnList by distance, ascending
        Collections.sort(knnList, new Comparator<KNN>() {
            public int compare(KNN a, KNN b) {
                return Double.compare(a.distance, b.distance);
            }
        });
        // count class occurrences among the K nearest samples
        int[] classCount = new int[MAX_CLASS_NUMBER];
        for (int i = 0; i < K; ++i) {
            classCount[knnList.get(i).label]++;
        }
        System.out.println(Arrays.toString(classCount));
        int result = 0;
        int maxCount = -1;
        for (int i = 0; i <MAX_CLASS_NUMBER; ++i) {
            if (maxCount < classCount[i]) {
                result = i;
                maxCount = classCount[i];
            }
        }
        return result;
    }
    public static double computeDistance(double[] sample1, double[] sample2, int featureNumber) {
        double result = 0;
        for (int i = 0; i < featureNumber; ++i) {
            result += (sample1[i] - sample2[i]) * (sample1[i] - sample2[i]);
        }
        return Math.sqrt(result);
    }
}

 

-------------------------------------------------------------------------------------------------

Lesson 4: Naive Bayes

-------------------------------------------------------------------------------------------------

(1) The idea of naive Bayes: given that each feature in the vector takes a fixed value, which class label is most likely? So for each class y_i we compute p(y_i | x_1, x_2, ..., x_n), which equals p(y_i, (x_1, x_2, ..., x_n)) / p(x_1, x_2, ..., x_n).

The denominator is the same for every class, so only the numerator matters. It expands to p((x_1, x_2, ..., x_n) | y_i) * p(y_i), and by the conditional-independence ("naive") assumption this becomes p(x_1 | y_i) * p(x_2 | y_i) * ... * p(x_n | y_i) * p(y_i). So we allocate a 3-D array pFeatureValue[i][j][k]: the probability that feature i takes value j given class k. Once this probability table is built during training, prediction is just lookups and multiplications.
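Compactly, prediction picks the class with the largest joint score:

ŷ = argmax_k  p(y_k) * Π_{j=1..n} p(x_j | y_k)

which is exactly what naiveBayesianPredict below computes from pClass and pFeatureValue.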

Full code:

import java.util.Arrays;

public class NaiveBayesianPredict {
    static final int MAX_CLASS_NUMBER = 4; // at most 4 class labels
    static final int MAX_FEATURE_VALUE_NUMBER = 3; // each feature takes at most 3 distinct values
    static final int MAX_FEATURE_DIMENSION = 3; // at most 3 features per sample
    public static void main(String[] args) {
        String[] gClassValue = {"Dota", "Study", "Dating", "Basketball"};
        int[][] x = {{2, 1, 1}, {2, 0, 1}, {1, 1, 1}, {0, 1, 0}, {0, 0, 1}, {0, 1, 0}, {1, 0, 0}, {1, 0, 1}, {1, 1, 1}, {2, 0, 0}};
        int[] y = {0, 1, 0, 0, 2, 0, 1, 3, 0, 1};
        int featureNumber = 3;
        int sampleNumber = 10;
        int testNumber = 5;
        int classNumber = 4;
        int[][] test = {{1, 0, 1}, {2, 0, 1}, {1, 1, 1}, {2, 1, 0}, {0, 0, 0}};
        double[][][] pFeatureValue = new double[MAX_FEATURE_DIMENSION][MAX_FEATURE_VALUE_NUMBER][MAX_CLASS_NUMBER];
        double[] pClass = new double[MAX_CLASS_NUMBER];
        naiveBayesianTrain(x, y, featureNumber, sampleNumber, pFeatureValue, pClass);
        System.out.println("pClass = " + Arrays.toString(pClass));
        for (int i = 0; i < pFeatureValue.length; ++i) {
            for (int j = 0; j < pFeatureValue[0].length; ++j) {
                for (int k = 0; k < pFeatureValue[0][0].length; ++k) {
                    System.out.println(i + "," + j + "," + k + ":" + pFeatureValue[i][j][k]);
                }
            }
        }
        for (int i = 0; i < testNumber; ++i) {
            double[] pResult = new double[MAX_CLASS_NUMBER];
            System.out.println("预测" + Arrays.toString(test[i]) + gClassValue[naiveBayesianPredict(test[i], featureNumber, pFeatureValue, pClass, pResult, classNumber)]);
            System.out.println(Arrays.toString(pResult));
        }
    }
    /**
     * 
     * @param x training matrix
     * @param y sample labels
     * @param featureNumber number of features
     * @param sampleNumber  number of samples
     * @param pFeatureValue [feature][value][class]: probability that the feature takes this value given the class
     * @param pClass prior probability of each class
     */
    public static void  naiveBayesianTrain(int[][] x, int[] y, int featureNumber, int sampleNumber, double[][][] pFeatureValue, double[] pClass) {
        int[] classCount = new int[MAX_CLASS_NUMBER]; // occurrences of each class label
        for (int i = 0; i < sampleNumber; ++i) {
            classCount[y[i]]++;
        }
        // compute pClass: the prior probability of each class
        for (int i = 0; i < MAX_CLASS_NUMBER; ++i) {
            pClass[i] = classCount[i] / (double)sampleNumber;
        }
        // for each class, compute the conditional probability of every value of every feature
        for (int i = 0; i < featureNumber; ++i) { // loop over features
            // featureValueClassCount[value][class]: counts, for the current feature i, samples of that
            // class with that value (e.g. for feature 0 = deadline: how often class 0 = Dota has value urgent)
            int[][] featureValueClassCount = new int[MAX_FEATURE_VALUE_NUMBER][MAX_CLASS_NUMBER]; 
            for (int j = 0; j < sampleNumber; ++j) {
                featureValueClassCount[x[j][i]][y[j]]++; // x[j][i] is sample j's value for feature i; pair it with the sample's label y[j]
            }
            for (int j = 0; j < MAX_FEATURE_VALUE_NUMBER; ++j) {
                for (int k = 0; k < MAX_CLASS_NUMBER; ++k) {
                    if (classCount[k] != 0) {
                        pFeatureValue[i][j][k] = (double)featureValueClassCount[j][k] / classCount[k]; // count of class k with feature i = value j, divided by the total count of class k
                    }
                }
            }
        }
    }
    public static int naiveBayesianPredict(int[] x, int featureNumber, double[][][] pFeatureValue, double[] pClass, double[] pResult, int classNumber) {
        int finalResult = -1;
        double finalP = -1;
        // start each score from the class prior; the feature probabilities are multiplied onto it
        for (int i = 0; i < classNumber; ++i) {
            pResult[i] = pClass[i];
        }
        for (int i = 0; i < classNumber; ++i) { // 这次预测的目的就是找出哪一个样本结果标记的概率最大,所以样本标记是外层遍历变量
            for (int j = 0; j < featureNumber; ++j) {
                pResult[i] *= pFeatureValue[j][x[j]][i]; // multiply in p(feature j = x[j] | class i)
            }
            if (finalP < pResult[i]) {
                finalP = pResult[i];
                finalResult = i;
            }
        }
        return finalResult; // return the most probable class label
    }
}
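One weakness of the trainer above: if a (feature value, class) pair never occurs in the training data, its conditional probability is 0, and a single zero wipes out the entire product at prediction time. A common remedy is Laplace (add-one) smoothing. A minimal sketch of the change inside naiveBayesianTrain, assuming every feature can take MAX_FEATURE_VALUE_NUMBER distinct values:

                        // Hypothetical smoothed estimate replacing the unsmoothed line above:
                        // add 1 to every count so no conditional probability is exactly zero.
                        pFeatureValue[i][j][k] = (featureValueClassCount[j][k] + 1.0)
                                / (classCount[k] + MAX_FEATURE_VALUE_NUMBER);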

 

-------------------------------------------------------------------------------------------------

Lesson 3: Decision Trees

-------------------------------------------------------------------------------------------------

1. Background:

(1) Information entropy and information gain:

1. The larger the entropy, the more disordered the information and the more information it carries.
2. Information gain is the difference between the entropy before a split and the weighted entropy after it; the larger the gain, the better the split. The pre-split entropy is a fixed value, so the smaller the post-split entropy, the larger the gain.
3. In a decision tree we only need the feature with the largest gain, i.e. the feature whose split minimizes the post-split entropy, so the pre-split entropy never has to be computed. The formulas are spelled out below.
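With p_c the fraction of samples in class c, and S_v the subset of S whose feature A takes value v:

H(S) = -Σ_c p_c * log2(p_c)
Gain(S, A) = H(S) - Σ_v (|S_v| / |S|) * H(S_v)

calculateAfterSplitingEntropy in the code below computes only the second summation, and findBestFeatureName picks the feature that minimizes it.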

(2) The ID3 algorithm:

ID3 is a greedy algorithm for constructing decision trees. It originated from the Concept Learning System (CLS) and uses the rate of decrease of information entropy as the criterion for selecting the test attribute: at each node it picks the not-yet-used attribute with the highest information gain as the split, and repeats until the resulting tree classifies the training examples perfectly.

 

2. Decision tree code:

(1) Motivating problem:

Deadline? Is there a dota game? Bad mood? Activity
Urgent Yes Yes Dota
Urgent No Yes Study
Near Yes Yes Dota
None Yes No Dota
None No Yes Dating
None Yes No Dota
Near No No Study
Near No Yes Basketball
Near Yes Yes Dota
Urgent No No Study

Predict:
urgent yes no
none no no

Full code:

import java.util.ArrayList;
import java.util.HashSet;

public class DecisionTree {
    static int MAX_CLASS_NUMBER = 8;
    static int MAX_FEATURE_VALUE_NUMBER = 32;
    // Labels are stored as integers, so build readable mappings for them.
    // Feature names: 0 = "Deadline?", 1 = "Is there a dota game?", 2 = "Bad mood?"
    static String[] getFeatureName = {"Deadline?", "Is there a dota game?", "Bad mood?"};
    // Class labels: Dota = 0, Study = 1, Dating = 2, Basketball = 3
    static String[] getClassName = {"Dota", "Study", "Dating", "Basketball"};
    // All feature-value mappings
    static String[][] everyFeatureValue = {{"Deadline is none,", "Deadline is near,", "Deadline is urgent,"}, {"no dota game,", "there is a dota game,"}, {"bad mood", "good mood"}};
    public static void main(String[] args) {
        int[][] dataOri = {{2, 1, 1}, {2, 0, 1}, {1, 1, 1}, {0, 1, 0}, {0, 0, 1}, {0, 1, 0}, {1, 0, 0}, {1, 0, 1}, {1, 1, 1}, {2, 0, 0}};
        int[] classOri = {0, 1, 0, 0, 2, 0, 1, 3, 0, 1};
        // store everything in lists so the later manipulation is easier
        ArrayList<ArrayList<Integer>> data = new ArrayList<>();
        ArrayList<Integer> classValue = new ArrayList<>();
        ArrayList<Integer> featureName = new ArrayList<>();
        // a sample consists of exactly these three pieces; everything below revolves around them
        // first, store all three
        for (int i = 0; i < dataOri.length; ++i) {
            ArrayList<Integer> temp = new ArrayList();
            for (int j = 0; j < dataOri[i].length; ++j) {
                temp.add(dataOri[i][j]);
            }
            data.add(temp);
            classValue.add(classOri[i]);
        }
        for (int i = 0; i < dataOri[0].length; ++i) {
            featureName.add(i);
        }
        // with the data in place, build the tree
        ArrayList<Node> tree = new ArrayList<>(); // all nodes live here; the first one is the root
        constructDecisionTree(data, classValue, featureName, tree);
        // with the tree built, use it for prediction
        // the samples to predict:
        int[] sample1 = {2, 1, 0};
        int[] sample2 = {0, 0, 0};
        System.out.print("当情况是:");
        for (int i = 0; i < sample1.length; ++i) {
            System.out.print(everyFeatureValue[i][sample1[i]] +" ");
        }
        System.out.println("的时候,预测出他要" + getClassName[predict(sample1, tree.get(0))]);
        System.out.print("当情况是:");
        for (int i = 0; i < sample2.length; ++i) {
            System.out.print(everyFeatureValue[i][sample2[i]] +" ");
        }
        System.out.println("的时候,预测出他要" + getClassName[predict(sample2, tree.get(0))]);
        // 打印以下这棵树:
        System.out.println("这棵树如下所示:");
        preorderDecisionTree(tree.get(0), tree.get(0), 0, 0);
    }
    // printing the tree is purely for inspection; prediction works fine without it
    public static void preorderDecisionTree(Node father, Node node, int value, int level) {
        // value is the feature value of the branch taken from the parent
        System.out.print("Level " + level + ": ");
        for (int i = 0; i < level + 1; ++i) {
            System.out.print("---");
        }
        if (node.type != -1) {
            // type != -1 means this is a leaf node
            System.out.print(" leaf node (" + getClassName[node.type] + ")");
        } else {
            System.out.print(" internal node (" + getFeatureName[node.featureName] + ")");
        }
        if (level != 0) {
            System.out.println(" 父亲是:" + getFeatureName[father.featureName] + " 分支特征值是:" + everyFeatureValue[father.featureName][value]);
        } else {
            System.out.println();
        }
        // done printing this node; recurse into all children
        for (int i = 0; i < MAX_FEATURE_VALUE_NUMBER; ++i) {
            if (node.child[i] != null) {
                preorderDecisionTree(node, node.child[i], i, level + 1);
            }
        }
    }
    // predict a sample's class by walking the built tree
    public static int predict(int[] sample, Node root) {
        Node ptr = root;
        while (ptr.type == -1) { // type == -1 means an internal node; a leaf carries a concrete class
            int value = sample[ptr.featureName]; // the sample's value for the feature tested at this node
            ptr = ptr.child[value]; // child[value] is the subtree for that feature value; descend until a leaf
        }
        return ptr.type;
    }
    public static Node constructDecisionTree(ArrayList<ArrayList<Integer>> data, ArrayList<Integer> classValue, ArrayList<Integer> featureName, ArrayList<Node> tree) {
        Node node = new Node();
        tree.add(node);
        node.type = checkIsLeaf(classValue, featureName.size());
        // node.type != -1 means there is only one class left, or the features are used up: make a leaf
        if (node.type != -1) {
            return node;
        }
        // not forced into a leaf yet, so find the feature that yields the largest information gain
        int[] bestFeatureName = new int[1];
        int[] bestFeatureNamePos = new int[1];
        findBestFeatureName(data, classValue, featureName, bestFeatureName, bestFeatureNamePos);
        node.featureName = bestFeatureName[0]; // store the best feature in the new node
        // collect the best feature's column (all of its values); it is used to partition the samples
        HashSet<Integer> valueSet = new HashSet<>(); // hash the values: each distinct value forms one branch
        for (int i = 0; i < data.size(); ++i) {
            valueSet.add(data.get(i).get(bestFeatureNamePos[0])); // the distinct values in column bestFeatureNamePos
        }
        for (int s : valueSet) {
            // extracting a sub-sample has one purpose: a new data, a new classValue, and a new featureName
            // each value of this feature yields one group of sub-samples
            ArrayList<ArrayList<Integer>> newData = new ArrayList<>();
            ArrayList<Integer> newClassValue = new ArrayList<>();
            ArrayList<Integer> newFeatureName = new ArrayList<>();
            getDataByFeatureValue(data, classValue, featureName, bestFeatureNamePos[0], s, newData, newClassValue, newFeatureName);
            // with the sub-sample extracted, build its subtree as a child of this node, recursively
            node.child[s] = constructDecisionTree(newData, newClassValue, newFeatureName, tree);
            // child[s] is the branch for feature value s; the preorder print uses this later
        }
        return node;
    }
    // a split produces several sub-samples, so we need a function to extract a sub-sample: the three
    // essential pieces (feature names, data, class labels), restricted to one value of one chosen feature
    public static void getDataByFeatureValue(ArrayList<ArrayList<Integer>> data, ArrayList<Integer> classValue, ArrayList<Integer> featureName, int featurePos, int featureValue, ArrayList<ArrayList<Integer>> newData, ArrayList<Integer> newClassValue, ArrayList<Integer> newFeatureName) {
        // newFeatureName omits the feature used for this split
        for (int i = 0; i < featureName.size(); ++i) {
            if (i == featurePos) {
                continue;
            }
            newFeatureName.add(featureName.get(i));
        }
        // keep only the samples whose chosen feature has the given value;
        // copy each matching row's data and class, skipping the split column itself
        for (int i = 0; i < data.size(); ++i) { // row i
            if (data.get(i).get(featurePos) == featureValue) { // only samples with this feature value
                ArrayList<Integer> tempList = new ArrayList();
                // 也要跳过他自己
                for (int j = 0; j < data.get(i).size(); ++j) {
                    if (j == featurePos) {
                        continue;
                    }
                    tempList.add(data.get(i).get(j));
                }
                newData.add(tempList);
                newClassValue.add(classValue.get(i));
            }
        }
    }
    // find the feature whose split yields the largest information gain
    public static void findBestFeatureName(ArrayList<ArrayList<Integer>> data, ArrayList<Integer> classValue, ArrayList<Integer> featureName, int[] bestFeatureName, int[] bestFeatureNamePos) {
        // compute, for every feature, the entropy of the partition it induces
        ArrayList<Double> splitingEntropy = new ArrayList<>();
        for (int j = 0; j < featureName.size(); ++j) {
            // feature j
            // pull out column j
            ArrayList<Integer> featureValue = new ArrayList();
            for (int i = 0; i < data.size(); ++i) {
                featureValue.add(data.get(i).get(j));
            }
            double ret = calculateAfterSplitingEntropy(featureValue, classValue); // post-split entropy for feature j
            // keep all the entropies; the smallest one wins below
            splitingEntropy.add(ret);
        }
        bestFeatureName[0] = featureName.get(0); // the chosen feature
        bestFeatureNamePos[0] = 0; // the column the chosen feature sits in
        double minEntropy = splitingEntropy.get(0);
        for (int i = 0; i < splitingEntropy.size(); ++i) {
            if (minEntropy > splitingEntropy.get(i)) {
                minEntropy = splitingEntropy.get(i);
                bestFeatureName[0] = featureName.get(i);
                bestFeatureNamePos[0] = i;
            }
        }
    }
    public static int checkIsLeaf(ArrayList<Integer> classValue, int featureNumber) {
        int[] classValueCount = new int[MAX_CLASS_NUMBER]; // how often each class occurs at this node
        int classNumber = 0; // number of distinct classes; there may be only one left
        for (int i = 0; i < classValue.size(); ++i) {
            // 数一下有几种活动/标记
            if (classValueCount[classValue.get(i)] == 0) { // 数活动标记个数的时候,每种只记一次
                classNumber++;
            }
            classValueCount[classValue.get(i)]++; // 这个活动出现了几次
        }
        // find the majority label: if the features are exhausted and the node is still mixed,
        // the most frequent activity label becomes the leaf's class
        int maxCount = -1;
        int leafClass = -1;
        for (int i = 0; i < MAX_CLASS_NUMBER; ++i) {
            if (maxCount < classValueCount[i]) {
                maxCount = classValueCount[i];
                leafClass = i; // the majority label stands in for all samples at this leaf
            }
        }
        if (classNumber == 1 || featureNumber == 0) {
            // only one class left, or no features left: this node becomes a leaf
            return leafClass; // the class this leaf represents (exact when only one class remains)
        }
        // otherwise splitting continues; return -1
        return -1;
    }
    // one job: compute the post-split entropy, i.e. the weighted sum, over each feature value,
    // of the probability of that value times the entropy of the sub-sample it selects
    public static double calculateAfterSplitingEntropy(ArrayList<Integer> featureValue, ArrayList<Integer> classValue) {
        // count how many samples take each feature value, for the value probabilities below
        int[] featureValueCount = new int[MAX_FEATURE_VALUE_NUMBER];
        // count, per feature value, how many samples carry each class label
        int[][] featureValueClassCount = new int[MAX_FEATURE_VALUE_NUMBER][MAX_CLASS_NUMBER];
        // accumulate the counts
        int sampleNumber = classValue.size();
        for (int i = 0; i < sampleNumber; ++i) {
            featureValueCount[featureValue.get(i)]++;
            featureValueClassCount[featureValue.get(i)][classValue.get(i)]++; // pair sample i's feature value with its class
        }
        // compute the weighted entropy
        double result = 0;
        for (int i = 0; i < MAX_FEATURE_VALUE_NUMBER; ++i) {
            // 如果特征值下样本为0, 后面的是一定有空的。这里设置32.但是只有3个
            if (featureValueCount[i] == 0) {
                continue;
            }
            // entropy of the sub-sample selected by feature value i
            double entropy = 0;
            double pf = (double)featureValueCount[i] / sampleNumber; // probability of feature value i
            for (int j = 0; j < MAX_CLASS_NUMBER; ++j) {
                double p = (double) featureValueClassCount[i][j] / featureValueCount[i];
                // featureValueClassCount[i][j]: samples with feature value i and class j
                entropy += computeEntropy(p);
            }
            result += pf * entropy;
        }
        return result;
    }
    public static double computeEntropy(double p) {
        if (p == 0) {
            return 0;
        }
        return -1 * p * (Math.log(p) / Math.log(2));
    }
}
class Node {
    private static final int MAX_FEATURE_VALUE_NUMBER = 32; // must match the constant in DecisionTree
    int featureName; // for internal nodes: the feature tested here, e.g. deadline, is there a dota game
    int type; // for leaves: the final activity (Study, Dota, Basketball, Dating); -1 for internal nodes
    Node[] child = new Node[MAX_FEATURE_VALUE_NUMBER];
    Node() {
        featureName = -1;
        type = -1;
    }
}

 

 

-------------------------------------------------------------------------------------------------

Lesson 2: Linear Regression

-------------------------------------------------------------------------------------------------

 Motivating problem: predicting house prices.

 (1) Univariate linear regression (implemented with gradient descent); the model and gradients are spelled out below:
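The model, cost, and the two gradients implemented by computeCost, computeGradient0, and computeGradient1:

h(x) = w_0 + w_1 * x
J(w_0, w_1) = (1/(2m)) Σ_{i=1..m} (h(x^(i)) - y^(i))^2
∂J/∂w_0 = (1/m) Σ_{i=1..m} (h(x^(i)) - y^(i))
∂J/∂w_1 = (1/m) Σ_{i=1..m} (h(x^(i)) - y^(i)) * x^(i)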

Code:

public class LinearRegression {
    /**
     * Gradient descent. With alpha = 0.001 the weights overflow (learning rate too large);
     * with alpha = 0.000000000001 the rate is too small and even 5000 iterations give no good result.
     */
    public static void main(String[] args) {
        double[] x = {96.79, 110.39, 70.25, 99.96, 118.15, 115.08};
        double[] y = {287, 343, 199, 298, 340, 350};
        int sampleNumber = x.length;
        double alpha = 0.000000001;
        int iterateNumber = 500000000;
        double[] w0 = new double[1];
        double[] w1 = new double[1];
        gradientDescent(x, y, sampleNumber, alpha, iterateNumber, w0, w1);
        double cost = computeCost(x, y, w0, w1, sampleNumber);
        System.out.println("平均cost是:" + cost);
        System.out.println("w0 = " + w0[0] + " w1 = " + w1[0]);
        System.out.println("开始用来预测:");
        System.out.println("predict(112) = " + predict(w0, w1, 112));
        System.out.println("predict(110) = " + predict(w0, w1, 110));
        
    }
    
    
    public static double predict(double[] w0, double[] w1, double x) {
        return w0[0] + w1[0] * x;
    }
    
    
    public static void gradientDescent(double[] x, double[] y, int sampleNumber, double alpha, int iterateNumber, double[] w0, double[] w1) {
        w0[0] = 0;
        w1[0] = 1;
        while (iterateNumber-- > 0) {
            double temp0 = w0[0] - alpha * computeGradient0(x, y, w0, w1, sampleNumber);
            double temp1 = w1[0] - alpha * computeGradient1(x, y, w0, w1, sampleNumber);
            w0[0] = temp0;
            w1[0] = temp1;
        }
    }
    
    public static double computeGradient0(double[] x, double[] y, double[] w0, double[] w1, int sampleNumber) {
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            sum += w0[0] + w1[0] * x[i] - y[i];
        }
        return sum / sampleNumber;
    }
    public static double computeGradient1(double[] x, double[] y, double[] w0, double[] w1, int sampleNumber) {
        double sum = 0;
        for (int i = 0; i <sampleNumber; ++i) {
            sum += (w0[0] + w1[0] * x[i] - y[i]) * x[i];
        }
        return sum / sampleNumber;
    }
    
    // cost
    public static double computeCost(double[] x, double[] y, double[] w0, double[] w1, int sampleNumber) {
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            sum += (w0[0] + w1[0] * x[i] - y[i]) *(w0[0] + w1[0] * x[i] - y[i]);
        }
        return sum / (2 * sampleNumber);
    }
    
}

 (2) Multivariate linear regression (implemented with gradient descent)

Code. Note: "computing the gradient" means computing the partial derivative for each w[i], as written out below.
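With h(x) = Σ_{j=0..n} w_j * x_j and the bias input x_0 = 1, computeGradient below implements

∂J/∂w_j = (1/m) Σ_{i=1..m} (h(x^(i)) - y^(i)) * x_j^(i)

and gradientDescent applies w_j := w_j - α * ∂J/∂w_j, updating all w_j simultaneously.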

import java.util.Arrays;



/**
 * 
 * Note: computing the gradient means computing each w[j]'s partial derivative,
 * then stepping w[j] in the negative gradient direction.
 *
 */
public class MultivariateLinearRegression {
    
    public static void main(String[] args) {
        double[][] x = {{1, 96.79, 2, 1, 2}, {1, 110.39, 3, 1, 0}, {1, 70.25, 1, 0, 2}, {1, 99.96, 2, 1, 1}, {1, 118.15, 3, 1, 0}, {1, 115.08, 3, 1, 2}};
        int[] y = {287, 343, 199, 298, 340, 350};
        int sampleNumber = y.length;
        double alpha = 0.0001;
        int iterateNumber = 1500;
        int featureNumber = 4;
        double[] w = new double[featureNumber + 1];
        gradientDescent(x, y, w, featureNumber, sampleNumber, alpha, iterateNumber);
        double cost = computeCost(x, y, w, featureNumber, sampleNumber);
        System.out.println("平均代价是:" + cost);
        
        // predictions
        double[] test1 = {1, 112, 3, 1, 0};
        double[] test2 = {1, 110, 3, 1, 1};
        System.out.println(Arrays.toString(w));
        System.out.println("预测" + Arrays.toString(test1) + "原价是:" + "360" + ",预测结果是:" + predict(w, test1, 4));
        System.out.println("预测" + Arrays.toString(test2) + "原价是:" + "355" + ",预测结果是:" + predict(w, test2, 4));
        
    }
    
    
    
    // prediction helper
    public static double predict(double[] w, double[] x, int featureNumber) {
        double output = 0;
        for (int i = 0; i <= featureNumber; ++i) {
            output += w[i] * x[i];
        }
        return output;
    }
    // compute the cost
    public static double computeCost(double[][] x, int[] y, double[] w, int featureNumber, int sampleNumber) {
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            double output = 0; // the model's output for sample i
            for (int j = 0; j <= featureNumber; ++j) {
                output += x[i][j] * w[j];
            }
            sum += (output - y[i]) * (output - y[i]);
        }
        return sum / (2 * sampleNumber);
    }
    
    // gradient descent training loop
    public static void gradientDescent(double[][] x, int[] y, double[] w, int featureNumber, int sampleNumber, double alpha, int iterateNumber) {
        for (int i = 0; i < iterateNumber; ++i) {
//            for (int j = 0; j <= featureNumber; ++j) {
//                w[j] = w[j] - alpha * computeGradient(x, y, w, featureNumber, j, sampleNumber);
//            }
            double[] temp = new double[featureNumber + 1];
            for (int j = 0; j <= featureNumber; ++j) {
                //这是错的w[j] = w[j] - alpha * computeGradient(x, y, w, featureNumber, j, sampleNumber);
                temp[j] = w[j] - alpha * computeGradient(x, y, w, featureNumber, j, sampleNumber);
            }
            for (int j = 0; j <= featureNumber; ++j) {
                w[j] = temp[j];
            }
            // w = temp.clone() would be wrong here: it rebinds the local parameter, not the caller's array
        }
        
    }
    public static double computeGradient(double[][] x, int[] y, double[] w, int featureNumber, int featurePos, int sampleNumber) {
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            double output = 0; // the model's output for sample i
            for (int j = 0; j <= featureNumber; ++j) {
                output += x[i][j] * w[j];
            }
            sum += (output - y[i]) * x[i][featurePos]; // x[i][featurePos] is sample i's featurePos-th feature; it is the only factor multiplying w[featurePos] in the derivative
        }
        return sum / sampleNumber;
    }
}

 

Run results:

Average cost: 40.68620039474375
[-0.05344697964427516, 2.9759116205979734, 0.2911002731876694, 0.1697308897827469, -0.23033734738066725]
Predicting [1.0, 112.0, 3.0, 1.0, 0.0], actual price: 360, predicted: 334.2916862366745
Predicting [1.0, 110.0, 3.0, 1.0, 1.0], actual price: 355, predicted: 328.1095256480979

 (3) Homework:

Problem: Linear Regression
Description:
Linear regression models the relationship between one or more independent variables and a dependent variable with a linear equation; it is the most basic method for solving regression problems in machine learning. When training a linear regression model, the independent variables x are the sample's features and the dependent variable y is the sample's predicted value. In this problem you need to implement a linear regression trainer using gradient descent.

Details:
The linear regression hypothesis is
h_θ(x) = θ^T x = θ_0 + θ_1 x_1 + θ_2 x_2 + ... + θ_(n-1) x_(n-1) + θ_n x_n
where n is the number of features, θ_0 through θ_n are the model parameters to learn, and x_1 through x_n are the features.
The cost function to optimize during training is
J(θ) = (1/(2m)) Σ_{i=1..m} (h_θ(x^(i)) - y^(i))^2
where m is the number of training samples.
Gradient descent iterates (α is the learning rate)
θ_j := θ_j - (α/m) Σ_{i=1..m} (h_θ(x^(i)) - y^(i)) x_j^(i)
over θ_0, θ_1, θ_2, ..., θ_n until J(θ_0, θ_1, θ_2, ..., θ_n) reaches the global minimum.
The resulting θ_0, θ_1, θ_2, ..., θ_n are the trained linear regression model.

During linear regression training, the features generally need to be normalized (feature normalize), as follows.
For a feature X:
x(i) = ( x(i) - average(X) ) / standard_deviation(X)
That is:
    compute the mean average(X) of feature X over the m samples;
    compute the standard deviation standard_deviation(X) of feature X over the m samples;
    normalize each of feature X's m values x(i) with the formula above.
The standard deviation is the sample standard deviation (the code below divides by m - 1):
standard_deviation(X) = sqrt( Σ_{i=1..m} (x(i) - average(X))^2 / (m - 1) )
Problem input/output:
Input: first the positive integers n, m, α, t: the number of features, the number of training samples, the learning rate, and the number of iterations, with 1 < n <= 1000, 1 < m <= 1000, 1 <= t <= 1000.
Then m lines, each holding one sample's n feature values (x_1, x_2, ..., x_n) and its observed value y.
Output: first t lines with the J value computed after each iteration, then one line with n+1 floating-point numbers: θ_0, θ_1, θ_2, ..., θ_n.
Sample Input1:
2 10 0.01 10
2104 3 399900
1600 3 329900
2400 3 369000
1416 2 232000
3000 4 539900
1985 4 299900
1534 3 314900
1427 3 198999
1380 3 212000
1494 3 242500

Sample Output1:
53052890086.237
51993904900.868
50956770155.817
49941026552.120
48946224657.273
47971924687.609
47017696295.619
46083118362.109
45167778793.089
44271274321.262
30014.457 8183.543 4763.016

Answer:

import java.util.Arrays;
import java.util.Scanner;

/**
 * Note: computing the gradient means computing each w[j]'s partial derivative,
 * then stepping w[j] in the negative gradient direction.
 */
public class MultivariateLinearRegression {
    
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int m = in.nextInt();
        double a = in.nextDouble();
        int t = in.nextInt();
        double[][] x = new double[m][n + 1];
        int[] y = new int[m];
        for (int i = 0; i < m; ++i){
            x[i][0] = 1;
            for (int j = 1; j < n + 1; j++) {
                x[i][j] = in.nextDouble();
            }
            y[i] = in.nextInt();
        }
        // feature normalization
        featureNormalize(x, n, m);
        // train
        double[] w = new double[n + 1];
        gradientDescent(x, y, w, n, m, a, t);
        
    }
    public static void featureNormalize(double[][] x, int featureNumber, int sampleNumber) {
        double[] average = new double[featureNumber + 1];
        double[] s = new double[featureNumber + 1];
        for (int j = 1; j <= featureNumber; ++j) {
            double sum = 0;
            for (int i = 0; i < sampleNumber; ++i) {
                sum += x[i][j];
            }
            average[j] = sum / sampleNumber;
        }
        for (int j = 1; j <= featureNumber; ++j) {
            double sum2 = 0;
            for (int i = 0; i < sampleNumber; ++i) {
                sum2 += (x[i][j] - average[j]) * (x[i][j] - average[j]);
            }
            s[j] = Math.sqrt(sum2 / (sampleNumber - 1));
        }
        
        // s for the sample input: [0.0, 535.667600082157, 0.5676462121975467]
        for (int j = 1; j <= featureNumber; ++j) {
            for (int i = 0; i < sampleNumber; ++i) {
                x[i][0] = 1;
                x[i][j] = (x[i][j] - average[j]) / s[j];
            }
        }
        System.out.println(Arrays.toString(s));
    }
    // prediction helper
    public static double predict(double[] w, double[] x, int featureNumber) {
        double output = 0;
        for (int i = 0; i <= featureNumber; ++i) {
            output += w[i] * x[i];
        }
        return output;
    }
    // compute the cost
    public static double computeCost(double[][] x, int[] y, double[] w, int featureNumber, int sampleNumber) {
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            double output = 0; // the model's output for sample i
            for (int j = 0; j <= featureNumber; ++j) {
                output += x[i][j] * w[j];
            }
            sum += (output - y[i]) * (output - y[i]);
        }
        return sum / (2 * sampleNumber);
    }
    
    // gradient descent training loop
    public static void gradientDescent(double[][] x, int[] y, double[] w, int featureNumber, int sampleNumber, double alpha, int iterateNumber) {
        for (int i = 0; i < iterateNumber; ++i) {
            double[] temp = new double[featureNumber + 1];
            // past mistake: updating w[j] in place is wrong; record the new values in temp first
            for (int j = 0; j <= featureNumber; ++j) {
                temp[j] = w[j] - alpha * computeGradient(x, y, w, featureNumber, j, sampleNumber);
            }
            // w = temp.clone() would be wrong: it rebinds the local parameter only
            for (int j = 0; j <= featureNumber; ++j) {
                w[j] = temp[j];
            }
            System.out.println("第" + i + "次迭代:");
            System.out.println(computeCost(x, y, w, featureNumber, sampleNumber));
        }
        System.out.println("最终的w:" + Arrays.toString(w));
    }
    public static double computeGradient(double[][] x, int[] y, double[] w, int featureNumber, int featurePos, int sampleNumber) {
        double sum = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            double output = 0; // the model's output for sample i
            for (int j = 0; j <= featureNumber; ++j) {
                output += x[i][j] * w[j];
            }
            sum += (output - y[i]) * x[i][featurePos]; // sample i's featurePos-th feature; the only factor multiplying w[featurePos] in the derivative
        }
        return sum / sampleNumber;
    }
}

 

-------------------------------------------------------------------------------------------------

Lesson 1: The Perceptron

-------------------------------------------------------------------------------------------------

(1) Perceptron implementation (no optimization):

import java.util.Arrays;

public class Perceptron {
    public static void main(String[] args) {
        System.out.println("here");
    }
    // activation function: threshold the weighted sum at 0
    public static int computeActivation(double[] x, double[] weights, int featureNumber) {
        double sum = 0;
        for (int i = 0; i <= featureNumber; ++i) {
            sum += x[i] * weights[i];
        }
        if (sum > 0) {
            return 1;
        }
        return 0;
    }
    // basic batch training loop, no optimization
    public static void perceptronTrain(double[][] x, int[] y, double[] weights, int featureNumber, int sampleNumber, double alpha, int iterateNumber) {
        // set the bias input x0 = 1
        for (int i = 0; i < sampleNumber; ++i) {
            x[i][0] = 1;
        }
        for (int i = 0; i < iterateNumber; ++i) {
            double[] delta = new double[featureNumber + 1];
            for (int j = 0; j < sampleNumber; ++j) {
                // compute each sample's output
                int output = computeActivation(x[j], weights, featureNumber);
                for (int k = 0; k <= featureNumber; ++k) {
                    delta[k] += alpha * (y[j] - output) * x[j][k];
                }
            }
            // after this pass, apply the accumulated updates to weights
            for (int j = 0; j <= featureNumber; ++j) {
                weights[j] += delta[j];
            }
            System.out.println(i + ":" + Arrays.toString(weights));
        }
    }
    

}
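The loop above implements the standard batch perceptron rule with learning rate α: for each sample the prediction is

ŷ = 1 if Σ_{j=0..n} w_j * x_j > 0, else 0

and after a full pass every weight is updated by

w_j := w_j + α Σ_{i=1..m} (y^(i) - ŷ^(i)) * x_j^(i)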

(2) Example: logical OR.

public class PerceptronOr {

    public static void main(String[] args) {
        double[][] x = {{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}};
        int[] y = {0, 1, 1, 1};
        double[] weights = {0.1, -0.1, 0.2};
        int featureNumber = 2;
        int sampleNumber = 4;
        double alpha = 0.25;
        int iterateNumber = 6;
        Perceptron.perceptronTrain(x, y, weights, featureNumber, sampleNumber, alpha, iterateNumber);
    }
}

Results: the weights stabilize after 4 iterations.

0:[0.1, 0.15, 0.2]
1:[-0.15, 0.15, 0.2]
2:[0.1, 0.4, 0.2]
3:[-0.15, 0.4, 0.2]
4:[-0.15, 0.4, 0.2]
5:[-0.15, 0.4, 0.2]

(3) Example: logical XOR.

Code and run results below. The perceptron fails here, because XOR is not linearly separable:

public class PerceptronXOR {

    public static void main(String[] args) {
        double[][] x = {{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}};
        int[] y = {0, 1, 1, 0};
        double[] weights = {0.1, -0.1, 0.2};
        int featureNumber = 2;
        int sampleNumber = 4;
        double alpha = 0.25;
        int iterateNumber = 6;
        Perceptron.perceptronTrain(x, y, weights, featureNumber, sampleNumber, alpha, iterateNumber);
    }
}


Run results:

0:[-0.15, -0.1, -0.04999999999999999]
1:[0.35, 0.15, 0.2]
2:[-0.15000000000000002, -0.1, -0.04999999999999999]
3:[0.35, 0.15, 0.2]
4:[-0.15000000000000002, -0.1, -0.04999999999999999]
5:[0.35, 0.15, 0.2]

One fix: add an extra feature (here, a fourth input that is 1 exactly when both original inputs are 0), which makes the data linearly separable; the weights stabilize after 12 iterations.

public class PerceptronXOR {

    public static void main(String[] args) {
        double[][] x = {{1, 0, 0, 1}, {1, 0, 1, 0}, {1, 1, 0, 0}, {1, 1, 1, 0}};
        int[] y = {0, 1, 1, 0};
        double[] weights = {0.1, -0.1, 0.2, 0.3};
        int featureNumber = 3;
        int sampleNumber = 4;
        double alpha = 0.25;
        int iterateNumber = 16;
        Perceptron.perceptronTrain(x, y, weights, featureNumber, sampleNumber, alpha, iterateNumber);
    }
}

Output:
0:[-0.15, -0.1, -0.04999999999999999, 0.04999999999999999]
1:[0.35, 0.15, 0.2, 0.04999999999999999]
2:[-0.15000000000000002, -0.1, -0.04999999999999999, -0.2]
3:[0.35, 0.15, 0.2, -0.2]
4:[-0.15000000000000002, -0.1, -0.04999999999999999, -0.45]
5:[0.35, 0.15, 0.2, -0.45]
6:[0.09999999999999998, -0.1, -0.04999999999999999, -0.45]
7:[0.35, 0.15, -0.04999999999999999, -0.45]
8:[0.09999999999999998, -0.1, -0.3, -0.45]
9:[0.6, 0.15, -0.04999999999999999, -0.45]
10:[0.09999999999999998, -0.1, -0.3, -0.7]
11:[0.6, 0.15, -0.04999999999999999, -0.7]
12:[0.35, -0.1, -0.3, -0.7]
13:[0.35, -0.1, -0.3, -0.7]
14:[0.35, -0.1, -0.3, -0.7]
15:[0.35, -0.1, -0.3, -0.7]

(4) Optimization: needed because when the samples are not linearly separable, and no added feature can make them separable, training never converges to a final optimum. So within the limited number of iterations we should keep the relatively best weights found so far (essentially the pocket algorithm).

Concretely: after each iteration, count how many training outputs are correct; if that count beats the best count so far, save the current weights.

 

import java.util.Arrays;

public class Perceptron {
    public static void main(String[] args) {
        System.out.println("here");
    }
    // activation function: threshold the weighted sum at 0
    public static int computeActivation(double[] x, double[] weights, int featureNumber) {
        double sum = 0;
        for (int i = 0; i <= featureNumber; ++i) {
            sum += x[i] * weights[i];
        }
        if (sum > 0) {
            return 1;
        }
        return 0;
    }
    // training loop, now with the pocket optimization
    public static void perceptronTrain(double[][] x, int[] y, double[] weights, int featureNumber, int sampleNumber, double alpha, int iterateNumber) {
        // set the bias input x0 = 1
        for (int i = 0; i < sampleNumber; ++i) {
            x[i][0] = 1;
        }
        int[] optimizedOutputCount = {0};
        double[] optimizedWeights = new double[featureNumber + 1];
        for (int i = 0; i < iterateNumber; ++i) {
            double[] delta = new double[featureNumber + 1];
            for (int j = 0; j < sampleNumber; ++j) {
                // compute each sample's output
                int output = computeActivation(x[j], weights, featureNumber);
                for (int k = 0; k <= featureNumber; ++k) {
                    delta[k] += alpha * (y[j] - output) * x[j][k];
                }
            }
            // after this pass, apply the accumulated updates to weights
            for (int j = 0; j <= featureNumber; ++j) {
                weights[j] += delta[j];
            }
            System.out.println(i + ":" + Arrays.toString(weights));
            findOptimizedWeights(x, y, optimizedWeights, weights, featureNumber, sampleNumber, optimizedOutputCount);
        }
    }
    public static void findOptimizedWeights(double[][] x, int[] y, double[] optimizedWeights, double[] weights, int featureNumber, int sampleNumber, int[] optimizedOutputCount)
    {
        int rightOutputCount = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            int output = computeActivation(x[i], weights, featureNumber);
            if (y[i] == output) {
                rightOutputCount++;
            }
        }
        if (optimizedOutputCount[0] <= rightOutputCount) {
            optimizedOutputCount[0] = rightOutputCount; // remember the new best score
            // copy the contents into the caller's array; reassigning the parameter
            // (optimizedWeights = weights.clone()) would only rebind the local variable
            System.arraycopy(weights, 0, optimizedWeights, 0, weights.length);
            System.out.println("pocket updated: " + Arrays.toString(optimizedWeights));
        }
    }


}

That wraps up the perceptron. Remember: if the training samples are linearly separable, the perceptron is guaranteed to find a separating solution (the perceptron convergence theorem).

 (5) Homework solution:

Problem input/output:
Input:
First the positive integers n, m, α (a float), t: the number of features, the number of training samples, the learning rate, and the number of iterations.
Then m lines, each with n+1 integers, where 1 < n <= 1000, 1 < m <= 1000, 1 <= t <= 1000.
Each of the m lines holds one sample's n feature values (x_1, x_2, ..., x_n) and its observed label y (0 or 1).
Finally n+1 floats w_0, w_1, w_2, ..., w_n: the initial values of the perceptron's n+1 parameters.
Output:
t lines, each with n+1 floats: w_0, w_1, w_2, ..., w_n after each iteration.
Sample Input1:
2 4 0.25 5
0 0 0
0 1 1
1 0 1
1 1 1
0.06230 0.01123 -0.07335

Sample Output1:
0.062 0.011 0.177
-0.188 0.011 0.177
0.312 0.261 0.427
0.062 0.261 0.427
-0.188 0.261 0.427

Answer:

import java.util.Arrays;
import java.util.Scanner;

public class Perceptron {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int m = in.nextInt();
        double a = in.nextDouble();
        int t = in.nextInt();
        double[][] x = new double[m][n + 1];
        int[] y = new int[m];
        for (int i = 0; i < m; ++i) {
            x[i][1] = in.nextInt();
            x[i][2] = in.nextInt();
            y[i] = in.nextInt();
        }
        double[] w = new double[n + 1];
        for (int i = 0; i < n + 1; ++i) {
            w[i] = in.nextDouble();
        }
        perceptronTrain(x, y, w, n, m, a, t);
        
    }
    // activation function: threshold the weighted sum at 0
    public static int computeActivation(double[] x, double[] weights, int featureNumber) {
        double sum = 0;
        for (int i = 0; i <= featureNumber; ++i) {
            sum += x[i] * weights[i];
        }
        if (sum > 0) {
            return 1;
        }
        return 0;
    }
    // training loop with the pocket optimization
    public static void perceptronTrain(double[][] x, int[] y, double[] weights, int featureNumber, int sampleNumber, double alpha, int iterateNumber) {
        // set the bias input x0 = 1
        for (int i = 0; i < sampleNumber; ++i) {
            x[i][0] = 1;
        }
        int[] optimizedOutputCount = {0};
        double[] optimizedWeights = new double[featureNumber + 1];
        for (int i = 0; i < iterateNumber; ++i) {
            double[] delta = new double[featureNumber + 1];
            for (int j = 0; j < sampleNumber; ++j) {
                // compute each sample's output
                int output = computeActivation(x[j], weights, featureNumber);
                for (int k = 0; k <= featureNumber; ++k) {
                    delta[k] += alpha * (y[j] - output) * x[j][k];
                }
            }
            // after this pass, apply the accumulated updates to weights
            for (int j = 0; j <= featureNumber; ++j) {
                weights[j] += delta[j];
            }
            System.out.println(i + ":" + Arrays.toString(weights));
            findOptimizedWeights(x, y, optimizedWeights, weights, featureNumber, sampleNumber, optimizedOutputCount);
        }
    }
    public static void findOptimizedWeights(double[][] x, int[] y, double[] optimizedWeights, double[] weights, int featureNumber, int sampleNumber, int[] optimizedOutputCount)
    {
        int rightOutputCount = 0;
        for (int i = 0; i < sampleNumber; ++i) {
            int output = computeActivation(x[i], weights, featureNumber);
            if (y[i] == output) {
                rightOutputCount++;
            }
        }
        if (optimizedOutputCount[0] <= rightOutputCount) {
            optimizedOutputCount[0] = rightOutputCount; // remember the new best score
            // copy the contents into the caller's array; reassigning the parameter
            // (optimizedWeights = weights.clone()) would only rebind the local variable
            System.arraycopy(weights, 0, optimizedWeights, 0, weights.length);
            System.out.println("pocket updated: " + Arrays.toString(optimizedWeights));
        }
    }


}

 
