Post category: Machine Learning

Summary: This learning curve shows high error on the test set but comparatively low error on the training set, so the algorithm is suffering from high variance. … [Read more]
posted @ 2020-09-27 03:30 Zhentiw
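The post above reads a learning curve to diagnose high variance. As an illustration only (this is my NumPy sketch, not code from the post; fit_linear and mse are made-up helper names), such a curve can be computed by training on the first i examples and recording training and cross-validation error:

import numpy as np

def fit_linear(X, y):
    # least-squares fit with an added bias column
    Xb = np.c_[np.ones(len(X)), X]
    theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return theta

def mse(theta, X, y):
    Xb = np.c_[np.ones(len(X)), X]
    return np.mean((Xb @ theta - y) ** 2) / 2

def learning_curve(X_train, y_train, X_cv, y_cv):
    # train on the first i examples, record both errors
    train_err, cv_err = [], []
    for i in range(1, len(X_train) + 1):
        theta = fit_linear(X_train[:i], y_train[:i])
        train_err.append(mse(theta, X_train[:i], y_train[:i]))
        cv_err.append(mse(theta, X_cv, y_cv))
    return train_err, cv_err

With high variance the two curves stay far apart as i grows, and getting more training data is likely to help.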
Summary: Statistic / Solution: Accuracy = (85 + 10) / 1000 = 0.095; Precision = 85 / (85 + 890) ≈ 0.087; Recall: there are 85 true positives and 15 false negatives, so … [Read more]
posted @ 2020-09-27 03:16 Zhentiw
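For reference, the numbers in this excerpt follow the standard confusion-matrix definitions. A small sketch reproducing the arithmetic (the counts are taken or inferred from the excerpt's own formulas):

# 85 true positives, 890 false positives, 15 false negatives, 10 true negatives
tp, fp, fn, tn = 85, 890, 15, 10

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # (85 + 10) / 1000 = 0.095
precision = tp / (tp + fp)                    # 85 / (85 + 890)  ≈ 0.087
recall    = tp / (tp + fn)                    # 85 / (85 + 15)   = 0.85
print(accuracy, precision, recall)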
Summary: Training an algorithm on very few data points (such as 1, 2, or 3) will easily give 0 error, because we can always find a quadratic curve … [Read more]
posted @ 2020-09-25 15:29 Zhentiw
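To make the claim concrete (my example, not from the post): a degree-2 polynomial through any 3 points fits them exactly, so the training error is 0.

import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, -1.0, 4.0])      # any three points
coef = np.polyfit(x, y, 2)          # a quadratic has 3 coefficients, so it interpolates all 3 points
train_error = np.mean((np.polyval(coef, x) - y) ** 2)
print(train_error)                  # ~0, up to floating-point noise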
Summary: [Read more]
posted @ 2020-09-22 18:24 Zhentiw
Summary: In this section we examine the relationship between the degree of the polynomial d and the underfitting or overfitting of our hypothesis. We need to … [Read more]
posted @ 2020-09-22 15:44 Zhentiw
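A common way to act on that relationship, sketched here in NumPy (my illustration; the variable names are made up): fit each candidate degree on the training set and keep the degree with the lowest cross-validation error.

import numpy as np

def choose_degree(x_train, y_train, x_cv, y_cv, max_degree=10):
    best_d, best_err = None, np.inf
    for d in range(1, max_degree + 1):
        coef = np.polyfit(x_train, y_train, d)                       # fit on the training set
        cv_err = np.mean((np.polyval(coef, x_cv) - y_cv) ** 2) / 2   # score on the CV set
        if cv_err < best_err:
            best_d, best_err = d, cv_err
    return best_d

Low degrees tend to show high training and CV error (underfitting); high degrees show low training error but high CV error (overfitting).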
Summary: Just because a learning algorithm fits a training set well, that does not mean it is a good hypothesis. It could overfit, and as a result your predict… [Read more]
posted @ 2020-09-21 23:03 Zhentiw
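A minimal sketch of the evaluation the post describes, assuming the usual 70/30 split from the course (the helper names are mine):

import numpy as np

def train_test_split(X, y, test_ratio=0.3, seed=0):
    # shuffle, then hold out the last test_ratio of the data for testing
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = int(len(X) * (1 - test_ratio))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

def misclassification_error(y_pred, y_true):
    # fraction of held-out examples the hypothesis gets wrong
    return np.mean(y_pred != y_true)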
Summary: [Read more]
posted @ 2020-09-16 03:45 Zhentiw
Summary: First, pick a network architecture: choose the layout of your neural network, including how many hidden units in each layer and how many layers in total … [Read more]
posted @ 2020-09-16 03:32 Zhentiw
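One compact way to write that choice down (illustrative defaults only: input units = number of features, output units = number of classes, one hidden layer unless there is a reason for more; the 400/25/10 sizes are just an example):

import numpy as np

layer_sizes = [400, 25, 10]          # input, hidden, output units

def init_weights(layer_sizes, eps=0.12, seed=0):
    # one weight matrix per layer transition, shaped s_{j+1} x (s_j + 1) to cover the bias unit
    rng = np.random.default_rng(seed)
    return [rng.uniform(-eps, eps, (layer_sizes[j + 1], layer_sizes[j] + 1))
            for j in range(len(layer_sizes) - 1)]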
Summary: Gradient checking will assure that our backpropagation works as intended. We can approximate the derivative of our cost function with: epsilon = 1e-4; … [Read more]
posted @ 2020-09-15 23:23 Zhentiw
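The approximation the excerpt starts to quote is the centered difference. A sketch of how it is typically used (J and theta here are placeholders for your cost function and unrolled parameters):

import numpy as np

def numerical_gradient(J, theta, epsilon=1e-4):
    # approximate dJ/dtheta_i as (J(theta + eps*e_i) - J(theta - eps*e_i)) / (2*eps)
    grad = np.zeros(theta.shape)
    for i in range(theta.size):
        bump = np.zeros(theta.shape)
        bump[i] = epsilon
        grad[i] = (J(theta + bump) - J(theta - bump)) / (2 * epsilon)
    return grad

Compare this against the gradient from backpropagation, then switch the check off before training, since it is very slow.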
Summary: With neural networks, we are working with sets of matrices. In order to use optimizing functions such as "fminunc()", we will want to "unroll" all the … [Read more]
posted @ 2020-09-13 02:12 Zhentiw
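In the course this is typically done in Octave with Theta1(:) and reshape; an equivalent NumPy sketch (the matrix shapes are examples only), with scipy.optimize.minimize playing the role of fminunc:

import numpy as np

Theta1 = np.zeros((10, 11))   # example shapes
Theta2 = np.zeros((1, 11))

# unroll into one parameter vector for the optimizer
params = np.concatenate([Theta1.ravel(), Theta2.ravel()])

# ... and reshape back inside the cost function
T1 = params[:10 * 11].reshape(10, 11)
T2 = params[10 * 11:].reshape(1, 11)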
摘要:"Backpropagation" is neural-network terminology for minimizing our cost function, just like what we were doing with gradient descent in logistic and l 阅读全文
posted @ 2020-09-10 02:58 Zhentiw 阅读(178) 评论(0) 推荐(0) 编辑
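For context, the core recurrence behind that minimization (the standard equations for sigmoid activations in the course's notation; my transcription, not a quote from the post):

\delta^{(L)} = a^{(L)} - y
\delta^{(l)} = \big( (\Theta^{(l)})^T \delta^{(l+1)} \big) \odot a^{(l)} \odot (1 - a^{(l)}), \quad l = L-1, \dots, 2
\Delta^{(l)} := \Delta^{(l)} + \delta^{(l+1)} (a^{(l)})^T

The accumulated \Delta^{(l)} terms, averaged over the training set (plus regularization), give the partial derivatives of the cost that gradient descent or fminunc consumes.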
Summary: Let's first define a few variables that we will need to use: [Read more]
posted @ 2020-09-10 02:44 Zhentiw
Summary: hΘ(x) = g(30 − 20x1 − 20x2). Truth table: (x1=0, x2=0) → 1; (0, 1) → 1; (1, 0) → 1; (1, 1) → 0. So the unit computes NOT (x1 AND x2). [Read more]
posted @ 2020-09-08 03:36 Zhentiw
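A quick numerical check of that truth table (my sketch, using the weights from the excerpt):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for x1 in (0, 1):
    for x2 in (0, 1):
        h = sigmoid(30 - 20 * x1 - 20 * x2)
        print(x1, x2, round(float(h)))   # prints 1, 1, 1, 0 -> NOT (x1 AND x2)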
Summary: To classify data into multiple classes, we let our hypothesis function return a vector of values. Say we wanted to classify our data into one of four … [Read more]
posted @ 2020-09-07 03:05 Zhentiw
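In the multi-class setup the post describes, each class gets its own output unit, labels are written as one-hot vectors, and the prediction is the unit with the largest activation. A minimal sketch with four classes (the numbers are invented):

import numpy as np

h = np.array([0.10, 0.70, 0.05, 0.15])   # hypothetical activations of the 4 output units
predicted_class = int(np.argmax(h))      # -> 1 (0-indexed)

y = np.array([0, 0, 1, 0])               # training label for an example of class 2 (0-indexed)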
Summary: Combine x1 AND x2 with (NOT x1) AND (NOT x2) to form the first hidden layer. Then combine the hidden-layer units with OR to get the final result, x1 XNOR x2. [Read more]
posted @ 2020-09-07 02:53 Zhentiw
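Putting those pieces together with the weights used in the lecture (as I recall them; this is my reconstruction, not code from the post): an AND unit, a (NOT x1) AND (NOT x2) unit, then an OR unit on top.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

Theta1 = np.array([[-30,  20,  20],    # hidden unit a1 ~ x1 AND x2
                   [ 10, -20, -20]])   # hidden unit a2 ~ (NOT x1) AND (NOT x2)
Theta2 = np.array([-10, 20, 20])       # output ~ a1 OR a2  ->  x1 XNOR x2

for x1 in (0, 1):
    for x2 in (0, 1):
        a = sigmoid(Theta1 @ np.array([1, x1, x2]))
        h = sigmoid(Theta2 @ np.concatenate(([1.0], a)))
        print(x1, x2, round(float(h)))  # prints 1, 0, 0, 1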
Summary: dim(Θ^(1)) = s_{j+1} × (s_j + 1); here dim(Θ^(1)) = 4 × (2 + 1) = 4 × 3. [Read more]
posted @ 2020-09-04 01:51 Zhentiw
Summary: If λ is large, then the θ values must be small in order to minimize the cost function. Too large a λ causes underfitting. [Read more]
posted @ 2020-08-31 02:10 Zhentiw
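The trade-off stated in words above comes directly from the regularized cost (standard form from the course, written for linear regression; my transcription):

J(\theta) = \frac{1}{2m}\Big[ \sum_{i=1}^{m} \big(h_\theta(x^{(i)}) - y^{(i)}\big)^2 + \lambda \sum_{j=1}^{n} \theta_j^2 \Big]

When \lambda is very large, minimizing J forces \theta_1, \dots, \theta_n toward 0, leaving h_\theta(x) \approx \theta_0: a nearly flat hypothesis that underfits.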
Summary: If we have overfitting from our hypothesis function, we can reduce the weight that some of the terms in our function carry by increasing their cost. … [Read more]
posted @ 2020-08-31 01:48 Zhentiw
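The concrete example used at this point in the lecture (as I recall it, not quoted from the post) penalizes two specific parameters so the optimizer shrinks them toward zero:

\min_\theta \; \frac{1}{2m} \sum_{i=1}^{m} \big(h_\theta(x^{(i)}) - y^{(i)}\big)^2 + 1000\,\theta_3^2 + 1000\,\theta_4^2

Making \theta_3 and \theta_4 expensive drives them close to 0, which effectively removes the cubic and quartic terms from the hypothesis without discarding them outright.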
Summary: [Read more]
posted @ 2020-08-31 01:33 Zhentiw
