Post Category: Machine Learning

Abstract: We can compress our cost function's two conditional cases into one case. We can then write out the entire cost function in full as follows: A vectorized impl… Read more
posted @ 2020-08-28 04:07 Zhentiw · Views (143) · Comments (0) · Recommended (0)
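The compressed (one-case) logistic-regression cost described in the excerpt can be sketched in a few lines of Python. This is an illustrative sketch, not the post's own code, and the tiny dataset below is made up:

```python
import math

def logistic_cost(theta, X, y):
    """The two conditional cases compressed into one expression:
    J = -(1/m) * sum( y*log(h) + (1-y)*log(1-h) ),  h = sigmoid(theta . x)
    """
    m = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        z = sum(t * x for t, x in zip(theta, xi))      # theta' * x
        h = 1.0 / (1.0 + math.exp(-z))                 # sigmoid
        total += yi * math.log(h) + (1 - yi) * math.log(1 - h)
    return -total / m

# Made-up example; the leading 1 in each row is the bias feature x0.
X = [[1.0, 2.0], [1.0, -2.0]]
y = [1, 0]
print(logistic_cost([0.0, 1.0], X, y))
```

In Octave the same quantity is typically computed in one vectorized line rather than a loop, which is what the truncated excerpt goes on to show.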
Abstract: We cannot use the same cost function that we use for linear regression, because the logistic function will cause the output to be wavy, resulting in many lo… Read more
posted @ 2020-08-26 15:52 Zhentiw · Views (218) · Comments (0) · Recommended (0)
Abstract: x2 = (midterm exam)^2; apply mean normalization: mean value = (7921 + 5184 + 8836 + 4761) / 4 = 6675.5, range = 8836 − 4761 = 4075, ans = (4761 − 6675.5) / … Read more
posted @ 2020-08-24 02:05 Zhentiw · Views (185) · Comments (0) · Recommended (0)
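The arithmetic in the excerpt can be checked with a few lines of Python — a quick verification sketch using the same four squared midterm scores:

```python
scores_sq = [7921, 5184, 8836, 4761]              # (midterm score)^2 per example

mean = sum(scores_sq) / len(scores_sq)            # (7921+5184+8836+4761)/4 = 6675.5
rng = max(scores_sq) - min(scores_sq)             # 8836 - 4761 = 4075
normalized = (4761 - mean) / rng                  # (4761 - 6675.5) / 4075

print(mean, rng, round(normalized, 2))            # 6675.5 4075 -0.47
```

Mean normalization rescales the feature so its values are centered on zero and roughly within [−0.5, 0.5].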
Abstract: … Read more
posted @ 2020-08-24 01:29 Zhentiw · Views (126) · Comments (0) · Recommended (0)
Abstract: We have used gradient descent, where, in order to minimize the cost function J(theta), we take an iterative algorithm with many steps, mul… Read more
posted @ 2020-08-24 01:20 Zhentiw · Views (195) · Comments (0) · Recommended (0)
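The iterative idea in the excerpt — take many small steps downhill on J(theta) — can be sketched with a toy one-parameter cost. The cost J(θ) = (θ − 3)² below is made up purely for illustration:

```python
def grad(theta):
    # Derivative of the toy cost J(theta) = (theta - 3)^2.
    return 2 * (theta - 3)

theta = 0.0
alpha = 0.1                       # learning rate
for _ in range(100):              # many small steps, repeated until convergence
    theta = theta - alpha * grad(theta)

print(theta)                      # approaches the minimum at theta = 3
```

Each update moves theta a fraction of the way toward the minimum, which is why convergence takes many iterations; the post's truncated excerpt is introducing alternatives to this step-by-step approach.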
Abstract: Quadratic function: … Cubic function: … Read more
posted @ 2020-08-22 16:50 Zhentiw · Views (229) · Comments (0) · Recommended (0)
Abstract: Feature scaling: it makes gradient descent run much faster and converge in far fewer iterations. Bad cases: … Good cases: … We can speed up gradien… Read more
posted @ 2020-08-22 16:33 Zhentiw · Views (235) · Comments (0) · Recommended (0)
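The speed-up comes from putting features on comparable scales. A minimal mean-normalization sketch, with made-up sample values (in the course this would be Octave, and the feature could be, say, house size):

```python
def mean_normalize(xs):
    """Scale one feature: x := (x - mean) / range,
    so values land roughly in [-0.5, 0.5] and are centered on zero."""
    mu = sum(xs) / len(xs)
    rng = max(xs) - min(xs)
    return [(x - mu) / rng for x in xs]

sizes = [2104, 1416, 1534, 852]   # made-up house sizes in square feet
scaled = mean_normalize(sizes)
print(scaled)
```

After scaling, all features vary over similar ranges, so the cost-function contours are closer to circular and gradient descent no longer zig-zags across a long narrow valley.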
Abstract: The gradient descent equation itself keeps the same general form; we just repeat it for each of our 'n' features. In other words: The following image c… Read more
posted @ 2020-08-22 16:20 Zhentiw · Views (232) · Comments (0) · Recommended (0)
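The repeated-per-feature update can be sketched as a simultaneous update over all theta_j. This is an illustrative Python translation (not the post's code) on a tiny hand-made dataset where y = 1 + 2·x1:

```python
def gd_step(theta, X, y, alpha):
    """One gradient-descent step for linear regression with n features:
    theta_j := theta_j - alpha * (1/m) * sum_i (h(x_i) - y_i) * x_ij,
    with every theta_j updated simultaneously."""
    m = len(y)
    h = [sum(t * x for t, x in zip(theta, xi)) for xi in X]   # predictions
    return [tj - alpha * sum((h[i] - y[i]) * X[i][j] for i in range(m)) / m
            for j, tj in enumerate(theta)]

# x0 = 1 is the usual bias feature; targets generated as y = 1 + 2*x1.
X = [[1, 1], [1, 2], [1, 3]]
y = [3, 5, 7]
theta = [0.0, 0.0]
for _ in range(5000):
    theta = gd_step(theta, X, y, 0.1)
print(theta)    # converges toward [1, 2]
```

Building the new parameter list before assigning it is what makes the update simultaneous — updating theta_0 in place and then using it to compute theta_1 would be the subtle bug the lectures warn about.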
Abstract: Linear regression with multiple variables is also known as "multivariate linear regression". We now introduce notation for equations in which we can have… Read more
posted @ 2020-08-21 14:48 Zhentiw · Views (154) · Comments (0) · Recommended (0)
Abstract: To compute this formula, you need to do: … Actually, to make it simpler, we can use vectorization; the formula is actually equal to: … So we can code it… Read more
posted @ 2020-08-19 19:10 Zhentiw · Views (85) · Comments (0) · Recommended (0)
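The point of vectorization is that the explicit loop and the inner-product form compute the same number. A small sketch with made-up values:

```python
theta = [0.5, 1.0, -2.0]
x = [1.0, 3.0, 2.0]

# Loop version: accumulate theta_j * x_j one term at a time.
h_loop = 0.0
for t, xj in zip(theta, x):
    h_loop += t * xj

# "Vectorized" version: the same inner product written as one expression.
h_vec = sum(t * xj for t, xj in zip(theta, x))

print(h_loop, h_vec)    # both give 0.5*1 + 1*3 - 2*2 = -0.5
```

In Octave this inner product is simply `theta' * x`, which is both shorter and faster than the loop because the multiply-and-sum runs in optimized library code.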
Abstract: For loops: v = zeros(10, 1); for i = 1:10, v(i) = 2^i; end;  % the same as: indices = 1:10; for i = indices, disp(i); end;  While / if / break: i = 1; while i <= 5, v(i)… Read more
posted @ 2020-08-19 02:49 Zhentiw · Views (123) · Comments (0) · Recommended (0)
Abstract: t = [0:0.01:0.98]; y1 = sin(2*pi*4*t); plot(t, y1). If you then plot the cos function, it will replace the sin figure with a new one: y2 = cos(2*pi*4*t)… Read more
posted @ 2020-08-19 02:27 Zhentiw · Views (162) · Comments (0) · Recommended (0)
Abstract: Matrix multiplication. Whenever you see '.', it means an element-wise operator. >> A = [1 2; 3 4; 5 6]; >> B = [11 12; 13 14; 15 16]; >> C = [1 1; 2 2]; >> A*C a… Read more
posted @ 2020-08-17 22:04 Zhentiw · Views (127) · Comments (0) · Recommended (0)
Abstract: Load data: load featuresX.dat. who: list the variables in your current session. whos: show details about the variables. Clear one variable: clear… Read more
posted @ 2020-08-17 21:36 Zhentiw · Views (102) · Comments (0) · Recommended (0)
Abstract: You can do basic math: 5+6, 32-8, 1/2, 2^3. 1 == 2 % ans = 0 means false. 1 ~= 2 % 1 does not equal 2; ans = 1 means true. 1 && 0, 1 || 0, xor(1, 0). Change the… Read more
posted @ 2020-08-17 20:39 Zhentiw · Views (101) · Comments (0) · Recommended (0)
Abstract: We can measure the accuracy of our hypothesis function by using a cost function. This takes an average difference (actually a fancier version of an av… Read more
posted @ 2020-08-12 02:29 Zhentiw · Views (145) · Comments (0) · Recommended (0)
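The averaged squared difference described in the excerpt, J(θ) = 1/(2m) · Σ (h(x) − y)², can be sketched directly (the sample numbers are invented):

```python
def cost(theta0, theta1, xs, ys):
    """Squared-error cost for the hypothesis h(x) = theta0 + theta1 * x:
    J = (1 / (2m)) * sum( (h(x_i) - y_i)^2 )"""
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs, ys = [1, 2, 3], [1, 2, 3]
print(cost(0, 1, xs, ys))      # perfect fit: J = 0
print(cost(0, 0.5, xs, ys))    # 3.5 / 6, a worse fit costs more
```

The factor of 1/2 is there purely for convenience: it cancels the 2 that appears when the squared term is differentiated for gradient descent.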
Abstract: Evaluation metrics are how you can tell whether your machine learning algorithm is getting better and how well you are doing overall. Accuracy… Accura… Read more
posted @ 2020-08-06 21:04 Zhentiw · Views (197) · Comments (0) · Recommended (0)
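A minimal sketch of the simplest such metric, accuracy (the labels below are invented):

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

pred   = [1, 0, 1, 1, 0]
actual = [1, 0, 0, 1, 0]
print(accuracy(pred, actual))   # 4 of 5 correct -> 0.8
```

Accuracy alone can be misleading on imbalanced classes, which is why posts on evaluation metrics usually go on to precision, recall, and related measures.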
Abstract: The k-means algorithm captures the insight that each point in a cluster should be near the center of that cluster. It works like this: first we cho… Read more
posted @ 2020-07-17 20:52 Zhentiw · Views (250) · Comments (0) · Recommended (0)
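The loop the excerpt begins to describe — choose centers, assign each point to its nearest center, recompute each center as the mean of its points, repeat — can be sketched in one dimension. The data and initial centers below are made up:

```python
def kmeans_1d(points, centers, iters=10):
    """Plain k-means on scalars: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [1.0, 1.1, 0.9, 8.0, 8.2, 7.9]
print(kmeans_1d(points, [0.0, 10.0]))   # centers settle near 1.0 and 8.03
```

With two well-separated clumps the centers converge after the first assignment step; in general k-means only finds a local optimum, so the initial choice of centers matters.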
Abstract: Decision trees can handle non-linearly-separable datasets; the picture shows a dataset that is not linearly separable. When we use a decision tree, we ask multiple lin… Read more
posted @ 2020-07-03 21:57 Zhentiw · Views (191) · Comments (0) · Recommended (0)
Abstract: A support vector machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems. After… Read more
posted @ 2020-06-26 03:36 Zhentiw · Views (147) · Comments (0) · Recommended (0)
