MXNet | Using MXNet in R

Amazon has designated MXNet as its official deep learning platform, and on January 23 MXNet became an Apache Incubator project.

These developments have undoubtedly pushed MXNet into the deep learning spotlight and made it a widely watched project, so learning MXNet is well worth the effort. Onward with deep learning!

MXNet currently provides bindings for several languages, including Python, R, Scala, Julia, and C++.

This post covers installation and basic usage with the R language:

Installation

install.packages("drat", repos="https://cran.rstudio.com")
drat:::addRepo("dmlc")
install.packages("mxnet")

If you run into problems during installation, you can download the drat package as a local file ("drat.zip") from https://cran.rstudio.com, or from https://cran.r-project.org/web/packages/drat/, and install it manually.
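Once installation finishes, a quick smoke test (a minimal sketch, not part of the original walkthrough) confirms that the package loads and can do basic NDArray arithmetic:

# load mxnet and create a small NDArray on the CPU;
# if this runs without error, the installation works
require(mxnet)
a <- mx.nd.ones(c(2, 3), ctx = mx.cpu())   # 2x3 array filled with ones
b <- a * 2                                 # element-wise arithmetic on NDArrays
as.array(b)                                # convert back to a plain R array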

Classification

The following uses a binary classification dataset, Sonar, as an example:

> require(mlbench)
> require(mxnet)
> data(Sonar, package="mlbench")
> str(Sonar)
'data.frame':   208 obs. of  61 variables:
 $ V1   : num  0.02 0.0453 0.0262 0.01 0.0762 0.0286 0.0317 0.0519 0.0223 0.0164 ...
 $ V2   : num  0.0371 0.0523 0.0582 0.0171 0.0666 0.0453 0.0956 0.0548 0.0375 0.0173 ...
 $ V3   : num  0.0428 0.0843 0.1099 0.0623 0.0481 ...
 $ V4   : num  0.0207 0.0689 0.1083 0.0205 0.0394 ...
 $ V5   : num  0.0954 0.1183 0.0974 0.0205 0.059 ...
 $ V6   : num  0.0986 0.2583 0.228 0.0368 0.0649 ...
 $ V7   : num  0.154 0.216 0.243 0.11 0.121 ...
 $ V8   : num  0.16 0.348 0.377 0.128 0.247 ...
 $ V9   : num  0.3109 0.3337 0.5598 0.0598 0.3564 ...
 $ V10  : num  0.211 0.287 0.619 0.126 0.446 ...
 $ V11  : num  0.1609 0.4918 0.6333 0.0881 0.4152 ...
 $ V12  : num  0.158 0.655 0.706 0.199 0.395 ...
 $ V13  : num  0.2238 0.6919 0.5544 0.0184 0.4256 ...
 $ V14  : num  0.0645 0.7797 0.532 0.2261 0.4135 ...
 $ V15  : num  0.066 0.746 0.648 0.173 0.453 ...
 $ V16  : num  0.227 0.944 0.693 0.213 0.533 ...
 $ V17  : num  0.31 1 0.6759 0.0693 0.7306 ...
 $ V18  : num  0.3 0.887 0.755 0.228 0.619 ...
 $ V19  : num  0.508 0.802 0.893 0.406 0.203 ...
 $ V20  : num  0.48 0.782 0.862 0.397 0.464 ...
 $ V21  : num  0.578 0.521 0.797 0.274 0.415 ...
 $ V22  : num  0.507 0.405 0.674 0.369 0.429 ...
 $ V23  : num  0.433 0.396 0.429 0.556 0.573 ...
 $ V24  : num  0.555 0.391 0.365 0.485 0.54 ...
 $ V25  : num  0.671 0.325 0.533 0.314 0.316 ...
 $ V26  : num  0.641 0.32 0.241 0.533 0.229 ...
 $ V27  : num  0.71 0.327 0.507 0.526 0.7 ...
 $ V28  : num  0.808 0.277 0.853 0.252 1 ...
 $ V29  : num  0.679 0.442 0.604 0.209 0.726 ...
 $ V30  : num  0.386 0.203 0.851 0.356 0.472 ...
 $ V31  : num  0.131 0.379 0.851 0.626 0.51 ...
 $ V32  : num  0.26 0.295 0.504 0.734 0.546 ...
 $ V33  : num  0.512 0.198 0.186 0.612 0.288 ...
 $ V34  : num  0.7547 0.2341 0.2709 0.3497 0.0981 ...
 $ V35  : num  0.854 0.131 0.423 0.395 0.195 ...
 $ V36  : num  0.851 0.418 0.304 0.301 0.418 ...
 $ V37  : num  0.669 0.384 0.612 0.541 0.46 ...
 $ V38  : num  0.61 0.106 0.676 0.881 0.322 ...
 $ V39  : num  0.494 0.184 0.537 0.986 0.283 ...
 $ V40  : num  0.274 0.197 0.472 0.917 0.243 ...
 $ V41  : num  0.051 0.167 0.465 0.612 0.198 ...
 $ V42  : num  0.2834 0.0583 0.2587 0.5006 0.2444 ...
 $ V43  : num  0.282 0.14 0.213 0.321 0.185 ...
 $ V44  : num  0.4256 0.1628 0.2222 0.3202 0.0841 ...
 $ V45  : num  0.2641 0.0621 0.2111 0.4295 0.0692 ...
 $ V46  : num  0.1386 0.0203 0.0176 0.3654 0.0528 ...
 $ V47  : num  0.1051 0.053 0.1348 0.2655 0.0357 ...
 $ V48  : num  0.1343 0.0742 0.0744 0.1576 0.0085 ...
 $ V49  : num  0.0383 0.0409 0.013 0.0681 0.023 0.0264 0.0507 0.0285 0.0777 0.0092 ...
 $ V50  : num  0.0324 0.0061 0.0106 0.0294 0.0046 0.0081 0.0159 0.0178 0.0439 0.0198 ...
 $ V51  : num  0.0232 0.0125 0.0033 0.0241 0.0156 0.0104 0.0195 0.0052 0.0061 0.0118 ...
 $ V52  : num  0.0027 0.0084 0.0232 0.0121 0.0031 0.0045 0.0201 0.0081 0.0145 0.009 ...
 $ V53  : num  0.0065 0.0089 0.0166 0.0036 0.0054 0.0014 0.0248 0.012 0.0128 0.0223 ...
 $ V54  : num  0.0159 0.0048 0.0095 0.015 0.0105 0.0038 0.0131 0.0045 0.0145 0.0179 ...
 $ V55  : num  0.0072 0.0094 0.018 0.0085 0.011 0.0013 0.007 0.0121 0.0058 0.0084 ...
 $ V56  : num  0.0167 0.0191 0.0244 0.0073 0.0015 0.0089 0.0138 0.0097 0.0049 0.0068 ...
 $ V57  : num  0.018 0.014 0.0316 0.005 0.0072 0.0057 0.0092 0.0085 0.0065 0.0032 ...
 $ V58  : num  0.0084 0.0049 0.0164 0.0044 0.0048 0.0027 0.0143 0.0047 0.0093 0.0035 ...
 $ V59  : num  0.009 0.0052 0.0095 0.004 0.0107 0.0051 0.0036 0.0048 0.0059 0.0056 ...
 $ V60  : num  0.0032 0.0044 0.0078 0.0117 0.0094 0.0062 0.0103 0.0053 0.0022 0.004 ...
 $ Class: num  1 1 1 1 1 1 1 1 1 1 ...
> dim(Sonar)
[1] 208  61

> Sonar[,61] = as.numeric(Sonar[,61])-1
> train.ind = c(1:50, 100:150)
> train.x = data.matrix(Sonar[train.ind, 1:60])
> train.y = Sonar[train.ind, 61]
> test.x = data.matrix(Sonar[-train.ind, 1:60])
> test.y = Sonar[-train.ind, 61]

The model is trained with the mx.mlp function; here is an overview of its signature and parameters:

mx.mlp(data, label, hidden_node = 1, out_node, dropout = NULL,
  activation = "tanh", out_activation = "softmax",
  device = mx.ctx.default(), ...)

data: the input data 
label: the labels 
hidden_node: number of nodes in each hidden layer, default 1 
out_node: number of nodes in the output layer 
dropout: dropout rate, in [0, 1) 
activation: activation function for the hidden layers 
out_activation: activation function for the output layer, default "softmax" 
device = mx.ctx.default(): sets whether training runs on GPU or CPU 
...: additional arguments passed on to mx.model.FeedForward.create, such as 
eval.metric: the evaluation metric
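For example, a variant call (a hedged sketch; these particular values are illustrative, not from the original tutorial) that enables dropout and uses ReLU hidden units would look like:

# illustrative variant: 20% dropout, ReLU hidden activation,
# same data and training settings as the example below
model2 <- mx.mlp(train.x, train.y, hidden_node=10, out_node=2,
                 dropout=0.2, activation="relu", out_activation="softmax",
                 num.round=20, array.batch.size=15,
                 learning.rate=0.07, momentum=0.9,
                 eval.metric=mx.metric.accuracy)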

# set the random seed
> mx.set.seed(0)
# train the model; num.round is the number of iterations, array.batch.size is the batch size
> model <- mx.mlp(train.x, train.y, hidden_node=10, out_node=2, out_activation="softmax", num.round=20, array.batch.size=15, learning.rate=0.07, momentum=0.9, eval.metric=mx.metric.accuracy)

Start training with 1 devices
[1] Train-accuracy=0.488888888888889
[2] Train-accuracy=0.514285714285714
[3] Train-accuracy=0.514285714285714
[4] Train-accuracy=0.514285714285714
[5] Train-accuracy=0.514285714285714
[6] Train-accuracy=0.523809523809524
[7] Train-accuracy=0.619047619047619
[8] Train-accuracy=0.695238095238095
[9] Train-accuracy=0.695238095238095
[10] Train-accuracy=0.761904761904762
[11] Train-accuracy=0.828571428571429
[12] Train-accuracy=0.771428571428571
[13] Train-accuracy=0.742857142857143
[14] Train-accuracy=0.733333333333333
[15] Train-accuracy=0.771428571428571
[16] Train-accuracy=0.847619047619048
[17] Train-accuracy=0.857142857142857
[18] Train-accuracy=0.838095238095238
[19] Train-accuracy=0.838095238095238
[20] Train-accuracy=0.838095238095238

> preds = predict(model, test.x)
> pred.label = max.col(t(preds))-1
> table(pred.label, test.y)
          test.y
pred.label  0  1
         0 24 14
         1 36 33
> (24+33) / (24+14+36+33)
[1] 0.5327103

Accuracy = 0.5327103.
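Rather than hard-coding the confusion-matrix counts, the accuracy can also be computed directly from the predictions (a small sketch reusing the objects above):

# predict() returns one column of class probabilities per sample,
# so max.col(t(preds)) - 1 recovers the predicted 0/1 label
pred.label <- max.col(t(preds)) - 1
mean(pred.label == test.y)   # fraction of correctly classified test samples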
The complete code:

########### Classification
require(mlbench)
require(mxnet)
data(Sonar, package="mlbench")
Sonar[,61] = as.numeric(Sonar[,61])-1
train.ind = c(1:50, 100:150)
train.x = data.matrix(Sonar[train.ind, 1:60])
train.y = Sonar[train.ind, 61]
test.x = data.matrix(Sonar[-train.ind, 1:60])
test.y = Sonar[-train.ind, 61]

mx.set.seed(0)
model <- mx.mlp(train.x, train.y, hidden_node=10, out_node=2, out_activation="softmax", num.round=20, array.batch.size=15, learning.rate=0.07, momentum=0.9, eval.metric=mx.metric.accuracy)

preds = predict(model, test.x)
pred.label = max.col(t(preds))-1
table(pred.label, test.y)

(24+33) / (24+14+36+33)
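The trained model can also be written to disk and reloaded later, sketched below with MXNet's checkpoint functions (the prefix "sonar_mlp" is just an illustrative file name, not from the original post):

# writes sonar_mlp-symbol.json and sonar_mlp-0020.params to the working directory
mx.model.save(model, prefix = "sonar_mlp", iteration = 20)
# reload the checkpoint and predict with it
model2 <- mx.model.load(prefix = "sonar_mlp", iteration = 20)
preds2 <- predict(model2, test.x)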

Regression

Below is a regression example:

A look at the data (BostonHousing):

data(BostonHousing, package="mlbench")

> str(BostonHousing)
'data.frame':   506 obs. of  14 variables:
 $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
 $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
 $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
 $ chas   : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
 $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
 $ rm     : num  6.58 6.42 7.18 7 7.15 ...
 $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
 $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
 $ rad    : num  1 2 2 3 3 3 5 5 5 5 ...
 $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
 $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
 $ b      : num  397 397 393 395 397 ...
 $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
 $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
> dim(BostonHousing)
[1] 506  14

Splitting the data into training and test sets:

> train.ind = seq(1, 506, 3)
> train.x = data.matrix(BostonHousing[train.ind, -14])
> train.y = BostonHousing[train.ind, 14]
> test.x = data.matrix(BostonHousing[-train.ind, -14])
> test.y = BostonHousing[-train.ind, 14]

Define how the nodes are connected:

> # define the input data
> data <- mx.symbol.Variable("data")
> # a fully connected layer
> # data: the input source
> # num_hidden: number of nodes in this layer
> fc1 <- mx.symbol.FullyConnected(data, num_hidden=1)

Define the loss function:

# define the loss function for the regression task
> lro <- mx.symbol.LinearRegressionOutput(fc1)
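Note that this network is just a linear model: a single fully connected unit feeding the regression output. If a hidden layer is wanted, a deeper variant can be sketched by stacking FullyConnected and Activation symbols (illustrative layer sizes, not from the original tutorial):

# hypothetical deeper network: 10 ReLU hidden units, then one linear output
h1   <- mx.symbol.FullyConnected(data, num_hidden = 10)
act1 <- mx.symbol.Activation(h1, act_type = "relu")
out  <- mx.symbol.FullyConnected(act1, num_hidden = 1)
lro2 <- mx.symbol.LinearRegressionOutput(out)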

The model

> mx.set.seed(0)
# define and train the model: CPU training, 50 iterations, batch size 20,
# learning rate 2e-6, RMSE (root mean squared error) as the evaluation metric
> model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y, ctx=mx.cpu(), num.round=50, array.batch.size=20, learning.rate=2e-6, momentum=0.9, eval.metric=mx.metric.rmse)

Start training with 1 devices
[1] Train-rmse=16.063282524034
[2] Train-rmse=12.2792375712573
[3] Train-rmse=11.1984634005885
[4] Train-rmse=10.2645236892904
[5] Train-rmse=9.49711005504284
[6] Train-rmse=9.07733734175182
[7] Train-rmse=9.07884450847991
[8] Train-rmse=9.10463850277417
[9] Train-rmse=9.03977049028532
[10] Train-rmse=8.96870685004475
[11] Train-rmse=8.93113287361574
[12] Train-rmse=8.89937257821847
[13] Train-rmse=8.87182096922953
[14] Train-rmse=8.84476075083586
[15] Train-rmse=8.81464673014974
[16] Train-rmse=8.78672567900196
[17] Train-rmse=8.76265872846474
[18] Train-rmse=8.73946101419974
[19] Train-rmse=8.71651926303267
[20] Train-rmse=8.69457600919277
[21] Train-rmse=8.67354928674563
[22] Train-rmse=8.65328755392436
[23] Train-rmse=8.63378039680078
[24] Train-rmse=8.61488162586984
[25] Train-rmse=8.5965105183022
[26] Train-rmse=8.57868133563275
[27] Train-rmse=8.56135851937663
[28] Train-rmse=8.5444819772098
[29] Train-rmse=8.52802114610432
[30] Train-rmse=8.5119504512622
[31] Train-rmse=8.49624261719241
[32] Train-rmse=8.48087453238701
[33] Train-rmse=8.46582689119887
[34] Train-rmse=8.45107881002491
[35] Train-rmse=8.43661331401712
[36] Train-rmse=8.42241575909639
[37] Train-rmse=8.40847217331365
[38] Train-rmse=8.39476931796395
[39] Train-rmse=8.38129658373974
[40] Train-rmse=8.36804269059018
[41] Train-rmse=8.35499817678397
[42] Train-rmse=8.34215505742154
[43] Train-rmse=8.32950441908131
[44] Train-rmse=8.31703985777311
[45] Train-rmse=8.30475363906755
[46] Train-rmse=8.29264031506106
[47] Train-rmse=8.28069372820073
[48] Train-rmse=8.26890902770415
[49] Train-rmse=8.25728089053853
[50] Train-rmse=8.24580511500735

Alternatively, define a custom evaluation metric:

# a custom evaluation metric: mean absolute error (MAE)
> demo.metric.mae <- mx.metric.custom("mae", function(label, pred) {
  res <- mean(abs(label-pred))
  return(res)
})
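Any scalar function of label and pred can be wrapped the same way; for instance, a hypothetical mean-absolute-percentage-error metric (a sketch, assuming no zero labels):

# hypothetical MAPE metric, defined the same way as demo.metric.mae above
demo.metric.mape <- mx.metric.custom("mape", function(label, pred) {
  mean(abs((label - pred) / label))
})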

Model results

> mx.set.seed(0)
> model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y, ctx=mx.cpu(), num.round=50, array.batch.size=20, learning.rate=2e-6, momentum=0.9, eval.metric=demo.metric.mae)
Start training with 1 devices
[1] Train-mae=13.1889538083225
[2] Train-mae=9.81431959337658
[3] Train-mae=9.21576419870059
[4] Train-mae=8.38071537613869
[5] Train-mae=7.45462437611487
[6] Train-mae=6.93423301743136
[7] Train-mae=6.91432357016537
[8] Train-mae=7.02742733055105
[9] Train-mae=7.00618194618469
[10] Train-mae=6.92541576984028
[11] Train-mae=6.87530243690643
[12] Train-mae=6.84757369098564
[13] Train-mae=6.82966501611388
[14] Train-mae=6.81151759574811
[15] Train-mae=6.78394182841811
[16] Train-mae=6.75914719419347
[17] Train-mae=6.74180388773481
[18] Train-mae=6.725853071279
[19] Train-mae=6.70932178215848
[20] Train-mae=6.6928868798746
[21] Train-mae=6.6769521329138
[22] Train-mae=6.66184809505939
[23] Train-mae=6.64754504809777
[24] Train-mae=6.63358514060577
[25] Train-mae=6.62027640889088
[26] Train-mae=6.60738245232238
[27] Train-mae=6.59505546771818
[28] Train-mae=6.58346195800437
[29] Train-mae=6.57285477783945
[30] Train-mae=6.56259003960424
[31] Train-mae=6.5527790788975
[32] Train-mae=6.54353428422991
[33] Train-mae=6.5344172368447
[34] Train-mae=6.52557652526432
[35] Train-mae=6.51697905850079
[36] Train-mae=6.50847898812758
[37] Train-mae=6.50014844106303
[38] Train-mae=6.49207674844397
[39] Train-mae=6.48412070125341
[40] Train-mae=6.47650500999557
[41] Train-mae=6.46893867486053
[42] Train-mae=6.46142131653097
[43] Train-mae=6.45395035048326
[44] Train-mae=6.44652914123403
[45] Train-mae=6.43916216409869
[46] Train-mae=6.43183777381976
[47] Train-mae=6.42455544223388
[48] Train-mae=6.41731406417158
[49] Train-mae=6.41011292926139
[50] Train-mae=6.40312503493494
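The runs above only report training error. Below is a short sketch of evaluating on the held-out test set (for this regression model, predict returns a 1 x n matrix, so it is flattened first):

preds <- as.numeric(predict(model, test.x))   # flatten 1 x n matrix to a vector
sqrt(mean((test.y - preds)^2))                # test RMSE
mean(abs(test.y - preds))                     # test MAE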

The complete code:

######## Regression
data(BostonHousing, package="mlbench")
str(BostonHousing)
dim(BostonHousing)
train.ind = seq(1, 506, 3)
train.x = data.matrix(BostonHousing[train.ind, -14])
train.y = BostonHousing[train.ind, 14]
test.x = data.matrix(BostonHousing[-train.ind, -14])
test.y = BostonHousing[-train.ind, 14]

# define the input data
data <- mx.symbol.Variable("data")
# a fully connected layer
# data: the input source
# num_hidden: number of nodes in this layer
fc1 <- mx.symbol.FullyConnected(data, num_hidden=1)

# define the loss function for the regression task
lro <- mx.symbol.LinearRegressionOutput(fc1)

mx.set.seed(0)
model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y, ctx=mx.cpu(), num.round=50, array.batch.size=20, learning.rate=2e-6, momentum=0.9, eval.metric=mx.metric.rmse)

demo.metric.mae <- mx.metric.custom("mae", function(label, pred) {
  res <- mean(abs(label-pred))
  return(res)
})
mx.set.seed(0)
model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y, ctx=mx.cpu(), num.round=50, array.batch.size=20, learning.rate=2e-6, momentum=0.9, eval.metric=demo.metric.mae)

Summary: the code is straightforward, concise, and clear, which makes MXNet in R easy to pick up.

Source: http://blog.csdn.net/xxzhangx/article/details/54729055
