1. BatchNorm (BN) layers
torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
Input: \((N, C)\) or \((N, C, L)\), where \(C\) corresponds to num_features.
torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
Input: \((N, C, H, W)\), where \(C\) corresponds to num_features.
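A minimal shape check (not from the original post; the tensor sizes are arbitrary) confirming that the channel dimension \(C\) must match num_features:

import torch
import torch.nn as nn

bn1d = nn.BatchNorm1d(num_features=16)        # expects (N, 16) or (N, 16, L)
x = torch.randn(8, 16, 50)
print(bn1d(x).shape)                          # torch.Size([8, 16, 50])

bn2d = nn.BatchNorm2d(num_features=3)         # expects (N, 3, H, W)
y = torch.randn(8, 3, 32, 32)
print(bn2d(y).shape)                          # torch.Size([8, 3, 32, 32])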
2. Convolution layers
torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
Input: \((N, C, L)\), where \(C\) corresponds to in_channels.
torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
Input: \((N, C, H, W)\), where \(C\) corresponds to in_channels.
torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
Input: \((N, C, D, H, W)\), where \(C\) corresponds to in_channels.
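A quick sketch (tensor sizes arbitrary, not from the original post) showing that in_channels must match the \(C\) dimension of the input, while out_channels sets the number of output channels:

import torch
import torch.nn as nn

conv1d = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
print(conv1d(torch.randn(8, 16, 50)).shape)         # torch.Size([8, 32, 50])

conv2d = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=2, padding=1)
print(conv2d(torch.randn(8, 3, 32, 32)).shape)      # torch.Size([8, 64, 16, 16])

conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
print(conv3d(torch.randn(2, 1, 16, 32, 32)).shape)  # torch.Size([2, 8, 16, 32, 32])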
3. Activation layers
torch.nn.ReLU(inplace=False)
Input: \((N, *)\).
torch.nn.Sigmoid()
Input: \((N, *)\).
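Activations are applied element-wise, so any shape \((N, *)\) passes through unchanged; a small illustration (sizes arbitrary):

import torch
import torch.nn as nn

x = torch.randn(4, 10)
print(nn.ReLU()(x).shape)      # torch.Size([4, 10])
print(nn.Sigmoid()(x).shape)   # torch.Size([4, 10])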
4. Fully connected (Linear) layer
torch.nn.Linear(in_features, out_features, bias=True)
Input: \((N, *, C_{in})\), where \(C_{in}\) corresponds to in_features.
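A short sketch (sizes arbitrary, not from the original post): only the last dimension must equal in_features; any leading dimensions are preserved:

import torch
import torch.nn as nn

fc = nn.Linear(in_features=128, out_features=10)
print(fc(torch.randn(8, 128)).shape)      # (N, C_in)    -> torch.Size([8, 10])
print(fc(torch.randn(8, 5, 128)).shape)   # (N, *, C_in) -> torch.Size([8, 5, 10])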
5. LSTM
torch.nn.LSTM(input_size, hidden_size, num_layers, bias=True, batch_first=False, dropout=0, bidirectional=False, proj_size=0)
Input:
\(inputs: (T, N, C)\), where \(C\) is the input size (input_size) and \(T\) is the sequence length (with batch_first=False)
\(h_0: (num\_layers * num\_directions,\ N,\ hidden\_size)\)
\(c_0: (num\_layers * num\_directions,\ N,\ hidden\_size)\)
Output:
\(outputs: (T,\ N,\ num\_directions * hidden\_size)\)
\(h_n: (num\_layers * num\_directions,\ N,\ hidden\_size)\)
\(c_n: (num\_layers * num\_directions,\ N,\ hidden\_size)\)
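A shape check for the LSTM case above (sizes arbitrary, not from the original post), using batch_first=False so the input is \((T, N, C)\), and bidirectional=True so num_directions = 2:

import torch
import torch.nn as nn

T, N, C = 20, 8, 64                     # sequence length, batch size, input_size
num_layers, hidden_size = 2, 128
num_directions = 2                      # because bidirectional=True

lstm = nn.LSTM(input_size=C, hidden_size=hidden_size, num_layers=num_layers, bidirectional=True)

inputs = torch.randn(T, N, C)
h0 = torch.zeros(num_layers * num_directions, N, hidden_size)
c0 = torch.zeros(num_layers * num_directions, N, hidden_size)

outputs, (hn, cn) = lstm(inputs, (h0, c0))
print(outputs.shape)  # torch.Size([20, 8, 256]) = (T, N, num_directions * hidden_size)
print(hn.shape)       # torch.Size([4, 8, 128])  = (num_layers * num_directions, N, hidden_size)
print(cn.shape)       # torch.Size([4, 8, 128])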