torchvision/ops/misc.py — 46 additions, 2 deletions
@@ -129,7 +129,7 @@ class Conv2dNormActivation(ConvNormActivation):
         padding (int, tuple or str, optional): Padding added to all four sides of the input. Default: None, in which case it will calculated as ``padding = (kernel_size - 1) // 2 * dilation``
         groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
         norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. Default: ``torch.nn.BatchNorm2d``
-        activation_layer (Callable[..., torch.nn.Module], optinal): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU``
+        activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU``
         dilation (int): Spacing between kernel elements. Default: 1
         inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True``
         bias (bool, optional): Whether to use bias in the convolution layer. By default, biases are included if ``norm_layer is None``.
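For context, a minimal usage sketch of the block this docstring describes, assuming `Conv2dNormActivation` is importable from `torchvision.ops` (it is defined in `torchvision/ops/misc.py`); the parameter values below are purely illustrative:

```python
import torch
from torchvision.ops import Conv2dNormActivation

# Conv2d -> BatchNorm2d -> ReLU, with padding derived as described above:
# padding = (kernel_size - 1) // 2 * dilation, so spatial size is preserved here.
block = Conv2dNormActivation(
    in_channels=3,
    out_channels=16,
    kernel_size=3,
    norm_layer=torch.nn.BatchNorm2d,  # default; pass None to drop the norm layer
    activation_layer=torch.nn.ReLU,   # the parameter whose doc typo this diff fixes
)

x = torch.randn(1, 3, 32, 32)
print(block(x).shape)  # torch.Size([1, 16, 32, 32])
```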
@@ -179,7 +179,7 @@ class Conv3dNormActivation(ConvNormActivation):
         padding (int, tuple or str, optional): Padding added to all four sides of the input. Default: None, in which case it will calculated as ``padding = (kernel_size - 1) // 2 * dilation``
         groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
         norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. Default: ``torch.nn.BatchNorm3d``
-        activation_layer (Callable[..., torch.nn.Module], optinal): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU``
+        activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU``
         dilation (int): Spacing between kernel elements. Default: 1
         inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True``
         bias (bool, optional): Whether to use bias in the convolution layer. By default, biases are included if ``norm_layer is None``.
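The 3D variant follows the same pattern over `(N, C, D, H, W)` inputs; a brief sketch, again assuming the `torchvision.ops` export and illustrative shapes:

```python
import torch
from torchvision.ops import Conv3dNormActivation

# Conv3d -> BatchNorm3d -> ReLU with the same padding rule as the 2D block.
block3d = Conv3dNormActivation(in_channels=1, out_channels=8, kernel_size=3)

video = torch.randn(2, 1, 8, 32, 32)
print(block3d(video).shape)  # torch.Size([2, 8, 8, 32, 32])
```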
"""This block implements the multi-layer perceptron (MLP) module.
260
+
261
+
Args:
262
+
in_channels (int): Number of channels of the input
263
+
hidden_channels (List[int]): List of the hidden channel dimensions
264
+
norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. Default: ``None``
265
+
activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU``
266
+
inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True``
267
+
bias (bool): Whether to use bias in the linear layer. Default ``True``
268
+
dropout (float): The probability for the dropout layer. Default: 0.0
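A minimal usage sketch of the MLP block this new docstring documents, assuming it is importable as `torchvision.ops.MLP` (the class lives in `torchvision/ops/misc.py`); the dimensions below are illustrative:

```python
import torch
from torchvision.ops import MLP

# Linear(64 -> 128) -> ReLU -> Dropout -> Linear(128 -> 10) -> Dropout,
# following the parameters listed in the docstring above.
mlp = MLP(
    in_channels=64,
    hidden_channels=[128, 10],
    activation_layer=torch.nn.ReLU,  # default
    dropout=0.2,                     # default is 0.0
)

features = torch.randn(4, 64)
print(mlp(features).shape)  # torch.Size([4, 10])
```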