inferno.extensions.layers package

Submodules

inferno.extensions.layers.activations module
class inferno.extensions.layers.activations.SELU
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
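A minimal usage sketch (not part of the original page; assumes torch and inferno are importable):

    import torch
    from inferno.extensions.layers.activations import SELU

    selu = SELU()
    x = torch.randn(4, 8)   # SELU is applied elementwise, so any shape works
    y = selu(x)             # call the instance rather than selu.forward(x)
    print(y.shape)          # torch.Size([4, 8])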
inferno.extensions.layers.building_blocks module

class inferno.extensions.layers.building_blocks.ResBlockBase(in_channels, out_channels, dim, size=2, force_skip_op=False, activated=True)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.building_blocks.ResBlock(in_channels, out_channels, dim, size=2, activated=True, activation='ReLU', batchnorm=True, force_skip_op=False, conv_kwargs=None)
    Bases: inferno.extensions.layers.building_blocks.ResBlockBase
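A hedged construction sketch; the shapes are illustrative and assume the block preserves spatial size via ‘SAME’-padded convolutions, like the other layers in this package:

    import torch
    from inferno.extensions.layers.building_blocks import ResBlock

    # A 2D residual block mapping 16 -> 32 channels with the defaults
    # (two convolutions via size=2, ReLU activation, batchnorm).
    block = ResBlock(in_channels=16, out_channels=32, dim=2)
    x = torch.randn(1, 16, 64, 64)   # (batch, channels, height, width)
    print(block(x).shape)            # expected: torch.Size([1, 32, 64, 64])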
inferno.extensions.layers.convolutional module

class inferno.extensions.layers.convolutional.ConvActivation(in_channels, out_channels, kernel_size, dim, activation, stride=1, dilation=1, groups=None, depthwise=False, bias=True, deconv=False, initialization=None)
    Bases: torch.nn.modules.module.Module

    Convolutional layer with ‘SAME’ padding followed by an activation.

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
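A sketch of direct usage; whether activation accepts an nn.Module instance, a string, or both is an assumption here (the subclasses below cover the common cases):

    import torch
    import torch.nn as nn
    from inferno.extensions.layers.convolutional import ConvActivation

    # With stride=1, 'SAME' padding keeps the spatial size unchanged.
    conv = ConvActivation(in_channels=3, out_channels=8, kernel_size=3,
                          dim=2, activation=nn.ReLU())
    x = torch.randn(1, 3, 32, 32)
    print(conv(x).shape)   # torch.Size([1, 8, 32, 32])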
class inferno.extensions.layers.convolutional.ConvELU2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.
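Because of the ‘SAME’ padding, these layers can be stacked without tracking spatial shrinkage; a short sketch:

    import torch
    import torch.nn as nn
    from inferno.extensions.layers.convolutional import ConvELU2D

    net = nn.Sequential(
        ConvELU2D(in_channels=3, out_channels=16, kernel_size=3),
        ConvELU2D(in_channels=16, out_channels=16, kernel_size=5),  # mixed kernel sizes are fine
    )
    x = torch.randn(1, 3, 64, 64)
    print(net(x).shape)   # torch.Size([1, 16, 64, 64])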
class inferno.extensions.layers.convolutional.ConvELU3D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.convolutional.ConvSigmoid2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with ‘SAME’ padding, sigmoid activation and orthogonal weight initialization.

class inferno.extensions.layers.convolutional.ConvSigmoid3D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with ‘SAME’ padding, sigmoid activation and orthogonal weight initialization.

class inferno.extensions.layers.convolutional.DeconvELU2D(in_channels, out_channels, kernel_size=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D deconvolutional layer with ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.convolutional.DeconvELU3D(in_channels, out_channels, kernel_size=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D deconvolutional layer with ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.convolutional.StridedConvELU2D(in_channels, out_channels, kernel_size, stride=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D strided convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.convolutional.StridedConvELU3D(in_channels, out_channels, kernel_size, stride=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D strided convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.
class inferno.extensions.layers.convolutional.DilatedConvELU2D(in_channels, out_channels, kernel_size, dilation=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D dilated convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.convolutional.DilatedConvELU3D(in_channels, out_channels, kernel_size, dilation=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D dilated convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.convolutional.Conv2D(in_channels, out_channels, kernel_size, dilation=1, stride=1, activation=None)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with ‘SAME’ padding and orthogonal weight initialization. By default, this layer does not apply an activation function.

class inferno.extensions.layers.convolutional.Conv3D(in_channels, out_channels, kernel_size, dilation=1, stride=1, activation=None)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with ‘SAME’ padding and orthogonal weight initialization. By default, this layer does not apply an activation function.
class inferno.extensions.layers.convolutional.BNReLUConv2D(in_channels, out_channels, kernel_size, stride=1)
    Bases: inferno.extensions.layers.convolutional._BNReLUSomeConv, inferno.extensions.layers.convolutional.ConvActivation

    2D BN-ReLU-Conv layer with ‘SAME’ padding and He weight initialization.

class inferno.extensions.layers.convolutional.BNReLUConv3D(in_channels, out_channels, kernel_size, stride=1)
    Bases: inferno.extensions.layers.convolutional._BNReLUSomeConv, inferno.extensions.layers.convolutional.ConvActivation

    3D BN-ReLU-Conv layer with ‘SAME’ padding and He weight initialization.

class inferno.extensions.layers.convolutional.BNReLUDepthwiseConv2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional._BNReLUSomeConv, inferno.extensions.layers.convolutional.ConvActivation

    2D BN-ReLU-Conv layer with ‘SAME’ padding, He weight initialization and depthwise convolution. Note that depthwise convolutions require in_channels == out_channels.
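Since the depthwise convolution processes each channel separately, in_channels must equal out_channels, as in this sketch:

    import torch
    from inferno.extensions.layers.convolutional import BNReLUDepthwiseConv2D

    layer = BNReLUDepthwiseConv2D(in_channels=32, out_channels=32, kernel_size=3)
    x = torch.randn(1, 32, 28, 28)
    print(layer(x).shape)   # torch.Size([1, 32, 28, 28])
    # BNReLUDepthwiseConv2D(32, 64, 3) would violate in_channels == out_channels.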
class inferno.extensions.layers.convolutional.ConvSELU2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with SELU activation and the appropriate weight initialization.

class inferno.extensions.layers.convolutional.ConvSELU3D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with SELU activation and the appropriate weight initialization.

class inferno.extensions.layers.convolutional.ConvReLU2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with ‘SAME’ padding, ReLU activation and Kaiming normal weight initialization.

class inferno.extensions.layers.convolutional.ConvReLU3D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with ‘SAME’ padding, ReLU activation and Kaiming normal weight initialization.
inferno.extensions.layers.device module

class inferno.extensions.layers.device.DeviceTransfer(target_device, device_ordinal=None, async=False)
    Bases: torch.nn.modules.module.Module

    Layer to transfer variables to a specified device.

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.device.OnDevice(module, target_device, device_ordinal=None, async=False)
    Bases: torch.nn.modules.module.Module

    Moves a module to a device. The advantage of using this over torch.nn.Module.cuda is that the inputs are transferred to the same device as the module, enabling easy model parallelism.

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
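A hedged sketch of the model-parallelism pattern described above; that target_device takes the string 'cuda' together with a device_ordinal is an assumption, and two GPUs are required:

    import torch
    import torch.nn as nn
    from inferno.extensions.layers.device import OnDevice
    from inferno.extensions.layers.convolutional import ConvELU2D

    # Each stage lives on its own GPU; OnDevice transfers incoming tensors
    # to the stage's device, so no manual transfer is needed in between.
    model = nn.Sequential(
        OnDevice(ConvELU2D(3, 16, 3), 'cuda', device_ordinal=0),
        OnDevice(ConvELU2D(16, 16, 3), 'cuda', device_ordinal=1),
    )
    x = torch.randn(1, 3, 64, 64)   # starts on the CPU
    y = model(x)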
inferno.extensions.layers.identity module

inferno.extensions.layers.prefab module

class inferno.extensions.layers.prefab.PreActSimpleResidualBlock(in_channels, num_hidden_channels, upsample=False, downsample=False)
class inferno.extensions.layers.prefab.ResidualBlock(layers, resample=None)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
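A hypothetical usage sketch; that layers is an iterable of shape-preserving modules (so the skip connection can be added) is inferred from the signature, not stated on this page:

    import torch
    import torch.nn as nn
    from inferno.extensions.layers.prefab import ResidualBlock

    body = [nn.Conv2d(16, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1)]
    block = ResidualBlock(body)      # input is re-added to the output of `body`
    x = torch.randn(1, 16, 32, 32)
    print(block(x).shape)            # expected: torch.Size([1, 16, 32, 32])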
inferno.extensions.layers.res_unet module

class inferno.extensions.layers.res_unet.ResBlockUNet(in_channels, dim, out_channels, unet_kwargs=None, res_block_kwargs=None, activated=True, side_out_parts=None)
    Bases: inferno.extensions.layers.unet_base.UNetBase

    TODO.

    activated
        TYPE – Description

    dim
        TYPE – Description

    res_block_kwargs
        TYPE – Description

    side_out_parts
        TYPE – Description

    unet_kwargs
        TYPE – Description
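Since the docstring is still a TODO, here is a hedged construction sketch with argument meanings inferred from UNetBase below; the output shape is an assumption:

    import torch
    from inferno.extensions.layers.res_unet import ResBlockUNet

    unet = ResBlockUNet(in_channels=1, dim=2, out_channels=8)
    x = torch.randn(1, 1, 64, 64)   # spatial size should be divisible by 2**depth
    print(unet(x).shape)            # expected: torch.Size([1, 8, 64, 64])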
inferno.extensions.layers.reshape module

class inferno.extensions.layers.reshape.View(as_shape)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
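A minimal sketch; whether as_shape includes the batch dimension is an assumption here (this example treats it as the full target shape):

    import torch
    from inferno.extensions.layers.reshape import View

    view = View((4, 16))        # target shape; the element count must match
    x = torch.randn(8, 8)       # 64 elements
    print(view(x).shape)        # expected: torch.Size([4, 16])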
class inferno.extensions.layers.reshape.As3D(channel_as_z=False, num_channels_or_num_z_slices=1)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.reshape.As2D(z_as_channel=True)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.reshape.Concatenate(dim=1)
    Bases: torch.nn.modules.module.Module

    Concatenate input tensors along a specified dimension.

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
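A minimal sketch (shapes illustrative):

    import torch
    from inferno.extensions.layers.reshape import Concatenate

    cat = Concatenate(dim=1)          # concatenate along the channel axis
    a = torch.randn(1, 3, 32, 32)
    b = torch.randn(1, 5, 32, 32)
    print(cat(a, b).shape)            # torch.Size([1, 8, 32, 32])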
class inferno.extensions.layers.reshape.Cat(dim=1)
    Bases: inferno.extensions.layers.reshape.Concatenate

    An alias for Concatenate. Hey, everyone knows who Cat is.
class inferno.extensions.layers.reshape.ResizeAndConcatenate(target_size, pool_mode='average', dim=1)
    Bases: torch.nn.modules.module.Module

    Resize input tensors spatially (to a specified target size) before concatenating them along a given dimension (the channel dimension, i.e. 1, by default). The down-sampling mode can be specified (‘average’ or ‘max’), but the up-sampling is always ‘nearest’.

    POOL_MODE_MAPPING = {'average': 'avg', 'avg': 'avg', 'max': 'max', 'mean': 'avg'}

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
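A hedged sketch; the exact form target_size takes (int vs. tuple) is an assumption:

    import torch
    from inferno.extensions.layers.reshape import ResizeAndConcatenate

    # Feature maps at different resolutions are brought to 16x16 (average
    # pooling downwards, 'nearest' upwards), then concatenated along dim 1.
    merge = ResizeAndConcatenate(target_size=16, pool_mode='average')
    a = torch.randn(1, 8, 32, 32)   # will be down-sampled
    b = torch.randn(1, 4, 8, 8)     # will be up-sampled
    print(merge(a, b).shape)        # expected: torch.Size([1, 12, 16, 16])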
class inferno.extensions.layers.reshape.PoolCat(target_size, pool_mode='average', dim=1)
    Bases: inferno.extensions.layers.reshape.ResizeAndConcatenate

    Alias for ResizeAndConcatenate, just to annoy snarky web developers.

class inferno.extensions.layers.reshape.GlobalMeanPooling
    Bases: inferno.extensions.layers.reshape.ResizeAndConcatenate

    Global mean pooling layer.

class inferno.extensions.layers.reshape.GlobalMaxPooling
    Bases: inferno.extensions.layers.reshape.ResizeAndConcatenate

    Global max pooling layer.
class inferno.extensions.layers.reshape.Sum
    Bases: torch.nn.modules.module.Module

    Sum all inputs.

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.reshape.SplitChannels(channel_index)
    Bases: torch.nn.modules.module.Module

    Split the input at a given index along the channel axis.

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
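A sketch; that the layer returns the two parts (channels before and after the index) is inferred from the docstring:

    import torch
    from inferno.extensions.layers.reshape import SplitChannels

    split = SplitChannels(channel_index=3)
    x = torch.randn(1, 7, 16, 16)
    head, tail = split(x)
    print(head.shape, tail.shape)   # expected: (1, 3, 16, 16) and (1, 4, 16, 16)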
class inferno.extensions.layers.reshape.Squeeze
    Bases: torch.nn.modules.module.Module

    forward(x)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.reshape.RemoveSingletonDimension(dim=1)
    Bases: torch.nn.modules.module.Module

    forward(x)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
inferno.extensions.layers.sampling module

class inferno.extensions.layers.sampling.AnisotropicUpsample(scale_factor)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
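The class is otherwise undocumented here; the sketch below assumes, from the name, that a 5D volume is up-sampled only in the two in-plane axes while z is left unchanged. Verify against the inferno source before relying on this:

    import torch
    from inferno.extensions.layers.sampling import AnisotropicUpsample

    up = AnisotropicUpsample(scale_factor=2)
    x = torch.randn(1, 8, 4, 32, 32)   # (batch, channels, z, y, x)
    y = up(x)
    print(y.shape)   # assumed: z stays 4, while y and x double to 64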
inferno.extensions.layers.unet_base module

class inferno.extensions.layers.unet_base.UNetBase(in_channels, out_channels, dim, depth=3, gain=2, residual=False, upsample_mode=None, p_dropout=None)
    Bases: torch.nn.modules.module.Module

    Base class for implementing UNets. The depth and dimension of the UNet are flexible. Deriving classes must implement conv_op_factory and may implement upsample_op_factory and downsample_op_factory.

    in_channels
        int – Description

    out_channels
        int – Description

    dim
        int – Spatial dimension of the data (must be 2 or 3)

    depth
        int – Number of down-sampling / up-sampling steps to perform

    gain
        int – Multiplicative increase in the number of channels while going down in the UNet. The same factor is used to decrease the number of channels while going up.

    residual
        bool – If true, the outputs of the down-stream blocks are added to the up-stream results; otherwise, the results are concatenated.

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
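To make gain concrete, here is a small sketch of the channel bookkeeping it implies; that the encoder starts exactly at in_channels is an assumption:

    # Channel counts implied by gain=2 over depth=3 down-sampling steps.
    in_channels, depth, gain = 16, 3, 2
    down = [in_channels * gain ** d for d in range(depth + 1)]
    print(down)   # [16, 32, 64, 128], divided by `gain` again on the way up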
Module contents
class inferno.extensions.layers.SELU
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.ConvActivation(in_channels, out_channels, kernel_size, dim, activation, stride=1, dilation=1, groups=None, depthwise=False, bias=True, deconv=False, initialization=None)
    Bases: torch.nn.modules.module.Module

    Convolutional layer with ‘SAME’ padding followed by an activation.

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.ConvELU2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.ConvELU3D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.ConvSigmoid2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with ‘SAME’ padding, sigmoid activation and orthogonal weight initialization.

class inferno.extensions.layers.ConvSigmoid3D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with ‘SAME’ padding, sigmoid activation and orthogonal weight initialization.

class inferno.extensions.layers.DeconvELU2D(in_channels, out_channels, kernel_size=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D deconvolutional layer with ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.DeconvELU3D(in_channels, out_channels, kernel_size=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D deconvolutional layer with ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.StridedConvELU2D(in_channels, out_channels, kernel_size, stride=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D strided convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.StridedConvELU3D(in_channels, out_channels, kernel_size, stride=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D strided convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.DilatedConvELU2D(in_channels, out_channels, kernel_size, dilation=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D dilated convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.DilatedConvELU3D(in_channels, out_channels, kernel_size, dilation=2)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D dilated convolutional layer with ‘SAME’ padding, ELU activation and orthogonal weight initialization.

class inferno.extensions.layers.Conv2D(in_channels, out_channels, kernel_size, dilation=1, stride=1, activation=None)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with ‘SAME’ padding and orthogonal weight initialization. By default, this layer does not apply an activation function.

class inferno.extensions.layers.Conv3D(in_channels, out_channels, kernel_size, dilation=1, stride=1, activation=None)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with ‘SAME’ padding and orthogonal weight initialization. By default, this layer does not apply an activation function.

class inferno.extensions.layers.BNReLUConv2D(in_channels, out_channels, kernel_size, stride=1)
    Bases: inferno.extensions.layers.convolutional._BNReLUSomeConv, inferno.extensions.layers.convolutional.ConvActivation

    2D BN-ReLU-Conv layer with ‘SAME’ padding and He weight initialization.

class inferno.extensions.layers.BNReLUConv3D(in_channels, out_channels, kernel_size, stride=1)
    Bases: inferno.extensions.layers.convolutional._BNReLUSomeConv, inferno.extensions.layers.convolutional.ConvActivation

    3D BN-ReLU-Conv layer with ‘SAME’ padding and He weight initialization.

class inferno.extensions.layers.BNReLUDepthwiseConv2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional._BNReLUSomeConv, inferno.extensions.layers.convolutional.ConvActivation

    2D BN-ReLU-Conv layer with ‘SAME’ padding, He weight initialization and depthwise convolution. Note that depthwise convolutions require in_channels == out_channels.

class inferno.extensions.layers.ConvSELU2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with SELU activation and the appropriate weight initialization.

class inferno.extensions.layers.ConvSELU3D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with SELU activation and the appropriate weight initialization.

class inferno.extensions.layers.ConvReLU2D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    2D convolutional layer with ‘SAME’ padding, ReLU activation and Kaiming normal weight initialization.

class inferno.extensions.layers.ConvReLU3D(in_channels, out_channels, kernel_size)
    Bases: inferno.extensions.layers.convolutional.ConvActivation

    3D convolutional layer with ‘SAME’ padding, ReLU activation and Kaiming normal weight initialization.
class inferno.extensions.layers.DeviceTransfer(target_device, device_ordinal=None, async=False)
    Bases: torch.nn.modules.module.Module

    Layer to transfer variables to a specified device.

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.OnDevice(module, target_device, device_ordinal=None, async=False)
    Bases: torch.nn.modules.module.Module

    Moves a module to a device. The advantage of using this over torch.nn.Module.cuda is that the inputs are transferred to the same device as the module, enabling easy model parallelism.

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.View(as_shape)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.As3D(channel_as_z=False, num_channels_or_num_z_slices=1)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.As2D(z_as_channel=True)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.Concatenate(dim=1)
    Bases: torch.nn.modules.module.Module

    Concatenate input tensors along a specified dimension.

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.Cat(dim=1)
    Bases: inferno.extensions.layers.reshape.Concatenate

    An alias for Concatenate. Hey, everyone knows who Cat is.

class inferno.extensions.layers.ResizeAndConcatenate(target_size, pool_mode='average', dim=1)
    Bases: torch.nn.modules.module.Module

    Resize input tensors spatially (to a specified target size) before concatenating them along a given dimension (the channel dimension, i.e. 1, by default). The down-sampling mode can be specified (‘average’ or ‘max’), but the up-sampling is always ‘nearest’.

    POOL_MODE_MAPPING = {'average': 'avg', 'avg': 'avg', 'max': 'max', 'mean': 'avg'}

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.PoolCat(target_size, pool_mode='average', dim=1)
    Bases: inferno.extensions.layers.reshape.ResizeAndConcatenate

    Alias for ResizeAndConcatenate, just to annoy snarky web developers.

class inferno.extensions.layers.GlobalMeanPooling
    Bases: inferno.extensions.layers.reshape.ResizeAndConcatenate

    Global mean pooling layer.

class inferno.extensions.layers.GlobalMaxPooling
    Bases: inferno.extensions.layers.reshape.ResizeAndConcatenate

    Global max pooling layer.

class inferno.extensions.layers.Sum
    Bases: torch.nn.modules.module.Module

    Sum all inputs.

    forward(*inputs)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.SplitChannels(channel_index)
    Bases: torch.nn.modules.module.Module

    Split the input at a given index along the channel axis.

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.Squeeze
    Bases: torch.nn.modules.module.Module

    forward(x)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.RemoveSingletonDimension(dim=1)
    Bases: torch.nn.modules.module.Module

    forward(x)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.UNetBase(in_channels, out_channels, dim, depth=3, gain=2, residual=False, upsample_mode=None, p_dropout=None)
    Bases: torch.nn.modules.module.Module

    Base class for implementing UNets. The depth and dimension of the UNet are flexible. Deriving classes must implement conv_op_factory and may implement upsample_op_factory and downsample_op_factory.

    in_channels
        int – Description

    out_channels
        int – Description

    dim
        int – Spatial dimension of the data (must be 2 or 3)

    depth
        int – Number of down-sampling / up-sampling steps to perform

    gain
        int – Multiplicative increase in the number of channels while going down in the UNet. The same factor is used to decrease the number of channels while going up.

    residual
        bool – If true, the outputs of the down-stream blocks are added to the up-stream results; otherwise, the results are concatenated.

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.
class inferno.extensions.layers.ResBlockUNet(in_channels, dim, out_channels, unet_kwargs=None, res_block_kwargs=None, activated=True, side_out_parts=None)
    Bases: inferno.extensions.layers.unet_base.UNetBase

    TODO.

    activated
        TYPE – Description

    dim
        TYPE – Description

    res_block_kwargs
        TYPE – Description

    side_out_parts
        TYPE – Description

    unet_kwargs
        TYPE – Description
class inferno.extensions.layers.ResBlockBase(in_channels, out_channels, dim, size=2, force_skip_op=False, activated=True)
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call. Should be overridden by all subclasses. Note: call the Module instance itself rather than forward() directly, so that registered hooks are run; a direct call to forward() silently ignores them.

class inferno.extensions.layers.ResBlock(in_channels, out_channels, dim, size=2, activated=True, activation='ReLU', batchnorm=True, force_skip_op=False, conv_kwargs=None)
    Bases: inferno.extensions.layers.building_blocks.ResBlockBase