inferno.extensions.containers package¶
Submodules¶
inferno.extensions.containers.graph module¶
- class inferno.extensions.containers.graph.NNGraph(incoming_graph_data=None, **attr)[source]¶
Bases: networkx.classes.digraph.DiGraph
A NetworkX DiGraph, except that node and edge ordering matters.
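Because the node and adjacency dicts are OrderedDicts (see the factory aliases below), iteration follows insertion order. A minimal sketch (the node names are hypothetical):
>>> from inferno.extensions.containers.graph import NNGraph
>>> graph = NNGraph()
>>> graph.add_nodes_from(['conv0', 'conv1', 'pool'])
>>> list(graph.nodes)  # nodes come back in the order they were added
['conv0', 'conv1', 'pool']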
- ATTRIBUTES_TO_NOT_COPY = {'payload'}¶
- adjlist_dict_factory¶
alias of collections.OrderedDict
- copy(**init_kwargs)[source]¶
Return a copy of the graph.
The copy method by default returns a shallow copy of the graph and attributes. That is, if an attribute is a container, that container is shared by the original and the copy. Use Python’s copy.deepcopy for new containers.
If as_view is True then a view is returned instead of a copy.
Notes
All copies reproduce the graph structure, but data attributes may be handled in different ways. There are four types of copies of a graph that people might want.
Deepcopy – The default behavior is a “deepcopy” where the graph structure as well as all data attributes and any objects they might contain are copied. The entire graph object is new so that changes in the copy do not affect the original object. (see Python’s copy.deepcopy)
Data Reference (Shallow) – For a shallow copy the graph structure is copied but the edge, node and graph attribute dicts are references to those in the original graph. This saves time and memory but could cause confusion if you change an attribute in one graph and it changes the attribute in the other. NetworkX does not provide this level of shallow copy.
Independent Shallow – This copy creates new independent attribute dicts and then does a shallow copy of the attributes. That is, any attributes that are containers are shared between the new graph and the original. This is exactly what dict.copy() provides. You can obtain this style copy using:
>>> G = nx.path_graph(5)
>>> H = G.copy()
>>> H = G.copy(as_view=False)
>>> H = nx.Graph(G)
>>> H = G.fresh_copy().__class__(G)
Fresh Data – For fresh data, the graph structure is copied while new empty data attribute dicts are created. The resulting graph is independent of the original and it has no edge, node or graph attributes. Fresh copies are not enabled. Instead use:
>>> H = G.fresh_copy()
>>> H.add_nodes_from(G)
>>> H.add_edges_from(G.edges)
View – Inspired by dict-views, graph-views act like read-only versions of the original graph, providing a copy of the original structure without requiring any memory for copying the information.
See the Python copy module for more information on shallow and deep copies, https://docs.python.org/2/library/copy.html.
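To make the “Independent Shallow” behavior above concrete, here is a minimal sketch: the copy gets new attribute dicts, but a container stored as an attribute value stays shared with the original:
>>> import networkx as nx
>>> G = nx.path_graph(3)
>>> G.nodes[0]['tags'] = ['a']       # a container attribute
>>> H = G.copy()                     # independent shallow copy
>>> H.nodes[0]['tags'].append('b')   # the list object is shared
>>> G.nodes[0]['tags']
['a', 'b']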
Parameters: as_view (bool, optional (default=False)) – If True, the returned graph-view provides a read-only view of the original graph without actually copying any data.
Returns: G – A copy of the graph.
Return type: Graph
See also
- to_directed() – return a directed copy of the graph.
Examples
>>> G = nx.path_graph(4)  # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> H = G.copy()
- node_dict_factory¶
alias of collections.OrderedDict
- class inferno.extensions.containers.graph.Graph(graph=None)[source]¶
Bases: torch.nn.modules.module.Module
A graph structure to build networks with complex architectures. The resulting graph model can be used like any other torch.nn.Module. The graph structure used behind the scenes is a networkx.DiGraph. This internal graph is exposed by the apply_on_graph method, which can be used with any NetworkX function (e.g. for plotting with matplotlib or GraphViz).
Examples
The naive inception module (without the max-pooling, for simplicity) with ELU-layers of 64 units can be built as follows (assuming 64 input channels):
>>> from inferno.extensions.layers.reshape import Concatenate
>>> from inferno.extensions.layers.convolutional import ConvELU2D
>>> import torch
>>> from torch.autograd import Variable
>>> # Build the model
>>> inception_module = Graph()
>>> inception_module.add_input_node('input')
>>> inception_module.add_node('conv1x1', ConvELU2D(64, 64, 1), previous='input')
>>> inception_module.add_node('conv3x3', ConvELU2D(64, 64, 3), previous='input')
>>> inception_module.add_node('conv5x5', ConvELU2D(64, 64, 5), previous='input')
>>> inception_module.add_node('cat', Concatenate(),
...                           previous=['conv1x1', 'conv3x3', 'conv5x5'])
>>> inception_module.add_output_node('output', 'cat')
>>> # Build dummy variable
>>> input = Variable(torch.rand(1, 64, 100, 100))
>>> # Get output
>>> output = inception_module(input)
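The internal graph can then be handed to NetworkX routines via apply_on_graph. A minimal sketch, assuming apply_on_graph(function, *args, **kwargs) applies function to the underlying networkx.DiGraph:
>>> import networkx as nx
>>> inception_module.apply_on_graph(nx.draw_networkx)  # e.g. plot the architecture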
- add_edge(from_node, to_node)[source]¶
Add an edge between two nodes.
Parameters:
- from_node (str) – Name of the source node.
- to_node (str) – Name of the target node.
Returns: self
Return type: Graph
Raises: AssertionError – if either of the two nodes is not in the graph, or if the edge is not ‘legal’.
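For instance, a node added without previous can be wired up explicitly afterwards. A minimal sketch (the node names are hypothetical):
>>> from inferno.extensions.containers.graph import Graph
>>> from inferno.extensions.layers.convolutional import ConvELU2D
>>> g = Graph()
>>> g.add_input_node('input')
>>> g.add_node('conv', ConvELU2D(64, 64, 3))  # no 'previous' given
>>> g.add_edge('input', 'conv')               # wire the nodes explicitly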
- add_input_node(name)[source]¶
Add an input to the graph. The order in which input nodes are added is the order in which the forward method accepts its inputs.
Parameters: name (str) – Name of the input node.
Returns: self
Return type: Graph
- add_node(name, module, previous=None)[source]¶
Add a node to the graph.
Parameters:
- name (str) – Name of the node. Nodes are identified by their names.
- module (torch.nn.Module) – Torch module for this node.
- previous (str or list of str) – (List of) name(s) of the previous node(s).
Returns: self
Return type: Graph
- add_output_node(name, previous=None)[source]¶
Add an output to the graph. The order in which output nodes are added is the order in which the forward method returns its outputs.
Parameters:
- name (str) – Name of the output node.
- previous (str or list of str) – (List of) name(s) of the previous node(s).
Returns: self
Return type: Graph
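The ordering guarantees of add_input_node and add_output_node can be seen in a minimal two-input sketch (node names and shapes are hypothetical):
>>> import torch
>>> from torch.autograd import Variable
>>> from inferno.extensions.containers.graph import Graph
>>> from inferno.extensions.layers.reshape import Concatenate
>>> g = Graph()
>>> g.add_input_node('input_0')
>>> g.add_input_node('input_1')
>>> g.add_node('cat', Concatenate(), previous=['input_0', 'input_1'])
>>> g.add_output_node('output', 'cat')
>>> x0 = Variable(torch.rand(1, 16, 32, 32))
>>> x1 = Variable(torch.rand(1, 8, 32, 32))
>>> # positional arguments bind to input nodes in the order they were added
>>> y = g(x0, x1)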
- forward(*inputs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- get_module_for_nodes(names)[source]¶
Gets the torch.nn.Module object for nodes corresponding to names.
Parameters: names (str or list of str) – Names of the nodes to fetch the modules of.
Returns: Module or a list of modules corresponding to names.
Return type: list or torch.nn.Module
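Continuing the inception example above, a minimal sketch:
>>> conv3x3 = inception_module.get_module_for_nodes('conv3x3')
>>> convs = inception_module.get_module_for_nodes(['conv1x1', 'conv5x5'])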
- graph¶
- graph_is_valid¶
Checks if the graph is valid.
- input_nodes¶
Gets a list of input nodes. The order is relevant and is the same as that in which the forward method accepts its inputs.
Returns: A list of names (str) of the input nodes.
Return type: list
- is_node_in_graph(name)[source]¶
Checks whether a node is in the graph.
Parameters: name (str) – Name of the node.
Returns: Whether the node is in the graph.
Return type: bool
- is_sink_node(name)[source]¶
Checks whether a given node (by name) is a sink node. A sink node has no outgoing edges.
Parameters: name (str) – Name of the node.
Returns: Whether the node is a sink node.
Return type: bool
Raises: AssertionError – if the node is not found in the graph.
- is_source_node(name)[source]¶
Checks whether a given node (by name) is a source node. A source node has no incoming edges.
Parameters: name (str) – Name of the node.
Returns: Whether the node is a source node.
Return type: bool
Raises: AssertionError – if the node is not found in the graph.
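In the inception example above, the input node is a source and the output node is a sink:
>>> inception_module.is_source_node('input')
True
>>> inception_module.is_sink_node('output')
True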
- output_nodes¶
Gets a list of output nodes. The order is relevant and is the same as that in which the forward method returns its outputs.
Returns: A list of names (str) of the output nodes.
Return type: list
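For the inception example above:
>>> inception_module.input_nodes
['input']
>>> inception_module.output_nodes
['output']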
inferno.extensions.containers.sequential module¶
- class inferno.extensions.containers.sequential.Sequential1(*args)[source]¶
Bases: torch.nn.modules.container.Sequential
Like torch.nn.Sequential, but with a few extra methods.
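It is constructed exactly like torch.nn.Sequential. A minimal sketch:
>>> import torch
>>> import torch.nn as nn
>>> from torch.autograd import Variable
>>> from inferno.extensions.containers.sequential import Sequential1
>>> model = Sequential1(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))
>>> y = model(Variable(torch.rand(4, 10)))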
- class inferno.extensions.containers.sequential.Sequential2(*args)[source]¶
Bases: inferno.extensions.containers.sequential.Sequential1
Another sequential container, identical to torch.nn.Sequential except that modules may return multiple outputs and accept multiple inputs.
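A minimal sketch, assuming a tuple returned by one module is unpacked into the positional inputs of the next (SplitHalves and AddPair are hypothetical modules):
>>> import torch
>>> import torch.nn as nn
>>> from torch.autograd import Variable
>>> from inferno.extensions.containers.sequential import Sequential2
>>> class SplitHalves(nn.Module):
...     def forward(self, x):
...         # return two outputs by splitting along the feature dimension
...         half = x.size(1) // 2
...         return x[:, :half], x[:, half:]
>>> class AddPair(nn.Module):
...     def forward(self, a, b):
...         # accept the two outputs of the previous module
...         return a + b
>>> model = Sequential2(SplitHalves(), AddPair())
>>> y = model(Variable(torch.rand(4, 6)))  # y has shape (4, 3)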
- forward(*input)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.