:mod:`blueoil.converter.core.optimizer`
=======================================
.. py:module:: blueoil.converter.core.optimizer
.. autoapi-nested-parse::
Module of optimization passes.
Module Contents
---------------
Functions
~~~~~~~~~
.. autoapisummary::
blueoil.converter.core.optimizer.pass_remove_identities
blueoil.converter.core.optimizer.pass_transpose
blueoil.converter.core.optimizer.pass_constant_folding
blueoil.converter.core.optimizer.pass_propagate_quantization_details_into_conv
blueoil.converter.core.optimizer.pass_compute_thresholds
blueoil.converter.core.optimizer.pass_pack_weights
blueoil.converter.core.optimizer.pass_quantize_convolutions
blueoil.converter.core.optimizer.pass_propagate_datatypes
blueoil.converter.core.optimizer.pass_propagate_format
blueoil.converter.core.optimizer.pass_insert_cast
blueoil.converter.core.optimizer.pass_lookup
blueoil.converter.core.optimizer.pass_simplify_batchnorm
.. function:: pass_remove_identities(graph: Graph) -> None
Removes those nodes of a Graph that satisfy the condition node.op_type() == Identity.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
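The effect of this pass can be sketched on a toy graph. The ``Node`` class and the edge rewiring below are illustrative stand-ins, not the converter's actual Graph API; identity nodes are dropped and their consumers are rewired to the identity's input.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    op_type: str
    inputs: List[str] = field(default_factory=list)

def _resolve(alias, name):
    # Follow chains of identities until a real producer is found.
    while name in alias:
        name = alias[name]
    return name

def remove_identities(nodes):
    """Drop Identity nodes and rewire consumers to each identity's input."""
    alias = {n.name: n.inputs[0] for n in nodes if n.op_type == "Identity"}
    kept = [n for n in nodes if n.op_type != "Identity"]
    for n in kept:
        n.inputs = [_resolve(alias, i) for i in n.inputs]
    return kept

nodes = [
    Node("x", "Input"),
    Node("id1", "Identity", ["x"]),
    Node("conv", "Conv", ["id1"]),
]
result = remove_identities(nodes)
print([n.name for n in result])  # ['x', 'conv']
print(result[-1].inputs)         # ['x']
```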
.. function:: pass_transpose(graph: Graph) -> None
Changes the data order of every node to NHWC, where N is the batch size (assumed to be 1 at inference time), H and W are the height and width respectively, and C is the number of channels. The fastest-changing dimension is C.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
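As an illustration of the target layout (not the converter's internal mechanics), a NumPy tensor in NCHW order can be permuted to NHWC, making C the fastest-changing dimension:

```python
import numpy as np

# N=1, C=2, H=3, W=4 tensor in NCHW order.
nchw = np.arange(1 * 2 * 3 * 4).reshape(1, 2, 3, 4)

# Permute axes to NHWC: channels move to the last (fastest-changing) axis.
nhwc = nchw.transpose(0, 2, 3, 1)
print(nhwc.shape)  # (1, 3, 4, 2)
```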
.. function:: pass_constant_folding(graph: Graph) -> None
Given a node N, if the value of every input of N is known at compile time, then N is executed during this pass. The node N and its inputs are replaced with a Constant node which holds the computed output of N.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
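A minimal sketch of the idea, using hypothetical node dictionaries rather than the converter's Graph and Operator classes: any arithmetic node whose inputs are all known constants is evaluated at compile time and replaced by a Constant holding the result, which lets folding propagate transitively.

```python
import operator

OPS = {"Add": operator.add, "Mul": operator.mul}

def fold_constants(nodes):
    """Replace nodes with all-constant inputs by Constant nodes."""
    values = {n["name"]: n["value"] for n in nodes if n["op"] == "Constant"}
    folded = []
    for n in nodes:
        if n["op"] in OPS and all(i in values for i in n["inputs"]):
            a, b = (values[i] for i in n["inputs"])
            v = OPS[n["op"]](a, b)
            values[n["name"]] = v  # makes downstream nodes foldable too
            folded.append({"name": n["name"], "op": "Constant", "value": v})
        else:
            folded.append(n)
    return folded

nodes = [
    {"name": "c1", "op": "Constant", "value": 2},
    {"name": "c2", "op": "Constant", "value": 3},
    {"name": "add", "op": "Add", "inputs": ["c1", "c2"]},
    {"name": "out", "op": "Mul", "inputs": ["add", "c2"]},  # folds transitively
]
result = fold_constants(nodes)
print(result[-1])  # {'name': 'out', 'op': 'Constant', 'value': 15}
```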
.. function:: pass_propagate_quantization_details_into_conv(graph: Graph) -> None
Given a graph, this pass propagates information about quantization into the convolution nodes.
There are two kinds of nodes: those which preserve quantization (for example, Space2Depth, because it does not affect the actual values of the input data, only their positions) and those which destroy quantization (for example, BatchNormalization, because it involves floating-point operations).
If there is a path in the Graph connecting a Quantizer node Q to a Conv node C, and every node between Q and C preserves quantization (for example, Q -> Space2Depth -> Concat -> Conv), then the details of the quantizer Q are propagated into the convolution node C.
This pass allows us to further process the convolution nodes later and maybe quantize these convolutions
based on these quantization details. Note that a convolution node has two inputs, input data and weights.
We propagate quantization details through both the input node branch and the weight node branch.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
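The propagation rule can be sketched as a backward walk from a convolution input: if every producer on the path belongs to a quantization-preserving set, the quantizer is found and its details can be attached to the convolution. The node names and the preserving-op set below are illustrative, not the converter's actual classification.

```python
# Ops assumed (for this sketch) to preserve quantization.
PRESERVES_QUANTIZATION = {"Space2Depth", "Concat", "Reshape"}

def find_quantizer(producers, start):
    """Walk backwards from `start`; producers maps name -> (op_type, parent or None)."""
    name = start
    while name is not None:
        op, parent = producers[name]
        if op == "Quantizer":
            return name  # quantization details reach the convolution
        if op not in PRESERVES_QUANTIZATION:
            return None  # path destroys quantization (e.g. BatchNormalization)
        name = parent
    return None

producers = {
    "q": ("Quantizer", None),
    "s2d": ("Space2Depth", "q"),
    "concat": ("Concat", "s2d"),
    "bn": ("BatchNormalization", "q"),
}
print(find_quantizer(producers, "concat"))  # 'q'
print(find_quantizer(producers, "bn"))      # None
```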
.. function:: pass_compute_thresholds(graph: Graph) -> None
Given a Quantizer node Q:
- if there is a backward path between Q and a convolution node, and
- every node N on that path satisfies the condition N.is_monotonic, and
- the convolution node C (the end of this path) is a quantized convolution,
then this pass constructs a per-channel LUT which maps each possible output value of the quantized convolution node C to the corresponding output of the quantization node Q. This effectively compresses the path C -> ... -> Q into a list of LUTs that can be used during inference.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
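The key observation behind the thresholds can be sketched as follows: because every op between the convolution accumulator and the quantizer is monotonic, the composed function is a step function of the accumulator, so only the positions where the output level changes need to be stored. The intermediate function ``f`` and the 2-bit quantizer below are illustrative stand-ins, not the converter's actual operators.

```python
def f(acc):
    # Hypothetical monotonic ops between Conv and Quantizer
    # (e.g. a batch-norm scale and shift).
    return 0.5 * acc + 1.0

def quantize_2bit(x, max_value=2.0):
    # Clip to [0, max_value] and map to the four 2-bit levels {0, 1, 2, 3}.
    x = min(max(x, 0.0), max_value)
    return round(x / max_value * 3)

# quantize_2bit(f(acc)) is a step function of the integer accumulator;
# record the accumulator values where the output level changes.
levels = [quantize_2bit(f(acc)) for acc in range(-10, 10)]
thresholds = [acc for acc, prev, cur in zip(range(-9, 10), levels, levels[1:])
              if cur != prev]
print(thresholds)  # [-1, 0, 2]
```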
.. function:: pass_pack_weights(graph: Graph) -> None
Given a quantized convolution node C, this pass packs the weights of C into 32-bit words. If the node Q that applies quantization to the weights of C quantizes them into, for example, 1-bit values, then one 32-bit word will contain 32 weights.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
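Packing 1-bit weights can be illustrated with plain bit operations; the bit ordering below (bit i of a word holds weight i) is an assumption for the sketch and may differ from the converter's actual layout.

```python
def pack_bits(weights):
    """Pack a list of 1-bit weights into 32-bit words, LSB first."""
    assert len(weights) % 32 == 0, "pad the weight list to a multiple of 32"
    words = []
    for i in range(0, len(weights), 32):
        word = 0
        for j, w in enumerate(weights[i:i + 32]):
            word |= (w & 1) << j  # weight j of this group -> bit j
        words.append(word)
    return words

weights = [1, 0, 1, 1] + [0] * 28  # 32 one-bit weights
print(pack_bits(weights))  # [13]  (binary ...1101)
```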
.. function:: pass_quantize_convolutions(graph: Graph) -> None
Given a convolution node C, if C has proper quantization details, this pass marks C as quantized and assigns the correct output data types to C and its quantizers. Note that the expected output data type at runtime is defined as QUANTIZED_NOT_PACKED.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
.. function:: pass_propagate_datatypes(graph) -> None
Further propagate output data types.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
.. function:: pass_propagate_format(graph) -> None
Further propagate output data formats.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
.. function:: pass_insert_cast(graph: Graph) -> None
Inserts a Cast operator where needed.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
.. function:: pass_lookup(graph: Graph, config: Config) -> None
Lookup.
:param graph: The input graph. It will be modified in-place.
:type graph: Graph
.. function:: pass_simplify_batchnorm(graph: Graph) -> None
Simplifies the BatchNorm operator.
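The standard simplification of batch normalization folds y = gamma * (x - mean) / sqrt(var + eps) + beta into a single scale-and-shift y = a * x + b; the scalar values below are a minimal sketch of that algebra, assuming this is the rewrite the pass performs.

```python
import math

gamma, beta, mean, var, eps = 2.0, 1.0, 0.5, 4.0, 0.0

# Fold the four BatchNorm parameters into one scale and one shift.
scale = math.sqrt(var + eps)
a = gamma / scale                  # 1.0
b = beta - gamma * mean / scale    # 0.5

x = 3.0
folded = a * x + b
original = gamma * (x - mean) / math.sqrt(var + eps) + beta
assert abs(folded - original) < 1e-12
print(folded)  # 3.5
```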