8.1.1.2.1.1.1.6. blueoil.converter.core.operators

Definition of operators.

8.1.1.2.1.1.1.6.1. Module Contents

8.1.1.2.1.1.1.6.1.1. Classes

Operator

Base class of operators.

Variable

Variable class, which must be Input, Output or a constant.

Input

Input class. This is a placeholder.

Constant

Constant class. This object has data inside.

Output

Output class.

Identity

Identity operator.

Quantizer

Base class for quantizers.

BinaryMeanScalingQuantizer

Quantization operator using binary scaling.

SpaceToDepth

Space to Depth operator.

Transpose

Transpose operator.

Conv

Convolution operator.

BatchNormalization

Batch normalization operator.

LinearMidTreadHalfQuantizer

Quantization operator with ‘linear mid tread half’.

Add

Add operator.

Sub

Subtract operator.

Pool

Pooling operator.

MaxPool

Max pooling operator.

AveragePool

Average pooling operator.

Reshape

Reshape operator.

Softmax

Softmax operator.

Relu

Relu class.

LeakyRelu

Leaky relu class.

Flatten

Flatten class.

Dropout

Dropout operator.

Gemm

Gemm operator.

Mul

Mul operator.

BinaryChannelWiseMeanScalingQuantizer

Quantization operator using binary channel wise scaling.

ConcatOnDepth

Concatenation operator.

Maximum

Maximum operator.

DepthToSpace

Depth to Space operator.

ResizeNearestNeighbor

Resize Nearest Neighbor operator.

Split

Split operator (dummy).

Slice

Slice operator.

Pad

Pad operator.

MatMul

Matrix Multiplication operator.

Gather

Gather operator.

Unique

Unique operator (dummy).

UniqueValue

Unique operator (value version).

UniqueIndex

Unique operator (index version).

Cast

Cast operator.

Minimum

Minimum operator.

StridedSlice

StridedSlice operator.

Lookup

Lookup operator.

Prod

Prod operator.

Shape

Shape operator.

BatchNormalizationOptimized

Optimized batch normalization operator.

blueoil.converter.core.operators.Ops
blueoil.converter.core.operators.OutOps
blueoil.converter.core.operators.warning_sign
class blueoil.converter.core.operators.Operator(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: object

Base class of operators.

_input_names :List[str] = ['input']
_output_names :List[str] = ['output']
update_shape(self, shape: List[int], dimension_format: str) → None
__connect_to_outputs(self) → None

Connect input operators’ outputs to this object.

_assert(self, predicate: bool, message: str = '') → None

Assert a predicate. When it fails, raise an error.

This is a substitute for an assert statement. An assert statement is not checked in optimized byte-compiled code, but this check is always performed.

Parameters
  • predicate (bool) – Assertion to be true

  • message (str) – Error message in the failure of the assertion

_check_consistency(self) → None

Check data consistency in the initialization.

equals(self, other: Any) → bool

Return whether these two objects are equivalent.

property name(self) → str

Return name. This must be a unique name in the graph.

property op_type(self) → str

Return the operation type.

property input_ops(self) → Ops

Return a dict of input operators.

Returns

Collection of input operators in a dictionary format.

The keys are input symbols, which can be taken from input_names property.

Return type

dict

input_names(cls) → List[str]

Return the input key names the operator provides.

For example, Conv has two inputs, ‘X’ for the input data and ‘W’ for the weight. So Conv.input_names returns the list [‘X’, ‘W’].

Returns

List of key names

Return type

list[str]

property input_nodes(self) → List['Operator']

Return a list of input operators in proper order (original protobuf argument order).

Returns

This list is already ordered following the order of the arguments in the original protobuf operators (positional order in the list of arguments).

Return type

list[Operator]

property output_ops(self) → OutOps

Return a dict of output operators.

Returns

Collection of (list of) output operators in a dictionary format.

The keys are output symbols, which can be taken from output_names property.

Return type

dict

property output_op_list(self) → List['Operator']

Return a list of output operators.

Returns

List of output operators.

Return type

list[Operator]

output_names(cls) → List[str]

Return the output key names the operator provides.

For example, Conv has one output ‘Y’. So Conv.output_names returns the list [‘Y’].

Returns

List of key names

Return type

list[str]

add_input(self, ident: str, node: Operator) → None

Add input node.

Parameters
  • ident (str) – Key name of the input. This has to be in the list input_names.

  • node (Operator) – Node to be registered as the input.

add_inputs(self, inputs: Ops) → None

Add one or more input nodes at once.

Parameters

inputs (dict) – Collection of pairs of key name and operator to be registered as inputs. All the key names have to be in the list input_names.

add_output(self, ident: str, node: Operator) → None

Add output node.

Parameters
  • ident (str) – key name of the output. This has to be in list output_names.

  • node (Operator) – Node to be registered as the output.

add_outputs(self, outputs: OutOps) → None

Add one or more output nodes at once.

Parameters

outputs (Dict of str to list of Operator) – Collection of pairs of key name and a list of operators to be registered as outputs. All the key names have to be in the list output_names.

remove_input(self, ident: str) → None

Remove an input node.

Parameters

ident (str) – Key name of the input node to be removed. This key is in input_names, not the name of the operator.

remove_output(self, ident: str) → None

Remove an output node.

Parameters

ident (str) – Key name of the output node to be removed. This key is in output_names, not the name of the operator.

property shape(self) → List[int]

Get the shape defined in this node.

property dtype(self) → DataType

Get the data type defined in this node.

property ndims(self) → int

Get the number of dimensions defined in this node.

property dimension(self) → str

Return dimension in string.

This dimension consists of ‘N’, ‘C’, ‘H’, and ‘W’, where ‘N’ is the batch size, ‘C’ is the number of channels, and ‘H’ and ‘W’ are the height and width of the 2-D image.

property size(self) → int

Get the whole size of the output data.

property is_variable(self) → bool

Return whether this node is a variable node (i.e. Input or Output).

property is_scalar(self) → bool

Return whether this node is a scalar node (i.e. ndims == 0).

property height(self) → int

Get the size of height in the shape.

property width(self) → int

Get the size of width in the shape.

property channels(self) → int

Get the number of channels in the shape.

property batchsize(self) → int

Get the number of batch size in the shape.

property rank(self) → int
property available_buffer(self) → str
transpose(self, perm: List[int]) → None

Transpose the shape and format. This operation is destructive.

property data(self) → np.ndarray

Get the output data.

This value is valid only after run_forward() has been executed or a value has been assigned with the setter.

property is_monotonic(self) → bool
abstract run(self, **kwargs) → Dict

Run the operator with external data in the intermediate runtime.

This is actually an abstract method and should be overridden.

abstract run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property _dispatch_name(self) → str
abstract classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → Sequence[Optional[int]]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Variable(name: str, shape: List[int], dtype: DataType, input_ops: Ops, data: np.ndarray, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Variable class, which must be Input, Output or a constant.

property is_variable(self) → bool

Return True, as this is a variable.

property is_monotonic(self) → bool
transpose(self, perm: List[int]) → None

Transpose the shape and format. This operation is destructive.

property data(self) → np.ndarray

Return data.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Input(name: str, shape: List[int], dtype: DataType, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Variable

Input class. This is a placeholder.

_input_names :List[str] = []
_output_names = ['output']
class blueoil.converter.core.operators.Constant(name: str, dtype: DataType, data: np.ndarray, dimension_format: str = 'OHWI', transposed_dimension_format: str = 'OHWI', packed: bool = False, actual_shape: List[int] = [], transposed_data: Optional[List[int]] = None, transposed_shape: Optional[List[int]] = None, kn2row_data: Optional[List[int]] = None, kn2row_dimension_format: str = 'HWOI', kn2row_shape: Optional[List[int]] = None)

Bases: blueoil.converter.core.operators.Variable

Constant class. This object has data inside.

_input_names :List[str] = []
_output_names = ['output']
run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_packed(self) → bool
property transposed_data(self) → Optional[List[int]]

Return transposed data.

property transposed_dimension_format(self) → str
property transposed_shape(self) → Optional[List[int]]
property kn2row_data(self) → Optional[List[int]]
property kn2row_dimension_format(self) → str
property kn2row_shape(self) → Optional[List[int]]
class blueoil.converter.core.operators.Output(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Variable

Output class.

_input_names = ['input']
_output_names :List[str] = []
_check_consistency(self) → None

Check data consistency in the initialization.

class blueoil.converter.core.operators.Identity(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Identity operator.

input

Input tensor

output

A copy of the input tensor

_input_names = ['input']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Quantizer(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Base class for quantizers.

_input_names = ['input']
_output_names = ['output']
equals(self, other: Any) → bool

Return whether these two objects are equivalent.

property nbit(self) → int
property max_v(self) → float
property scaling_factor(self) → np.float32
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

abstract binarizer(self, data: np.ndarray) → np.ndarray

Maps the quantized values into >= 0 integer values.

This is actually an abstract method and should be overridden.

class blueoil.converter.core.operators.BinaryMeanScalingQuantizer(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Quantizer

Quantization operator using binary scaling.

input

Input tensor, which must have float values.

output

Quantized tensor

_input_names = ['input']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property _dispatch_name(self) → str
run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

run_forward_no_scaling_factor(self) → np.ndarray
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

binarizer(self, data: np.ndarray) → np.ndarray

Maps the quantized values into >= 0 integer values.

class blueoil.converter.core.operators.SpaceToDepth(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC', block_size: int = 2)

Bases: blueoil.converter.core.operators.Operator

Space to Depth operator.

input

Input tensor

output

A tensor with reduced height and width and increased channels

block_size : integer

Input block size
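
The rearrangement described above can be reproduced with a small NHWC numpy sketch (illustrative only, not the converter’s implementation; the helper name is made up here):

    import numpy as np

    def space_to_depth_nhwc(x, block_size=2):
        # Move block_size x block_size spatial blocks into the channel axis.
        n, h, w, c = x.shape
        x = x.reshape(n, h // block_size, block_size, w // block_size, block_size, c)
        x = x.transpose(0, 1, 3, 2, 4, 5)
        return x.reshape(n, h // block_size, w // block_size, c * block_size * block_size)

    x = np.arange(1 * 4 * 4 * 3).reshape(1, 4, 4, 3)
    print(space_to_depth_nhwc(x).shape)  # (1, 2, 2, 12)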

_input_names = ['input']
_output_names = ['output']
_check_consistency(self) → None

This checks the following constraint: the number of output channels must be either 1. a multiple of kernel_size^2 * 32, or 2. kernel_size^2 * {8, 16}.

property is_monotonic(self) → bool
property _dispatch_name(self) → str
property block_size(self) → np.int32
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Transpose(name: str, shape: List[int], dtype: DataType, input_ops: Ops, perm: List[int] = [], dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Transpose operator.

Transpose the input tensor similar to numpy.transpose. For example, when perm=[3, 1, 0, 2], given an input tensor of shape [1, 2, 3, 4], the output shape will be [4, 2, 1, 3].

data

An input tensor.

transposed

Transposed output.

Attributes (optional constructor parameters)

perm : list of ints

A list of integers. By default, reverse the dimensions; otherwise permute the axes according to the values given.
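
The permutation example above can be checked directly with numpy (a minimal illustration, not the converter’s code path):

    import numpy as np

    x = np.zeros((1, 2, 3, 4))                # input tensor of shape [1, 2, 3, 4]
    y = np.transpose(x, axes=[3, 1, 0, 2])    # perm=[3, 1, 0, 2]
    print(y.shape)                            # (4, 2, 1, 3)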

_input_names = ['data']
_output_names = ['transposed']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property permutation(self) → List[int]

Get transpose permutation in list of ints.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Conv(name: str, shape: List[int], dtype: DataType, input_ops: Ops, kernel_shape: List[int] = [], kernel_dimensions: int = 2, dimension_format: str = 'NHWC', kernel_dim_format: str = 'HW', dilations: List[int] = [1, 1], pads: List[int] = [0, 0, 0, 0], strides: List[int] = [1, 1], quantized: bool = False, thresholds: List[float] = [])

Bases: blueoil.converter.core.operators.Operator

Convolution operator.

The convolution operator consumes an input tensor and a weight, and computes the output. Currently this is only for 2-D images.

X

Input data tensor from previous layer. Note that this is for the 2D image.

W

The weight tensor that will be used in the convolutions.

B (Optional)

1D bias.

Y

Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.

kernel_shape : list of ints

The shape of the convolution kernel. If not present, it should be inferred from input W.

kernel_dimensions : int

The dimension of the input. The default value is 2, which means a 2-D image.

dimension_format : str

Dimension denotation, which must consist of ‘N’, ‘C’, ‘H’, and ‘W’, where ‘N’ is the batch size, ‘C’ is the number of channels, and ‘H’ and ‘W’ are the height and width of the input image. The default is ‘NHWC’.

kernel_dim_format : str

Dimension denotation, which must consist of ‘H’ and ‘W’, where ‘H’ and ‘W’ are the height and width of the input image. The default is ‘HW’.

dilations : list of ints

Dilation value along each axis of the filter. If not present, the dilation defaults to 1 along each axis.

pads : list of ints

Padding for the beginning and ending along each axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, x1_end, x2_end], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. If not present, the padding defaults to 0 along the start and end of each axis.

strides : list of ints

Stride along each axis. If not present, the stride defaults to 1 along each axis.

quantized : bool

Whether it is quantized. If not present, the switch defaults to False.

thresholds : list of floats

Threshold values that are used in threshold skipping. If not present, this defaults to an empty list. Ignored if quantized is not true.
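
The output spatial dimensions follow the usual convolution arithmetic for the kernel shape, strides, pads, and dilations listed above. A minimal sketch of that standard formula (not code from this module):

    import math

    def conv_output_size(in_size, kernel, stride, pad_begin, pad_end, dilation=1):
        # Output size along one spatial axis of a 2-D convolution.
        effective_kernel = dilation * (kernel - 1) + 1
        return math.floor((in_size + pad_begin + pad_end - effective_kernel) / stride) + 1

    # A 32x32 input with a 3x3 kernel, stride 1 and padding 1 keeps its spatial size.
    print(conv_output_size(32, 3, 1, 1, 1))  # 32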

_input_names = ['X', 'W', 'B']
_output_names = ['Y']
_check_consistency(self) → None

Check data consistency in the initialization.

property kernel_dimensions(self) → int

Get the number of dimensions.

property dilations(self) → List[int]

Get dilations.

property pads(self) → List[int]

Get pads.

property strides(self) → List[int]

Get strides.

property is_monotonic(self) → bool
property is_quantized(self) → bool

Return whether this operator is quantized.

Currently it always returns False, as the quantized version is not supported yet.

property scaling_factor(self) → float
property a_quantizer(self) → List[Quantizer]
property quantizer(self) → Optional[Quantizer]
property kernel_height(self) → int

Return the height in the kernel shape.

property kernel_width(self) → int

Return the width in the kernel shape.

property has_thresholds(self) → bool
property thresholds(self) → List[float]
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its shape from inputs’ shapes.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.BatchNormalization(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC', epsilon: float = float(10**(-5)), is_test: bool = False)

Bases: blueoil.converter.core.operators.Operator

Batch normalization operator.

Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167.

X

The input 4-dimensional tensor.

scale

The scale as a 1-dimensional tensor of size C to be applied to the output.

B

The bias as a 1-dimensional tensor of size C to be applied to the output.

mean

The estimated mean (testing) as a 1-dimensional tensor of size C.

var

The estimated variance (testing) as a 1-dimensional tensor of size C.

Y

The output 4-dimensional tensor of the same shape as X.

dimension_format : str

Dimension denotation, which must consist of ‘N’, ‘C’, ‘H’, and ‘W’, where ‘N’ is the batch size, ‘C’ is the number of channels, and ‘H’ and ‘W’ are the height and width of the input image. The default is ‘NHWC’.

epsilon : float

The epsilon value to use to avoid division by zero; the default is 1e-5.

is_test : bool

If set to True, run spatial batch normalization in test mode; the default is False.
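
The per-channel computation follows the formula from the paper cited above. A minimal numpy sketch of the inference-time calculation (illustrative only, not the converter’s implementation):

    import numpy as np

    def batch_norm_inference(X, scale, B, mean, var, epsilon=1e-5):
        # X: NHWC tensor; scale, B, mean, var: 1-D tensors of size C.
        return scale * (X - mean) / np.sqrt(var + epsilon) + B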

_input_names = ['X', 'scale', 'B', 'mean', 'var']
_output_names = ['Y']
_check_consistency(self) → None

Check data consistency in the initialization.

run(self, **kwargs) → Dict

Return the forward calculation results of batch normalization.

Currently this function is only used by threshold skipping optimization pass for recursively calculating thresholds of the skipping patterns.

de_run(self, **kwargs) → Dict

Return the reversed calculation results of batch normalization.

Currently this function is only used by threshold skipping optimization pass for recursively calculating thresholds of the skipping patterns.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
property epsilon(self) → float
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property _dispatch_name(self) → str
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.LinearMidTreadHalfQuantizer(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Quantizer

Quantization operator with ‘linear mid tread half’.

X

Input tensor

Y

Constant

Z

Another constant

_input_names = ['X', 'Y', 'Z']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

run(self, **kwargs) → Dict

Return the result of forward calculation of an activation quantizer.

Currently this function is only used by threshold skipping optimization pass for recursively calculating thresholds of the skipping patterns.

de_run(self, **kwargs) → Dict

Return the result of reversed calculation of an activation quantizer.

Currently this function is only used by threshold skipping optimization pass for recursively calculating thresholds of the skipping patterns.

run_forward(self) → np.ndarray

General function for this quantization operator.

This function returns a numpy array.

property nbit(self) → int
property max_v(self) → float
property is_monotonic(self) → bool
property _dispatch_name(self) → str
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

binarizer(self, data: np.ndarray) → np.ndarray

Maps the quantized values into >= 0 integer values.

class blueoil.converter.core.operators.Add(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Add operator.

Performs element-wise binary addition (with Numpy-style broadcasting support). This operator supports multidirectional (i.e., Numpy-style) broadcasting.

A

First operand.

B

Second operand.

C

Result; has the same element type as the two inputs
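
Multidirectional (Numpy-style) broadcasting means the two operands are broadcast against each other, as in this quick numpy illustration:

    import numpy as np

    A = np.ones((2, 3, 4))
    B = np.ones((3, 1))    # broadcasts against A from the trailing dimensions
    C = A + B
    print(C.shape)         # (2, 3, 4)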

_input_names = ['A', 'B']
_output_names = ['C']
_check_consistency(self) → None

Check data consistency in the initialization.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Sub(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Subtract operator.

Performs element-wise subtraction (with Numpy-style broadcasting support). This operator supports multidirectional (i.e., Numpy-style) broadcasting.

A

First operand.

B

Second operand.

C

Result; has the same element type as the two inputs

_input_names = ['A', 'B']
_output_names = ['C']
_check_consistency(self) → None

Check data consistency in the initialization.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Pool(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC', kernel_shape: List[int] = [2, 2], kernel_dim_format: str = 'HW', dimensions: int = 2, pads: List[int] = [0, 0, 0, 0], strides: List[int] = [1, 1])

Bases: blueoil.converter.core.operators.Operator

Pooling operator.

This is a base class and must not be instantiated directly.

_input_names = ['X']
_output_names = ['Y']
_check_consistency(self) → None

Check data consistency in the initialization.

property kernel_height(self) → int

Get the height in the kernel shape.

property kernel_width(self) → int

Get the width in the kernel shape.

property pads(self) → List[int]
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its shape from inputs’ shapes.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.MaxPool(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC', kernel_shape: List[int] = [2, 2], kernel_dim_format: str = 'HW', dimensions: int = 2, pads: List[int] = [0, 0, 0, 0], strides: List[int] = [1, 1])

Bases: blueoil.converter.core.operators.Pool

Max pooling operator.

MaxPool consumes an input tensor X and applies max pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Max pooling consists of computing the max over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing.

X

Input data tensor from the previous operator.

Y

Output data tensor from max pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used.

dimension_format : str

Dimension denotation, which must consist of ‘N’, ‘C’, ‘H’, and ‘W’, where ‘N’ is the batch size, ‘C’ is the number of channels, and ‘H’ and ‘W’ are the height and width of the input image. The default is ‘NHWC’.

kernel_shape : list of ints

The size of the kernel along each axis.

kernel_dim_format : str

Dimension denotation, which must consist of ‘H’ and ‘W’, where ‘H’ and ‘W’ are the height and width of the input image. The default is ‘HW’.

dimensions : int

Dimensions. This defaults to 2, which means a 2-D image. Currently only 2 is available.

pads : list of ints

Padding for the beginning and ending along each axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, x1_end, x2_end], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. If not present, the padding defaults to 0 along the start and end of each axis.

strides : list of ints

Stride along each axis. If not present, the stride defaults to 1 along each axis.
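
As noted above, the floor value of the dimension is used. A minimal sketch of the standard pooling output-size arithmetic (not code from this module):

    import math

    def pool_output_size(in_size, kernel, stride, pad_begin, pad_end):
        # Output size along one spatial axis; the floor value is used.
        return math.floor((in_size + pad_begin + pad_end - kernel) / stride) + 1

    print(pool_output_size(32, 2, 2, 0, 0))  # 16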

property _dispatch_name(self) → str
property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.AveragePool(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC', kernel_shape: List[int] = [2, 2], kernel_dim_format: str = 'HW', dimensions: int = 2, pads: List[int] = [0, 0, 0, 0], strides: List[int] = [1, 1])

Bases: blueoil.converter.core.operators.Pool

Average pooling operator.

AveragePool consumes an input tensor X and applies average pooling across the tensor according to kernel sizes, stride sizes, and pad lengths. Average pooling consists of computing the average over all values of a subset of the input tensor according to the kernel size and downsampling the data into the output tensor Y for further processing.

X

Input data tensor from the previous operator.

Y

Output data tensor from average pooling across the input tensor. Dimensions will vary based on various kernel, stride, and pad sizes. Floor value of the dimension is used.

dimension_format : str

Dimension denotation, which must consist of ‘N’, ‘C’, ‘H’, and ‘W’, where ‘N’ is the batch size, ‘C’ is the number of channels, and ‘H’ and ‘W’ are the height and width of the input image. The default is ‘NHWC’.

kernel_shape : list of ints

The size of the kernel along each axis.

kernel_dim_format : str

Dimension denotation, which must consist of ‘H’ and ‘W’, where ‘H’ and ‘W’ are the height and width of the input image. The default is ‘HW’.

dimensions : int

Dimensions. This defaults to 2, which means a 2-D image. Currently only 2 is available.

pads : list of ints

Padding for the beginning and ending along each axis; it can take any value greater than or equal to 0. The value represents the number of pixels added to the beginning and end of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin, x1_end, x2_end], where xi_begin is the number of pixels added at the beginning of axis i and xi_end is the number of pixels added at the end of axis i. If not present, the padding defaults to 0 along the start and end of each axis.

strides : list of ints

Stride along each axis. If not present, the stride defaults to 1 along each axis.

property _dispatch_name(self) → str
property is_monotonic(self) → bool
class blueoil.converter.core.operators.Reshape(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Reshape operator.

Reshape the input tensor similar to numpy.reshape.

It takes a tensor as input and an argument shape. It outputs the reshaped tensor.

At most one dimension of the new shape can be -1. In this case, the value is inferred from the size of the tensor and the remaining dimensions. A dimension could also be 0, in which case the actual dimension value is unchanged (i.e. taken from the input tensor).

data

An input tensor.

reshaped

Reshaped data.
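
A minimal numpy sketch of the -1 and 0 shape semantics described above (hypothetical helper for illustration; numpy itself only handles -1):

    import numpy as np

    def reshape_like_onnx(x, new_shape):
        # 0 keeps the corresponding input dimension; -1 is inferred by numpy.
        resolved = [x.shape[i] if d == 0 else d for i, d in enumerate(new_shape)]
        return x.reshape(resolved)

    x = np.zeros((2, 3, 4))
    print(reshape_like_onnx(x, [0, -1]).shape)  # (2, 12)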

_input_names = ['data', 'shape']
_output_names = ['reshaped']
_check_consistency(self) → None

Check data consistency in the initialization.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Softmax(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Softmax operator.

The operator computes the softmax (normalized exponential) values for each layer in the batch of the given input. The input is a 2-D tensor (Tensor) of size (batch_size x input_feature_dimensions). The output tensor has the same shape and contains the softmax values of the corresponding input.

X does not need to be explicitly a 2D vector; rather, it will be coerced into one. For an arbitrary n-dimensional tensor X of shape [a_0, a_1, …, a_{k-1}, a_k, …, a_{n-1}], where k is the provided axis, X will be coerced into a 2-dimensional tensor with dimensions [a_0 * … * a_{k-1}, a_k * … * a_{n-1}]. For the default case where axis=1, this means the X tensor will be coerced into a 2D tensor of dimensions [a_0, a_1 * … * a_{n-1}], where a_0 is often the batch size. In this situation, we must have a_0 = N and a_1 * … * a_{n-1} = D. Each of these dimensions must be matched correctly, or else the operator will throw errors.

input

The input tensor that’s coerced into a 2D matrix of size (NxD) as described above.

output

The output values with the same shape as the input tensor.
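
A minimal numpy sketch of the coercion and softmax computation described above (illustrative only, not the converter’s implementation):

    import numpy as np

    def softmax_coerced(x, axis=1):
        # Coerce to 2-D: [a_0 * ... * a_{axis-1}, a_axis * ... * a_{n-1}].
        x2d = x.reshape(int(np.prod(x.shape[:axis])), -1)
        e = np.exp(x2d - x2d.max(axis=1, keepdims=True))   # numerically stabilized
        return e / e.sum(axis=1, keepdims=True)

    x = np.random.rand(8, 10)
    print(softmax_coerced(x).sum(axis=1))  # each row sums to 1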

_input_names = ['input']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Relu(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Relu class.

Relu takes one input data (Tensor) and produces one output data (Tensor) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

X

Input tensor

Y

Output tensor

_input_names = ['X']
_output_names = ['Y']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.LeakyRelu(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC', alpha: float = 0.2)

Bases: blueoil.converter.core.operators.Operator

Leaky relu class.

Leaky_relu takes one input data (Tensor) and produces one output data (Tensor) where the function, y = max(input * alpha, input), is applied to the tensor elementwise.

X

Input tensor

Y

Output tensor
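
The elementwise function above is easy to reproduce in numpy (illustrative only; alpha defaults to 0.2 as in the constructor):

    import numpy as np

    def leaky_relu(x, alpha=0.2):
        # y = max(x * alpha, x), applied elementwise.
        return np.maximum(x * alpha, x)

    print(leaky_relu(np.array([-1.0, 0.0, 2.0])))  # [-0.2  0.   2. ]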

_input_names = ['X']
_output_names = ['Y']
_check_consistency(self) → None

Check data consistency in the initialization.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Flatten(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'HWCN', axis: int = 1)

Bases: blueoil.converter.core.operators.Operator

Flatten class.

Flattens the input tensor into a 2D matrix. If the input tensor has shape (d_0, d_1, …, d_n), then the output will have shape (d_0 × d_1 × … × d_{axis-1}, d_axis × d_{axis+1} × … × d_n).

input

A tensor of rank >= axis.

output

A 2D tensor with the contents of the input tensor, with input dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output.

axis

(Defaults to 1) Indicates up to which input dimensions (exclusive) should be flattened to the outer dimension of the output. The value for axis must be in the range [0, R], where R is the rank of the input tensor. When axis = 0, the shape of the output tensor is (1, d_0 × d_1 × … × d_n), where the shape of the input tensor is (d_0, d_1, …, d_n).

Type

int
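
A minimal numpy sketch of the flattening rule described above (illustrative helper, not the converter’s code):

    import numpy as np

    def flatten(x, axis=1):
        # Dimensions before `axis` form the outer dimension; the rest are flattened.
        outer = int(np.prod(x.shape[:axis]))  # equals 1 when axis == 0
        return x.reshape(outer, -1)

    x = np.zeros((2, 3, 4, 5))
    print(flatten(x).shape)          # (2, 60)
    print(flatten(x, axis=0).shape)  # (1, 120)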

_input_names = ['input']
_output_names = ['output']
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Dropout(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'HWCN', ratio: float = 0.5)

Bases: blueoil.converter.core.operators.Operator

Dropout operator.

Dropout takes one input data (Tensor) and produces two Tensor outputs, output (Tensor) and mask (Tensor). Y will either be a random dropout, or a simple copy of the input. Note that our implementation of Dropout does scaling in the training phase, so during testing nothing needs to be done. This operator has optional inputs/outputs.

data

The input data as Tensor.

output

The output.

mask (optional)

The output mask.

ratio

(float, default 0.5) the ratio of random dropout

Type

float

_input_names = ['data']
_output_names = ['output', 'mask']
property ratio(self)
property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Gemm(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'HWCN', alpha: float = 1.0, beta: float = 1.0, transA: bool = False, transB: bool = False)

Bases: blueoil.converter.core.operators.Operator

Gemm operator.

General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3

A’ = transpose(A) if transA else A

B’ = transpose(B) if transB else B

Compute Y = alpha * A’ * B’ + beta * C, where input tensor A has shape [M, K] or [K, M], input tensor B has shape [K, N] or [N, K], input tensor C is broadcastable to shape [M, N], and output tensor Y has shape [M, N]. A will be transposed before doing the computation if attribute transA is True, same for B and transB. This operator supports unidirectional broadcasting (tensor C should be unidirectional broadcastable to tensor A * B); for more details please check the doc.

A

Input tensor A. The shape of A should be [M, K] if transA is False, or [K, M] if transA is True.

B

Input tensor B. The shape of B should be [K, N] if transB is False, or [N, K] if transB is True.

C

Input tensor C. The shape of C should be unidirectional broadcastable to [M, N].

Y

Output tensor of shape [M, N].

alpha

Scalar multiplier for the product of input tensors A * B

beta

Scalar multiplier for input tensor C

transA

Whether A should be transposed

transB

Whether B should be transposed
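
A minimal numpy sketch of the computation Y = alpha * A’ * B’ + beta * C described above (illustrative only):

    import numpy as np

    def gemm(A, B, C=0.0, alpha=1.0, beta=1.0, transA=False, transB=False):
        A = A.T if transA else A            # A' = transpose(A) if transA else A
        B = B.T if transB else B            # B' = transpose(B) if transB else B
        return alpha * (A @ B) + beta * C   # C broadcasts unidirectionally to [M, N]

    Y = gemm(np.ones((2, 3)), np.ones((3, 4)), C=np.zeros(4))
    print(Y.shape)  # (2, 4)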

_input_names = ['A', 'B', 'C']
_output_names = ['Y']
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Mul(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Mul operator.

Performs element-wise binary multiplication (with Numpy-style broadcasting support). This operator supports multidirectional (i.e., Numpy-style) broadcasting.

A

First operand.

B

Second operand.

C

Result; has the same element type as the two inputs

_input_names = ['A', 'B']
_output_names = ['C']
_check_consistency(self) → None

Check data consistency in the initialization.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

class blueoil.converter.core.operators.BinaryChannelWiseMeanScalingQuantizer(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Quantizer

Quantization operator using binary channel wise scaling.

input

Input tensor, which must have float values.

output

Quantized tensor

_input_names = ['input']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property _dispatch_name(self) → str
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

run_forward_no_scaling_factor(self) → np.ndarray
binarizer(self, data: np.ndarray) → np.ndarray

Maps the quantized values into >= 0 integer values.

class blueoil.converter.core.operators.ConcatOnDepth(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Concatenation operator.

input1

Input tensor

input2

Input tensor

input3

Input tensor

input4

Input tensor

input5

Input tensor

output

A tensor which is the concatenation of the inputs in the channel axis

block_size : integer

Input block size

_input_names = ['input1', 'input2', 'input3', 'input4', 'input5', 'input6']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property _dispatch_name(self) → str
property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Maximum(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Maximum operator.

Performs element-wise max() operation.

A

First operand.

B

Second operand.

C

Result; has the same shape and data type as the inputs

_input_names = ['A', 'B']
_output_names = ['C']
_check_consistency(self) → None

Check data consistency in the initialization.

property _dispatch_name(self) → str
property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.DepthToSpace(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC', block_size: int = 2)

Bases: blueoil.converter.core.operators.Operator

Depth to Space operator.

input

Input tensor

output

A tensor with increased height and width and decreased channels

block_size : integer

Input block size
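
The inverse rearrangement of SpaceToDepth can be sketched the same way in numpy (illustrative only, not the converter’s implementation):

    import numpy as np

    def depth_to_space_nhwc(x, block_size=2):
        # Move groups of block_size * block_size channels back into spatial blocks.
        n, h, w, c = x.shape
        new_c = c // (block_size * block_size)
        x = x.reshape(n, h, w, block_size, block_size, new_c)
        x = x.transpose(0, 1, 3, 2, 4, 5)
        return x.reshape(n, h * block_size, w * block_size, new_c)

    x = np.zeros((1, 2, 2, 12))
    print(depth_to_space_nhwc(x).shape)  # (1, 4, 4, 3)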

_input_names = ['input']
_output_names = ['output']
_check_consistency(self) → None

This checks the following constraint: 1. quantized-packed data requires that the number of input channels be a multiple of kernel_size^2 * 32.

property is_monotonic(self) → bool
property _dispatch_name(self) → str
property block_size(self) → np.int32
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.ResizeNearestNeighbor(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Resize Nearest Neighbor operator.

input

Input tensor

output

A tensor with resized height and width and same channels

Align corners is not supported.

_input_names = ['input']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property _dispatch_name(self) → str
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Split(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Split operator (dummy). The Split operator is converted to Slice operators by the Importer.

_input_names = ['A', 'B']
class blueoil.converter.core.operators.Slice(name: str, shape: List[int], dtype: DataType, input_ops: Ops, begin: int, size: int, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Slice operator.

Slice a tensor along the channel axis.

input

The tensor to slice

output

Output tensor after slice

begin : integer

Index at which the slice starts.

size : integer

Number of output channels.
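
A minimal numpy sketch of slicing along the channel axis with begin and size (illustrative only):

    import numpy as np

    def slice_channels(x, begin, size):
        # Keep `size` channels of an NHWC tensor starting at index `begin`.
        return x[..., begin:begin + size]

    x = np.zeros((1, 8, 8, 16))
    print(slice_channels(x, begin=4, size=8).shape)  # (1, 8, 8, 8)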

_input_names = ['A', 'B']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property _dispatch_name(self) → str
property is_monotonic(self) → bool
property begin(self) → int
property slice_size(self) → int
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Pad(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Pad operator.

Pads a tensor according to the paddings specified.

A

The input tensor to be padded.

B

The padding size. B is a numpy array (supporting the “CONSTANT” mode of TensorFlow during importing) with shape [n, 2], where n is the rank of input A. For each dimension D of input A, the padded size of dimension D of the output C is given by the formula below:

B[D, 0] + A.dim_size(D) + B[D, 1]

Note: currently only channel-wise paddings are supported.

C

The result after padding. Has the same type as the input tensor.
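
The output-size formula above matches numpy’s constant-mode padding, which gives a quick way to check it (illustrative only):

    import numpy as np

    A = np.zeros((1, 4, 4, 3))                      # NHWC input of rank n = 4
    B = np.array([[0, 0], [0, 0], [0, 0], [1, 2]])  # channel-wise padding only
    C = np.pad(A, B, mode='constant')
    # Each output dimension D is B[D, 0] + A.dim_size(D) + B[D, 1]
    print(C.shape)  # (1, 4, 4, 6)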

_input_names = ['A', 'B']
_output_names = ['C']
_check_consistency(self) → None

Check data consistency in the initialization.

property _dispatch_name(self) → str
property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[Optional[int]]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.MatMul(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Matrix Multiplication operator.

Computes the matrix product: multiplies matrix A by matrix B, producing A * B. The generalized universal function signature is (m,n),(n,p)->(m,p), as for np.matmul.

A

2-dimensional matrix A

B

2-dimensional matrix B

C

Matrix multiply results from A * B

_input_names = ['A', 'B']
_output_names = ['C']
_check_consistency(self) → None

Check data consistency in the initialization.

property _dispatch_name(self) → str
property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Gather(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Gather operator.

input

The input tensor.

output

The output.

_input_names = ['x', 'out_idx']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Unique(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Unique operator (dummy). The Unique operator is converted to UniqueValue and UniqueIndex operators by the Importer.

_input_names = ['x']
class blueoil.converter.core.operators.UniqueValue(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Unique operator (value version).

input

The input tensor.

y

The output.

_input_names = ['x']
_output_names = ['y']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.UniqueIndex(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Unique operator (index version).

input

The input tensor.

idx

The index of each value of input in the uniquified output.

_input_names = ['x']
_output_names = ['idx']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Cast(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Cast operator.

input

The input tensor.

output

The output.

_input_names = ['x']
_output_names = ['y']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Minimum(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Minimum operator.

input

The input tensor.

output

The output.

_input_names = ['x', 'y']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.StridedSlice(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

StridedSlice operator.

input

The input tensor.

output

The output.

_input_names = ['input', 'begin', 'end', 'strides']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Lookup(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC', use_divide_by_255: bool = True)

Bases: blueoil.converter.core.operators.Quantizer

Lookup operator.

input

The input tensor.

output

The output.

_input_names = ['input', 'lsb', 'msb']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property nbit(self) → int
property max_v(self) → float
property use_divide_by_255(self) → bool
class blueoil.converter.core.operators.Prod(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Prod operator.

input

The input tensor.

output

The output.

_input_names = ['input', 'indices']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.Shape(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Shape operator.

input

The input tensor.

output

The output.

_input_names = ['input']
_output_names = ['output']
_check_consistency(self) → None

Check data consistency in the initialization.

property is_monotonic(self) → bool
property preserve_quantization(self) → bool

whether to preserve the operator for quantization

class blueoil.converter.core.operators.BatchNormalizationOptimized(name: str, shape: List[int], dtype: DataType, input_ops: Ops, dimension_format: str = 'NHWC')

Bases: blueoil.converter.core.operators.Operator

Optimized batch normalization operator. This operator is for inference only.

X

The input 4-dimensional tensor.

scale

The scale as a 1-dimensional tensor of size C to be applied to the output.

bias

The bias as a 1-dimensional tensor of size C to be applied to the output.

Y

The output 4-dimensional tensor of the same shape as X.
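
Since only scale and bias remain as inputs, the mean and variance are presumably already folded into them (a common inference-time optimization); under that assumption, a minimal numpy sketch would be:

    import numpy as np

    def batch_norm_optimized(X, scale, bias):
        # X: NHWC tensor; scale and bias: 1-D tensors of size C that are assumed
        # to already incorporate the mean/variance terms (assumption, see above).
        return X * scale + bias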

_input_names = ['X', 'scale', 'bias']
_output_names = ['Y']
_check_consistency(self) → None

Check data consistency in the initialization.

run(self, **kwargs) → Dict

Return the forward calculation results of batch normalization.

Currently this function is only used by threshold skipping optimization pass for recursively calculating thresholds of the skipping patterns.

run_forward(self) → np.ndarray

Run the operator, calculate and set the result.

This is actually an abstract method and should be overridden.

property is_monotonic(self) → bool
classmethod infer_shape(cls, lists: Dict[str, List[int]], format: str, input_formats: List[str], attrs: Dict[str, Any]) → List[int]

Infer its output shape from inputs’ shapes.

This is actually an abstract method and should be overridden.

property _dispatch_name(self) → str
property preserve_quantization(self) → bool

whether to preserve the operator for quantization