8.1.2.1. blueoil.blocks

8.1.2.1.1. Module Contents

8.1.2.1.1.1. Functions

darknet(name, inputs, filters, kernel_size, is_training=tf.constant(False), activation=None, data_format='NHWC')

Darknet19 block.

lmnet_block(name, inputs, filters, kernel_size, custom_getter=None, is_training=tf.constant(True), activation=None, use_bias=True, use_batch_norm=True, is_debug=False, data_format='channels_last')

Block used in lmnet.

conv_bn_act(name, inputs, filters, kernel_size, weight_decay_rate=0.0, is_training=tf.constant(False), activation=None, batch_norm_decay=0.99, data_format='NHWC', enable_detail_summary=False)

Block of convolution -> batch norm -> activation.

_densenet_conv_bn_act(name, inputs, growth_rate, bottleneck_rate, weight_decay_rate, is_training, activation, batch_norm_decay, data_format, enable_detail_summary)

Densenet block.

densenet_group(name, inputs, num_blocks, growth_rate, bottleneck_rate=4, weight_decay_rate=0.0, is_training=tf.constant(False), activation=None, batch_norm_decay=0.99, data_format='NHWC', enable_detail_summary=False)

Group of Densenet blocks.

blueoil.blocks.darknet(name, inputs, filters, kernel_size, is_training=tf.constant(False), activation=None, data_format='NHWC')

Darknet19 block.

Applies convolution, batch norm, bias, and leaky ReLU activation. Ref: https://arxiv.org/pdf/1612.08242.pdf
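For reference, the leaky ReLU activation used by Darknet-style blocks can be sketched in plain Python. The slope of 0.1 matches the value used by Darknet19 in the paper above, but is an assumption here; the actual block uses TensorFlow ops.

```python
def leaky_relu(x, alpha=0.1):
    """Leaky ReLU: identity for positive inputs, a small slope for negative ones.

    alpha=0.1 is the slope commonly used in Darknet19 (an assumption here).
    """
    return x if x > 0 else alpha * x

# Positive inputs pass through unchanged; negative inputs are scaled by alpha.
print(leaky_relu(2.0))   # 2.0
print(leaky_relu(-2.0))  # -0.2
```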

blueoil.blocks.lmnet_block(name, inputs, filters, kernel_size, custom_getter=None, is_training=tf.constant(True), activation=None, use_bias=True, use_batch_norm=True, is_debug=False, data_format='channels_last')

Block used in lmnet.

Combines convolution, bias, weight quantization, and activation quantization into one layer block.

Parameters
  • name (str) – Block name, as scope name.

  • inputs (tf.Tensor) – Inputs.

  • filters (int) – Number of filters for convolution.

  • kernel_size (int) – Kernel size.

  • custom_getter (callable) – Custom getter for tf.compat.v1.variable_scope.

  • is_training (tf.constant) – Flag indicating whether training or not.

  • activation (callable) – Activation function.

  • use_bias (bool) – Whether to use a bias term.

  • use_batch_norm (bool) – Whether to use batch normalization.

  • is_debug (bool) – Whether to enable debug mode.

  • data_format (string) – 'channels_last' for NHWC or 'channels_first' for NCHW. Defaults to 'channels_last'.

Returns

Output of current layer block.

Return type

tf.Tensor
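The block's shape contract can be sketched in plain Python. Assuming a stride-1, 'same'-padded convolution (the stride and padding are not stated in the docs above), only the channel axis changes, from the input channel count to `filters`:

```python
def block_output_shape(input_shape, filters, data_format="channels_last"):
    """Output shape of a stride-1, 'same'-padded conv block (an assumption):
    spatial dimensions are unchanged, the channel axis becomes `filters`."""
    if data_format == "channels_last":   # NHWC
        n, h, w, _ = input_shape
        return (n, h, w, filters)
    else:                                # NCHW ("channels_first")
        n, _, h, w = input_shape
        return (n, filters, h, w)

# A batch of 8 NHWC images, 32x32 with 3 channels, through a 64-filter block:
print(block_output_shape((8, 32, 32, 3), filters=64))  # (8, 32, 32, 64)
```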

blueoil.blocks.conv_bn_act(name, inputs, filters, kernel_size, weight_decay_rate=0.0, is_training=tf.constant(False), activation=None, batch_norm_decay=0.99, data_format='NHWC', enable_detail_summary=False)

Block of convolution -> batch norm -> activation.

Parameters
  • name (str) – Block name, as scope name.

  • inputs (tf.Tensor) – Inputs.

  • filters (int) – Number of filters (output channel) for convolution.

  • kernel_size (int) – Kernel size.

  • weight_decay_rate (float) – Rate of L2 regularization applied to convolution weights. The loss function must include tf.losses.get_regularization_loss() for this parameter to take effect.

  • is_training (tf.constant) – Training flag, used by batch norm.

  • activation (callable) – Activation function.

  • batch_norm_decay (float) – Batch norm decay rate.

  • data_format (string) – Format for inputs data. NHWC or NCHW.

  • enable_detail_summary (bool) – Whether to summarize the feature maps of each operation on TensorBoard.

Returns

Output of this block.

Return type

tf.Tensor
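To make weight_decay_rate concrete, here is a sketch of the L2 penalty it would add to the loss for one convolution kernel, assuming TensorFlow's tf.nn.l2_loss convention (sum of squares, halved); the real regularization term is accumulated by tf.losses.get_regularization_loss().

```python
def l2_weight_decay(weights, weight_decay_rate=0.0):
    """L2 penalty for one kernel, assuming the tf.nn.l2_loss convention:
    rate * sum(w**2) / 2. A sketch of the arithmetic, not the TF code."""
    return weight_decay_rate * sum(w * w for w in weights) / 2.0

# sum of squares = 1 + 4 + 9 = 14, so the penalty is 0.1 * 14 / 2
print(l2_weight_decay([1.0, -2.0, 3.0], weight_decay_rate=0.1))
```

With weight_decay_rate=0.0 (the default) the penalty is zero, i.e. no regularization is applied.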

blueoil.blocks._densenet_conv_bn_act(name, inputs, growth_rate, bottleneck_rate, weight_decay_rate, is_training, activation, batch_norm_decay, data_format, enable_detail_summary)

Densenet block.

For faster execution when quantized, this uses the layer order convolution -> batch norm -> activation instead of the original paper's batch norm -> activation -> convolution. Note that this is not the Dense block of the original paper, but one component of a Dense block.

blueoil.blocks.densenet_group(name, inputs, num_blocks, growth_rate, bottleneck_rate=4, weight_decay_rate=0.0, is_training=tf.constant(False), activation=None, batch_norm_decay=0.99, data_format='NHWC', enable_detail_summary=False)

Group of Densenet blocks.

Paper: https://arxiv.org/abs/1608.06993. In the original paper this method corresponds to a Dense block: a sequence of 1x1 and 3x3 conv blocks, each ordered batch norm -> activation (ReLU) -> convolution. Here, the order within each block is changed to convolution -> batch norm -> activation.

Parameters
  • name (str) – Block name, as scope name.

  • inputs (tf.Tensor) – Inputs.

  • num_blocks (int) – Number of dense blocks, each consisting of a 1x1 and a 3x3 convolution.

  • growth_rate (int) – Number of filters (output channels) added by each block.

  • bottleneck_rate (int) – Factor used to compute the bottleneck 1x1 conv output channels: bottleneck_channels = growth_rate * bottleneck_rate. The default value of 4 comes from the original paper.

  • weight_decay_rate (float) – Rate of L2 regularization applied to convolution weights.

  • is_training (tf.constant) – Flag indicating whether training or not.

  • activation (callable) – Activation function.

  • batch_norm_decay (float) – Batch norm decay rate.

  • enable_detail_summary (bool) – Whether to summarize the feature maps of each operation on TensorBoard.

  • data_format (string) – Format for inputs data. NHWC or NCHW.

Returns

Output of current block.

Return type

tf.Tensor
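The channel arithmetic implied by these parameters can be sketched in plain Python, assuming the standard DenseNet scheme in which each block's output is concatenated onto its input (the real implementation operates on TF tensors):

```python
def densenet_group_channels(in_channels, num_blocks, growth_rate, bottleneck_rate=4):
    """Track channel counts through a DenseNet group.

    Assumes each block concatenates growth_rate new channels onto its input,
    as in standard DenseNet. Returns (bottleneck_channels, output_channels).
    """
    # 1x1 bottleneck conv output channels, per the docstring above.
    bottleneck_channels = growth_rate * bottleneck_rate
    channels = in_channels
    for _ in range(num_blocks):
        channels += growth_rate  # each block's 3x3 output is concatenated
    return bottleneck_channels, channels

# e.g. 64 input channels, 6 blocks, growth rate 32 (hypothetical values):
print(densenet_group_channels(64, num_blocks=6, growth_rate=32))  # (128, 256)
```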