:mod:`blueoil.blocks`
=====================

.. py:module:: blueoil.blocks


Module Contents
---------------


Functions
~~~~~~~~~

.. autoapisummary::

   blueoil.blocks.darknet
   blueoil.blocks.lmnet_block
   blueoil.blocks.conv_bn_act
   blueoil.blocks._densenet_conv_bn_act
   blueoil.blocks.densenet_group


.. function:: darknet(name, inputs, filters, kernel_size, is_training=tf.constant(False), activation=None, data_format='NHWC')

   Darknet19 block.

   Applies convolution, batch norm, bias, and leaky ReLU activation.

   Ref:
   https://arxiv.org/pdf/1612.08242.pdf
   https://github.com/pjreddie/darknet/blob/3bf2f342c03b0ad22efd799d5be9990c9d792354/cfg/darknet19.cfg
   https://github.com/pjreddie/darknet/blob/8215a8864d4ad07e058acafd75b2c6ff6600b9e8/cfg/yolo.2.0.cfg


.. function:: lmnet_block(name, inputs, filters, kernel_size, custom_getter=None, is_training=tf.constant(True), activation=None, use_bias=True, use_batch_norm=True, is_debug=False, data_format='channels_last')

   Block used in lmnet.

   Combines convolution, bias, weight quantization, and activation quantization into one layer block.

   :param name: Block name, used as the scope name.
   :type name: str
   :param inputs: Inputs.
   :type inputs: tf.Tensor
   :param filters: Number of filters for convolution.
   :type filters: int
   :param kernel_size: Kernel size.
   :type kernel_size: int
   :param custom_getter: Custom getter for `tf.compat.v1.variable_scope`.
   :type custom_getter: callable
   :param is_training: Flag indicating whether the model is training.
   :type is_training: tf.constant
   :param activation: Activation function.
   :type activation: callable
   :param use_bias: Whether to use bias.
   :type use_bias: bool
   :param use_batch_norm: Whether to use batch norm.
   :type use_batch_norm: bool
   :param is_debug: Whether debug mode is enabled.
   :type is_debug: bool
   :param data_format: `channels_last` for NHWC, `channels_first` for NCHW. Default is `channels_last`.
   :type data_format: string
   :returns: Output of the current layer block.
   :rtype: tf.Tensor


.. function:: conv_bn_act(name, inputs, filters, kernel_size, weight_decay_rate=0.0, is_training=tf.constant(False), activation=None, batch_norm_decay=0.99, data_format='NHWC', enable_detail_summary=False)

   Block of convolution -> batch norm -> activation.

   :param name: Block name, used as the scope name.
   :type name: str
   :param inputs: Inputs.
   :type inputs: tf.Tensor
   :param filters: Number of filters (output channels) for convolution.
   :type filters: int
   :param kernel_size: Kernel size.
   :type kernel_size: int
   :param weight_decay_rate: Rate of L2 regularization applied to convolution weights. `tf.losses.get_regularization_loss()` must be included in the loss function for this parameter to take effect.
   :type weight_decay_rate: float
   :param is_training: Flag indicating whether the model is training, used by batch norm.
   :type is_training: tf.constant
   :param activation: Activation function.
   :type activation: callable
   :param batch_norm_decay: Batch norm decay rate.
   :type batch_norm_decay: float
   :param data_format: Format of the input data: NHWC or NCHW.
   :type data_format: string
   :param enable_detail_summary: Flag to summarize the feature maps of each operation on TensorBoard.
   :type enable_detail_summary: bool
   :returns: Output of this block.
   :rtype: tf.Tensor
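As noted above, `weight_decay_rate` in `conv_bn_act` only affects training when the collected regularization losses are added to the objective. Below is a minimal, hedged sketch of this wiring in TF1-style graph mode; the input shape and the reduce-mean "task loss" are illustrative assumptions, not part of the API.

.. code-block:: python

   import tensorflow as tf

   from blueoil.blocks import conv_bn_act

   tf.compat.v1.disable_eager_execution()  # the blocks build a TF1-style graph

   # NHWC input: a batch of 32x32 RGB images (shape chosen for illustration).
   images = tf.compat.v1.placeholder(tf.float32, shape=(None, 32, 32, 3))

   # convolution -> batch norm -> activation, with L2 weight decay on the
   # convolution weights.
   x = conv_bn_act(
       "block_1",
       images,
       filters=32,
       kernel_size=3,
       weight_decay_rate=0.0001,
       is_training=tf.constant(True),
       activation=tf.nn.relu,
   )

   # `weight_decay_rate` has no effect unless the regularization losses are
   # added to the training objective:
   task_loss = tf.reduce_mean(x)  # stand-in for a real task loss
   total_loss = task_loss + tf.compat.v1.losses.get_regularization_loss()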
.. function:: _densenet_conv_bn_act(name, inputs, growth_rate, bottleneck_rate, weight_decay_rate, is_training, activation, batch_norm_decay, data_format, enable_detail_summary)

   DenseNet block.

   For fast execution under quantization, this block uses the layer order
   convolution -> batch norm -> activation instead of the original paper's
   batch norm -> activation -> convolution.

   This is not the `Dense block` of the original paper; it is one component of a `Dense block`.


.. function:: densenet_group(name, inputs, num_blocks, growth_rate, bottleneck_rate=4, weight_decay_rate=0.0, is_training=tf.constant(False), activation=None, batch_norm_decay=0.99, data_format='NHWC', enable_detail_summary=False)

   Group of DenseNet blocks.

   Paper: https://arxiv.org/abs/1608.06993

   In the original paper this method is called a `Dense block`, consisting of
   1x1 and 3x3 conv blocks in the order batch norm -> activation (ReLU) ->
   convolution (1x1) and batch norm -> activation -> convolution (3x3).
   In this method, the order of each block is changed to
   convolution -> batch norm -> activation.

   :param name: Block name, used as the scope name.
   :type name: str
   :param inputs: Inputs.
   :type inputs: tf.Tensor
   :param num_blocks: Number of dense blocks, each consisting of a 1x1 and a 3x3 conv.
   :type num_blocks: int
   :param growth_rate: Number of filters (output channels) each block adds.
   :type growth_rate: int
   :param bottleneck_rate: Factor used to compute the bottleneck 1x1 conv output channels: `bottleneck_channel = growth_rate * bottleneck_rate`. The default value `4` follows the original paper.
   :type bottleneck_rate: int
   :param weight_decay_rate: Rate of L2 regularization applied to convolution weights.
   :type weight_decay_rate: float
   :param is_training: Flag indicating whether the model is training.
   :type is_training: tf.constant
   :param activation: Activation function.
   :type activation: callable
   :param batch_norm_decay: Batch norm decay rate.
   :type batch_norm_decay: float
   :param enable_detail_summary: Flag to summarize the feature maps of each operation on TensorBoard.
   :type enable_detail_summary: bool
   :param data_format: Format of the input data: NHWC or NCHW.
   :type data_format: string
   :returns: Output of the current block.
   :rtype: tf.Tensor
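To make the channel arithmetic concrete, here is a hedged usage sketch of `densenet_group`, assuming the concatenative growth implied by `growth_rate` above; the input shape is an illustrative assumption.

.. code-block:: python

   import tensorflow as tf

   from blueoil.blocks import densenet_group

   tf.compat.v1.disable_eager_execution()  # the blocks build a TF1-style graph

   # NHWC feature map with 64 input channels (shape chosen for illustration).
   inputs = tf.compat.v1.placeholder(tf.float32, shape=(None, 16, 16, 64))

   outputs = densenet_group(
       "dense_group_1",
       inputs,
       num_blocks=6,
       growth_rate=32,
       bottleneck_rate=4,  # bottleneck 1x1 conv outputs 32 * 4 = 128 channels
       is_training=tf.constant(True),
       activation=tf.nn.relu,
   )

   # Each of the 6 blocks adds `growth_rate` channels to its input, so the
   # group output is expected to carry 64 + 6 * 32 = 256 channels.
   print(outputs.get_shape())  # expected: (None, 16, 16, 256)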