8.1.1.7.1.4.2.2. blueoil.networks.segmentation.lm_bisenet

8.1.1.7.1.4.2.2.1. Module Contents

8.1.1.7.1.4.2.2.1.1. Classes

LMBiSeNet

LM original semantic segmentation network, based on [BiSeNet](https://arxiv.org/abs/1808.00897).

LMBiSeNetQuantize

Following args are used for inference: activation_quantizer, activation_quantizer_kwargs, weight_quantizer, weight_quantizer_kwargs.

class blueoil.networks.segmentation.lm_bisenet.LMBiSeNet(weight_decay_rate=0.0, auxiliary_loss_weight=0.5, use_feature_fusion=True, use_attention_refinement=True, use_tail_gap=True, *args, **kwargs)

Bases: blueoil.networks.segmentation.base.Base

LM original semantic segmentation network, based on [BiSeNet](https://arxiv.org/abs/1808.00897).

Major differences from BiSeNet:

  • Apply a first convolution, then branch into the context and spatial paths.

  • Use space-to-depth (s2d) and depth-to-space (d2s) for downsampling and upsampling.

  • Use only stride-1 1x1 and 3x3 convolutions (no multi-stride or dilated convolutions), to stay within the limits of our convolution IP.

  • Use a DenseNet block in the context path.

  • All convolution output channel counts are smaller than in BiSeNet, for inference time.

  • Apply the attention refinement module (taken from BiSeNet) only after the last 1/32 layer

    (BiSeNet: both 1/16 and 1/32) in the context path.

  • Use relu activation instead of sigmoid in the attention refinement and feature fusion modules.

  • In the up-sampling that follows the context path, alternate d2s and 1x1 convolutions to reduce channel size.
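Space-to-depth rearranges each block_size x block_size spatial block into the channel dimension, so it can stand in for strided downsampling, and depth-to-space is its exact inverse, used here for upsampling. A minimal NumPy sketch of these two rearrangements for NHWC tensors (hypothetical helper names, not the module's `_space_to_depth`/`_depth_to_space` implementations, which build TensorFlow ops):

```python
import numpy as np

def space_to_depth(x, block_size=2):
    """Move each block_size x block_size spatial block into channels (NHWC).

    For block_size=2: halves H and W, multiplies channels by 4.
    """
    n, h, w, c = x.shape
    x = x.reshape(n, h // block_size, block_size, w // block_size, block_size, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h // block_size, w // block_size, block_size * block_size * c)

def depth_to_space(x, block_size=2):
    """Inverse of space_to_depth: spread channels back into spatial blocks."""
    n, h, w, c = x.shape
    x = x.reshape(n, h, w, block_size, block_size, c // (block_size ** 2))
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h * block_size, w * block_size, c // (block_size ** 2))
```

Because both operations are pure rearrangements, no information is lost, which is what makes them attractive substitutes for strided or transposed convolutions on restricted hardware.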

_space_to_depth(self, name, inputs=None, block_size=2)
_depth_to_space(self, name, inputs=None, block_size=2)
_batch_norm(self, inputs, training)
_conv_bias(self, name, inputs, filters, kernel_size)
_spatial(self, x)
_context(self, x)
_attention(self, name, x)
_fusion(self, sp, cx)

Feature fusion module

base(self, images, is_training, *args, **kwargs)

Base function containing the inference graph.

Parameters
  • images – Input images.

  • is_training – A flag indicating whether it is training.

Returns

Inference result.

Return type

tf.Tensor

_cross_entropy(self, x, labels)
_weight_decay_loss(self)

L2 weight decay (regularization) loss.
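An L2 weight decay loss conventionally sums the squared trainable weights, scaled by the decay rate. A hedged NumPy sketch of that conventional form (the exact scaling constant used by `_weight_decay_loss`, e.g. whether a 0.5 factor is included as in `tf.nn.l2_loss`, is an assumption here):

```python
import numpy as np

def l2_weight_decay_loss(weights, weight_decay_rate=0.0001):
    # Conventional L2 regularization: rate * sum over tensors of 0.5 * sum(w**2).
    # The 0.5 factor matches tf.nn.l2_loss; this is an assumption about the module.
    return weight_decay_rate * sum(0.5 * np.sum(w ** 2) for w in weights)
```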

loss(self, output, labels)

Loss.

Parameters
  • output – tensor from inference.

  • labels – labels tensor.

summary(self, output, labels=None)

Summary.

Parameters
  • output – tensor from inference.

  • labels – labels tensor.

metrics(self, output, labels)

Metrics.

Parameters
  • output – tensor from inference.

  • labels – labels tensor.

post_process(self, output)
class blueoil.networks.segmentation.lm_bisenet.LMBiSeNetQuantize(activation_quantizer=None, activation_quantizer_kwargs={}, weight_quantizer=None, weight_quantizer_kwargs={}, *args, **kwargs)

Bases: blueoil.networks.segmentation.lm_bisenet.LMBiSeNet

Following args are used for inference: activation_quantizer, activation_quantizer_kwargs, weight_quantizer, weight_quantizer_kwargs.

Parameters
  • activation_quantizer (callable) – Activation quantizer. See more at blueoil.quantizations.

  • activation_quantizer_kwargs (dict) – Kwargs for activation_quantizer.

  • weight_quantizer (callable) – Weight quantizer. See more at blueoil.quantizations.

  • weight_quantizer_kwargs (dict) – Kwargs for weight_quantizer.
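Both quantizers are factories: called with their kwargs, they return a function that is then applied to activations or weights inside the graph. As an illustration only (generic quantizers of the common "linear mid-tread" and "binary mean-scaling" styles, not necessarily the implementations in blueoil.quantizations):

```python
import numpy as np

def linear_mid_tread_half_quantizer(bit=2, max_value=2.0):
    """Return a callable that clips activations to [0, max_value] and
    quantizes them to 2**bit - 1 uniform levels."""
    levels = (2 ** bit) - 1
    def quantize(x):
        x = np.clip(x, 0.0, max_value)
        return np.round(x / max_value * levels) / levels * max_value
    return quantize

def binary_mean_scaling_quantizer():
    """Return a callable that binarizes weights to +/- mean(|w|)."""
    def quantize(w):
        scale = np.mean(np.abs(w))
        return np.where(w >= 0, scale, -scale)
    return quantize
```

In this pattern, `activation_quantizer_kwargs` and `weight_quantizer_kwargs` are simply the keyword arguments passed to the factory (e.g. `{"bit": 2, "max_value": 2.0}`).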

static _quantized_variable_getter(getter, name, weight_quantization=None, quantize_first_convolution=False, *args, **kwargs)

Get the quantized variables.

Chooses whether the target variable should be quantized or skipped.

Parameters
  • getter – Default from tensorflow.

  • name – Default from tensorflow.

  • weight_quantization – Callable object which quantizes a variable.

  • args – Args.

  • kwargs – Kwargs.
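This follows TensorFlow 1.x's `custom_getter` pattern: variable creation is wrapped, and the quantizer is applied conditionally based on the variable's name. A framework-free sketch of the selection logic only (the name-matching rules, e.g. checking for "kernel" and skipping a first block, are assumptions for illustration):

```python
def quantized_getter(getter, name, weight_quantization=None,
                     quantize_first_convolution=False, *args, **kwargs):
    # Create (or fetch) the variable as TensorFlow normally would.
    var = getter(name, *args, **kwargs)
    if weight_quantization is None:
        return var
    # Assumed rule: quantize only convolution kernels, never biases etc.
    if "kernel" not in name:
        return var
    # Assumed rule: optionally leave the first convolution at full precision;
    # "block_first" is a hypothetical scope name.
    if not quantize_first_convolution and name.startswith("block_first"):
        return var
    return weight_quantization(var)
```

In TF 1.x this getter would be installed via `tf.variable_scope(..., custom_getter=...)`, so every variable created under that scope passes through it.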

base(self, images, is_training)

Base function containing the inference graph.

Parameters
  • images – Input images.

  • is_training – A flag indicating whether it is training.

Returns

Inference result.

Return type

tf.Tensor