8.1.1.2.2.3. blueoil.converter.generate_project

Script that automatically runs all of the following steps.

  • Import the protocol buffer (the model exported by lmnet) and the config.

  • Generate all C++ sources and headers, and other control files such as the Makefile.
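
A minimal sketch of driving the whole flow programmatically through run() (documented in full under Module Contents below). The .pb path, output directory, and project name are placeholder values chosen only for illustration; the keyword arguments match the documented signature.

    from blueoil.converter.generate_project import run

    run(
        input_path="./minimal_graph_with_shape.pb",  # frozen TensorFlow pb exported by lmnet (placeholder path)
        dest_dir_path="./tmp",                       # where the generated project is written (placeholder)
        project_name="my_project",                   # name of the generated project (placeholder)
        activate_hard_quantization=True,             # required flag; True chosen only for illustration
        threshold_skipping=False,                    # documented default
        debug=False,                                 # documented default
        cache_dma=False,                             # documented default
        use_divide_by_255=True,                      # documented default
    )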

8.1.1.2.2.3.1. Module Contents

8.1.1.2.2.3.1.1. Functions

optimize_graph_step(graph: Graph, config: Config) → None

Optimize the graph imported from a TensorFlow protocol buffer (pb) file.

generate_code_step(graph: Graph, config: Config) → None

Generate code for the model.

run(input_path: str, dest_dir_path: str, project_name: str, activate_hard_quantization: bool, threshold_skipping: bool = False, debug: bool = False, cache_dma: bool = False, use_divide_by_255: bool = True)

main(input_path, output_path, project_name, activate_hard_quantization, threshold_skipping, debug, cache_dma, use_divide_by_255)

blueoil.converter.generate_project.SCRITPS_DIR
blueoil.converter.generate_project.DLK_ROOT_DIR
blueoil.converter.generate_project.ROOT_DIR
blueoil.converter.generate_project.optimize_graph_step(graph: Graph, config: Config) → None

Optimize the graph imported from a TensorFlow protocol buffer (pb) file.

Parameters
  • graph (Graph) – Graph to which the optimization passes are applied

  • config (Config) – Collection of configurations

Returns: None
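
A hedged sketch of the call itself, assuming a Graph and a Config have already been constructed (run() performs that construction internally; it is not shown here):

    from blueoil.converter.generate_project import optimize_graph_step

    # Assumption: `graph` (Graph) and `config` (Config) were built elsewhere,
    # e.g. by the import step that run() performs before optimization.
    optimize_graph_step(graph, config)  # applies the optimization passes to `graph`; returns None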

blueoil.converter.generate_project.generate_code_step(graph: Graph, config: Config) → None

Generate code for the model.

Parameters
  • graph (Graph) – Graph on which the code generation is based

  • config (Config) – Collection of configurations
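
A hedged sketch showing code generation following graph optimization, under the same assumption that `graph` and `config` already exist:

    from blueoil.converter.generate_project import generate_code_step, optimize_graph_step

    # Assumption: `graph` (Graph) and `config` (Config) were built elsewhere (see the previous sketch).
    optimize_graph_step(graph, config)   # run the optimization passes first
    generate_code_step(graph, config)    # then generate code for the model; returns None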

blueoil.converter.generate_project.run(input_path: str, dest_dir_path: str, project_name: str, activate_hard_quantization: bool, threshold_skipping: bool = False, debug: bool = False, cache_dma: bool = False, use_divide_by_255: bool = True)
blueoil.converter.generate_project.main(input_path, output_path, project_name, activate_hard_quantization, threshold_skipping, debug, cache_dma, use_divide_by_255)