Make a benchmark This week, I measured the inference latency of each model with a small dataset (under 40 samples). From a rough check, latency changed the most with the input shape, and the filter size also had a large effect. I'm now building a larger dataset from the combinations below. kernel_l... Read more 15 Jul 2021 - less than 1 minute read
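Since the excerpt cuts off before the combination list, here is a minimal sketch of how such a grid could be enumerated with itertools.product (which the baseline code below also imports); the parameter names and value ranges are assumptions:

```python
# A minimal sketch of building the combination grid with itertools.product.
# The parameter names and values (kernel_sizes, input_sizes, filters) are
# assumptions for illustration; the post's actual list is truncated above.
from itertools import product

kernel_sizes = [1, 3, 5, 7]         # assumed Conv2D kernel sizes
input_sizes  = [32, 64, 128, 224]   # assumed square input resolutions
filters      = [16, 32, 64, 128]    # assumed output channel counts

configs = [
    {"kernel_size": k, "input_size": s, "filters": f}
    for k, s, f in product(kernel_sizes, input_sizes, filters)
]
print(len(configs), "configurations")  # 4 * 4 * 4 = 64 in this toy grid
```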
1. Separate all the code and test it on the device I can now successfully get the result we want: the benchmark latency time. But even though I separated the code, it still looks complicated. 😂 I'm currently collecting benchmarks for 19,200 samples (still running). run_bazel.py import os import subprocess import sqlite3 import numpy as np impo... Read more 09 Jul 2021 - 6 minute read
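The run_bazel.py excerpt imports sqlite3, so presumably the measured latencies end up in a SQLite database. A minimal sketch of what that could look like, with an assumed table name and schema:

```python
# A minimal sketch of writing per-sample benchmark results into SQLite, in the
# spirit of the sqlite3 import shown in the run_bazel.py excerpt. The table
# name, schema, and columns are assumptions, not the post's actual code.
import sqlite3

def save_latency(db_path, model_name, kernel_size, input_size, filters, latency_ms):
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS benchmarks (
               model_name TEXT, kernel_size INTEGER, input_size INTEGER,
               filters INTEGER, latency_ms REAL)"""
    )
    conn.execute(
        "INSERT INTO benchmarks VALUES (?, ?, ?, ?, ?)",
        (model_name, kernel_size, input_size, filters, latency_ms),
    )
    conn.commit()
    conn.close()

# Usage (hypothetical values): save_latency("benchmarks.db", "conv_k3_s64_f32", 3, 64, 32, 12.7)
```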
Baseline Code import os import subprocess from itertools import product import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras def run_bazel(model_name): proc = subprocess.Popen( ['sh', 'gsoc_proj/run.sh', './gsoc_proj/MODELS/{}'.format(model_name)], stdout = subprocess.PIPE ) o... Read more 28 Jun 2021 - 1 minute read
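The baseline run_bazel() is cut off mid-call in this excerpt; a minimal sketch of how such a wrapper could finish, reading stdout and pulling out an average latency, where the parsed output format is an assumption:

```python
# A minimal sketch completing a run_bazel()-style wrapper around the subprocess
# call shown in the truncated excerpt. The regex (a line such as "avg=1234.5")
# is an assumption about what gsoc_proj/run.sh prints for latency.
import re
import subprocess

def run_bazel(model_name):
    proc = subprocess.Popen(
        ['sh', 'gsoc_proj/run.sh', './gsoc_proj/MODELS/{}'.format(model_name)],
        stdout=subprocess.PIPE,
    )
    out, _ = proc.communicate()
    text = out.decode('utf-8')
    match = re.search(r'avg=([\d.]+)', text)  # assumed latency line format
    return float(match.group(1)) if match else None
```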
How to get all Operations of a TFLite model In the interpreter.py code, the _get_ops_details() method returns the tensor indices of each layer. Below is code that collects all Conv2D layers inside a TFLite model and gets their input/output tensor sizes. import tensorflow as tf SAVED_MODEL_PATH = "./densenet.tflite" interpreter = tf.lite.Int... Read more 16 Jun 2021 - 2 minute read
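The post's full listing is truncated here; a minimal sketch of the same idea, filtering the ops returned by the (private) _get_ops_details() down to CONV_2D and looking up their tensor shapes with get_tensor_details():

```python
# A minimal sketch of the approach described in the post, using the private
# _get_ops_details() API of tf.lite.Interpreter (TF 2.x); the returned keys
# ('op_name', 'inputs', 'outputs') are not a stable public contract.
import tensorflow as tf

SAVED_MODEL_PATH = "./densenet.tflite"  # path taken from the excerpt above

interpreter = tf.lite.Interpreter(model_path=SAVED_MODEL_PATH)
interpreter.allocate_tensors()

# Map tensor index -> tensor metadata (name, shape, dtype, ...).
tensors = {t["index"]: t for t in interpreter.get_tensor_details()}

for op in interpreter._get_ops_details():  # private API named in the post
    if op["op_name"] == "CONV_2D":         # keep only Conv2D layers
        in_shapes = [tensors[i]["shape"] for i in op["inputs"] if i in tensors]
        out_shapes = [tensors[i]["shape"] for i in op["outputs"] if i in tensors]
        print(op["index"], op["op_name"], in_shapes, out_shapes)
```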
This week, I tried to find the input/output size of each layer. The following is example code from Ekaterina. const tflite::Interpreter* interpreter = ...; // Initialize interpreter for (int op_index : interpreter->execution_plan()) { const auto* op_and_reg = interpreter->node_and_registration(op_index); if (op_and_reg->... Read more 08 Jun 2021 - 1 minute read
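For context, the public Python interpreter API only exposes model-level inputs and outputs, which is why a per-operator walk like the C++ snippet above (or the private _get_ops_details() in the 16 Jun post) is needed; a minimal sketch:

```python
# A minimal sketch showing why a per-layer walk is needed at all: the public
# tf.lite.Interpreter API only reports model-level inputs and outputs. The
# model path is reused from the 16 Jun post above purely as an assumption.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./densenet.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("model input :", detail["name"], detail["shape"])
for detail in interpreter.get_output_details():
    print("model output:", detail["name"], detail["shape"])
```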