Two MOST important things to do
1. Database
Training database with manually generated data - NOW fixed!
- The number of benchmarked models is 908.
Testing database from the real model
- Last week, I mentioned that I was able to print the parameters of the convolution layers of the real model.
- This week, I applied the code to the real model, and now I can print them out as below.
{'name': 'conv2d/kernel', 'kernel_size': '7', 'filter_size': '64', 'input_channel': '3', 'input_h': '224', 'input_w': '224', 'stride': 2}
{'name': 'block-0/denseblock-0-0/conv2d/kernel', 'kernel_size': '3', 'filter_size': '32', 'input_channel': '64', 'input_h': '56', 'input_w': '56', 'stride': 1}
{'name': 'block-0/denseblock-0-1/conv2d/kernel', 'kernel_size': '3', 'filter_size': '32', 'input_channel': '96', 'input_h': '56', 'input_w': '56', 'stride': 1}
{'name': 'block-0/denseblock-0-2/conv2d/kernel', 'kernel_size': '3', 'filter_size': '32', 'input_channel': '128', 'input_h': '56', 'input_w': '56', 'stride': 1}
I think I can get the latency of each convolution layer.
But I wonder how I can build the testing database with this information. How can I benchmark these layers?
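One idea: build a one-layer model from each printed parameter dict and time its forward pass directly. Below is a rough sketch (it assumes TensorFlow/Keras is used; padding='same' is an assumption because padding is not among the printed parameters):

import time
import tensorflow as tf

def benchmark_conv(params, n_runs=50):
    # Build a single Conv2D layer from one printed parameter dict.
    # padding='same' is an assumption (padding is not printed above).
    layer = tf.keras.layers.Conv2D(
        filters=int(params['filter_size']),
        kernel_size=int(params['kernel_size']),
        strides=int(params['stride']),
        padding='same')
    x = tf.random.normal([1, int(params['input_h']),
                          int(params['input_w']),
                          int(params['input_channel'])])
    layer(x)  # warm-up so the one-time build cost is not measured
    start = time.time()
    for _ in range(n_runs):
        layer(x)
    return (time.time() - start) / n_runs  # average latency in seconds

conv = {'name': 'conv2d/kernel', 'kernel_size': '7', 'filter_size': '64',
        'input_channel': '3', 'input_h': '224', 'input_w': '224', 'stride': 2}
print(conv['name'], benchmark_conv(conv))

Each measured latency would then become one row of the testing database, keyed by the layer name.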
2. Estimator
This week, I applied the benchmark data to various kinds of regressors.
Tree-based regression models performed better than the linear regression model.
from sklearn.tree import DecisionTreeRegressor
model = DecisionTreeRegressor(random_state=2020)
model.fit(X_train, y_train)
model.score(X_test, y_test)  # R^2 score on the held-out test set
>>> 0.9313011524652813
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(random_state=2020, max_depth=6)
model.fit(X_train, y_train)
model.score(X_test, y_test)
>>> 0.8713017617342651
On the other hand, some models performed worse than the linear regression model. (The score here is R^2, so a negative value means the model does worse than simply predicting the mean.)
from sklearn.ensemble import AdaBoostRegressor
model = AdaBoostRegressor(random_state=2020)
model.fit(X_train, y_train)
model.score(X_test, y_test)
>>> -10.103675152133759
I think it would be good to pick the models that show good performance and ensemble them.
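For example, sklearn's VotingRegressor can average several regressors; a sketch reusing the models and the train/test split from above:

from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.tree import DecisionTreeRegressor

# Average the predictions of the two regressors that scored well above.
ensemble = VotingRegressor([
    ('dt', DecisionTreeRegressor(random_state=2020)),
    ('rf', RandomForestRegressor(random_state=2020, max_depth=6)),
])
ensemble.fit(X_train, y_train)
ensemble.score(X_test, y_test)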