davison

Beginner

2 replies

1 point

Best answer


I have forwarded the question to an NXP AE, and here is their reply:

 

The official ONNX toolchain doesn't provide a model-split tool comparable to TensorFlow's transform_graph, so we implemented one in eIQ Auto.

You can split a model with the airunner::LoadSubGraphFromONNX API as follows:

 

// Source model and the tensor names that bound the subgraph to extract
const std::string aGraphName3 = "data/airunner/yolov3-tiny.onnx";
std::vector<std::string> aInputName30 = {"input_1"};
std::vector<std::string> aOutputName30 = {"convolution_output1", "convolution_output"};
const std::string aSubGraphName30 = "tiny_yolov3_subgraph1.onnx";
// Extract everything between the listed inputs and outputs into a new ONNX file
Status_t lStatus = LoadSubGraphFromONNX(aGraphName3, aInputName30, aOutputName30, aSubGraphName30);

 

The exported tiny_yolov3_subgraph1.onnx can then be deployed to your board. Thanks.

1F
一顿半斤饭

Expert

2 points




Hello,

A follow-up question about the graph split: does eIQ Auto provide a dedicated tool for it, or does the model have to be trained as two separate networks that are each converted to ONNX to serve as the split?

Thanks
