🐛 Bug
When I convert the TensorFlow InceptionV4 model to ONNX and then to Caffe2 and test the converted model, it fails with the following error:
Original python traceback for operator 504 in network `Inception-v4` in exception above (most recent call last):
Exception KeyError: KeyError(<weakref at 0x7f15ace61998; to 'tqdm' at 0x7f15ace5ca90>,) in <bound method tqdm.__del__ of 0%| | 0/500 [00:17<?, ?it/s]> ignored
Traceback (most recent call last):
File "examples/evaluate_imagenet_net.py", line 131, in <module>
run_main(args)
File "examples/evaluate_imagenet_net.py", line 103, in run_main
workspace.RunNet(val_model.net)
File "/home/zhibin/qzhong/thirdparty/caffe2-python2/lib/python2.7/site-packages/caffe2/python/workspace.py", line 215, in RunNet
StringifyNetName(name), num_iter, allow_fail,
File "/home/zhibin/qzhong/thirdparty/caffe2-python2/lib/python2.7/site-packages/caffe2/python/workspace.py", line 177, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at tensor.h:495] IsType<T>(). Tensor type mismatch, caller expects elements to be int while tensor contains long Error from operator:
input: "InceptionV4/Logits/PreLogitsFlatten/flatten/Shape__639:0" input: "OC2_DUMMY_28" input: "OC2_DUMMY_30" output: "InceptionV4/Logits/PreLogitsFlatten/flatten/strided_slice:0" name: "InceptionV4/Logits/PreLogitsFlatten/flatten/strided_slice" type: "Slice" device_option { device_type: 0 cuda_gpu_id: 0 }
The Slice operator in the ONNX graph carries its start/end indices as int64 values, and the converted Caffe2 node is the Slice op shown in the error above, with those indices supplied through the OC2_DUMMY_28 / OC2_DUMMY_30 inputs. The snippet below is one way to inspect both graphs.
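A minimal inspection sketch, assuming the ONNX model is saved as inceptionv4.onnx (a placeholder name) next to the converted onnx_predict_net.pb; it prints the Slice nodes in both graphs plus the ops that create the start/end blobs:

```python
import onnx
from caffe2.proto import caffe2_pb2

# ONNX side: in opset <= 9 the starts/ends/axes of Slice are int64 attributes
# stored on the node itself.
model = onnx.load("inceptionv4.onnx")  # placeholder file name
for node in model.graph.node:
    if node.op_type == "Slice":
        print(node)

# Caffe2 side: print the converted Slice ops and the ops that produce their
# starts/ends inputs (the OC2_DUMMY_* blobs from the error message).
net = caffe2_pb2.NetDef()
with open("onnx_predict_net.pb", "rb") as f:
    net.ParseFromString(f.read())
slice_index_blobs = {b for op in net.op if op.type == "Slice" for b in op.input[1:]}
for op in net.op:
    if op.type == "Slice" or set(op.output) & slice_index_blobs:
        print(op)
```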
To Reproduce
Steps to reproduce the behavior:
1. Original model: TensorFlow's InceptionV4 model.
2. Convert it to ONNX (see the conversion sketch after this list).
3. Convert the ONNX model to Caffe2, producing onnx_predict_net.pb and onnx_init_net.pb.
4. Run the converted model (examples/evaluate_imagenet_net.py), which fails with the error above.
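A minimal sketch of the conversion steps, assuming the TensorFlow-to-ONNX step uses tf2onnx and the ONNX-to-Caffe2 step uses Caffe2's ONNX backend; the frozen-graph name, input/output node names, and inceptionv4.onnx are placeholders rather than values from the issue:

```python
# TensorFlow -> ONNX is typically done with the tf2onnx CLI, e.g.:
#   python -m tf2onnx.convert --input inception_v4_frozen.pb \
#       --inputs input:0 --outputs output:0 --output inceptionv4.onnx
#
# ONNX -> Caffe2, producing the two .pb files mentioned in this issue:
import onnx
from caffe2.python.onnx.backend import Caffe2Backend

model = onnx.load("inceptionv4.onnx")
init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(model)

with open("onnx_init_net.pb", "wb") as f:
    f.write(init_net.SerializeToString())
with open("onnx_predict_net.pb", "wb") as f:
    f.write(predict_net.SerializeToString())
```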
Expected behavior
The conversion produces onnx_predict_net.pb and onnx_init_net.pb successfully, and the converted network should run without errors.
So I found that the reason for this error is:
Input(1) of the Slice operator is int64 (it is created in onnx_predict_net.pb by a ConstantFill operator with dtype=10, i.e. INT64), but the Caffe2 Slice op expects int32 here.
We can fix this issue by modifying the converter so that these start/end tensors are emitted as int32 (see the sketch below).
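A minimal post-processing sketch of that idea, assuming (as described above) that the offending start/end blobs are produced by ConstantFill ops in onnx_predict_net.pb; this rewrites the serialized net and is a workaround, not the converter patch the author had in mind:

```python
# Rewrite the ConstantFill ops that feed Slice starts/ends so they emit
# int32 instead of int64 tensors (caffe2 TensorProto enum: INT64 = 10, INT32 = 2).
from caffe2.proto import caffe2_pb2

INT64, INT32 = 10, 2

net = caffe2_pb2.NetDef()
with open("onnx_predict_net.pb", "rb") as f:
    net.ParseFromString(f.read())

# Blobs used as starts/ends by any Slice op (inputs 1 and 2).
slice_index_blobs = set()
for op in net.op:
    if op.type == "Slice":
        slice_index_blobs.update(op.input[1:])

# Downcast the ConstantFill ops that produce those blobs.
for op in net.op:
    if op.type == "ConstantFill" and set(op.output) & slice_index_blobs:
        for arg in op.arg:
            if arg.name == "dtype" and arg.i == INT64:
                arg.i = INT32

# Overwrite the predict net in place.
with open("onnx_predict_net.pb", "wb") as f:
    f.write(net.SerializeToString())
```

A cleaner long-term fix would be for the ONNX-to-Caffe2 converter to emit these fills as int32 in the first place (or for the Slice op to accept int64 indices), but the script above is enough to get the converted net running.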
Environment
Python version: 3.6
Is CUDA available: N/A
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: Tesla K80
GPU 1: Tesla K80
GPU 2: Tesla K80
GPU 3: Tesla K80
Nvidia driver version: 410.48
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.2.1
Versions of relevant libraries:
[pip3] numpy==1.16.2
[pip3] torch==1.1.0a0+d35a587
[pip3] torchvision==0.2.3a0+ccbb322
[conda] Could not collect