Hi everyone, I'm quantizing a YOLO model with ncnn and running the following command:

./ncnn2int8 yolo11n.ncnn-opt.param yolo11n.ncnn-opt.bin yolo11n-int8.param yolo11n-int8.bin yolo11n.ncnn.table

When it reaches certain convolution layers, it crashes:

quantize_convolution conv_80
...
quantize_convolutiondepthwise convdw_174
quantize_convolutiondepthwise convdw_175
quantize_convolutiondepthwise convdw_176
quantize_convolutiondepthwise convdw_177
quantize_convolutiondepthwise convdw_178
quantize_convolutiondepthwise convdw_179
quantize_convolutiondepthwise convdw_180
Floating point exception (core dumped)

I have already checked that the table file and the param/bin files are valid, so I don't understand why a floating point exception would occur here. Has anyone with ncnn2int8 quantization experience run into this and can help me analyze the cause and suggest a fix? Thanks a lot!
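In case it helps narrow things down, here is a rough script I used to double-check the scale entries for the layers around the crash. This is only a sketch under my own assumptions: I'm assuming the usual ncnn2table output format (one layer per line, layer name followed by its scale values, with a separate "<name>_param_0" line for the per-channel weight scales), and my guess, which may be wrong, is that a zero or missing scale could lead to a divide-by-zero during quantization:

# Sketch: inspect yolo11n.ncnn.table for the depthwise layers that crash
# (convdw_174 .. convdw_180). Assumes one layer per line:
#   layer_name scale0 scale1 ...
# with weight scales on a matching "<name>_param_0" line.
import math

suspect = {f"convdw_{i}" for i in range(174, 181)}

with open("yolo11n.ncnn.table") as f:
    for line in f:
        parts = line.split()
        if not parts:
            continue
        name, scales = parts[0], parts[1:]
        base = name[:-len("_param_0")] if name.endswith("_param_0") else name
        if base not in suspect:
            continue
        values = [float(s) for s in scales]
        bad = [v for v in values if v == 0.0 or math.isnan(v) or math.isinf(v)]
        # Print how many scale values each suspect layer has, and how many
        # of them are zero / NaN / inf (which I suspect might be the trigger).
        print(name, "num scales:", len(values), "zero/NaN/inf:", len(bad))

Running this against my table, I'd like to know whether a missing entry or a zero scale for these convdw layers is a plausible cause, or if the problem is more likely elsewhere.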