🐛 Bug
Inference on a ROCm GPU (Radeon VII) is abysmally slow, noticeably slower than running the same workload on the CPU. On NVIDIA hardware it performs as expected. I'm not sure whether torchvision or PyTorch itself is at fault.
To Reproduce
Steps to reproduce the behavior:
- Dockerfile: https://gist.github.com/boniek83/0bbf858a24961816557c522007a11c11
- Lab: https://github.com/timesler/facenet-pytorch/blob/master/examples/infer.ipynb
- Data for lab: https://github.com/timesler/facenet-pytorch/tree/master/data/test_images
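For a rough measurement without the notebook, here is a minimal timing sketch (my own, not taken from the linked lab) that compares CPU and GPU latency with a stock torchvision model; the model and input size are arbitrary choices, not the facenet-pytorch pipeline:

```python
import time

import torch
import torchvision.models as models

def benchmark(device, iterations=20):
    # Stock ResNet-50 stands in for the real pipeline; any torchvision
    # model should expose the same CPU-vs-GPU gap if the bug is in the stack.
    model = models.resnet50(pretrained=True).eval().to(device)
    x = torch.randn(1, 3, 224, 224, device=device)
    with torch.no_grad():
        model(x)  # warm-up pass, excluded from timing
        if device.type == "cuda":  # ROCm builds also report as "cuda"
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iterations):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iterations

print(f"CPU: {benchmark(torch.device('cpu')):.3f} s/iter")
if torch.cuda.is_available():
    print(f"GPU: {benchmark(torch.device('cuda')):.3f} s/iter")
```

On a working build the GPU number should come out well under the CPU one; here it is the other way around.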
Expected behavior
GPU inference should be at least as fast as CPU inference, as it is on NVIDIA hardware.
Environment
- PyTorch / torchvision version: 1.6.0 / 0.7.0
- OS: CentOS 7.8
- How you installed PyTorch / torchvision: source
- Build command: see Dockerfile above
- Python version: 3.6
- GPU models and configuration: Radeon VII
- Any other relevant information: ROCK (ROCm kernel driver) 3.7 at runtime
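As a quick sanity check that the source build actually sees the card (an assumption worth ruling out, since ROCm devices surface through the torch.cuda namespace):

```python
import torch

# Sanity check, not part of the original repro: a working ROCm build
# exposes AMD GPUs via torch.cuda, so the Radeon VII should show up here.
print(torch.__version__)                  # expected: 1.6.0
print(torch.cuda.is_available())          # expected: True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # expected: the Radeon VII
```

All three checks pass in my environment, so the device is visible; it is only the inference speed that is off.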