ROCm version is slow #2682

@boniek83

Description

🐛 Bug

Inference speed is abysmal - way slower than CPU. It works as expected on NVIDIA. I'm not really sure whether torchvision or torch itself is at fault.

To Reproduce

Steps to reproduce the behavior:

  1. Dockerfile: https://gist.github.com/boniek83/0bbf858a24961816557c522007a11c11
  2. Lab: https://github.com/timesler/facenet-pytorch/blob/master/examples/infer.ipynb
  3. Data for lab: https://github.com/timesler/facenet-pytorch/tree/master/data/test_images
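A minimal standalone timing harness (a sketch, independent of the notebook above; the model and tensor sizes are illustrative, not taken from the repro) can confirm whether the slowdown reproduces outside facenet-pytorch. Note that on ROCm builds of PyTorch the HIP device is exposed through the `torch.cuda` API:

```python
import time
import torch
import torch.nn as nn

def benchmark(model, x, n_iter=20):
    """Return the mean forward-pass time in seconds."""
    with torch.no_grad():
        # Warm-up so one-time kernel setup is not timed
        for _ in range(3):
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()  # GPU kernels launch asynchronously
        start = time.perf_counter()
        for _ in range(n_iter):
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()  # wait for pending kernels before reading the clock
    return (time.perf_counter() - start) / n_iter

# Small conv stack as a stand-in workload
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1),
).eval()
x = torch.randn(1, 3, 224, 224)

cpu_t = benchmark(model, x)
print(f"CPU: {cpu_t * 1000:.1f} ms/iter")

if torch.cuda.is_available():  # True on ROCm builds with a visible HIP GPU
    gpu_t = benchmark(model.cuda(), x.cuda())
    print(f"GPU: {gpu_t * 1000:.1f} ms/iter")
```

On a healthy install the GPU time should be well under the CPU time for a workload this size; if the GPU number comes out higher, the regression reproduces without the notebook.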

Expected behavior

Performance should be competitive.

Environment

  • PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): 1.6.0 / 0.7.0
  • OS (e.g., Linux): CentOS 7.8
  • How you installed PyTorch / torchvision (conda, pip, source): source
  • Build command you used (if compiling from source): see Dockerfile above
  • Python version: 3.6
  • GPU models and configuration: Radeon VII
  • Any other relevant information: ROCK 3.7 during runtime

cc @iotamudelta @ashishfarmer
