🐛 Bug description
The documentation states that ignite.metrics.FID is inspired by the reference PyTorch implementation of FID here. However, when I compare ignite's calculated FID score to the reference implementation's, the two scores differ by about five orders of magnitude.
Here is a script based on the example in ignite's documentation.
import os

import torch
import torchvision
import tqdm
from ignite.metrics.gan import FID

torch.manual_seed(0)

# Output directories must exist before save_image is called.
os.makedirs('pred', exist_ok=True)
os.makedirs('gt', exist_ok=True)

m = FID()
y_pred, y = torch.rand(100, 3, 299, 299), torch.rand(100, 3, 299, 299)
for i in tqdm.tqdm(range(len(y_pred))):
    torchvision.utils.save_image(y_pred[i], f'pred/{i}.png')
    torchvision.utils.save_image(y[i], f'gt/{i}.png')
    m.update((y_pred[i:i+1], y[i:i+1]))
print('ignite online FID', m.compute())  # 8.98434690701287e-05

m = FID()
m.update((y_pred, y))
print('ignite batch FID', m.compute())  # 8.98434072559458e-05
This snippet of code saves y_pred to a folder called pred and y to a folder called gt. I then installed pytorch-fid from the reference implementation's repo and ran
python -m pytorch_fid pred gt --num-workers 8
and the FID score I got was 5.980631998318767.
I would have expected the two FID scores to be the same, or at least to agree within numerical error. However, 5.980631998318767 and 8.98434072559458e-05 differ by about five orders of magnitude.
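For context on why a discrepancy this large cannot be numerical noise: both implementations ultimately compute the Fréchet distance between two Gaussians fitted to Inception features, ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1*C2)^(1/2)). A minimal sketch of that formula (my own illustration using numpy/scipy, mirroring the approach in pytorch-fid, not ignite's internal code) is:

```python
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; may come back complex
    # due to numerical error, in which case we keep the real part.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)


# Identical distributions should give a distance of (numerically) zero.
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_distance(mu, sigma, mu, sigma))  # ~0.0
```

Given the same feature statistics, any correct implementation should return the same value up to floating-point error, so a factor-of-10^5 gap points at a real bug rather than precision differences.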
Environment
- PyTorch Version (e.g., 1.4): 1.7.0
- Ignite Version (e.g., 0.3.0): 0.4.7
- OS (e.g., Linux): macOS 10.14.6
- How you installed Ignite (conda, pip, source): pip
- Python version: 3.8.5
- Any other relevant information: