
Commit 9e6fda1

Addressing code review comments.
1 parent f47a590 commit 9e6fda1

File tree

2 files changed: +5 −3 lines changed

references/classification/README.md

1 addition, 1 deletion:

````diff
@@ -208,7 +208,7 @@ torchrun --nproc_per_node=8 train.py\
 ```
 
 Note that the above command corresponds to training on a single node with 8 GPUs.
-For generatring the pre-trained weights, we trained with 2 nodes, each with 8 GPUs (for a total of 32 GPUs),
+For generatring the pre-trained weights, we trained with 2 nodes, each with 8 GPUs (for a total of 16 GPUs),
 and `--batch_size 64`.
 
 ## Mixed precision training
````
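The README fix is simple arithmetic: 2 nodes with 8 GPUs each gives 16 processes, not 32. A quick sanity check, assuming (as is common for the torchvision reference scripts, though this helper itself is hypothetical) that `--batch_size` is per GPU so the effective global batch is per-GPU batch times world size:

```python
def world_size(nodes: int, gpus_per_node: int) -> int:
    """Total number of distributed processes, one per GPU."""
    return nodes * gpus_per_node

ws = world_size(2, 8)        # 2 nodes x 8 GPUs each
global_batch = ws * 64       # --batch_size 64 per GPU (assumed per-process)

print(ws, global_batch)      # 16 1024
```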

torchvision/prototype/models/convnext.py

4 additions, 2 deletions:

```diff
@@ -23,7 +23,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None:
         self.channels_last = kwargs.pop("channels_last", False)
         super().__init__(*args, **kwargs)
 
-    def forward(self, x):
+    def forward(self, x: Tensor) -> Tensor:
         # TODO: Benchmark this against the approach described at https://github.com/pytorch/vision/pull/5197#discussion_r786251298
         if not self.channels_last:
             x = x.permute(0, 2, 3, 1)
@@ -34,7 +34,9 @@ def forward(self, x):
 
 
 class CNBlock(nn.Module):
-    def __init__(self, dim, layer_scale: float, stochastic_depth_prob: float, norm_layer: Callable[..., nn.Module]):
+    def __init__(
+        self, dim, layer_scale: float, stochastic_depth_prob: float, norm_layer: Callable[..., nn.Module]
+    ) -> None:
         super().__init__()
         self.block = nn.Sequential(
             ConvNormActivation(
```
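The newly annotated `forward` permutes the input from NCHW to NHWC (`x.permute(0, 2, 3, 1)`) when the module is not already channels-last. A minimal stdlib-only sketch of what that permutation does to a tensor's shape — `permute_shape` is a hypothetical helper for illustration, not part of torchvision:

```python
def permute_shape(shape: tuple, dims: tuple) -> tuple:
    """Mirror Tensor.permute's shape effect: new axis i takes old axis dims[i]."""
    return tuple(shape[d] for d in dims)

# NCHW -> NHWC, as in the forward() shown in the diff
nchw = (8, 3, 224, 224)                    # batch, channels, height, width
nhwc = permute_shape(nchw, (0, 2, 3, 1))

print(nhwc)  # (8, 224, 224, 3)
```

Moving channels to the last dimension is what lets `nn.LayerNorm` normalize over the channel axis, which is why the ConvNeXt block permutes before (and back after) the norm.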
