
Commit ca87833

Merge pull request #593 from mlcommons/dev
Dev -> main
2 parents 7598e02 + 8e21b4e commit ca87833

File tree

2 files changed: +1 −4 lines changed

CHANGELOG.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 # Change Log
 
-## [0.1.0] - 2023-11-21
+## algoperf-benchmark-0.1.0 (2023-11-28)
 
 First release of the AlgoPerf: Training algorithms benchmarking code.

DOCUMENTATION.md

Lines changed: 0 additions & 3 deletions
@@ -577,6 +577,3 @@ The JAX and PyTorch versions of the Criteo, FastMRI, Librispeech, OGBG, and WMT
 Since we use PyTorch's [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel) implementation, there is one Python process for each device. Depending on the hardware and the settings of the cluster, running a TensorFlow input pipeline in each Python process can lead to errors, since too many threads are created in each process. See [this PR thread](https://github.com/mlcommons/algorithmic-efficiency/pull/85) for more details.
 While this issue might not affect all setups, we currently implement a different strategy: we only run the TensorFlow input pipeline in one Python process (with `rank == 0`), and [broadcast](https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast) the batches to all other devices. This introduces an additional communication overhead for each batch. See the [implementation for the WMT workload](https://github.com/mlcommons/algorithmic-efficiency/blob/main/algorithmic_efficiency/workloads/wmt/wmt_pytorch/workload.py#L215-L288) as an example.
 
-### Pytorch Conformer CUDA OOM
-
-The Conformer PyTorch workload may run out of memory in the current state. Please set the `submission_runner.py` flag `reduce_pytorch_max_split_size` to `True` as a temporary workaround if you encounter this issue. This will set `max_split_size_mb:256`. Note that this will adversely impact the performance of the submission on this workload. See [tracking issue](https://github.com/mlcommons/algorithmic-efficiency/issues/497).
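The rank-0 broadcast strategy described in the unchanged context lines above can be illustrated with a minimal sketch. This is a hypothetical helper, not the actual WMT workload code; it assumes `torch.distributed` is already initialized and that all processes agree on the batch shape and dtype in advance.

```python
# Minimal sketch of the "input pipeline on rank 0 only, broadcast to the rest"
# pattern. Hypothetical helper; names and shapes are illustrative and assume a
# torch.distributed process group has already been initialized.
import torch
import torch.distributed as dist


def get_batch(input_iterator, batch_shape, device):
  """Pull a batch on rank 0 only and broadcast it to every other process."""
  if dist.get_rank() == 0:
    # Only this process runs the (TensorFlow) input pipeline.
    batch = next(input_iterator).to(device)
  else:
    # Other processes allocate an empty buffer with the agreed-upon shape
    # and dtype to receive the broadcast batch.
    batch = torch.empty(batch_shape, device=device)
  # Copy the batch from rank 0 to all other processes in the group.
  dist.broadcast(batch, src=0)
  return batch
```

This avoids spawning a TensorFlow input pipeline (and its thread pool) in every DDP process, at the cost of one broadcast per batch, as noted in the documentation above.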
