## Announcing the AlgoPerf: Training Algorithms Benchmark Competition
Neural networks must be trained to be useful. However, training is a resource-intensive task, often demanding extensive compute and energy.
To promote faster training algorithms, the [MLCommons® Algorithms Working Group](https://mlcommons.org/en/groups/research-algorithms/) is delighted to present the **AlgoPerf: Training Algorithms** benchmark. This benchmark competition is designed to measure neural network training speedups due to *algorithmic improvements*. We welcome submissions that implement both novel and existing training algorithms, including, but not limited to:
- Optimizer update rules
- Hyperparameter tuning protocols, search spaces, or schedules
- Data sampling strategies
Submissions can compete under two hyperparameter tuning rulesets (with separate prizes and awards): an external tuning ruleset meant to simulate tuning with a fixed amount of parallel resources, or a self-tuning ruleset for hyperparameter-free algorithms.
## Dates
- **Call for submissions: November 28th, 2023**
- Registration deadline to express non-binding intent to submit: January 28th, 2024
- **Submission deadline: March 28th, 2024**
- **Deadline for self-reporting preliminary results: May 28th, 2024**
- [tentative] Announcement of all results: July 15th, 2024
For a detailed and up-to-date timeline, see the [Competition Rules](/COMPETITION_RULES.md).
## Participation
For details on how to participate in the competition, please refer to our [Competition Rules](/COMPETITION_RULES.md). To learn more about the benchmark, see our [technical documentation](/DOCUMENTATION.md). The benchmark is further motivated, explained, and justified in the accompanying [paper](https://arxiv.org/abs/2306.07179). We require all submissions to be provided under the open-source [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
## Prize Money & Funding
MLCommons has provided a total of $50,000 in prize money for eligible winning submissions. We are also grateful to Google for generously providing computational resources to score the top submissions, and to help score promising submissions from submitters with more limited resources.
The remainder of this document is a guide for contributors. It covers:

- [Authentication for Google Cloud Container Registry](#authentication-for-google-cloud-container-registry)
- [Installation](#installation)
- [Docker Workflows](#docker-workflows)
  - [Pre-built Images on Google Cloud Container Registry](#pre-built-images-on-google-cloud-container-registry)
    - [Trigger Rebuild and Push of Maintained Images](#trigger-rebuild-and-push-of-maintained-images)
    - [Trigger Build and Push of Images on Other Branch](#trigger-build-and-push-of-images-on-other-branch)
  - [GCP Data and Experiment Integration](#gcp-data-and-experiment-integration)
    - [Downloading Data from GCP](#downloading-data-from-gcp)
    - [Saving Experiments to GCP](#saving-experiments-to-gcp)
- [Submitting PRs](#submitting-prs)
- [Testing](#testing)
  - [Style Testing](#style-testing)
  - [Unit and Integration Tests](#unit-and-integration-tests)
  - [Regression Tests](#regression-tests)
## Contributing to MLCommons
We invite everyone to look through our technical documentation and codebase and submit issues and pull requests, e.g., for changes, clarifications, or any bugs you might encounter. If you are interested in contributing to the work of the working group and influencing the benchmark's design decisions, please [join the weekly meetings](https://mlcommons.org/en/groups/research-algorithms/) and consider becoming a member of the working group.
The best way to contribute to MLCommons is to get involved with one of our many project communities. You can find more information about getting involved with MLCommons [here](https://mlcommons.org/en/get-involved/#getting-started).
To get started contributing code, you or your organization needs to sign the MLCommons CLA.
MLCommons project work is tracked with issue trackers and pull requests. Modify the project in your own fork and open a pull request once you want other developers to take a look at what you have done and discuss the proposed changes. Ensure that cla-bot and the other checks pass for your pull requests.
## Setup for Contributing
### Setting up a Linux VM on GCP
### Authentication for Google Cloud Container Registry

Use the gcloud credential helper as documented [here](https://cloud.google.com/a).
## Installation
If you have not installed the package and dependencies yet, see [Installation](/README.md#installation).
To use the development tools such as `pytest` or `pylint`, use the `dev` option.
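A minimal sketch of such an install, assuming the repository is a pip-installable package that defines a `dev` extra (the package layout is an assumption, not confirmed by this document):

```bash
# Editable install with the `dev` extra; the extra's name comes from the text above.
pip3 install -e '.[dev]'
```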
Then set up the pre-commit hooks:

```bash
pre-commit install
```
To get an installation with the requirements for all workloads and development, use the argument `[full_dev]`.
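For instance, mirroring the sketch above (same packaging assumptions):

```bash
# Editable install with the requirements for all workloads plus development tools.
pip3 install -e '.[full_dev]'
```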
## Docker Workflows
We recommend developing in our Docker image to ensure a consistent environment for developing, testing, and scoring submissions.
To get started, see also:
- [Installation with Docker](/GETTING_STARTED.md#docker)
- [Running a submission inside a Docker Container](/GETTING_STARTED.md#run-your-submission-in-a-docker-container)
### Pre-built Images on Google Cloud Container Registry
Several maintained images are available on the repository. To reference the pulled image you will have to use the full `image_path`, e.g.:
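As an illustration only, with a purely hypothetical registry path (substitute the actual `image_path` of a maintained image):

```bash
# Hypothetical path -- every placeholder below must be replaced with real values.
docker pull <region>-docker.pkg.dev/<project>/<repository>/<image_name>:<tag>
```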
#### Trigger Rebuild and Push of Maintained Images

To build and push all images (`pytorch`, `jax`, `both`) on maintained branches (`dev`, `main`), run:
```bash
bash docker/build_docker_images.sh -b <branch>
```
#### Trigger Build and Push of Images on Other Branch
You can also use the above script to build images from a different branch.
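For example, reusing the `-b` flag from the command above (the branch name is illustrative):

```bash
# Build and push images from a non-maintained branch.
bash docker/build_docker_images.sh -b my-feature-branch
```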
### GCP Data and Experiment Integration
The Docker entrypoint script can transfer data to and from our GCP buckets on our internal GCP project. If you are an approved contributor, you can get access to these resources to automatically download the datasets and upload experiment results.
You can use these features by setting the `--internal_contributor` flag to `true` for the Docker entrypoint script.
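A sketch of what that might look like when launching the container; the image name is a placeholder, and the forwarding of trailing arguments to the entrypoint script is an assumption, not confirmed by this document:

```bash
# Hypothetical invocation: flags after the image name are assumed to reach the entrypoint script.
docker run -it <algoperf_image> --internal_contributor true
```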
### Downloading Data from GCP
## Submitting PRs

New PRs will be merged on the dev branch by default, given that they pass the presubmit checks.
## Testing
We run tests with GitHub Actions, configured in the [.github/workflows](.github/workflows/) folder.
### Style Testing
```bash
pylint submission_runner.py
pylint tests
```
### Unit and Integration Tests
We run unit tests and integration tests as part of the GitHub Actions workflows as well.
You can also use `python tests/reference_algorithm_tests.py` to run a single model update and two model evals for each workload using the reference algorithm in `reference_algorithms/target_setting_algorithms/`.
### Regression Tests
We also have regression tests available in [.github/workflows/regression_tests.yml](.github/workflows/regression_tests.yml) that can be run semi-automatically.
The regression tests are shorter end-to-end submissions run in a containerized environment across all 8 workloads, in both the JAX and PyTorch frameworks.
The regression tests run on self-hosted runners and are triggered for pull requests that target the main branch. Typically these PRs will be from the `dev` branch, so the tests will run containers based on images built from the `dev` branch.
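If the workflow also exposes a manual `workflow_dispatch` trigger (an assumption, not stated here), a run could be kicked off semi-automatically with the GitHub CLI:

```bash
# Manually dispatch the regression-test workflow on the dev branch.
gh workflow run regression_tests.yml --ref dev
```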