
Commit 20c25da

mdeff authored and maximegmd committed
fix typos (meta-pytorch#1113)
1 parent: 7f8a01a

File tree: 3 files changed, +3 -3 lines changed


tests/torchtune/models/mistral/scripts/compare_mistral_classifier.py

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ def mistral(
     rope_base: int = 10_000,
 ) -> TransformerDecoder:
     """
-    Build the decoder assoicated with the mistral model. This includes:
+    Build the decoder associated with the mistral model. This includes:
     - Token embeddings
     - num_layers number of TransformerDecoderLayer blocks
     - RMS Norm layer applied to the output of the transformer

torchtune/models/llama2/_component_builders.py

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ def llama2(
     norm_eps: float = 1e-5,
 ) -> TransformerDecoder:
     """
-    Build the decoder assoicated with the Llama2 model. This includes:
+    Build the decoder associated with the Llama2 model. This includes:
     - Token embeddings
     - num_layers number of TransformerDecoderLayer blocks
     - RMS Norm layer applied to the output of the transformer

torchtune/models/mistral/_component_builders.py

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ def mistral(
     rope_base: int = 10_000,
 ) -> TransformerDecoder:
     """
-    Build the decoder assoicated with the mistral model. This includes:
+    Build the decoder associated with the mistral model. This includes:
     - Token embeddings
     - num_layers number of TransformerDecoderLayer blocks
     - RMS Norm layer applied to the output of the transformer
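For context, the corrected docstrings belong to torchtune's model component builders, which assemble a TransformerDecoder from token embeddings, a stack of TransformerDecoderLayer blocks, and a final RMS Norm over the output. A minimal sketch of how such a builder might be invoked follows; norm_eps appears in the diff above, while the import path, the other keyword names, and all values are assumptions for illustration, not part of this commit.

from torchtune.models.llama2 import llama2

# Assemble a toy-sized Llama2-style decoder. norm_eps is visible in the
# diff; the remaining keyword names are assumed here for illustration.
decoder = llama2(
    vocab_size=32_000,  # token embedding table size
    num_layers=4,       # number of TransformerDecoderLayer blocks
    num_heads=8,        # attention heads per layer
    num_kv_heads=8,     # key/value heads (equal to num_heads gives plain MHA)
    embed_dim=512,      # model hidden size
    max_seq_len=2_048,  # maximum context length
    norm_eps=1e-5,      # epsilon for the RMS Norm layers in the docstring
)

The returned TransformerDecoder should behave like an ordinary nn.Module, so it can be inspected or called on a batch of token IDs directly.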
