bionemo-recipes/recipes/llama3_native_te/README.md (11 additions, 6 deletions)
@@ -46,16 +46,21 @@ Alternatively, the dependencies can be installed manually in an environment with
 
 ### Performance Benchmarks
 
-We compared the performance and convergence of this Llama3 recipe (with FSDP2) against NeMo 2.0 (https://github.com/NVIDIA-NeMo/NeMo)
-on the Lingua-1B dataset. See [Training on Natural Language Data (Lingua Reproduction)](#lingua-reproduction) for more
-details. The figure above shows similar loss convergence and step time to the NeMo 2.0 training example, and the
-following table shows downstream performance on various tasks using the
+<p align="center">
+  <img src="../../../docs/docs/assets/images/recipes/lingua-1b-loss-curve.png" alt="Llama 3 Lingua 1B Loss Curve" width="49%" />
+  <img src="../../../docs/docs/assets/images/recipes/lingua-1b-step-time.png" alt="Llama 3 Lingua 1B Step Time" width="49%" />
+</p>
+
+We compared the performance and convergence of this Llama3 recipe (with FSDP2) against NeMo 2.0
+(https://github.com/NVIDIA-NeMo/NeMo) and the [facebookresearch/lingua](https://github.com/facebookresearch/lingua)
+implementation on the DCLM Baseline 1.0 dataset. See [Training on Natural Language Data (Lingua
+Reproduction)](#lingua-reproduction) for more details. The figure above shows similar loss convergence and step time to
+the NeMo 2.0 training example, and the following table shows downstream performance on various tasks using the