Commit 0d76a65

cmon chief
1 parent 9c4f0bb commit 0d76a65

File tree

1 file changed (+2, -2)


docs/source/tutorials/memory_optimizations.rst

Lines changed: 2 additions & 2 deletions
@@ -302,7 +302,7 @@ As above, these parameters are also specified under the ``model`` flag or config
     model.apply_lora_to_mlp=True \
     model.lora_attn_modules=["q_proj","k_proj","v_proj"] \
     model.lora_rank=32 \
-    model.lora_rank=64
+    model.lora_alpha=64

 .. code-block:: yaml

@@ -421,7 +421,7 @@ even more memory savings!
   apply_lora_to_mlp: True
   lora_attn_modules: ["q_proj", "k_proj", "v_proj"]
   lora_rank: 16
-  lora_rank: 32
+  lora_alpha: 32
   use_dora: True
   quantize_base: True
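The fix above replaces a duplicated `lora_rank` key with `lora_alpha`, which is a distinct hyperparameter: in standard LoRA, the low-rank update `BA` is scaled by `alpha / rank` before being added to the frozen weight. A minimal illustrative sketch (not torchtune's actual implementation; function name is hypothetical):

```python
# Sketch of the standard LoRA scaling rule: the adapter update is
# multiplied by lora_alpha / lora_rank, so alpha and rank are separate
# knobs and must not shadow each other in a config.
# (Illustrative only; `lora_scaling` is a hypothetical helper.)

def lora_scaling(lora_rank: int, lora_alpha: int) -> float:
    """Return the scaling factor applied to the low-rank update BA."""
    return lora_alpha / lora_rank

# The two corrected configs in this commit both end up with scale 2.0:
print(lora_scaling(32, 64))   # rank=32, alpha=64
print(lora_scaling(16, 32))   # rank=16, alpha=32
```

With the original typo (`lora_rank` listed twice), the second key would silently override the first and `lora_alpha` would fall back to its default, changing the effective scale.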
