
Commit c1fabb9

[docs] Minor fix for quantization example code (#1297)
1 parent c3851fd commit c1fabb9

File tree

2 files changed: +4 −4 lines changed


docs/source/tutorials/e2e_flow.rst

Lines changed: 2 additions & 2 deletions

@@ -327,8 +327,8 @@ To quantize the fine-tuned model after installing torchao we can run the following:
     # we also support `int8_weight_only()` and `int8_dynamic_activation_int8_weight()`, see
     # https://github.com/pytorch/ao/tree/main/torchao/quantization#other-available-quantization-techniques
     # for a full list of techniques that we support
-    from torchao.quantization.quant_api import quantize\_, int4_weight_only
-    quantize\_(model, int4_weight_only())
+    from torchao.quantization.quant_api import quantize_, int4_weight_only
+    quantize_(model, int4_weight_only())

     After quantization, we rely on torch.compile for speedups. For more details, please see `this example usage <https://github.com/pytorch/ao/blob/main/torchao/quantization/README.md#quantization-flow-example>`_.

docs/source/tutorials/llama3.rst

Lines changed: 2 additions & 2 deletions

@@ -247,8 +247,8 @@ To quantize the fine-tuned model after installing torchao we can run the following:
     # we also support `int8_weight_only()` and `int8_dynamic_activation_int8_weight()`, see
     # https://github.com/pytorch/ao/tree/main/torchao/quantization#other-available-quantization-techniques
     # for a full list of techniques that we support
-    from torchao.quantization.quant_api import quantize\_, int4_weight_only
-    quantize\_(model, int4_weight_only())
+    from torchao.quantization.quant_api import quantize_, int4_weight_only
+    quantize_(model, int4_weight_only())

     After quantization, we rely on torch.compile for speedups. For more details, please see `this example usage <https://github.com/pytorch/ao/blob/main/torchao/quantization/README.md#quantization-flow-example>`_.
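The two-character change matters beyond rendering: `\_` is a useful escape in reStructuredText inline markup, but inside a literal code block the backslash appears verbatim, and the old snippet would raise a `SyntaxError` if pasted into Python. A minimal stdlib sketch (the helper name `is_valid_python` is ours, not from the docs) shows the difference:

```python
import ast

def is_valid_python(src: str) -> bool:
    """Return True if `src` parses as Python source, False on SyntaxError."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

# Old docs form: the stray backslash before `_` is not a valid
# line continuation, so the parser rejects it.
assert not is_valid_python(r"quantize\_(model, int4_weight_only())")

# Corrected form: parses cleanly (running it would still require
# the torchao imports shown in the diff above).
assert is_valid_python("quantize_(model, int4_weight_only())")
```

This only checks syntax via `ast.parse`; it does not import or execute torchao, so it works without the library installed.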
