
Conversation

@kinjalpatel27 (Contributor) commented Dec 20, 2025

What does this PR do?

Type of change: Feature extension

Overview:
Added support for quantizing the KV cache in vLLM fakequant by adding quantization support for MLAAttention (QuantMLAAttention); see the sketch below.
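Conceptually, the wrapper attaches quantizers to the attention module itself, so the query and the compressed KV latent can be fake-quantized before they reach the attention kernel / KV cache. A minimal, self-contained sketch of the idea follows; it is not the ModelOpt/vLLM implementation — the forward signature and `fake_quant_stub` are illustrative placeholders, and only the attribute names `q_bmm_quantizer` / `kv_c_bmm_quantizer` come from the module dump later in this PR.

```python
import torch
import torch.nn as nn


def fake_quant_stub(x: torch.Tensor) -> torch.Tensor:
    # Placeholder for a TensorQuantizer-style fake-quant op
    # (quantize -> dequantize, same dtype). The real NVFP4 block
    # quantization is handled by ModelOpt's TensorQuantizer.
    return x


class QuantMLAAttentionSketch(nn.Module):
    """Sketch of where the q / kv_c quantizers sit around MLA attention."""

    def __init__(self, attn: nn.Module):
        super().__init__()
        self.attn = attn  # the wrapped MLA attention module
        self.q_bmm_quantizer = fake_quant_stub
        self.kv_c_bmm_quantizer = fake_quant_stub

    def forward(self, q, kv_c_normed, k_pe):
        # Fake-quantize the query and the compressed KV latent so that the
        # cached latent (and the attention math) see quantization error.
        q = self.q_bmm_quantizer(q)
        kv_c_normed = self.kv_c_bmm_quantizer(kv_c_normed)
        return self.attn(q, kv_c_normed, k_pe)
```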

Usage

Please refer to the README. Example:

KV_QUANT_CFG=NVFP4_KV_CFG QUANT_CFG=NVFP4_DEFAULT_CFG python vllm_serve_fakequant.py deepseek-ai/DeepSeek-V2 --served-model-name deepseek-ai/DeepSeek-V2 --host 0.0.0.0 --port 8001 --trust-remote-code --enforce-eager --gpu-memory-utilization 0.8  
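Once the server is up, it exposes vLLM's standard OpenAI-compatible API on the chosen port, so a quick smoke test could look like the following (the prompt and request fields are just examples):

```python
import requests

# Smoke-test the fakequant server started by the command above.
resp = requests.post(
    "http://localhost:8001/v1/completions",
    json={
        "model": "deepseek-ai/DeepSeek-V2",
        "prompt": "The capital of France is",
        "max_tokens": 16,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```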

Testing

Locally tested KV cache quantization; the quantized module tree for one DeepSeek-V2 attention layer is shown below.

(rotary_emb): DeepseekScalingRotaryEmbedding()
(mla_attn): MultiHeadLatentAttentionWrapper(
  (fused_qkv_a_proj): QuantMergedColumnParallelLinear(
    in_features=5120, output_features=2112, bias=False, tp_size=1, gather_output=False
    (input_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=141.0000 calibrator=MaxCalibrator quant)
    (weight_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=1.4297 calibrator=MaxCalibrator quant)
    (output_quantizer): TensorQuantizer(disabled)
  )
  (q_a_layernorm): RMSNorm(hidden_size=1536, eps=1e-06)
  (q_b_proj): QuantColumnParallelLinear(
    in_features=1536, output_features=3072, bias=False, tp_size=8, gather_output=False
    (input_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=32.0000 calibrator=MaxCalibrator quant)
    (weight_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=0.1670 calibrator=MaxCalibrator quant)
    (output_quantizer): TensorQuantizer(disabled)
  )
  (kv_a_layernorm): RMSNorm(hidden_size=512, eps=1e-06)
  (kv_b_proj): QuantColumnParallelLinear(
    in_features=512, output_features=4096, bias=False, tp_size=8, gather_output=False
    (input_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=7.5312 calibrator=MaxCalibrator quant)
    (weight_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=0.2773 calibrator=MaxCalibrator quant)
    (output_quantizer): TensorQuantizer(disabled)
  )
  (rotary_emb): DeepseekScalingRotaryEmbedding()
  (o_proj): QuantRowParallelLinear(
    in_features=2048, output_features=5120, bias=False, tp_size=8, reduce_results=True
    (input_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=1.7188 calibrator=MaxCalibrator quant)
    (weight_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=0.4336 calibrator=MaxCalibrator quant)
    (output_quantizer): TensorQuantizer(disabled)
  )
  (mla_attn): QuantMLAAttention(
    (q_bmm_quantizer): TensorQuantizer(disabled)
    (kv_c_bmm_quantizer): TensorQuantizer((2, 1) bit fake block_sizes={-1: 16, 'type': 'dynamic', 'scale_bits': (4, 3)}, amax=7.5312 calibrator=MaxCalibrator quant)
  )
)
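A dump like the one above can be reproduced after calibration by walking the model and printing its TensorQuantizer submodules. A small helper sketch, where `model` stands for whatever module tree the fakequant script holds:

```python
# Print every TensorQuantizer in the (fake)quantized model; its repr shows
# whether it is enabled plus the amax / block-size configuration.
for name, module in model.named_modules():
    if type(module).__name__ == "TensorQuantizer":
        print(f"{name}: {module}")
```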

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: No
  • Did you add or update any necessary documentation?: N/A
  • Did you update Changelog?: N/A

Additional Information

@codecov

codecov bot commented Dec 20, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.65%. Comparing base (b286165) to head (4d2f50a).
⚠️ Report is 4 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #714      +/-   ##
==========================================
- Coverage   74.73%   74.65%   -0.09%     
==========================================
  Files         192      192              
  Lines       18870    18909      +39     
==========================================
+ Hits        14103    14117      +14     
- Misses       4767     4792      +25     
