
feat: rules-based MTP export for quantized models#1494

Draft
yeyu-nvidia wants to merge 2 commits into main from yeyu/mtp-export-rules


Conversation

@yeyu-nvidia
Contributor

Summary

  • Replace the hacky _get_mtp_state_dict() that copied BF16 weights from HF pretrained model with a proper rules-based export that handles quantized MTP weights (NVFP4, FP8)
  • Support both repeated MTP (Nemotron nested HybridStack with mixed Mamba/Attention layers) and non-repeated MTP (DeepSeek style)
  • Reuse existing decoder layer export methods by replacing backbone→mtp in rule prefixes, mirroring the import side's is_mtp=True behavior
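The prefix-replacement idea can be sketched as follows. This is an illustrative sketch only: the function name, rule dictionary shape, and key strings are assumptions, not the actual ModelOpt export API.

```python
def mtp_rules_from_decoder_rules(decoder_rules: dict[str, str]) -> dict[str, str]:
    """Derive MTP export rules by reusing decoder-layer rules.

    Hypothetical helper: only the exported (HF-side) key prefix changes
    from "backbone." to "mtp."; the rule body is reused unchanged,
    mirroring the import side's is_mtp=True behavior.
    """
    return {
        source_key: target_key.replace("backbone.", "mtp.", 1)
        for source_key, target_key in decoder_rules.items()
    }

# Illustrative rule (key names are made up for the example):
decoder_rules = {
    "decoder.layers.mixer.in_proj": "backbone.layers.{}.mixer.in_proj.weight",
}
print(mtp_rules_from_decoder_rules(decoder_rules))
# {'decoder.layers.mixer.in_proj': 'mtp.layers.{}.mixer.in_proj.weight'}
```

Because only the prefix differs, quantized tensors (NVFP4/FP8 scales and packed weights) flow through the same per-layer export logic as the backbone instead of being copied from the BF16 checkpoint.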

Context

Nemotron-3.5-Nano NVFP4 QAD pipeline needs to export quantized MTP weights. The old hack just copied BF16 weights from the original HF checkpoint, ignoring any quantization applied to MTP layers.

Verified against the Nemotron-3.5-Nano HF checkpoint — all 270 MTP weight keys match the expected naming convention.
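A key-naming check like the one described can be sketched as below; the regex pattern is an assumption about the convention, not the exact schema of the Nemotron checkpoint.

```python
import re

# Hypothetical convention: "mtp.layers.<idx>.<dotted param path>".
MTP_KEY_RE = re.compile(r"^mtp\.layers\.\d+\.[\w.]+$")

def unexpected_mtp_keys(state_dict_keys: list[str]) -> list[str]:
    """Return MTP keys that do not follow the expected naming convention."""
    return [
        k for k in state_dict_keys
        if k.startswith("mtp.") and not MTP_KEY_RE.fullmatch(k)
    ]

keys = [
    "mtp.layers.0.mixer.in_proj.weight",
    "mtp.layers.0.norm.weight",
    "model.embed_tokens.weight",  # non-MTP key, ignored by the check
]
assert unexpected_mtp_keys(keys) == []
```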

Test plan

  • Existing GPU Megatron export tests pass
  • End-to-end export of quantized Nemotron-3.5-Nano with MTP produces correct safetensors
  • Exported checkpoint loads correctly in vLLM/TRT-LLM

🤖 Generated with Claude Code

Replace the hacky _get_mtp_state_dict that copied BF16 weights from
the HF pretrained model with a proper rules-based export that handles
quantized MTP weights (NVFP4, FP8) through the existing export rules
system.

Supports both repeated MTP (Nemotron nested HybridStack) and
non-repeated MTP (DeepSeek style). Uses backbone→mtp prefix
replacement to reuse decoder layer export methods for MTP inner
layers, mirroring the import side's is_mtp=True behavior.
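The two MTP layouts the exporter must distinguish can be illustrated as below. The key prefixes are assumptions for illustration; the real checkpoint schema lives in the export rules.

```python
def mtp_weight_prefixes(
    repeated: bool, num_inner_layers: int = 1, backbone_depth: int = 0
) -> list[str]:
    """Illustrative per-layer key prefixes for the two MTP layouts."""
    if repeated:
        # Nemotron-style repeated MTP: a nested HybridStack of mixed
        # Mamba/Attention layers, each exported under its own index.
        return [f"mtp.layers.{i}" for i in range(num_inner_layers)]
    # DeepSeek-style non-repeated MTP: one extra layer appended
    # after the backbone depth.
    return [f"model.layers.{backbone_depth}"]

assert mtp_weight_prefixes(repeated=True, num_inner_layers=3) == [
    "mtp.layers.0", "mtp.layers.1", "mtp.layers.2"
]
assert mtp_weight_prefixes(repeated=False, backbone_depth=61) == ["model.layers.61"]
```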

Signed-off-by: Ye Yu <yey@nvidia.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>
@copy-pr-bot

copy-pr-bot Bot commented May 14, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@coderabbitai
Contributor

coderabbitai Bot commented May 14, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 11627bff-6d0c-440b-9e30-ab04201a8941

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions
Contributor

PR Preview Action v1.8.1


🚀 View preview at
https://NVIDIA.github.io/Model-Optimizer/pr-preview/pr-1494/

Built to branch gh-pages at 2026-05-14 17:43 UTC.
Preview will be ready when the GitHub Pages deployment is complete.

@codecov

codecov Bot commented May 14, 2026

Codecov Report

❌ Patch coverage is 6.09756% with 77 lines in your changes missing coverage. Please review.
✅ Project coverage is 71.22%. Comparing base (5887410) to head (38e175b).
⚠️ Report is 72 commits behind head on main.

Files with missing lines                          Patch %   Lines
modelopt/torch/export/unified_export_megatron.py  6.09%     77 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1494      +/-   ##
==========================================
- Coverage   75.69%   71.22%   -4.47%     
==========================================
  Files         467      479      +12     
  Lines       50334    57875    +7541     
==========================================
+ Hits        38099    41221    +3122     
- Misses      12235    16654    +4419     
Flag   Coverage Δ
unit   52.50% <6.09%> (-0.22%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


The modelopt_gpt_hybrid_builder creates a HybridModel (not MambaModel)
when --export-model-type is MambaModel/HybridModel. Since MambaModel
inherits from HybridModel, the isinstance check needs to include
HybridModel directly.

Signed-off-by: Ye Yu <yeyu@nvidia.com>
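A minimal sketch of the isinstance fix described in this commit, using stand-in classes (the real `HybridModel`/`MambaModel` live in Megatron/ModelOpt):

```python
class HybridModel:  # stand-in for the real Megatron class
    pass

class MambaModel(HybridModel):  # stand-in; inherits from HybridModel
    pass

def is_supported(model) -> bool:
    # Before the fix, a check like isinstance(model, MambaModel) rejected
    # plain HybridModel instances produced by modelopt_gpt_hybrid_builder.
    # Checking HybridModel covers both, since MambaModel is a subclass.
    return isinstance(model, HybridModel)

assert is_supported(MambaModel())
assert is_supported(HybridModel())
assert not is_supported(object())
```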
