Commit 97759a4

realAsma and claude committed

Refactor llm_qat example with YAML configs and clean data pipeline

Replaces launch.sh with YAML-driven configs, adds ModelOptArgParser with --config support, and moves dataset processing params from the blend YAML to DataArguments for full CLI overrideability.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: realAsma <akuriparambi@nvidia.com>

1 parent af2fe24 · commit 97759a4

40 files changed

Lines changed: 2375 additions & 985 deletions
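The commit message describes a `--config` pattern in which YAML values seed the parser while explicit CLI flags still take precedence. Below is a minimal standard-library sketch of that precedence, using a JSON config file to stay dependency-free; `parse_with_config` and the flags shown are illustrative only, not the actual `ModelOptArgParser` API.

```python
import argparse
import json


def parse_with_config(argv):
    """Parse args, letting a config file supply defaults that explicit CLI flags override."""
    # First pass: extract only --config, ignoring everything else.
    pre = argparse.ArgumentParser(add_help=False)
    pre.add_argument("--config", default=None)
    known, _ = pre.parse_known_args(argv)

    # Second pass: the full parser (flags mirror two DataArguments defaults).
    parser = argparse.ArgumentParser(parents=[pre])
    parser.add_argument("--train_samples", type=int, default=20000)
    parser.add_argument("--dataset_seed", type=int, default=42)

    if known.config is not None:
        with open(known.config) as f:
            # Config values become parser *defaults*, so flags given on the
            # command line still win over the file.
            parser.set_defaults(**json.load(f))

    return parser.parse_args(argv)
```

The two-pass structure is the key design choice: resolving `--config` first lets the file's values be installed as defaults before the real parse, which is what makes CLI overrides work without any custom merge logic.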

.pre-commit-config.yaml

Lines changed: 16 additions & 1 deletion

```diff
@@ -93,7 +93,7 @@ repos:
               examples/llm_eval/lm_eval_hf.py|
               examples/llm_eval/mmlu.py|
               examples/llm_eval/modeling.py|
-              examples/llm_qat/main.py|
+              examples/llm_qat/train.py|
               examples/llm_sparsity/weight_sparsity/finetune.py|
               examples/specdec_bench/specdec_bench/models/specbench_medusa.py|
               examples/speculative_decoding/main.py|
@@ -122,6 +122,21 @@ repos:
         args: ["-c", "pyproject.toml", "-q"]
         additional_dependencies: ["bandit[toml]"]

+  - repo: local
+    hooks:
+      - id: generate-arguments-md
+        name: Regenerate examples/llm_qat/ARGUMENTS.md
+        entry: bash -c 'python examples/llm_qat/train.py --generate_docs examples/llm_qat/ARGUMENTS.md'
+        language: system
+        files: >-
+          (?x)^(
+            examples/llm_qat/arguments\.py|
+            examples/llm_qat/train\.py|
+            modelopt/torch/opt/plugins/transformers\.py|
+            modelopt/torch/quantization/plugins/transformers_trainer\.py
+          )$
+        pass_filenames: false
+
   - repo: https://github.com/DavidAnson/markdownlint-cli2
     rev: v0.18.1
     hooks:
```

CHANGELOG.rst

Lines changed: 3 additions & 0 deletions

```diff
@@ -13,6 +13,9 @@ NVIDIA Model Optimizer Changelog
 - Enable PTQ workflow for the Step3.5-Flash MoE model with NVFP4 W4A4 + FP8 KV cache quantization. See `modelopt_recipes/models/Step3.5-Flash/nvfp4-mlp-only.yaml <https://github.com/NVIDIA/Model-Optimizer/blob/main/modelopt_recipes/models/Step3.5-Flash/nvfp4-mlp-only.yaml>`_ for more details.
 - Add support for vLLM fakequant reload using ModelOpt state for HF models. See `examples/vllm_serve/README.md <https://github.com/NVIDIA/Model-Optimizer/tree/main/examples/vllm_serve#load-qatptq-model-and-serve-in-vllm-wip>`_ for more details.
 - [Early Testing] Add Claude Code PTQ skill (``.claude/skills/ptq/``) for agent-assisted post-training quantization. The skill guides the agent through environment detection, model support checking, format selection, and execution via the launcher or manual SLURM/Docker/bare GPU paths. Includes handling for unlisted models with custom module patching. This feature is in early testing — use with caution.
+- Refactor ``llm_qat`` example with unified YAML-based configuration and flexible dataset blending.
+  ``ModelOptArgParser`` adds ``--config`` YAML support with CLI overrides and auto-generates ``ARGUMENTS.md`` from dataclass definitions.
+  Dataset blending (``configs/dataset/blend.yaml``) supports HuggingFace datasets, local JSON/JSONL/Parquet files, and weighted multi-source blends.

 **Backward Breaking Changes**
```
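The "weighted multi-source blends" mentioned in the changelog entry could look roughly like the following. This is a hypothetical sketch of a blend config, not the actual schema of `configs/dataset/blend.yaml`; the dataset names and field names are illustrative.

```yaml
# Hypothetical blend config sketch -- field names and datasets are
# illustrative, not the example's actual schema.
datasets:
  - name: HuggingFaceH4/ultrachat_200k   # a Hugging Face hub dataset
    split: train_sft
    weight: 0.7
  - path: data/domain_sft.jsonl          # a local JSONL file
    weight: 0.3
```

Per the commit message, per-run processing knobs (sample counts, seed, shuffle buffer) live in `DataArguments` rather than in the blend YAML, so they stay overridable from the command line.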
examples/llm_qad/README.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -2,6 +2,8 @@

 Quantization-Aware Distillation (QAD) training scripts for language models using Megatron-LM. These scripts enable training quantized (e.g., NVFP4) student models with knowledge distillation from full-precision teacher models.

+> **Note:** For Hugging Face LLM QAD, see the [LLM QAT QAD section](../llm_qat/README.md#end-to-end-qad-example).
+
 ## Overview

 | Script | Purpose |
```

examples/llm_qat/.gitignore

Lines changed: 1 addition & 0 deletions (new file)

```diff
@@ -0,0 +1 @@
+.cache/
```

examples/llm_qat/ARGUMENTS.md

Lines changed: 50 additions & 0 deletions (new file)

```markdown
# Argument Reference

_Auto-generated — do not edit by hand._

## DistillArguments

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--distill` | `bool` | `False` | Enable training with knowledge distillation. |
| `--teacher_model` | `str` | `None` | The name or path of the teacher model to use for distillation. |
| `--criterion` | `str` | `"logits_loss"` | Distillation loss criterion. Currently only 'logits_loss' is supported. |

## DataArguments

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--dataset_config` | `str` | `"configs/dataset/blend.yaml"` | Path to a dataset blend YAML config file. |
| `--train_samples` | `int` | `20000` | Number of training samples to draw from the blend. |
| `--eval_samples` | `int` | `2000` | Number of evaluation samples to draw from the blend. |
| `--dataset_seed` | `int` | `42` | Random seed for dataset shuffling. |
| `--dataset_cache_dir` | `str` | `".dataset_cache/tokenized"` | Directory for caching tokenized datasets. |
| `--shuffle` | `bool` | `True` | Whether to shuffle dataset sources (reservoir sampling). |
| `--shuffle_buffer` | `int` | `10000` | Buffer size for streaming shuffle. |
| `--num_proc` | `int` | `16` | Number of CPU workers for tokenization. |

## ModelArguments

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--model_name_or_path` | `str` | `"meta-llama/Llama-2-7b-hf"` | |
| `--model_max_length` | `int` | `4096` | Maximum sequence length. Sequences will be right padded (and possibly truncated). |

## QuantizeArguments

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--recipe` | `str` | `None` | Path to a quantization recipe YAML file (built-in or custom). Built-in recipes can be specified by relative path, e.g. 'general/ptq/nvfp4_default-fp8_kv'. |
| `--calib_size` | `int` | `512` | Specify the calibration size for quantization. The calibration dataset is used to setup the quantization scale parameters. |
| `--calib_batch_size` | `int` | `1` | Batch size for calibration data during quantization. |
| `--compress` | `bool` | `False` | Whether to compress the model weights after quantization for QLoRA. This is useful for reducing the model size. |
| `--quantize_output_dir` | `str` | `"quantized_model"` | Directory to save the quantized model checkpoint. |

## TrainingArguments

Extends [HuggingFace TrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments). Only additional/overridden arguments are shown below.

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--cache_dir` | `str` | `None` | |
| `--lora` | `bool` | `False` | Whether to add LoRA (Low-Rank Adaptation) adapter before training. When using real quantization, the LoRA adapter must be set, as quantized weights will be frozen during training. |
```
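ARGUMENTS.md is auto-generated from dataclass definitions (per the changelog entry and the new pre-commit hook). A hypothetical sketch of how such a generator might map dataclass fields onto the table format above; `args_table` and the trimmed-down `DataArguments` here are illustrative, not ModelOpt's actual code.

```python
from dataclasses import MISSING, dataclass, field, fields


@dataclass
class DataArguments:
    # Two fields borrowed from the table above, with help text in metadata.
    train_samples: int = field(
        default=20000,
        metadata={"help": "Number of training samples to draw from the blend."},
    )
    dataset_seed: int = field(
        default=42, metadata={"help": "Random seed for dataset shuffling."}
    )


def args_table(cls) -> str:
    """Render a dataclass as a Markdown argument-reference table."""
    rows = [
        "| Argument | Type | Default | Description |",
        "|----------|------|---------|-------------|",
    ]
    for f in fields(cls):
        type_name = getattr(f.type, "__name__", str(f.type))
        default = "" if f.default is MISSING else repr(f.default)
        rows.append(
            f"| `--{f.name}` | `{type_name}` | `{default}` | {f.metadata.get('help', '')} |"
        )
    return "\n".join(rows)
```

Generating the doc from the same dataclasses the parser consumes keeps the reference from drifting out of sync with the code, which is also why the pre-commit hook regenerates it whenever `arguments.py` or `train.py` changes.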
