Conversation
Pull request overview
Refactors AutoRound toward a new “context + compressor + algorithm” architecture, introducing new compressors_new/ and context/ modules and updating scheme parsing/export helpers to support the new flow.
Changes:
- Added new context singletons (`ModelContext`, `CompressContext`) and a new `compressors_new` implementation path.
- Expanded scheme parsing to reconcile `bits`/`data_type` and to support user overrides plus AutoScheme integration.
- Added new calibration utilities and algorithm scaffolding for quantization backends (AutoRound/RTN).
Reviewed changes
Copilot reviewed 26 out of 26 changed files in this pull request and generated 18 comments.
| File | Description |
|---|---|
| auto_round/utils/model.py | Avoids runtime import cycles via TYPE_CHECKING for QuantizationScheme. |
| auto_round/schemes.py | Adds scheme override + parsing helpers and bits/dtype reconciliation. |
| auto_round/formats.py | Switches divisibility checks to global supported-layer constants. |
| auto_round/context/model_context.py | Introduces model lifecycle/loading + AMP setup and forward-hook management. |
| auto_round/context/compress_context.py | Introduces device/device_map and memory-usage knobs as shared context. |
| auto_round/context/base.py | Adds simple singleton context base. |
| auto_round/context/__init__.py | Package init for the new context module. |
| auto_round/compressors_new/utils.py | New utility module (layer config, gguf mapping, caching helpers, forward helpers). |
| auto_round/compressors_new/shard_writer.py | New shard-based saver with optional safetensors support. |
| auto_round/compressors_new/config.py | Introduces extra/legacy config dataclasses for the new compressor path. |
| auto_round/compressors_new/base.py | New “BaseCompressor” implementation wiring contexts, formats, caching, quant loop. |
| auto_round/compressors_new/__init__.py | Package init for compressors_new. |
| auto_round/compressors/utils.py | Extends legacy layer-config resolution to include safetensors-only tensors and skip missing modules. |
| auto_round/calibration/utils.py | Adds helpers for “early stop” caching and input reshaping for block tuning. |
| auto_round/calibration/__init__.py | Package init for calibration. |
| auto_round/algorithms/quantization/rtn/rtn.py | Adds placeholder RTN quantization module file. |
| auto_round/algorithms/quantization/rtn/config.py | Adds RTN algorithm config stub. |
| auto_round/algorithms/quantization/rtn/__init__.py | Package init for RTN quantization. |
| auto_round/algorithms/quantization/base.py | Adds base quantization class stub. |
| auto_round/algorithms/quantization/auto_round/quantize.py | Adds new AutoRound quantizer implementation (algorithm object). |
| auto_round/algorithms/quantization/auto_round/config.py | Adds new AutoRound algorithm config. |
| auto_round/algorithms/quantization/auto_round/__init__.py | Package init for the AutoRound quantization algorithm. |
| auto_round/algorithms/quantization/__init__.py | Package init for quantization algorithms. |
| auto_round/algorithms/base.py | Adds base algorithm stub. |
| auto_round/algorithms/alg_config.py | Adds base algorithm config stub. |
| auto_round/algorithms/__init__.py | Package init for algorithms. |
If there is already an algorithm folder, what is the purpose of the compressor folder?
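The table above lists a "simple singleton context base" (`auto_round/context/base.py`). A minimal sketch of that pattern, with illustrative field names rather than the PR's actual code, might look like:

```python
# Hypothetical sketch of a simple singleton context base; class layout and
# field names are illustrative, not taken from the PR.
class SingletonContext:
    _instances: dict = {}

    def __new__(cls, *args, **kwargs):
        # One shared instance per subclass, created lazily.
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]


class CompressContext(SingletonContext):
    def __init__(self, device: str = "cpu"):
        # __init__ runs on every construction, so guard against
        # resetting state that an earlier call already set.
        if not hasattr(self, "device"):
            self.device = device
```

Repeated construction then returns the same shared object, which is what makes the contexts usable as process-wide state.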
```python
import torch


class ExtraConfig:
```
ExtraConfig is a monolithic catch-all config class.
ExtraConfig bundles tuning, scheme, MLLM, and diffusion settings into a single class — the opposite of llm-compressor's approach where each modifier owns its own typed config. This "one object owns everything" pattern makes it harder to add new algorithms independently and is a carryover from the old monolithic design rather than a step toward the intended modular architecture.
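The per-modifier alternative the comment alludes to could be sketched roughly like this; all class names and fields below are illustrative, not the PR's actual configs:

```python
from dataclasses import dataclass, field

# Illustrative only: each algorithm owns its own typed config instead of
# one monolithic ExtraConfig. Names and defaults are assumptions.
@dataclass
class TuningConfig:
    iters: int = 200
    lr: float = 5e-3

@dataclass
class MLLMConfig:
    quant_nontext_module: bool = False

@dataclass
class AutoRoundAlgConfig:
    tuning: TuningConfig = field(default_factory=TuningConfig)

@dataclass
class RTNAlgConfig:
    # zero-shot RTN needs no tuning knobs
    pass
```

With this split, adding a new algorithm means adding one new config class rather than growing a shared catch-all.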
Despite this PR's goal of separating concerns into Context/Algorithm/Compressor, BaseCompressor still owns everything: config parsing, calibration data collection, forward hook management, quantization loop control, and model saving. By contrast, llm-compressor distributes these responsibilities across dedicated Pipeline (calibration), Modifier (algorithm logic), Session (lifecycle orchestration), and entrypoint (API) layers. The refactor restructures the file layout without achieving real decoupling.
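The llm-compressor-style layering the comment describes could be sketched as follows; the names are illustrative stand-ins, not this PR's classes:

```python
from typing import Protocol

# Hypothetical decomposition mirroring the Pipeline/Modifier/Session split
# described above; all names here are illustrative.
class Modifier(Protocol):
    def apply(self, model) -> None: ...

class Pipeline(Protocol):
    def calibrate(self, model, dataloader) -> None: ...

class Session:
    """Lifecycle orchestration only: run calibration, then each modifier."""
    def __init__(self, pipeline, modifiers):
        self.pipeline = pipeline
        self.modifiers = list(modifiers)

    def run(self, model, dataloader):
        self.pipeline.calibrate(model, dataloader)
        for mod in self.modifiers:
            mod.apply(model)
        return model
```

In that shape, `BaseCompressor` would shrink to an entrypoint that builds a `Session`, instead of owning calibration, hooks, the quant loop, and saving itself.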
Signed-off-by: n1ck-guo <heng.guo@intel.com>
for more information, see https://pre-commit.ci
```python
set_module(self.model, name, m)
tuning_device = m.tuning_device if hasattr(m, "tuning_device") else self.compress_context.device
# Step 1: let GGUF merge layers or rename modules first; the RTN-in-GGUF
# specific logic is handled afterwards.
if self.compress_context.is_immediate_packing and self.compress_context.formats[0].is_gguf():
```
Better to decouple formats from algorithms.
```python
        The output tensor of the block.
    """

    output = defaultdict(list)
```
Better to wrap this into a global function, e.g. `get_block_outputs(index, save_output)`, so the developer does not need to care about the model type or the cache device.
```python
)
from auto_round.logger import logger
from auto_round.modeling.fused_moe.replace_modules import materialize_model_
from auto_round.sign_sgd import SignSGD
```
Move `sign_sgd` to this folder.
```python
class ARQuantizer(BaseQuantizers):
    is_adam: bool = False
```
It would be better to move `is_adam` elsewhere; an algorithm should only be responsible for its own logic.
```python
)
# Call this before quantization and after applying the block wrapper.
if self.config.is_nv_fp:  # enable qkv and moe structure global_scale fuse.
    from auto_round.data_type.utils import update_fused_layer_global_scales
```
Is there a way to move this somewhere else?
```python
)

if self.compress_context.low_gpu_mem_usage:
    clear_memory(device_list=self.compress_context.device_list)  # clear cached memory during training
```
How about adding a context manager that performs the wrap/unwrap and clears memory?
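A minimal sketch of such a context manager, assuming `wrap`/`unwrap`/`clear_memory` stand in for the PR's actual helpers:

```python
from contextlib import contextmanager

# Hypothetical context manager: wraps a block on entry, and on exit
# unwraps it and (optionally) clears cached memory. The three callables
# are placeholders for the PR's real wrapper and clear_memory helpers.
@contextmanager
def wrapped_block(block, wrap, unwrap, clear_memory, low_gpu_mem_usage=True):
    wrapped = wrap(block)
    try:
        yield wrapped
    finally:
        unwrap(wrapped)
        if low_gpu_mem_usage:
            clear_memory()
```

The algorithm body then runs inside the `with` block and never has to remember the unwrap/clear steps itself.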
```python
    Returns:
        dict: Empty dict (zero-shot RTN has no tunable parameters to return).
    """
    shard_writer = ShardWriter.get_shard_writer()
```
Better to decouple `ShardWriter` from the algorithm.
```python
if lm_head_name is not None:
    tied_weights_layers.append(lm_head_name)

materialize_model_(block)
```
All of the above should be moved elsewhere; it should not be something algorithm developers have to implement.
```python
m.to("meta")

# Move remaining GPU tensors to CPU; offload to disk if low_cpu_mem_usage.
if not self.compress_context.is_immediate_saving:
```
```python
    **kwargs,
):
    self.quantize_config = None
    self.rotation_configs: list[BaseRotationConfig] = []
```
Using two or more rotation configs sequentially on the same model is not supported. We could support layer-wise rotation configs later (not related to this PR). So on this line, I think we should support only one rotation config.
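Enforcing that constraint could be as simple as a small validator; `BaseRotationConfig` below is a stand-in for the PR's class and the helper name is hypothetical:

```python
# Illustrative validator for the "at most one rotation config" rule;
# BaseRotationConfig is a placeholder for the PR's actual class.
class BaseRotationConfig:
    pass

def validate_rotation_configs(configs):
    if len(configs) > 1:
        raise ValueError(
            "Applying multiple rotation configs sequentially to the same "
            "model is not supported; pass at most one."
        )
    return configs[0] if configs else None
```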
```python
def __init__(
    self,
    layer_config: dict[str, Union[str, dict]] = None,
```
Please move `layer_config` and the data config (`nsamples`, `seqlen`) to the API (i.e., the compressor in this PR). As discussed offline, the compressor should be renamed to AutoRound. Algorithm-specific kwargs may override the data config, but should not override `layer_config`.
`AutoRound(alg_config=[xxx], nsamples=512, layer_config=xxx)`
@n1ck-guo For the new API usage, would it be better to determine the order of applying configs based on the order in the config list?
Yes, you're right. We should preserve the algorithm order as provided by the user unless there are technical limitations. However, for unsupported or suboptimal orders, such as applying AR before Hadamard, we should log a warning and provide a recommended order.
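The "preserve user order but warn on suboptimal orders" behavior could be sketched like this; the algorithm names and the `RECOMMENDED_BEFORE` table are illustrative, not from the PR:

```python
import warnings

# Illustrative sketch: keep the user's algorithm order, but warn when a
# known-suboptimal ordering is detected (e.g. AutoRound before Hadamard).
# Maps an algorithm to the algorithms recommended to run before it.
RECOMMENDED_BEFORE = {"autoround": {"hadamard"}}

def order_algorithms(alg_names):
    seen = set()
    for name in alg_names:
        for prereq in RECOMMENDED_BEFORE.get(name, ()):
            if prereq in alg_names and prereq not in seen:
                warnings.warn(
                    f"Applying '{name}' before '{prereq}' is suboptimal; "
                    f"the recommended order puts '{prereq}' first."
                )
        seen.add(name)
    return list(alg_names)  # user order is always preserved
```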
Description
Main entry point responsible for orchestrating the workflow, invoking different algorithms, and handling model persistence. Supports block-wise or layer-wise quantization strategies. Primary subclasses include TuneCompressor and ZeroShotCompressor.
Usage of the new API:
Type of Change
Related Issues
Fixes or relates to #
Checklist Before Submitting