
Conversation

@JyotinderSingh
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @JyotinderSingh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the quantization capabilities within Keras by introducing a flexible QuantizationConfig API. This new configuration object allows users to specify custom quantizers for both weights and activations, moving beyond predefined quantization settings. The change integrates this new API into key layers like Dense, EinsumDense, Embedding, and ReversibleEmbedding, and refactors GPTQConfig to build upon this new base. This provides a more extensible and user-friendly approach to model quantization, allowing for more tailored optimization strategies (a brief usage sketch follows the highlights below).

Highlights

  • New Quantization API: Introduced a new QuantizationConfig API that allows for customizable weight and activation quantizers, providing greater flexibility in defining quantization schemes.
  • Layer Integration: Core layers such as Dense, EinsumDense, Embedding, and ReversibleEmbedding have been updated to accept and utilize the new QuantizationConfig object, enabling fine-grained control over their quantization behavior.
  • GPTQConfig Inheritance: The GPTQConfig now inherits from QuantizationConfig, streamlining the quantization configuration hierarchy and ensuring consistency across different quantization methods.
  • Model Quantization Method Update: The quantize method in the Model class has been enhanced to accept the config argument for all quantization modes, removing the previous restriction that it was only supported for GPTQ mode.
  • Weight-Only Quantization Support: Added explicit handling for weight-only quantization scenarios where inputs_quantizer might be None, ensuring correct scaling in such cases.
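
To make the highlights above concrete, here is a minimal, hypothetical usage sketch. The `keras.quantizers.QuantizationConfig` import path and the `weight_quantizer`/`activation_quantizer` constructor arguments are assumptions inferred from this thread, not confirmed signatures; only the `config` argument to `Model.quantize` and the `AbsMaxQuantizer` default are stated explicitly in the PR.

```python
import keras
from keras import quantizers

# Assumed public export; the PR adds keras/src/quantizers/quantization_config.py.
from keras.quantizers import QuantizationConfig

# Hypothetical constructor arguments: the thread confirms that weight and
# activation quantizers are configurable, but not the exact parameter names.
config = QuantizationConfig(
    weight_quantizer=quantizers.AbsMaxQuantizer(axis=-1),
    activation_quantizer=quantizers.AbsMaxQuantizer(axis=-1),
)

inputs = keras.Input(shape=(8,))
outputs = keras.layers.Dense(4)(inputs)
model = keras.Model(inputs, outputs)

# Per the highlights, `Model.quantize` now accepts `config` for all modes.
model.quantize("int8", config=config)

# Weight-only quantization: leave the activation quantizer unset so no
# inputs quantizer is created for the layer.
weight_only_config = QuantizationConfig(
    weight_quantizer=quantizers.AbsMaxQuantizer(axis=-1),
    activation_quantizer=None,
)
```

Passing `activation_quantizer=None` corresponds to the weight-only path described in the last highlight, where `inputs_quantizer` is left unset and only the kernel is quantized.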

@JyotinderSingh JyotinderSingh marked this pull request as draft December 4, 2025 17:56
@JyotinderSingh JyotinderSingh changed the title from "Introduces customizable quantization API using QuantizationConfig" to "Introduces QuantizationConfig for fine-grained quantization control" on Dec 4, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a QuantizationConfig to provide a more flexible and customizable quantization API. This is a significant improvement, allowing users to specify their own quantizers for weights and activations, and enabling features like weight-only quantization. The changes are well-implemented across various layers including Dense, EinsumDense, Embedding, and ReversibleEmbedding, as well as the model-level quantize method. The new QuantizationConfig class is well-designed with serialization support, and the accompanying tests are comprehensive. I have a couple of suggestions for minor code improvements to reduce redundancy and enhance clarity.

Comment on lines 787 to 796
```python
# Handle activation quantization
if config and config.activation_quantizer:
    self.inputs_quantizer = config.activation_quantizer
elif config and config.activation_quantizer is None:
    # Weight-only quantization
    pass
else:
    # Default behavior
    self.inputs_quantizer = quantizers.AbsMaxQuantizer(axis=-1)
```

Contributor


medium

This block for handling activation quantization is redundant. The logic is already correctly and sufficiently handled inside _int8_build (and _int4_build) via QuantizationConfig.activation_quantizer_or_default. Removing this block will make the code cleaner and avoid duplication without changing the behavior.
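
For context, here is a standalone sketch of one plausible reading of that consolidated lookup. The real `activation_quantizer_or_default` is a member of `QuantizationConfig` whose implementation is not shown in this thread, so treat the helper below as an assumption that merely mirrors the three branches in the block above.

```python
from keras import quantizers


def activation_quantizer_or_default(config):
    # Hypothetical helper, not the PR's actual code.
    if config is None:
        # No config supplied: fall back to the default activation quantizer.
        return quantizers.AbsMaxQuantizer(axis=-1)
    # Config supplied: return the user-provided quantizer, or None for the
    # weight-only case where no inputs quantizer should be created.
    return config.activation_quantizer
```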

Comment on lines 252 to 276
```diff
         if self.tie_weights:
-            embeddings = ops.transpose(self._embeddings)
+            kernel = ops.transpose(self._embeddings)
+            scale = ops.transpose(self.embeddings_scale)
+            pack_axis = 0
+            orig_dim = self.output_dim
         else:
-            embeddings = self.reverse_embeddings
+            kernel = self.reverse_embeddings
+            scale = self.reverse_embeddings_scale
-        unpacked_embeddings = quantizers.unpack_int4(
-            embeddings, self.output_dim, axis=0
+            pack_axis = 0
+            orig_dim = self.output_dim
```
Contributor


medium

The assignments for pack_axis and orig_dim are duplicated in both branches of the if self.tie_weights: condition. You can move these assignments out of the if/else block to reduce code duplication and improve readability.

Suggested change

```diff
-        if self.tie_weights:
-            kernel = ops.transpose(self._embeddings)
-            scale = ops.transpose(self.embeddings_scale)
-            pack_axis = 0
-            orig_dim = self.output_dim
-        else:
-            kernel = self.reverse_embeddings
-            scale = self.reverse_embeddings_scale
-            pack_axis = 0
-            orig_dim = self.output_dim
+        pack_axis = 0
+        orig_dim = self.output_dim
+        if self.tie_weights:
+            kernel = ops.transpose(self._embeddings)
+            scale = ops.transpose(self.embeddings_scale)
+        else:
+            kernel = self.reverse_embeddings
+            scale = self.reverse_embeddings_scale
```

@codecov-commenter

codecov-commenter commented Dec 4, 2025

Codecov Report

❌ Patch coverage is 90.57592% with 18 lines in your changes missing coverage. Please review.
✅ Project coverage is 61.45%. Comparing base (9fc8185) to head (a3668d5).
⚠️ Report is 4 commits behind head on master.

Files with missing lines                      | Patch % | Lines
keras/src/quantizers/quantization_config.py   | 88.04%  | 5 Missing and 6 partials ⚠️
keras/src/layers/core/einsum_dense.py         | 85.00%  | 2 Missing and 1 partial ⚠️
keras/src/layers/core/dense.py                | 90.90%  | 1 Missing and 1 partial ⚠️
keras/src/layers/core/reversible_embedding.py | 92.30%  | 1 Missing and 1 partial ⚠️

❗ There is a different number of reports uploaded between BASE (9fc8185) and HEAD (a3668d5).

HEAD has 6 fewer uploads than BASE:

Flag             | BASE (9fc8185) | HEAD (a3668d5)
keras            | 5              | 2
keras-torch      | 1              | 0
keras-tensorflow | 1              | 0
keras-jax        | 1              | 0
Additional details and impacted files
@@             Coverage Diff             @@
##           master   #21896       +/-   ##
===========================================
- Coverage   82.36%   61.45%   -20.92%     
===========================================
  Files         578      580        +2     
  Lines       59816    60047      +231     
  Branches     9387     9428       +41     
===========================================
- Hits        49270    36903    -12367     
- Misses       8147    20811    +12664     
+ Partials     2399     2333       -66     
Flag             | Coverage Δ
keras            | 61.44% <90.57%> (-20.75%) ⬇️
keras-jax        | ?
keras-numpy      | 57.44% <90.57%> (+<0.01%) ⬆️
keras-openvino   | 34.29% <32.98%> (-0.03%) ⬇️
keras-tensorflow | ?
keras-torch      | ?

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@JyotinderSingh JyotinderSingh force-pushed the quantization-customization branch 2 times, most recently from fad1ed2 to 2ae1e37 on December 5, 2025 08:11
@JyotinderSingh JyotinderSingh force-pushed the quantization-customization branch from 2ae1e37 to a3668d5 on December 5, 2025 08:22