
Conversation

@yashwantbezawada
Contributor

Fixes #21772

The CosineDecay documentation was misleading about how the alpha parameter works.

Current docs say: learning rate decays "to alpha"
Reality: learning rate decays to initial_learning_rate * alpha

The parameter description correctly states alpha is "a fraction of initial_learning_rate", but the explanation text contradicted this. Updated the explanation to match the actual implementation and parameter description.

The documentation incorrectly stated that the learning rate decays
'to alpha', when it actually decays to 'initial_lr * alpha'.

Updated the docstring to make it clear that alpha is a fraction/multiplier,
not an absolute target value.
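
To make the distinction concrete, here is a minimal sketch of the documented cosine-decay computation (no warmup), using hypothetical values; note that the schedule bottoms out at initial_learning_rate * alpha, not at alpha:

```python
import math

def cosine_decay_lr(step, initial_learning_rate, decay_steps, alpha):
    # Standard cosine decay: interpolate the multiplier from 1.0 down to
    # alpha, then scale by initial_learning_rate.
    step = min(step, decay_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * step / decay_steps))
    decayed = (1 - alpha) * cosine + alpha
    return initial_learning_rate * decayed

# With initial_learning_rate=0.1 and alpha=0.01, the final value is
# 0.1 * 0.01 = 0.001, not 0.01.
print(cosine_decay_lr(step=1000, initial_learning_rate=0.1,
                      decay_steps=1000, alpha=0.01))
```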
@google-cla

google-cla bot commented Nov 6, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@gemini-code-assist
Contributor

Summary of Changes

Hello @yashwantbezawada, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an inaccuracy in the documentation for the CosineDecay learning rate schedule within Keras. The primary goal is to ensure that users clearly understand the behavior of the alpha parameter, which acts as a multiplier for the final learning rate, rather than being the final learning rate itself. This change improves the clarity and correctness of the Keras API documentation, preventing potential misunderstandings for developers using this schedule.

Highlights

  • Documentation Clarification: The documentation for the CosineDecay learning rate schedule has been updated to accurately reflect how the alpha parameter functions. Previously, it incorrectly stated that the learning rate decays "to alpha", but it actually decays to initial_learning_rate * alpha (or warmup_target * alpha if warmup is used).
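
As a quick illustration of the warmup case described above (hypothetical values, assuming the keras.optimizers.schedules.CosineDecay signature with warmup_target and warmup_steps):

```python
import keras

# Warm up from 0.0 to warmup_target over warmup_steps, then cosine-decay
# toward warmup_target * alpha over decay_steps.
schedule = keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.0,
    decay_steps=1000,
    alpha=0.1,
    warmup_target=0.5,
    warmup_steps=100,
)

# After warmup_steps + decay_steps, the learning rate settles at
# warmup_target * alpha = 0.5 * 0.1 = 0.05, not at alpha = 0.1.
print(float(schedule(1100)))
```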
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is a great documentation fix that clarifies how the alpha parameter in CosineDecay works. The updated description correctly states that alpha is a multiplier for the learning rate at the start of the decay phase. I've added one minor suggestion to format the new expressions as code for consistency. Additionally, for future consideration, the description of alpha in the Arguments section could also be updated to reflect its behavior when warmup_target is used, to make the documentation fully consistent.
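
For reference, one possible wording for that alpha entry covering the warmup case might look like the following (a hypothetical sketch, not the text merged in this PR):

```python
# Hypothetical wording for the `alpha` argument description:
#     alpha: A float. Minimum learning rate value for decay, as a fraction of
#         `initial_learning_rate` (or of `warmup_target` when `warmup_target`
#         is set). Defaults to `0.0`.
```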

@yashwantbezawada
Contributor Author

Signed the CLA

@codecov-commenter

codecov-commenter commented Nov 6, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.66%. Comparing base (b4d9c89) to head (3ea7ed4).

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #21827   +/-   ##
=======================================
  Coverage   82.66%   82.66%           
=======================================
  Files         577      577           
  Lines       59460    59460           
  Branches     9322     9322           
=======================================
  Hits        49152    49152           
  Misses       7905     7905           
  Partials     2403     2403           
Flag Coverage Δ
keras 82.48% <ø> (ø)
keras-jax 63.30% <ø> (ø)
keras-numpy 57.54% <ø> (ø)
keras-openvino 34.35% <ø> (ø)
keras-tensorflow 64.11% <ø> (ø)
keras-torch 63.59% <ø> (ø)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

Collaborator

@fchollet fchollet left a comment


LGTM, thanks for the PR

@google-ml-butler google-ml-butler bot added the kokoro:force-run and ready to pull (Ready to be merged into the codebase) labels Nov 6, 2025
@fchollet fchollet merged commit 68fb291 into keras-team:master Nov 6, 2025
8 checks passed

Labels

kokoro:force-run · ready to pull (Ready to be merged into the codebase) · size:XS

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Unclear description of tf.keras.optimizers.schedules.CosineDecay

4 participants