IGNORE THIS PR #21389


Draft · wants to merge 87 commits into master from the nnx-tests branch

Conversation

divyashreepathihalli
Collaborator

just using this for some tests

@codecov-commenter commented Jun 16, 2025

Codecov Report

Attention: Patch coverage is 17.74194% with 204 lines in your changes missing coverage. Please review.

Project coverage is 82.49%. Comparing base (22ab5a3) to head (7493c68).
Report is 2 commits behind head on master.

Files with missing lines                        Patch %   Lines
keras/src/backend/jax/core.py                     6.30%   103 Missing and 1 partial ⚠️
keras/src/models/functional.py                   10.52%   49 Missing and 2 partials ⚠️
keras/src/backend/config.py                      30.55%   20 Missing and 5 partials ⚠️
keras/src/layers/layer.py                        42.85%   5 Missing and 3 partials ⚠️
keras/src/backend/jax/layer.py                   20.00%   3 Missing and 1 partial ⚠️
keras/src/backend/common/variables.py             0.00%   2 Missing and 1 partial ⚠️
keras/src/backend/jax/trainer.py                 66.66%   2 Missing and 1 partial ⚠️
keras/src/ops/operation.py                       25.00%   2 Missing and 1 partial ⚠️
keras/src/backend/jax/__init__.py                50.00%   1 Missing and 1 partial ⚠️
keras/api/_tf_keras/keras/config/__init__.py      0.00%   1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21389      +/-   ##
==========================================
- Coverage   82.75%   82.49%   -0.27%     
==========================================
  Files         565      565              
  Lines       55346    55571     +225     
  Branches     8634     8691      +57     
==========================================
+ Hits        45804    45842      +38     
- Misses       7440     7614     +174     
- Partials     2102     2115      +13     
Flag               Coverage Δ
keras              82.30% <17.74%> (-0.27%) ⬇️
keras-jax          63.13% <17.74%> (-0.21%) ⬇️
keras-numpy        58.36% <13.30%> (-0.20%) ⬇️
keras-openvino     33.69% <12.50%> (-0.08%) ⬇️
keras-tensorflow   63.55% <13.30%> (-0.22%) ⬇️
keras-torch        63.17% <13.30%> (-0.21%) ⬇️

Flags with carried forward coverage won't be shown.


@divyashreepathihalli divyashreepathihalli marked this pull request as draft June 17, 2025 05:54
@divyashreepathihalli divyashreepathihalli marked this pull request as ready for review June 17, 2025 18:50
@divyashreepathihalli divyashreepathihalli marked this pull request as draft June 17, 2025 18:50
@divyashreepathihalli divyashreepathihalli force-pushed the nnx-tests branch 3 times, most recently from 64b99f0 to f0b10ef Compare June 20, 2025 02:34
@divyashreepathihalli
Collaborator Author

/gemini review

@gemini-code-assist bot left a comment
Code Review

This pull request integrates Flax NNX support into the JAX backend, involving configuration, core components, layers, and models. Key areas for improvement include test dependency alignment, addressing disabled tests, and enhancing error handling in configuration parsing.

-    "jax": ("jax[cpu]", ""),
+    # please update the jax version here if jax version is updated in
+    # requirements file
+    "jax": ("jax[cpu]==0.5.1 flax>=0.10.1", ""),


Severity: high

The jax version specified here (0.5.1) differs from the one in requirements.txt (0.5.0). This inconsistency can lead to testing discrepancies. Align the versions to ensure a consistent testing environment.
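One way to catch this kind of drift automatically is a small pin-comparison check. A minimal sketch, assuming illustrative requirement strings (the real values come from the setup-time extras and requirements.txt; pinned_version is not a Keras helper):

```python
import re

def pinned_version(spec, package="jax"):
    """Extract the ==-pinned version of `package` from a requirement
    string, or return None if the package is not pinned there."""
    match = re.search(rf"\b{package}(?:\[[^\]]*\])?==([\w.]+)", spec)
    return match.group(1) if match else None

# Illustrative copies of the two conflicting pins from the review comment.
setup_spec = "jax[cpu]==0.5.1 flax>=0.10.1"  # setup-time extras
requirements_spec = "jax[cpu]==0.5.0"        # requirements.txt

print(pinned_version(setup_spec), pinned_version(requirements_spec))  # → 0.5.1 0.5.0
```

A check like this can run in CI and fail the build when the two pins disagree, instead of relying on reviewers to notice.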

Comment on lines +1 to +147
#             keras_module.layers.MaxPooling2D(pool_size=(2, 2)),
#             keras_module.layers.Flatten(),
#             keras_module.layers.Dense(num_classes, activation="softmax"),
#         ]
#     )
#     return model


# def compile_model(model):
#     model.compile(
#         loss="categorical_crossentropy",
#         optimizer="adam",
#         metrics=["mae", "accuracy"],
#         jit_compile=False,
#         run_eagerly=True,
#     )


# def train_model(model, x, y):
#     return model.fit(
#         x,
#         y,
#         batch_size=BATCH_SIZE,
#         epochs=EPOCHS,
#         shuffle=False,
#         verbose=0,
#     )


# def eval_model(model, x, y):
#     score = model.evaluate(x, y, verbose=0, batch_size=BATCH_SIZE)
#     print(score)
#     return score


# def check_history(h1, h2):
#     for key in h1.history.keys():
#         print(f"{key}:")
#         print(h1.history[key])
#         print(h2.history[key])
#         np.testing.assert_allclose(
#             h1.history[key],
#             h2.history[key],
#             atol=1e-3,
#         )


# def predict_model(model, x):
#     return model.predict(x, batch_size=BATCH_SIZE, verbose=0)


# def numerical_test():
#     x_train, y_train = build_mnist_data(NUM_CLASSES)
#     keras_model = build_keras_model(keras, NUM_CLASSES)
#     tf_keras_model = build_keras_model(tf_keras, NUM_CLASSES)

#     # Make sure both models have the same weights before training
#     weights = [weight.numpy() for weight in keras_model.weights]
#     tf_keras_model.set_weights(weights)

#     for kw, kcw in zip(keras_model.weights, tf_keras_model.weights):
#         np.testing.assert_allclose(kw.numpy(), kcw.numpy())

#     compile_model(keras_model)
#     compile_model(tf_keras_model)

#     print("Checking training histories:")
#     keras_history = train_model(keras_model, x_train, y_train)
#     tf_keras_history = train_model(tf_keras_model, x_train, y_train)
#     check_history(keras_history, tf_keras_history)
#     print("Training histories match.")
#     print()

#     print("Checking trained weights:")
#     for kw, kcw in zip(keras_model.weights, tf_keras_model.weights):
#         np.testing.assert_allclose(kw.numpy(), kcw.numpy(), atol=1e-3)
#     print("Trained weights match.")
#     print()

#     print("Checking predict:")
#     outputs1 = predict_model(keras_model, x_train)
#     outputs2 = predict_model(tf_keras_model, x_train)
#     np.testing.assert_allclose(outputs1, outputs2, atol=1e-3)
#     print("Predict results match.")
#     print()

#     print("Checking evaluate:")
#     score1 = eval_model(keras_model, x_train, y_train)
#     score2 = eval_model(tf_keras_model, x_train, y_train)
#     np.testing.assert_allclose(score1, score2, atol=1e-3)
#     print("Evaluate results match.")


# if __name__ == "__main__":
#     if keras.backend.backend() == "openvino":
#         # this test requires a trainable backend
#         sys.exit(0)
#     keras.utils.set_random_seed(1337)
#     tf_keras.utils.set_random_seed(1337)
#     numerical_test()


Severity: high

This test file is fully commented out, which removes its test coverage. If these tests are no longer needed, remove the file. If temporarily disabled, add a comment explaining the reason and a tracking issue for re-enabling.
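If the tests must stay disabled temporarily, skipping them with an explicit reason keeps them visible to test runners instead of silently removing coverage. A minimal sketch using unittest (class and test names are illustrative; the tracking-issue reference is a placeholder):

```python
import unittest

# Skip the suite with a visible reason rather than commenting out the
# file, so the skip shows up in test reports until it is re-enabled.
@unittest.skip("temporarily disabled pending NNX integration; tracking issue TBD")
class NumericalConsistencyTest(unittest.TestCase):
    def test_histories_match(self):
        self.fail("never runs while the suite is skipped")

# Running the suite records a skip instead of losing the test silently.
result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(NumericalConsistencyTest).run(result)
print(len(result.skipped))  # → 1
```

The same effect is available per-test with pytest.mark.skip if the project runs under pytest.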

Comment on lines +445 to +449
env_val = os.environ["KERAS_NNX_ENABLED"].lower()
if env_val == "true":
_NNX_ENABLED = True
elif env_val == "false":
_NNX_ENABLED = False


Severity: medium

The parsing logic for KERAS_NNX_ENABLED only handles "true" and "false", ignoring other boolean representations. Enhance it to handle common boolean values like "1" and "0" for better usability.

Suggested change:

-env_val = os.environ["KERAS_NNX_ENABLED"].lower()
-if env_val == "true":
-    _NNX_ENABLED = True
-elif env_val == "false":
-    _NNX_ENABLED = False
+if "KERAS_NNX_ENABLED" in os.environ:
+    env_val = os.environ["KERAS_NNX_ENABLED"].lower()
+    if env_val in ("true", "1"):
+        _NNX_ENABLED = True
+    elif env_val in ("false", "0"):
+        _NNX_ENABLED = False

Comment on lines +1677 to +1679
    self._parent_path = current_path()
except Exception:
    pass


Severity: medium

The try...except Exception: pass block can silently hide failures in current_path(). Catch a more specific exception, or at least log the exception for debugging.
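A hedged sketch of the narrower pattern, with a stand-in current_path() (the real function in layer.py and its failure modes may differ, so this only illustrates narrowing the except clause and logging instead of passing silently):

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical stand-in for the call made in Layer.__init__; substitute
# the real current_path() and the exception type it actually raises.
def current_path():
    raise RuntimeError("no active path scope")

class Layer:
    def __init__(self):
        self._parent_path = None
        try:
            self._parent_path = current_path()
        except RuntimeError as exc:  # narrow catch, logged for debugging
            logger.debug("could not resolve parent path: %s", exc)

layer = Layer()
print(layer._parent_path)  # → None
```

The behavior is unchanged on failure (the attribute stays None), but the debug log leaves a trail when the call starts failing for a new reason.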

Labels: none yet
Projects: none yet
4 participants