
LoadBalancer keyed on slot instead of primary node, not reset on NodesManager.initialize() #3683


Open
wants to merge 4 commits into base: master

Conversation


@drewfustin commented Jun 19, 2025

Pull Request check-list

  • Do tests and lints pass with this change?
  • Do the CI tests pass with this change (enable it first in your forked repo and wait for the github action build to finish)? link
  • Is the new or changed code fully tested?
  • Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?
  • Is there an example added to the examples folder (if applicable)?

Description of change

As noted in #3681, resetting the load balancer on NodesManager.initialize() causes the index associated with the primary node to reset to 0. If a ConnectionError or TimeoutError is raised by an attempt to connect to a primary node, NodesManager.initialize() is called and the load balancer's index for that node resets to 0. Therefore, the next attempt in the retry loop does not move on from the primary node to a replica node (with index > 0) as expected, but instead retries the primary node again (and presumably raises the same error).

Since calling NodesManager.initialize() on ConnectionError or TimeoutError is the right strategy, and since the primary node's host is often replaced in tandem with the events that cause these errors (e.g. when a primary node is deleted and then recreated in Kubernetes), keying the LoadBalancer dictionary on the primary node's name (host:port) doesn't feel appropriate. Keying the dictionary on the Redis Cluster's slot is a better strategy: a slot, unlike a host:port name, isn't expected to change, so the server_index for each slot doesn't need to be reset to 0 on NodesManager.initialize(). The slot can therefore maintain its state even when the NodesManager is reinitialized, which resolves #3681.
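
For illustration, here is a minimal sketch of the slot-keyed LoadBalancer this PR moves toward (simplified, not the exact code in the diff):

    class LoadBalancer:
        """Round-robin balancer whose rotation state is keyed by cluster slot.

        Because the key is the slot (which never changes) rather than the
        primary's host:port (which can), the state no longer needs to be
        cleared when NodesManager.initialize() runs.
        """

        def __init__(self, start_index: int = 0) -> None:
            self.slot_to_idx = {}
            self.start_index = start_index

        def get_server_index(self, slot: int, list_size: int) -> int:
            # Hand out the current index for this slot, then advance it,
            # wrapping around the number of nodes serving the slot.
            server_index = self.slot_to_idx.setdefault(slot, self.start_index)
            self.slot_to_idx[slot] = (server_index + 1) % list_size
            return server_index

        def reset(self) -> None:
            # Still available, but no longer called on reinitialization.
            self.slot_to_idx.clear()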

With the fix in this PR implemented, the loop from #3681 produces the expected output when the primary node goes down (on a TimeoutError, the load balancer moves on to the next node):

Attempt 1
idx: 2 | node: 100.66.151.143:6379 | type: replica
'bar'

Attempt 2
idx: 0 | node: 100.66.122.229:6379 | type: primary
Exception: Timeout connecting to server

Attempt 3
idx: 1 | node: 100.66.151.143:6379 | type: replica
'bar'

Attempt 4
idx: 2 | node: 100.66.106.241:6379 | type: replica
'bar'

Attempt 5
idx: 0 | node: 100.66.122.229:6379 | type: primary
Exception: Error 113 connecting to 100.66.122.229:6379. No route to host.
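
For reference, a rough reconstruction of the loop used to produce this output (hypothetical sketch; the actual script is in #3681, and internals like nodes_manager, read_load_balancer, and the new slot_to_idx mapping are implementation details that may differ across redis-py versions):

    from redis.cluster import RedisCluster

    # Placeholder endpoint; any cluster entry point works.
    rc = RedisCluster(host="redis-cluster.example.com", port=6379,
                      read_from_replicas=True)
    rc.set("foo", "bar")
    slot = rc.keyslot("foo")  # 12182

    for attempt in range(1, 6):
        print(f"Attempt {attempt}")
        try:
            # Peek at the index the slot-keyed balancer will hand out next.
            idx = rc.nodes_manager.read_load_balancer.slot_to_idx.get(slot, 0)
            print(f"idx: {idx}")
            print(repr(rc.get("foo")))
        except Exception as exc:
            print(f"Exception: {exc}")
        print()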

@petyaslavova petyaslavova requested a review from Copilot June 30, 2025 06:42
Copilot AI (Contributor) left a comment


Pull Request Overview

This PR refactors the LoadBalancer to use cluster slots as keys instead of primary node names, so that slot-based indices persist across NodesManager reinitializations. Related tests are updated to assert on slots.

  • Rename internal mapping from primary_to_idx to slot_to_idx and adapt methods accordingly
  • Change calls to get_server_index to pass slot IDs and update get_node_from_slot
  • Remove automatic load balancer reset in NodesManager.reset to preserve slot indices
  • Update sync and async cluster tests to use slot keys in load balancer assertions

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

File | Description
tests/test_cluster.py | Switch load balancer tests to use the slot argument instead of the primary name
tests/test_asyncio/test_cluster.py | Same slot-based updates for the async cluster tests
redis/cluster.py | Core LoadBalancer refactor: keying by slot, updated signatures, and removed reset in NodesManager
Comments suppressed due to low confidence (5)

redis/cluster.py:1409

  • Update the method docstring (or add a comment) for get_server_index to clearly state that the first parameter is now slot: int instead of the previous primary node name, and describe the expected behavior.
    def get_server_index(
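
For concreteness, one possible wording (a hypothetical sketch, assuming the post-PR method takes the slot and the size of the slot's node list):

    def get_server_index(self, slot: int, list_size: int) -> int:
        """Return the next round-robin index for ``slot``.

        ``slot`` is the cluster hash slot (0-16383), replacing the primary
        node's host:port name used previously. The returned value indexes
        into the list of nodes serving the slot; the stored counter then
        advances, wrapping at ``list_size``.
        """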

redis/cluster.py:1406

  • [nitpick] Consider renaming slot_to_idx to a more descriptive identifier such as slot_index_map or slot_to_index to make it clearer that this is a mapping from slot IDs to rotation indices.
        self.slot_to_idx = {}

redis/cluster.py:1435

  • The inline comment here could be updated to reflect the slot-based logic: e.g. "skip index 0 (primary) when replicas_only is true" to avoid confusion about nodes vs. slot indices.
            # skip the primary node index

redis/cluster.py:1836

  • Add a test to verify that calling NodesManager.reset() no longer clears the slot-based load balancer state, ensuring that slot indices persist across reinitializations.
    def reset(self):
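
A sketch of what such a test could look like (hypothetical; assumes a nodes_manager pytest fixture whose read_load_balancer is the slot-keyed balancer):

    def test_reset_preserves_slot_rotation(nodes_manager):
        lb = nodes_manager.read_load_balancer
        slot, list_size = 12182, 3  # slot of "foo"; primary plus two replicas

        # The first call hands out the primary (index 0) and advances the rotation.
        assert lb.get_server_index(slot, list_size) == 0

        nodes_manager.reset()

        # With this PR, reset() no longer clears slot state, so the rotation
        # continues at the first replica instead of restarting at the primary.
        assert lb.get_server_index(slot, list_size) == 1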

redis/cluster.py:1405

  • Add tests for non-default start_index values to ensure the LoadBalancer correctly starts rotations from the specified offset.
    def __init__(self, start_index: int = 0) -> None:
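
And a sketch for the start_index suggestion (hypothetical; same assumed signature as above):

    from redis.cluster import LoadBalancer

    def test_load_balancer_custom_start_index():
        lb = LoadBalancer(start_index=1)
        slot, list_size = 12182, 3

        # The first rotation for a slot begins at start_index and then wraps
        # around list_size on subsequent calls.
        assert lb.get_server_index(slot, list_size) == 1
        assert lb.get_server_index(slot, list_size) == 2
        assert lb.get_server_index(slot, list_size) == 0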

Development

Successfully merging this pull request may close these issues.

Round robin load balancing isn't working as expected if primary node goes down for Redis cluster mode