Feature: Fix playlist v2 endpoint usage. Add pagination workers from mopidy-tidal #351


Merged
8 commits merged into main on Jul 31, 2025

Conversation

@tehkillerbee requested a review from blacklight on July 29, 2025 21:18
@tehkillerbee
Collaborator Author

I've fixed the v2 endpoint usage and also ported the pagination worker logic from mopidy-tidal.

I did need to make some small adjustments to get ordering/sorting working, mainly rearranging the args. But perhaps this will break something in mopidy-tidal?

I will also add a separate PR in mopidy-tidal handling these changes (will have to be tomorrow).

Feel free to take a quick look @blacklight, especially at the adjustments to the workers.
f8c151f

Formatting fixes
@blacklight
Collaborator

blacklight commented Jul 30, 2025

@tehkillerbee actually I'm a bit puzzled about the best way to handle rate throttling on the calls to the Tidal API.

Your initial approach would have caused the worker thread to fail early, as soon as it got a 429 or a null object, which means it wouldn't get the chance to cache the partial results it had already fetched. But at least terminating the workers that hit 429s would have reduced the number of outgoing calls, which means less time to wait for a retry before the rate limit is lifted on Tidal's side.

My proposed solution handles errors gracefully and skips them without terminating the workers, which guarantees that at least the partial results will be cached. But it also means that the library will keep bombarding the API with requests despite having already received a 429, which translates into more time to wait before the rate limit is lifted.
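
For illustration only, here is a rough sketch of the two strategies being compared; `fetch_page` is a hypothetical stand-in for whatever callable fetches a single page, not part of python-tidal:

```python
from typing import Callable, Iterable, List, Optional

Page = Optional[List[dict]]

# 1. Fail-early: the first 429/None aborts the worker, so partial results may be
#    lost, but no further calls go out while we are being throttled.
def worker_fail_early(fetch_page: Callable[[int], Page], offsets: Iterable[int]) -> List[dict]:
    items: List[dict] = []
    for offset in offsets:
        page = fetch_page(offset)
        if page is None:                    # 429 or null object from the API
            raise RuntimeError("rate limited")
        items.extend(page)
    return items

# 2. Skip-and-continue: failed pages are skipped so partial results survive,
#    but the worker keeps issuing requests even after a 429.
def worker_skip_errors(fetch_page: Callable[[int], Page], offsets: Iterable[int]) -> List[dict]:
    items: List[dict] = []
    for offset in offsets:
        try:
            page = fetch_page(offset)
        except Exception:
            continue                        # swallow the error, move to the next page
        if page:
            items.extend(page)
    return items
```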

I feel like the most sensible approach here may be to explicitly catch the first 429 and keep track, in the global state, of the fact that we're currently being throttled, and to create a timer that automatically clears that event after e.g. 60 seconds.

If we get a 429, we already know that any other API call with that token within the next minute or so will fail. So maybe it makes sense to just return early while the limitation is active, so that the rate-limiting window is not extended: return an empty list or cache whatever partial result we got back, and log that throttling limitations are currently active.
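
A rough sketch of that idea (all names here are hypothetical, none of them exist in python-tidal): the first 429 trips a shared flag, a timer clears it after the cooldown, and workers return early while it is set.

```python
import threading


class ThrottleGuard:
    def __init__(self, cooldown: float = 60.0):
        self.cooldown = cooldown
        self._throttled = threading.Event()

    def trip(self) -> None:
        # Mark the session as rate limited and schedule the automatic reset.
        if not self._throttled.is_set():
            self._throttled.set()
            timer = threading.Timer(self.cooldown, self._throttled.clear)
            timer.daemon = True
            timer.start()

    @property
    def active(self) -> bool:
        return self._throttled.is_set()


def fetch_page(do_request, guard: ThrottleGuard, offset: int, limit: int) -> list:
    # do_request is a placeholder for whatever callable performs the HTTP call
    # and returns a requests-style response object.
    if guard.active:
        return []  # back off: don't extend the rate-limiting window
    response = do_request(offset=offset, limit=limit)
    if response.status_code == 429:
        guard.trip()
        return []  # log and return the empty partial result
    return response.json().get("items", [])
```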

Also, how about making the number of workers configurable? My initial implementation in mopidy-tidal wasn't very parallelized (only 2 workers) because I was much more likely to hit rate limits when making more requests in parallel. Your approach with 5 workers is more likely to hit those rate limits, but if we mitigate it with a "back-off window" initialized on the first 429 it should be manageable: yes, we're more likely to hit rate limits when we make more parallel requests, but at least we back off consistently, and in the average case, where most of the content is already cached, things will be much faster. So, if we can scale up the number of workers without being punished with more throttling, how about defaulting to 5 but making it configurable? That way users can tune for maximizing the chances of getting all the results on the first go, or for speed at the cost of a couple of retries on the first load.
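
A minimal sketch of what a configurable worker pool could look like (function and parameter names are illustrative, not the library's actual API):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List


def fetch_all_pages(
    fetch_page: Callable[[int, int], List[dict]],
    total: int,
    page_size: int = 100,
    workers: int = 2,  # could be exposed as a user-facing setting
) -> List[dict]:
    # Fetch all pages in parallel; a higher worker count is faster but more
    # likely to trigger 429s, which the back-off window above would absorb.
    offsets = range(0, total, page_size)
    items: List[dict] = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for page in pool.map(lambda off: fetch_page(off, page_size), offsets):
            items.extend(page)  # empty pages from throttled calls merge cleanly
    return items
```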

What do you think? If you like this approach I can probably create a PR on top of this one.

-- EDIT --

Sorry, all this talk about caching made me remember that I actually didn't implement the caching solution in python-tidal, only in mopidy-tidal... do you think it makes sense to move that logic here too, and probably leverage it directly in get_items()? That would make it much easier to consistently handle performance and throttling mitigation strategies, instead of delegating them to the clients.
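
For example, a simple TTL cache along these lines could be shared by get_items()-style helpers (the cache class and its wiring are hypothetical; only the idea comes from the discussion):

```python
import time
from typing import Any, Callable, Dict, Tuple


class TTLCache:
    # Hypothetical cache, not the existing mopidy-tidal implementation.
    def __init__(self, ttl: float = 300.0):
        self.ttl = ttl
        self._store: Dict[Tuple, Tuple[float, Any]] = {}

    def get_or_fetch(self, key: Tuple, fetch: Callable[[], Any]) -> Any:
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and now - hit[0] < self.ttl:
            return hit[1]               # still fresh: skip the API entirely
        value = fetch()                 # miss or expired: hit the API once
        self._store[key] = (now, value)
        return value


# Usage idea: share one cache across paginated fetches.
# cache = TTLCache(ttl=300)
# items = cache.get_or_fetch(("playlist_items", playlist_id, offset),
#                            lambda: fetch_page(offset, 100))
```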

Improve robustness, reduce number of workers to 2.

Co-authored-by: Fabio Manganiello <[email protected]>
Formatting
@tehkillerbee
Collaborator Author

tehkillerbee commented Jul 30, 2025

Your approach with 5 workers is more likely to hit those rate limits

Agreed, it was never my intention to increase the number of workers. I merely wanted to keep the existing functionality but move it to python-tidal; it looks like I simply forgot that I had changed the number of workers in an earlier experiment.

keep track in the global state of the fact that we're currently being throttled

Yes, now that the pagination functionality has been moved into python-tidal, it is a good idea to make it more robust as well, so that external users don't run into 429 issues as we've previously seen with mopidy-tidal. We could consider merging this PR and creating a new one specifically for throttling.

I didn't implement the caching solution on python-tidal

Correct. It does sound like a good idea to move the caching functionality out of mopidy-tidal. Some of these features currently only bring value to users of the mopidy-tidal plugin, which is a shame, so it would be nice to port them to python-tidal. It would also help make mopidy-tidal leaner and easier to maintain.

@blacklight self-requested a review on July 30, 2025 21:35
@blacklight
Collaborator

blacklight commented Jul 30, 2025

Ok, then I'll proceed with approving this, and over the next few days I can work on two follow-ups:

  1. Smart throttling (IMHO if we handle throttling globally we can also increase the number of workers) + configurable workers
  2. Move the caching logic

@tehkillerbee merged commit 6211a6c into main on Jul 31, 2025
1 check passed

Successfully merging this pull request may close these issues.

Playlists are not listed with v 0.8.4
Feature request: Exposing way to fetch in chunks without relying on implementation details