Feature: Fix playlist v2 endpoint usage. Add pagination workers from mopidy-tidal #351
Conversation
tehkillerbee commented on Jul 29, 2025
- Move "data" field validation/fallback for consistency.
- Playlists: Ensure only PLAYLIST types are returned when fetching playlists. Fixes #345 (Playlists are not listed with v0.8.4) and #349 (Searching isn't working).
- Moved get_playlist_folders() helper for consistency.
- Add workers from mopidy-tidal.
- Rearrange args (order and order_direction should be last).
- Add paginated getters for artists, albums, playlists and tracks. Fixes #265 (Feature request: Expose a way to fetch in chunks without relying on implementation details).
I've fixed the v2 endpoint usage, and I have also ported the pagination worker functionality from mopidy-tidal. I did need to make some small adjustments to make ordering/sorting work, mainly rearranging the args. But perhaps this will break something in mopidy-tidal? I will also add a separate PR in mopidy-tidal handling these changes (will have to be tomorrow). Feel free to take a quick look @blacklight, especially regarding the adjustments to the workers.
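To make the shape of these changes concrete, here is a minimal sketch of a paginated getter driven by a small worker pool, with the ordering arguments last as described above. All names here (fetch_page, get_items_paginated, PAGE_SIZE, NUM_WORKERS) are illustrative assumptions, not the actual python-tidal API.

```python
from concurrent.futures import ThreadPoolExecutor

PAGE_SIZE = 50    # illustrative chunk size, not the library's real default
NUM_WORKERS = 2   # matches the worker count this PR eventually settled on


def fetch_page(request, offset, limit, order=None, order_direction=None):
    # Ordering args come last, per the rearranged signatures in this PR.
    params = {"offset": offset, "limit": limit}
    if order is not None:
        params["order"] = order
    if order_direction is not None:
        params["orderDirection"] = order_direction
    return request(params)  # a real implementation would call the Tidal API


def get_items_paginated(request, total, order=None, order_direction=None):
    """Fetch every page of a paginated endpoint using a small worker pool."""
    offsets = range(0, total, PAGE_SIZE)
    with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
        pages = pool.map(
            lambda off: fetch_page(request, off, PAGE_SIZE, order, order_direction),
            offsets,
        )
    # Flatten the pages, skipping any that failed and returned None.
    return [item for page in pages if page for item in page]
```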
Force-pushed from 66d4281 to ba20831: Formatting fixes
blacklight commented

@tehkillerbee Actually, I'm a bit puzzled about the best way to handle rate throttling on the calls to the Tidal API.

Your initial approach would have caused the worker thread to fail early, as soon as it got a 429 or a null object, which means it wouldn't make it in time to cache the partial results it had fetched. But at least the termination of the workers that hit 429s would have eased the number of outgoing calls, which means less time to wait for a retry before the rate limit is lifted on Tidal's side.

My proposed solution gracefully handles errors and skips them without terminating the workers, which guarantees that at least the partial results will be cached. But it also means that the library will keep bombarding the API with requests in spite of having already received a 429, which means more time to wait before the rate limit is lifted.

I feel like the most sensible approach here may be to explicitly catch the first 429 and keep track, in the global state, of the fact that we're currently being throttled, and create a timer that automatically clears that flag after e.g. 60 seconds. If we get a 429, we already know that any other API calls with that token within the next minute or so will fail. So maybe it makes sense to just return early while the limitation is active, so the rate-limiting window is not extended; return an empty list or cache whatever partial result we got back; and log that throttling limitations are currently active.

Also, how about making the number of workers configurable? My initial implementation on mopidy-tidal wasn't very parallelized (only 2 workers) because I was much more likely to hit rate limits when making more requests in parallel. Your approach with 5 workers is more likely to hit those rate limits, but if we mitigate it with a back-off window initialized on the first 429 it should be manageable (yes, we're more likely to hit rate limits if we make more parallel requests, but at least we back off consistently, and in the average case where most of the content is cached things will be much faster). So if we can scale up the number of workers without being punished with more throttling, how about defaulting to 5, but making it configurable for those who want to tune for maximizing the chances of getting the results on the first go vs. those who prefer speed at the cost of a couple of retries on the first load?

What do you think? If you like this approach I can probably create a PR on top of this one.

EDIT: Sorry, all this talk about caching made me remember that I actually didn't implement the caching solution in python-tidal, only in mopidy-tidal... do you think it makes sense to move that logic here too, and probably leverage it directly in
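As a rough illustration of the back-off window proposed above, a sketch along these lines could work. The 60-second window and all names (mark_throttled, fetch_with_backoff, THROTTLE_WINDOW) are assumptions taken from the discussion, not existing python-tidal code.

```python
import threading

THROTTLE_WINDOW = 60  # seconds; the "e.g. 60 seconds" value from the discussion

_throttled = threading.Event()


def mark_throttled():
    """Record that we received an HTTP 429 and start the back-off window."""
    if not _throttled.is_set():
        _throttled.set()
        # Timer automatically clears the throttled flag once the window ends.
        timer = threading.Timer(THROTTLE_WINDOW, _throttled.clear)
        timer.daemon = True
        timer.start()


def fetch_with_backoff(do_request, partial_results):
    """Skip the call entirely while throttled, so the rate-limit window is
    not extended; otherwise fetch, and start the window on the first 429."""
    if _throttled.is_set():
        # Return early with whatever partial results we already have;
        # a real implementation would also log that throttling is active.
        return partial_results
    status, items = do_request()  # assumed to return (http_status, list)
    if status == 429:
        mark_throttled()
        return partial_results
    partial_results.extend(items)
    return partial_results
```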
Force-pushed from 9e78b17 to 07cedb7: Improve robustness, reduce number of workers to 2 (Co-authored-by: Fabio Manganiello <[email protected]>); Formatting
tehkillerbee commented

Agreed, it was never my intention to increase the number of workers. I merely wanted to keep the existing functionality but move it to python-tidal; it looks like I forgot that I had changed the number of workers in an earlier experiment.

Yes, now that the pagination functionality has been moved into python-tidal, it is a good idea to make it more robust as well, so that external users don't run into 429 issues as we've previously seen with mopidy-tidal. We could consider merging this PR and creating a new one specifically for throttling.

Correct. It does sound like a good idea to move the caching functionality out of mopidy-tidal. Some of these features currently only bring value to users of the mopidy-tidal plugin, which is a shame, so it would be nice to port them to python-tidal. It would also help make mopidy-tidal leaner and easier to maintain.
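As a rough idea of what porting that caching layer could look like, here is a minimal sketch of a TTL cache keyed per page, under the assumption that python-tidal would cache page results so partial fetches survive rate-limit failures. TTLCache, cached_fetch, and the 300-second TTL are hypothetical names and values, not existing code in either project.

```python
import time


class TTLCache:
    """Tiny in-memory cache whose entries expire after a fixed TTL."""

    def __init__(self, ttl=300):
        self.ttl = ttl          # seconds before an entry expires
        self._store = {}        # key -> (timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, value = entry
        if time.monotonic() - ts > self.ttl:
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic(), value)


def cached_fetch(cache, key, fetch):
    """Return the cached page if still fresh, otherwise fetch and cache it."""
    page = cache.get(key)
    if page is not None:
        return page
    page = fetch()
    if page is not None:  # only cache successful (possibly partial) results
        cache.put(key, page)
    return page
```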
blacklight commented

Ok, then I'll proceed with approving this, and in the next few days I can work on two follow-ups: