
Conversation

@uniphil (Contributor) commented Apr 25, 2025

trying out a new query param on the /links endpoint:

  • from_dids=<did>,<did>,...

yeah, it's doing comma-joins, which is not nice, but getting axum to parse a multi-value query param is more than i apparently have patience for today.

  • when from_dids is absent, all links are returned.
  • there is no enforced maximum number of dids yet (though in practice URL length limits will cap what actually makes it to the server)

the implementation eagerly filters the whole set before paging. it's not the most efficient, but it feels reasonable given the current data model (rough sketch of the handler shape below).

documentation and the web demo pages still need to be updated, if this works okay.
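not the merged code, just a rough sketch of the handler shape under this approach: split the comma-joined from_dids param and filter the whole set before paging. the names LinksQuery, Link, and all_links are stand-ins, not the real types.

```rust
use axum::extract::Query;
use serde::Deserialize;

// hypothetical stand-in for a link record
struct Link {
    from_did: String,
    // ...other fields elided
}

#[derive(Deserialize)]
struct LinksQuery {
    /// comma-joined DIDs, e.g. from_dids=did:plc:aaa,did:plc:bbb
    from_dids: Option<String>,
}

// hypothetical source of the full, unpaged set of links
fn all_links() -> Vec<Link> {
    Vec::new()
}

async fn links(Query(q): Query<LinksQuery>) -> String {
    // split the comma-joined param into a filter set (None = no filtering)
    let filter: Option<Vec<String>> = q
        .from_dids
        .map(|s| s.split(',').map(|part| part.to_string()).collect());

    // eager filtering: narrow the whole set before any paging happens
    let links: Vec<Link> = match &filter {
        Some(dids) => all_links()
            .into_iter()
            .filter(|l| dids.iter().any(|d| d == &l.from_did))
            .collect(),
        None => all_links(), // from_dids absent: return everything
    };
    format!("{} links", links.len())
}
```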

@uniphil (Contributor, Author) commented Apr 26, 2025

it's running on "prod" now, but there are todos before merging the branch:

  • docs + demo pages
  • decide if this filter makes sense for the "distinct dids" endpoints as well. even the counts endpoints maybeeee?
  • i think eager filtering is actually wrong, or at least a little bit wrong

for the last one: the problem is with paging

right now i don't actually remove deleted links; i replace them with empty markers (tombstones, basically), which ensures that cursors don't shift for clients paging through results.

eagerly filtering currently also removes the tombstones, so if a verifier removes their verification for a user while you're paging, uh, something will happen. i don't have the brain right now to think through whether that means a missed verifier or a double-counted one. i think it's double? that's not the worst tbh.

the biggest upside of eager filtering is that you'd expect pages to be full. if someone has thousands of verifiers but you only have one trusted verifier, then with lazy filtering you might need to follow cursors across many nearly-empty pages before getting your answer.

if your set of trusted verifiers is small (<100) then you probably never page anyway and the current (eager) version is ideal. if you have a much bigger set, or those verifiers make huge numbers of verifications (following handle changes etc) then maybe it starts to matter idk.
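to make the page-edge issue concrete, here's a toy illustration (not the real data model) of the tombstone approach: the cursor is treated as a plain offset into the stored sequence, tombstones are skipped in the output but still counted toward the cursor, so earlier offsets never shift when a record is deleted. eager filtering instead drops non-matching records and tombstones before offsets are computed, which is what makes pages full but lets positions shift under deletion.

```rust
// toy model: a stored sequence of live records and tombstones
enum Entry {
    Live(String), // a link record (just the from_did here)
    Tombstone,    // deleted link, kept so earlier offsets never shift
}

/// Page through `entries`, skipping tombstones in the output but still
/// counting them toward the cursor, so a deletion mid-pagination can't
/// shift what later cursors point at (pages may come back sparse).
fn page(entries: &[Entry], cursor: usize, limit: usize) -> (Vec<String>, usize) {
    let mut out = Vec::new();
    let mut pos = cursor;
    while pos < entries.len() && out.len() < limit {
        if let Entry::Live(did) = &entries[pos] {
            out.push(did.clone());
        }
        pos += 1;
    }
    (out, pos) // (page contents, next cursor)
}
```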

@uniphil (Contributor, Author) commented Apr 28, 2025

tentative thought is to limit it to 100 dids.

most of the time there will be 0 or 1 records per did, so no paging is needed at a page limit of 100. in weird or adversarial cases a did might create many records... but for now i'm going to defer worrying about that too much.

i'm going to leave the cursor-paging difference as a quirk of the current implementation. the new data model (one day...) will have to handle this differently, so limiting effort here makes sense at some point. and the only buggy thing should be that you see a record more than once at a page edge if someone deletes/deactivates/whatever, not a missed record.
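a tiny hedged sketch of what that cap could look like at the request boundary; MAX_FROM_DIDS and the error shape are assumptions, not the merged code.

```rust
// assumed cap on the number of dids accepted per request
const MAX_FROM_DIDS: usize = 100;

// reject over-long did lists up front, before filtering or paging
fn validate_from_dids(dids: &[String]) -> Result<(), String> {
    if dids.len() > MAX_FROM_DIDS {
        return Err(format!(
            "too many dids: {} (max {})",
            dids.len(),
            MAX_FROM_DIDS
        ));
    }
    Ok(())
}
```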

@uniphil merged commit 6eb61e3 into main on Oct 1, 2025 (2 checks passed)
@uniphil (Contributor, Author) commented Oct 1, 2025

for the record: the initial version implemented here (from_dids=comma,separated,...) is kept since things have been built on it, but it is deprecated. introduced as deprecated. heh.

i added did=one&did=two&did=three support instead, which will be the recommended way to pass the list, since this is how atproto http/xrpc lists are passed (sketch below).
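a minimal sketch of how the repeated-param form could be accepted, assuming axum_extra's Query extractor (which, via serde_html_form, collects repeated keys into a Vec); the struct and the fallback handling here are illustrative, not the actual merged handler.

```rust
use axum_extra::extract::Query;
use serde::Deserialize;

#[derive(Deserialize)]
struct LinksQuery {
    /// repeated ?did=... values, the XRPC-style list form
    #[serde(default)]
    did: Vec<String>,
    /// deprecated comma-joined form, still accepted
    from_dids: Option<String>,
}

async fn links(Query(q): Query<LinksQuery>) -> String {
    // prefer the repeated-param form; fall back to the deprecated one
    let dids: Vec<String> = if !q.did.is_empty() {
        q.did
    } else {
        q.from_dids
            .map(|s| s.split(',').map(|part| part.to_string()).collect())
            .unwrap_or_default()
    };
    format!("filtering on {} dids", dids.len())
}
```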
