[T0-296] - add GET /v1/test-executions search endpoint#642
Conversation
I think there's some misunderstanding here:
I thought the spec was to add a /v1/search ... but there's some confusion stemming from the comment on the Jira card ... I thought that comment was about the filters, and failed to notice that the endpoint was also different in the suggestion from @rpbritton
If we are just going to add this additional /test-executions, how will that work with the sister frontend implementation? Will we have a new search page only for test-executions?
The spec asked to unify searching and enable using it for all test executions and results.
I must've missed that part of your spec because I'm not sure how that would work. Test Executions and Test Results are different things; you cannot return them as a single unified result. Regarding the frontend, yes, I think there need to be separate views because we are viewing different things. The idea I pitched in the ticket was view tabs similar to what exists on the artefacts page.
Hmm ... I probably should have spun it out into a separate spec, but I thought I had put it down as a prominent set of requirements. I should have made sure to present them better. Sorry!
My proposal was to merge them ... so that we had a unified search page. It might be complicated implementation-wise, but it would make the search page powerful. But powerful often also turns unintuitive ... After thinking about it, though, I am willing to concede unless something clicks in your minds. Perhaps when implementing the frontend, the ...
@copilot wdyt? Anyway, yes, I still believe that merging these two things is not a great idea because they are fundamentally different. Perhaps you have not seen its implementation, but TestResultFilters is used for more than just search, including attachment rules, bulk issue attachments, etc. Those would need unique handling, and I fear the complexity.
@rpbritton I've opened a new pull request, #643, to work on those changes. Once the pull request is ready, I'll request review from you.
router.include_router(permissions.router, prefix="/v1/permissions")
router.include_router(applications.router, prefix="/v1/applications")
router.include_router(docs.router)
router.include_router(test_execution_search.router, prefix="/v1/test-executions")
Why do you need another router, considering the router.include_router(test_executions.router, prefix="/v1/test-executions") line above?
The search router has a route with path "". FastAPI raises an error when the include prefix and the route path are both empty, which is what happens when including it inside __init__.py, since that router has no prefix of its own. The workaround is to register it directly in router.py with an explicit prefix, the same as relevant_links does.
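The constraint described above can be sketched with a small stdlib-only stand-in that mimics the prefix/path joining behaviour (a mock for illustration, not FastAPI itself; the class names are made up):

```python
# Minimal mock of router inclusion: a route path may be empty only when a
# non-empty prefix is supplied at include time, mirroring the FastAPI error
# described in the comment above.

class Route:
    def __init__(self, path: str):
        self.path = path


class Router:
    def __init__(self, prefix: str = ""):
        self.prefix = prefix
        self.routes: list[Route] = []

    def get(self, path: str) -> None:
        self.routes.append(Route(path))

    def include_router(self, other: "Router", prefix: str = "") -> None:
        effective = prefix or other.prefix
        for route in other.routes:
            if not effective and not route.path:
                # The final path would be "" — reject, like FastAPI does.
                raise ValueError("prefix and route path cannot both be empty")
            self.routes.append(Route(effective + route.path))


search = Router()   # no prefix of its own (like the search router)
search.get("")      # route with an empty path

app = Router()
try:
    app.include_router(search)  # "" + "" -> invalid
except ValueError as exc:
    print(exc)

# Registering with an explicit prefix works:
app.include_router(search, prefix="/v1/test-executions")
print(app.routes[-1].path)  # /v1/test-executions
```

This is why the explicit prefix in router.py sidesteps the error: the joined path is never empty.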
events: list[TestEventResponse]

class TestExecutionSearchItem(BaseModel):
Any reason to not use TestExecutionResponse?
Looking through the code, I do see that TestExecutionResponse is more of a minimal response (no artefact or build), so I do support adding a new type, but I would rather call this model TestExecutionFullResponse and have it mimic the same fields as TestExecutionResponse (maybe even inherit from it), and then additionally add the extra fields.
Good call. Then the plan should be to replace TestExecutionSearchItem with a TestExecutionFullResponse that inherits from TestExecutionResponse and adds artefact: ArtefactResponse, artefact_build: ArtefactBuildMinimalResponse, and test_results: list[TestResultResponse]. Then TestExecutionSearchResponse just wraps list[TestExecutionFullResponse] with pagination fields.
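A minimal sketch of the hierarchy proposed above, assuming Pydantic v2; the field sets here are placeholders for illustration, not the project's real schemas:

```python
# Hypothetical minimal versions of the models discussed above.
from pydantic import BaseModel, ConfigDict


class TestExecutionResponse(BaseModel):
    model_config = ConfigDict(from_attributes=True)
    id: int
    status: str


class ArtefactResponse(BaseModel):
    id: int
    name: str


class ArtefactBuildMinimalResponse(BaseModel):
    id: int


class TestResultResponse(BaseModel):
    id: int
    name: str


class TestExecutionFullResponse(TestExecutionResponse):
    # Inherits all TestExecutionResponse fields, adds the extra context.
    artefact: ArtefactResponse
    artefact_build: ArtefactBuildMinimalResponse
    test_results: list[TestResultResponse] = []


class TestExecutionSearchResponse(BaseModel):
    count: int
    test_executions: list[TestExecutionFullResponse]


full = TestExecutionFullResponse(
    id=1,
    status="PASSED",
    artefact=ArtefactResponse(id=10, name="core22"),
    artefact_build=ArtefactBuildMinimalResponse(id=100),
)
print(full.test_results)  # []
```

Inheriting keeps the two models in sync automatically: any field later added to TestExecutionResponse shows up in the full response for free.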
model_config = ConfigDict(from_attributes=True)

test_execution: TestExecutionResponse
test_result: TestResultResponse | None = None
Why is there only a single test result?
After thinking about it more, switching to test_results: list[TestResultResponse] on each execution item makes more sense like you suggested. One item per execution rather than one item per test result. The search.py logic would then query distinct TestExecution IDs matching the filters, load each execution with its filtered test results eagerly, and return one TestExecutionFullResponse per execution. The test_result=none case would just return executions with test_results: [].
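The per-execution grouping described above could be sketched like this, assuming the filtered query yields (execution_id, result) pairs plus a separate list of executions matched by test_result=none (all names are illustrative, not the real search.py code):

```python
# Group filtered test results by execution ID, emitting one item per
# execution; executions matched by test_result=none get an empty list.
from collections import defaultdict

rows = [
    (1, "result-a"),
    (1, "result-b"),
    (2, "result-c"),
]
executions_without_results = [3]  # matched by test_result=none

grouped: dict[int, list[str]] = defaultdict(list)
for execution_id, result in rows:
    grouped[execution_id].append(result)

items = [
    {"execution_id": eid, "test_results": grouped.get(eid, [])}
    for eid in sorted(set(grouped) | set(executions_without_results))
]
print(items)
```

The key property is that the item count equals the number of distinct matching executions, not the number of test results.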
I'm going OOO for a couple of weeks, so I won't be able to continue on this PR for a while. The review comments are valid and I've replied to each one inline with guidance on how to address them.
Description
This PR adds a new GET /v1/test-executions endpoint that searches across test executions with optional test result context.

Resolved issues
TO-296
Documentation
The endpoint uses a test_result query param that mirrors the existing issues filter pattern:

- test_result=any: returns executions that have test results (one item per test result)
- test_result=none: returns executions with no test results (the test_result field is null in the response)
- test_result={id}: filter by specific test result ID(s)

All existing filters from GET /v1/test-results are supported (families, environments, artefacts, test_execution_statuses, execution_is_latest, rerun_is_requested, assignee_ids, etc.). For the none path, test-result-level filters (test_cases, template_ids, issues, test_result_statuses, from_date, until_date) are not applicable and are ignored.

Web service API changes
Tests
Added full test coverage for the new search endpoint.