
[T0-296] - add GET /v1/test-executions search endpoint#642

Open
uyarahmet wants to merge 3 commits into main from TO-296/add-search-endpoint-for-test-executions

Conversation

@uyarahmet
Contributor

@uyarahmet uyarahmet commented Mar 12, 2026

Description

This PR adds a new GET /v1/test-executions endpoint that searches across test executions with optional test result context.

Resolved issues

TO-296

Documentation

The endpoint uses a `test_result` query param that mirrors the existing issues filter pattern:

  • `test_result=any`: returns executions that have test results (one item per test result)
  • `test_result=none`: returns executions with no test results (the `test_result` field is null in the response)
  • `test_result={id}`: filter by specific test result ID(s)
  • omitted: defaults to the `any` behaviour

All existing filters from GET /v1/test-results are supported (families, environments, artefacts, test_execution_statuses, execution_is_latest, rerun_is_requested, assignee_ids, etc.). For the none path, test-result-level filters (test_cases, template_ids, issues, test_result_statuses, from_date, until_date) are not applicable and are ignored.

Web service API changes

Tests

Added full test coverage for the new search endpoint.

@uyarahmet uyarahmet marked this pull request as ready for review March 12, 2026 11:44
@uyarahmet uyarahmet requested review from motjuste and rpbritton March 12, 2026 11:44
@uyarahmet uyarahmet changed the title add GET /v1/test-executions search endpoint [T0-296] - add GET /v1/test-executions search endpoint Mar 12, 2026
Contributor

@motjuste motjuste left a comment


I think there's some misunderstanding here:

I thought the spec was to add a /v1/search ... but there's some confusion stemming from the comment on the Jira card ... I thought that comment was about the filters, and failed to notice that the endpoint was also different in the suggestion from @rpbritton

If we are just going to add this additional /test-executions, how will that work with the sister frontend implementation? Will we have a new search page only for test-executions?

The spec asked to unify searching and enable using it for all test executions and results.

@rpbritton
Collaborator

> I think there's some misunderstanding here:
>
> I thought the spec was to add a /v1/search ... but there's some confusion stemming from the comment on the Jira card ... I thought that comment was about the filters, and failed to notice that the endpoint was also different in the suggestion from @rpbritton
>
> If we are just going to add this additional /test-executions, how will that work with the sister frontend implementation? Will we have a new search page only for test-executions?
>
> The spec asked to unify searching and enable using it for all test executions and results.

I must've missed that part of your spec because I'm not sure how that would work. Test Executions and Test Results are different things; you cannot return a list[TestExecution | TestResult]. They also have their own unique sets of filters, and unifying them would be incredibly difficult. Separate endpoints add a clear separation of concerns.

Regarding the frontend, yes, I think there need to be separate views because we are viewing different things. The idea I pitched in the ticket was view tabs similar to what exist on the artefacts page.

@motjuste
Contributor

> I must've missed that part of your spec because I'm not sure how that would work. Test Executions and Test Results are different things; you cannot return a list[TestExecution | TestResult]. They also have their own unique sets of filters, and unifying them would be incredibly difficult. Separate endpoints add a clear separation of concerns.

Hmm ... I probably should have spun it out into a separate spec, but I thought I had put it down as a prominent set of requirements. I should have presented them better. Sorry!

Don't Test Executions --contain--> 0+ Test Results? That probably biased me to think that the results from the /search endpoint could be more like list[TestResult | TestExecutionWithoutResult]. I can understand that merging the filters' functionalities may be complicated, but in the end, we will have just one search endpoint to maintain and refine.

> Regarding the frontend, yes, I think there need to be separate views because we are viewing different things. The idea I pitched in the ticket was view tabs similar to what exist on the artefacts page.

My proposal was to merge them ... so that we had a unified search page. It might be complicated implementation-wise, but would make the search page powerful.

But powerful often also turns unintuitive ...

After thinking about it though ... I am willing to concede unless something clicks in your minds.

Perhaps when implementing the frontend, the Search nav item becomes a drop-down with options to search Test Results or Test Executions, going to respective views.

@motjuste motjuste self-requested a review March 12, 2026 16:11
@rpbritton
Collaborator

>> I must've missed that part of your spec because I'm not sure how that would work. Test Executions and Test Results are different things; you cannot return a list[TestExecution | TestResult]. They also have their own unique sets of filters, and unifying them would be incredibly difficult. Separate endpoints add a clear separation of concerns.
>
> Hmm ... I probably should have spun it out into a separate spec, but I thought I had put it down as a prominent set of requirements. I should have presented them better. Sorry!
>
> Don't Test Executions --contain--> 0+ Test Results? That probably biased me to think that the results from the /search endpoint could be more like list[TestResult | TestExecutionWithoutResult]. I can understand that merging the filters' functionalities may be complicated, but in the end, we will have just one search endpoint to maintain and refine.
>
>> Regarding the frontend, yes, I think there need to be separate views because we are viewing different things. The idea I pitched in the ticket was view tabs similar to what exist on the artefacts page.
>
> My proposal was to merge them ... so that we had a unified search page. It might be complicated implementation-wise, but would make the search page powerful.
>
> But powerful often also turns unintuitive ...
>
> After thinking about it though ... I am willing to concede unless something clicks in your minds.
>
> Perhaps when implementing the frontend, the Search nav item becomes a drop-down with options to search Test Results or Test Executions, going to respective views.

@copilot wdyt?

Anyway, yes, I still believe that merging these two things is not a great idea because they are fundamentally different. Perhaps you have not seen the implementation, but `TestResultFilters` is used for more than just search, including attachment rules, bulk issue attachments, etc. Those would need unique handling, and I fear the complexity.

Contributor

Copilot AI commented Mar 12, 2026

@rpbritton I've opened a new pull request, #643, to work on those changes. Once the pull request is ready, I'll request review from you.

```python
router.include_router(permissions.router, prefix="/v1/permissions")
router.include_router(applications.router, prefix="/v1/applications")
router.include_router(docs.router)
router.include_router(test_execution_search.router, prefix="/v1/test-executions")
```
Collaborator


Why do you need another router, considering `router.include_router(test_executions.router, prefix="/v1/test-executions")` above?

Contributor Author


The search router has a route with path `""`. FastAPI raises an error when both the include prefix and the route path are empty simultaneously, which happens when including it inside `__init__.py`, since that router has no prefix of its own. The workaround is registering it directly in `router.py` with an explicit prefix, the same as `relevant_links` does.
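The constraint can be sketched with a small stand-in for the prefix/path combination (plain Python, not FastAPI itself; the check is meant to mirror FastAPI's refusal to register a route whose combined prefix and path are empty):

```python
def combine(prefix: str, path: str) -> str:
    """Combine an include prefix with a route path.

    Stand-in illustrating the workaround above: a route with path ""
    is fine as long as the router is included with a non-empty prefix,
    but prefix and path must not both be empty.
    """
    if not prefix and not path:
        raise ValueError("prefix and path cannot both be empty")
    return prefix + path
```

With an explicit prefix, `combine("/v1/test-executions", "")` yields `"/v1/test-executions"`, while the all-empty case is rejected.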

```python
events: list[TestEventResponse]


class TestExecutionSearchItem(BaseModel):
```
Collaborator


Any reason not to use `TestExecutionResponse`?

Looking through the code, I see that `TestExecutionResponse` is more of a minimal response (no artefact or build), so I do support adding a new type, but I would rather call this model `TestExecutionFullResponse` and have it mimic the same fields as `TestExecutionResponse` (maybe even inherit from it), and then additionally add the extra fields.

Contributor Author


Good call. Then the plan should be to replace `TestExecutionSearchItem` with a `TestExecutionFullResponse` that inherits from `TestExecutionResponse` and adds `artefact: ArtefactResponse`, `artefact_build: ArtefactBuildMinimalResponse`, and `test_results: list[TestResultResponse]`. Then `TestExecutionSearchResponse` just wraps `list[TestExecutionFullResponse]` with pagination fields.
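The inheritance plan can be sketched with plain dataclasses standing in for the Pydantic models (all field shapes here are illustrative placeholders, not the real response schemas):

```python
from dataclasses import dataclass


@dataclass
class TestExecutionResponse:
    """Stand-in for the existing minimal execution response."""
    id: int
    status: str


@dataclass
class ArtefactResponse:
    """Illustrative placeholder."""
    name: str


@dataclass
class ArtefactBuildMinimalResponse:
    """Illustrative placeholder."""
    revision: int


@dataclass
class TestResultResponse:
    """Illustrative placeholder."""
    name: str


@dataclass
class TestExecutionFullResponse(TestExecutionResponse):
    # inherits the TestExecutionResponse fields, adds the richer context
    artefact: ArtefactResponse
    artefact_build: ArtefactBuildMinimalResponse
    test_results: list[TestResultResponse]
```

Every `TestExecutionFullResponse` is then also a `TestExecutionResponse`, so existing consumers of the minimal fields keep working.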

```python
model_config = ConfigDict(from_attributes=True)

test_execution: TestExecutionResponse
test_result: TestResultResponse | None = None
```
Collaborator


Why is there only a single test result?

Contributor Author


After thinking about it more, switching to `test_results: list[TestResultResponse]` on each execution item makes more sense, as you suggested: one item per execution rather than one item per test result. The `search.py` logic would then query distinct `TestExecution` IDs matching the filters, load each execution with its filtered test results eagerly, and return one `TestExecutionFullResponse` per execution. The `test_result=none` case would just return executions with `test_results: []`.
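The regrouping step can be sketched in plain Python (the row shape is an assumption; in practice this folding would happen via the ORM's eager loading rather than manually):

```python
def group_results_by_execution(rows):
    """Fold (execution_id, test_result) rows into one entry per execution.

    Mirrors the one-item-per-execution plan above: a test_result of None
    marks an execution with no results, which ends up with an empty list.
    Illustrative only.
    """
    grouped = {}
    for execution_id, test_result in rows:
        items = grouped.setdefault(execution_id, [])
        if test_result is not None:
            items.append(test_result)
    return grouped
```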

@uyarahmet
Contributor Author

I'm going OOO for a couple of weeks so I won't be able to continue on this PR for a while. The review comments are valid and I've replied to each one inline with guidance on how to address them.
