Add support for coverage #775


Merged: 1 commit merged into enthought:main on Jun 3, 2025
Conversation

moi15moi
Contributor

@moi15moi moi15moi commented Jan 27, 2025

I opened this PR as a draft because I still haven't implemented everything said in #772.

Also, to run the tests I now use `coverage run -m unittest discover -v -s comtypes\test -t comtypes\test` instead of `python -m unittest discover -v -s ./comtypes/test -t comtypes\test`, but test_eval consistently fails on Python 3.10 with a large memory leak:

FAIL: test_eval (test_comserver.TestInproc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\a\comtypes\comtypes\comtypes\test\test_comserver.py", line 99, in test_eval
    self._find_memleak(func)
  File "D:\a\comtypes\comtypes\comtypes\test\test_comserver.py", line 42, in _find_memleak
    self.assertFalse(bytes, f"Leaks {bytes} bytes")  # type: ignore
AssertionError: 5591040 is not false : Leaks 5591040 bytes

Do you have any idea why it happens?
We might be hitting an element from "Things that don't work".
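For context, `_find_memleak` (judging from the failure message) measures how many bytes of process memory a function leaks across repeated calls. As a rough cross-platform illustration of the same idea — an assumption, not the actual comtypes helper, which inspects Windows process memory counters — a leak probe can be sketched with `tracemalloc`:

```python
import gc
import tracemalloc

def find_memleak(func, iterations=10):
    """Rough leak probe (hypothetical helper, not the comtypes one):
    run func repeatedly and return the net growth, in bytes, of memory
    allocated by Python between the first and last iteration."""
    tracemalloc.start()
    func()  # warm-up: let caches and interned objects settle
    gc.collect()
    base, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        func()
    gc.collect()
    now, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return now - base

# A function that only creates temporary objects should report
# growth near zero.
growth = find_memleak(lambda: [i * i for i in range(1000)])
```

The real test asserts that the measured growth is zero, which is exactly what fails with the 5591040-byte figure above.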

@junkmd junkmd added tests enhance or fix tests help wanted Extra attention is needed labels Jan 27, 2025
@junkmd
Collaborator

junkmd commented Jan 27, 2025

I am investigating and re-running the failed jobs in the CI pipeline.

Even before this PR, memory leaks would occasionally occur, but re-running the jobs would result in success, so I had assumed these were one-time glitches.
However, with the changes in this PR, the leaks have become highly reproducible.

This project employs various kinds of metaprogramming and is also related to threading.

It will likely require thorough investigation.

@moi15moi
Contributor Author

Why do you keep running the CI? There have been 111 attempts, and they all failed.
It's pretty clear that there is a problem. ^^'

@junkmd
Collaborator

junkmd commented Jan 30, 2025

In between my main work and other investigations, I was repeatedly re-running the failed jobs to see if they would pass.

As you pointed out, there are clearly problems.

One key observation is that while failures occur depending on whether third-party dependencies are included or based on the system’s bitness, at least one test run for each Python version has passed. This means that every configuration has the potential to pass — but also the potential to fail.

We should continue investigating the root cause, but to prevent memory leaks and stabilize the tests, we may need to escalate this to CPython, or open an issue in coverage-py.

@junkmd
Collaborator

junkmd commented Jan 30, 2025

The coverage measurement of the test codebase might be affecting the results.
Try setting `omit = ["comtypes/gen/*", "comtypes/test/*"]` to measure coverage only for the production codebase.
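For reference, that exclusion would go in a `.coveragerc` (or the equivalent `[tool.coverage.run]` table in `pyproject.toml`); the layout below follows coverage.py's standard configuration format and is only a sketch:

```ini
# .coveragerc — measure only the production codebase
[run]
omit =
    comtypes/gen/*
    comtypes/test/*
```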

@moi15moi
Contributor Author

The documentation recommends including the test code in the coverage measurement:
[screenshot of the documentation excerpt]

@junkmd
Collaborator

junkmd commented Jan 30, 2025

The previous suggestion was meant to verify whether test code coverage measurement affects memory release.
The primary intent is to experiment with different conditions to investigate the root cause.

I agree that test code should be included in the coverage measurement. However, we first need to determine what is actually affecting the results.

Possibly, it’s the coverage measurement itself.
Possibly, the memory leak test for the COM server is a false positive.

@junkmd
Collaborator

junkmd commented Jan 31, 2025

Thank you for the verification commit.

Since it fails no matter how many times I re-run it, it seems that test code coverage measurement is not the root cause of this issue.

Since the eval method calls Python’s eval function, it is possible that this test double implementation itself is inherently prone to memory leaks.

```python
def ITestComServer_eval(self, this, what, presult):
    self.Fire_Event(0, "EvalStarted", what)
    presult[0].value = eval(what)
    self.Fire_Event(0, "EvalCompleted", what, presult[0].value)
    return S_OK
```

@junkmd
Collaborator

junkmd commented Jan 31, 2025

Generating a tuple might be the issue.

```python
def test_eval(self):
    obj = self.create_object()

    def func():
        return obj.eval("(1, 2, 3)")

    self.assertEqual(func(), (1, 2, 3))  # type: ignore
    self._find_memleak(func)
```

How about trying "1 + 2" instead?
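For what it's worth, each `eval` call compiles the string into a fresh code object, so the constant-folded tuple is a brand-new object on every call, whereas `"1 + 2"` yields an interned small int. A small sketch of that CPython implementation detail, shown only to illustrate how the two expressions differ in allocation behavior:

```python
# Each eval() call compiles its own code object; the constant-folded
# tuple inside it is therefore a distinct object on every call.
a = eval("(1, 2, 3)")
b = eval("(1, 2, 3)")
distinct_tuples = a is not b  # CPython: new tuple object each time

# Small integers are interned, so the arithmetic result is shared.
shared_int = eval("1 + 2") is eval("1 + 2")
print(distinct_tuples, shared_int)
```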

@junkmd
Collaborator

junkmd commented Jan 31, 2025

I confirmed that changing from "(1, 2, 3)" to "1 + 2" still resulted in some job failures on Python 3.12.

The failing test in a job was the same as before: test_comserver.TestInproc_win32com.test_eval.

However, what concerns me even more is that a job where all tests passed on Python 3.12 took 818.020s.

In contrast, a job where all tests passed on Python 3.11 only took 83.091s, which means the execution time has increased nearly tenfold.

Looking at a canceled job on Python 3.12, I noticed that test_client.Test_GetModule.test_the_name_of_the_enum_member_and_the_coclass_are_duplicated was canceled.
This test generates the Python wrapper module for MSHTML (_3050F1C5_98B5_11CF_BB82_00AA00BDCE0B_0_4_0.py).

As mentioned in #524 (comment), the wrapper module for MSHTML is massive.
Could it be that, despite excluding it from coverage measurement with `omit = ["comtypes/gen/*"]`, coverage is still attempting to measure it, and that's why the execution time is so long?
It's puzzling that this issue does not occur with Python versions other than 3.12.

Currently, the latest version (7.6.10) of coverage is installed on Python 3.12 jobs.
Perhaps we should try explicitly specifying another version.
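Pinning could be done in the CI install step; the exact version below is only an assumption, chosen to step back from 7.6.10 for bisection:

```
# requirements fragment (version chosen arbitrarily for bisection)
coverage==7.6.1
```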

@junkmd
Collaborator

junkmd commented Feb 12, 2025

I reported the Python 3.12 issue to the coverage.py community: nedbat/coveragepy#1862 (comment)

@junkmd
Collaborator

junkmd commented Feb 15, 2025

These issues might be related:
python/cpython#127953
python/cpython#107674

@junkmd
Collaborator

junkmd commented Apr 11, 2025

Thank you for pinging me about this PR.

Since python/cpython#107841 has been merged, the slowdown issue with coverage might be resolved in the latest Python 3.12.10.
However, it seems the default Python 3.12 version in GHA is still 3.12.9.

Once the default Python revision used in GHA is updated, I believe this PR should be able to be merged without any issues.
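Alternatively, rather than waiting for the GHA default to move, the workflow could pin the patch release explicitly once actions/setup-python makes it available (a sketch; the step name and layout are assumptions):

```yaml
- name: Set up Python
  uses: actions/setup-python@v5
  with:
    # Pin the exact patch release that includes the cpython#107841 fix.
    python-version: "3.12.10"
```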

@moi15moi
Contributor Author

I still need to implement the codecov step to fix #772
I just rebased to see if the issue with the leak was fixed.

@codecov-commenter

Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.

Thanks for integrating Codecov - We've got you covered ☂️

We need to use the `network_filter` param because of this issue: codecov/codecov-action#1808
@moi15moi
Contributor Author

I just rebased and, from what I can see, it seems to work properly, but the Python 3.12 CI is still slow (though it doesn't leak anymore).

You can see the result of the coverage here: https://app.codecov.io/github/enthought/comtypes/tree/moi15moi%2Fcomtypes%3AAdd-coverage
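The upload step the commit message refers to looks roughly like this (a sketch; the action version and filter value are assumptions based on the `network_filter` workaround discussed in codecov/codecov-action#1808):

```yaml
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5
  with:
    # Restrict Codecov's path matching to files under the repository
    # root (workaround for codecov/codecov-action#1808).
    network_filter: comtypes/
```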

@moi15moi moi15moi marked this pull request as ready for review May 26, 2025 16:25
@junkmd
Collaborator

junkmd commented May 31, 2025

@moi15moi
Thank you for your continued contributions.
My concern is the handling of secrets.CODECOV_TOKEN.
My understanding is that for public repositories, no special token is required.
I'm not sure whether the coverage can be uploaded to Codecov because this token exists even though you're opening the pull request from your fork, or whether it's because you're using Enthought's token.
If there are any similar precedents or documentation regarding this kind of PR, please let me know.

@dpinte
Please share your thoughts as an Enthought member.
Is it okay to use secrets.CODECOV_TOKEN in the comtypes repository?
Or is this token currently not set, and the coverage is being uploaded simply due to the combination of Codecov and a public repository?

@moi15moi
Contributor Author

I understand your confusion.

As you pointed out, it's possible to upload the coverage report to Codecov without a token, as documented here. I'm not sure what your preference is. Would you like to use a token or tokenless uploads? Note that you can configure your preference here: https://app.codecov.io/account/github/enthought/org-upload-token

This PR has been run twice: once here in this repository and once in my fork (where I also have a PR open).

Let me know how you'd prefer to proceed regarding the token usage.

@dpinte
Member

dpinte commented Jun 2, 2025

@junkmd

Please share your thoughts as an Enthought member.
Is it okay to use secrets.CODECOV_TOKEN in the comtypes repository?
Or is this token currently not set, and the coverage is being uploaded simply due to the combination of Codecov and a public repository?

I confirm there are no secrets defined for the Codecov token and that we're uploading from a public repository. Happy to add a token if it can be useful.

@junkmd junkmd added this to the 1.4.12 milestone Jun 3, 2025
@junkmd
Collaborator

junkmd commented Jun 3, 2025

I understand that with the current workflow file, even without secrets defined for the codecov token, uploading coverage from this public repository isn't a problem.
Since the scope of this PR is to enable coverage measurement, I'd like to merge this.

@junkmd
Collaborator

junkmd commented Jun 3, 2025

@dpinte

Having a codecov token would likely prevent future maintainers from misunderstanding and also be helpful for operational aspects in the future.
So, it would be great if you could set up the token.

@junkmd junkmd merged commit ab25f02 into enthought:main Jun 3, 2025
50 checks passed
@junkmd
Collaborator

junkmd commented Jun 3, 2025

@dpinte

It seems a token is required to upload coverage for protected branches like main.

error - 2025-06-03 08:07:29,281 -- Upload failed: {"message":"Token required because branch is protected"}

https://github.com/enthought/comtypes/actions/runs/15411905408/job/43365628109#step:7:699

Please set up the token.
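Once the secret exists, the upload step can pass it explicitly (a sketch; the action version is an assumption):

```yaml
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5
  with:
    # Required for uploads from protected branches such as main.
    token: ${{ secrets.CODECOV_TOKEN }}
```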

@moi15moi moi15moi deleted the Add-coverage branch June 3, 2025 15:31
@dpinte
Member

dpinte commented Jun 4, 2025

@junkmd a CODECOV_TOKEN secret has been added to the repository. You can use it now for uploading reports

@junkmd
Collaborator

junkmd commented Jun 4, 2025

@dpinte

Thanks for adding the secret to this repository. I re-ran the job and confirmed that the coverage was successfully uploaded.

On a related note, I noticed that the Coverage badge in the README doesn't display the coverage percentage correctly. I plan to swap it out for a working Coverage badge when I address #840.
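For reference, a shields.io Codecov badge for the default branch would look something like this (the URL pattern is an assumption based on shields.io's Codecov endpoint):

```markdown
[![Coverage](https://img.shields.io/codecov/c/github/enthought/comtypes)](https://app.codecov.io/github/enthought/comtypes)
```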

@moi15moi
Contributor Author

moi15moi commented Jun 4, 2025

@junkmd It happens because the main branch hasn't been set as the default branch on Codecov (once the default branch is set, the link I wrote in the README will work).

See this note:
[screenshot: Codecov note about setting the default branch]

@junkmd
Collaborator

junkmd commented Jun 4, 2025

@dpinte

Can you set codecov's default branch to main?

@dpinte
Member

dpinte commented Jun 5, 2025

Can you set codecov's default branch to main?

Done

@junkmd
Collaborator

junkmd commented Jun 5, 2025

I confirmed that the coverage percentage is now displayed on the shields.io badge.

Here's the difference in badge rendering results before and after the default branch setting:
[badge before setting the default branch: no coverage percentage shown]

[badge after setting the default branch: coverage percentage shown]

@moi15moi moi15moi mentioned this pull request Jun 13, 2025