Add support for coverage #775
Conversation
I am investigating and re-running the failed jobs in the CI pipeline. Even before this PR, memory leaks would occasionally occur, but re-running the jobs would result in success, so I had assumed these were one-time glitches. This project employs various kinds of metaprogramming and also involves threading, so it will likely require thorough investigation.
Why do you keep running the CI? There have been 111 attempts, and they all failed.
In between my main work and other investigations, I was repeatedly re-running the failed jobs to see if they would pass. As you pointed out, there are clearly problems. One key observation is that while failures vary with whether third-party dependencies are included and with the system's bitness, at least one test run for each Python version has passed. This means that every configuration has the potential to pass, but also the potential to fail. We should continue investigating the root cause, but to prevent memory leaks and stabilize the tests, we may need to escalate this to …
The coverage measurement of the test codebase might be affecting the results.
In the documentation, they recommend including the test code in the coverage measurement.
The previous suggestion was meant to verify whether test code coverage measurement affects memory release. I agree that test code should be included in the coverage measurement. However, we first need to determine what is actually affecting the results. Possibly, it’s the coverage measurement itself.
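As a minimal sketch of that recommendation: measuring the whole package also picks up the test modules, since they live under `comtypes/test`. The `.coveragerc` below is an assumption for illustration, not the project's actual configuration:

```ini
# Hypothetical .coveragerc; not taken from this PR.
[run]
# comtypes/test lives inside the package, so measuring the package
# covers the test code as well.
source = comtypes
```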
Thank you for the verification commit. Since it fails no matter how many times I re-run it, it seems that test code coverage measurement is not the root cause of this issue. Since the referenced code in comtypes/comtypes/test/TestComServer.py (lines 104 to 108 in f1e79a9) …
Generating a tuple might be the issue (see comtypes/comtypes/test/test_comserver.py, lines 93 to 100 in f1e79a9).
How about trying "1 + 2" instead?
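For illustration, a rough sketch of what the two variants amount to; the snippets referenced above are not reproduced here, so the class, the helper, and the exact expressions below are hypothetical:

```python
import unittest


class EvalSketchTest(unittest.TestCase):
    """Hypothetical stand-in for the Eval test in comtypes/test/test_comserver.py."""

    def eval_on_server(self, expression):
        # Stand-in for calling the COM server's Eval method; the real server
        # (TestComServer.py) is assumed to evaluate the string on its side.
        return eval(expression)

    def test_eval_tuple(self):
        # Tuple-building expression, similar in spirit to the original test.
        self.assertEqual(self.eval_on_server("(1, 2) + (3, 4)"), (1, 2, 3, 4))

    def test_eval_scalar(self):
        # The suggested "1 + 2" variant avoids creating a tuple, which helps
        # isolate whether tuple creation/marshaling is what leaks.
        self.assertEqual(self.eval_on_server("1 + 2"), 3)


if __name__ == "__main__":
    unittest.main()
```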
I confirmed that changing from …

The failing test in a job was the same as before: …

However, what concerns me even more is that a job where all tests passed on Python 3.12 took 818.020s. In contrast, a job where all tests passed on Python 3.11 took only 83.091s, which means the execution time has increased nearly tenfold.

Looking at a canceled job on Python 3.12, I noticed that …

As mentioned in #524 (comment), the wrapper module for …

Currently, the latest version (7.6.10) of …
Force-pushed from f2d70dc to 6e405dd.
I reported the Python 3.12 issue to …
These issues might be related: …
Thank you for pinging me about this PR. Since python/cpython#107841 has been merged, the slowdown issue with coverage might be resolved in the latest Python 3.12.10. Once the default Python revision used in GHA is updated, I believe this PR can be merged without any issues.
I still need to implement the codecov step to fix #772
Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. Thanks for integrating Codecov - we've got you covered ☂️
We need to use the `network_filter` param because of this issue: codecov/codecov-action#1808
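As a rough sketch of where that parameter would go, assuming a standard Codecov upload step (the step name, action version, and filter value are assumptions, not taken from the actual workflow file):

```yaml
# Hypothetical upload step; the real workflow may differ.
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5
  with:
    # Restrict which repository files Codecov matches against the report,
    # to work around codecov/codecov-action#1808.
    network_filter: comtypes
```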
I just rebased and from what I can see, it seems to work properly, but the Python 3.12 CI is still slow (though it doesn't leak anymore). You can see the coverage result here: https://app.codecov.io/github/enthought/comtypes/tree/moi15moi%2Fcomtypes%3AAdd-coverage
@moi15moi @dpinte
I understand your confusion. As you pointed out, it's possible to upload the coverage report to Codecov without a token, as documented here. I'm not sure what your preference is. Would you like to use a token or tokenless uploads? Note that you can configure your preference here: https://app.codecov.io/account/github/enthought/org-upload-token

This PR has been run twice: once here in this repository and once in my fork (where I also have a PR open).

Let me know how you'd prefer to proceed regarding the token usage.
I confirm there are no secrets defined for the Codecov token and that we're uploading from a public repository. Happy to add a token if it can be useful.
I understand that with the current workflow file, even without secrets defined for the Codecov token, uploading coverage from this public repository isn't a problem.

Having a Codecov token would likely prevent misunderstandings among future maintainers and would also help with operations going forward.
It seems a token is required to upload coverage for protected branches like …

Please set up the token.
@junkmd a CODECOV_TOKEN secret has been added to the repository. You can use it now for uploading reports.
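With the secret in place, a minimal sketch of how the upload step might consume it (step layout assumed, as in the earlier sketch):

```yaml
# Hypothetical: pass the repository's CODECOV_TOKEN secret to the action.
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
```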
Can you set Codecov's default branch to …?
Done
I opened this PR as a draft because I still haven't implemented everything discussed in #772.
Also, to run the tests, I now use `coverage run -m unittest discover -v -s comtypes\test -t comtypes\test` instead of `python -m unittest discover -v -s ./comtypes/test -t comtypes\test`, but `test_eval` consistently fails on Python 3.10 with a large memory leak. Do you have any idea why this happens?
We might be hitting an item from “Things that don’t work”.