So I’ve been wanting to integrate code coverage into my tests, and for that I’m using the coverage API.
I have an exit handler which gets triggered on the SIGINT, SIGTERM and SIGABRT signals:
import logging
import signal

import config

logger = logging.getLogger(__name__)


class Exit:
    def __init__(self, cov):
        self.cov = cov
        signal.signal(signal.SIGTERM, self.exit_cleanly)
        signal.signal(signal.SIGINT, self.exit_cleanly)
        signal.signal(signal.SIGABRT, self.exit_cleanly)

    def exit_cleanly(self, signal_number, stackframe):
        logger.info("Received exit signal. Exiting...")
        if config.coverage:
            logger.info("Saving coverage files")
            try:
                self.cov.stop()
                self.cov.save()
                logger.info("Successfully saved coverage")
            except Exception as e:
                logger.critical("Exception occurred while saving coverage:")
                logger.critical(e)
        quit(0)
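On a POSIX system, the handler wiring above can be sanity-checked outside Docker by having the process send a signal to itself (a minimal sketch; `fired` and `handler` are illustrative names, not part of the real code, and the stand-in handler only records the signal instead of saving coverage):

```python
import os
import signal

fired = []

def handler(signum, frame):
    # Stand-in for exit_cleanly(); real code would stop/save coverage here.
    fired.append(signum)

signal.signal(signal.SIGTERM, handler)
os.kill(os.getpid(), signal.SIGTERM)  # roughly what `docker stop` delivers to PID 1
print(fired == [signal.SIGTERM])
```

If this prints `True` locally but the container still exits without running the handler, the problem is in how Docker delivers signals to the process, not in the handler itself.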
And here’s what I do at the top of my main program:
from time import time  # time() is used below for the timestamped file name

if config.coverage:
    from coverage import Coverage

    coveragedatafile = ".coverage-" + str(int(time()))
    cov = Coverage(data_file=f"{config.datadir_server}/coverage/{coveragedatafile}")
    cov.start()
else:
    cov = ""

exit_handler = Exit(cov)
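As an aside, since quit(0) raises SystemExit, stdlib atexit hooks still run on that path, so registering the save as an atexit callback could serve as a fallback alongside the signal handler. A runnable sketch of that mechanism (the child script and the "coverage saved" marker are purely illustrative stand-ins for cov.stop()/cov.save()):

```python
import subprocess
import sys

# Child script simulating the pattern: register a cleanup hook, then exit
# normally via SystemExit (which is what quit(0) raises).
child = (
    "import atexit\n"
    "atexit.register(lambda: print('coverage saved'))\n"
    "raise SystemExit(0)\n"
)
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout)
```

Note that atexit hooks do not run when the process is killed by an unhandled signal, which is why the signal handler is still needed in the Docker case.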
It all works great when running outside of Docker, but when running inside Docker it stops without saving my files. I have added a volume, and when using open() I can also write to a file and it will appear in the folder.
Now here comes the reason I edited the question. With the code I had before the edit, the logs said it was about to save, and then just stopped. But now it even says that it successfully saved the files, yet they’re nowhere to be found.
This is what it says in the logs on exit.
matrix-notifier-tester exited with code 0
matrix-notifier-bot | 03-10 14:40 Received exit signal. Exiting...
matrix-notifier-bot | 03-10 14:40 Successfully saved
matrix-notifier-bot exited with code 0
matrix-notifier-server exited with code 0
This is my docker-compose:
version: "3.3"
services:
  server:
    container_name: "matrix-notifier-server"
    build:
      context: ../server
      dockerfile: Dockerfile
    ports:
      - "5505:5505"
    env_file:
      - .test-env
    volumes:
      - ./data:/data
  bot:
    container_name: "matrix-notifier-bot"
    build:
      context: ../bot
      dockerfile: Dockerfile
    depends_on:
      - server
    env_file:
      - .test-env
    volumes:
      - ./data:/data
  tester:
    container_name: "matrix-notifier-tester"
    build:
      context: ../tests
      dockerfile: Dockerfile
    depends_on:
      - server
      - bot
    env_file:
      - .test-env
    volumes:
      - ./data:/data
At first I thought it had something to do with the fact that I’m using --abort-on-container-exit, but I tried running without it and it didn’t seem to change anything.
I’ve spent a good amount of time looking for people with similar problems but couldn’t find any.
Edit 2:
It’s been a day and I still don’t get it.
I had the thought that maybe the coverage module either doesn’t have the privileges to write and for some reason isn’t reporting an error, or it doesn’t know the right path for whatever reason, but that’s not the case.
The coverage folder gets created on start by the coverage module, so it must have both write access and the right path to save to.
This seems like a really weird problem.
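The privileges hypothesis can also be tested directly with a small write probe against the target directory (a sketch; `can_write` and the probe file name are my own, and in the real setup the argument would be the configured coverage directory rather than a temp dir):

```python
import os
import tempfile

def can_write(directory):
    # Hypothetical helper: try creating and removing a probe file.
    probe = os.path.join(directory, ".write-probe")
    try:
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
        return True
    except OSError:
        return False

with tempfile.TemporaryDirectory() as d:
    print(can_write(d))
```

If the probe succeeds inside the container but the coverage files still seem absent, the write itself is not the problem.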
Edit 3:
Since no one answered my question and I couldn’t figure out what was causing the coverage API not to save the files, I am led to believe that this is a bug in the coverage API, which is why I decided to open an issue on the coverage.py GitHub.
I will now be waiting, as I can’t really do anything else. I will update this question when I know whether it really was a bug or not.
2 Answers
It's been some time, and after someone looked at the issue I created and pointed out that it wasn't reproducible for them, I decided to take another look, and they were actually right.
This seemed to be related to running on Windows, since it does in fact work on Debian and other Linux-based distros. I just didn't know about the ls -a option, which is why I didn't notice the files.
I don't exactly understand why this doesn't work on Windows, but I believe it may have something to do with Docker using WSL or something. I honestly don't know and don't care, since it doesn't really matter for my use case anyway.
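The hidden-file behaviour is easy to demonstrate: the coverage data files are named ".coverage-&lt;timestamp&gt;", i.e. dotfiles, so plain ls omits them while ls -a shows them (a sketch using a throwaway directory and a made-up timestamp):

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Simulate a coverage data file: the leading dot makes it "hidden".
    open(os.path.join(d, ".coverage-1678459200"), "w").close()
    plain = subprocess.run(["ls", d], capture_output=True, text=True).stdout
    hidden = subprocess.run(["ls", "-a", d], capture_output=True, text=True).stdout

print(".coverage-1678459200" in plain)   # False: plain `ls` hides dotfiles
print(".coverage-1678459200" in hidden)  # True: `ls -a` shows them
```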
You need to mount the volume when running the container: