Speeding up gcov collection from test device #199
Some background context first. I've been looking into ways of speeding up the `gather_on_test.sh` script. Depending on the device I'm testing, the best case is 1.5 minutes and the worst is 11 minutes. This ultimately affects the granularity of collection: I'm generally forced to collect coverage for a group of tests instead of per test. The primary slow point is the copy step, which spawns a separate `cat` process for every data file under the gcov debugfs tree.
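For context, the slow copy pattern looks roughly like this (a minimal sketch of the per-file `cat` approach, not the exact script; a local temp tree stands in for `/sys/kernel/debug/gcov`):

```shell
# Sketch of the per-file copy pattern: one shell + one cat process per
# data file, which is where the time goes on large trees.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/kernel"
printf 'data' > "$SRC/kernel/a.gcda"

# Recreate the directory layout, then copy each file with its own cat.
( cd "$SRC" && find . -type d -exec mkdir -p "$DST"/{} \; )
( cd "$SRC" && find . -name '*.gcda' \
    -exec sh -c 'cat <"$1" >"$2/$1"' _ {} "$DST" \; )

cat "$DST/kernel/a.gcda"   # prints: data
```

Each matched file pays the cost of forking a shell and a `cat`, which is why the wall-clock time scales so badly with the number of `.gcda` files.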
The FAQ states that `cat` is required because of issues with files backed by the `seq_file` interface. I familiarized myself a bit with `seq_file` and looked over the code in `gcov/fs.c`, hoping to see why other tools fail. I then moved to testing with a recursive `cp` instead of the per-file `cat`.
This turned out to be WAY faster, with no sign of empty files: 9 seconds versus 1.5 minutes. I did not find any issues with the collected files when compared to those collected the previous way.

Questions regarding going the `cp` route, and regarding updating gcov to work with `tar`: if there's a known solution, I'd be happy to work on a patch for it.
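For what it's worth, a dumb read-until-EOF copy does recover `seq_file` content. A quick check against a procfs file (also `seq_file`-backed and reporting size 0), assuming GNU coreutils `stat` and `cp`:

```shell
# /proc/version is seq_file-backed: st_size is 0, but cp (like cat)
# reads until EOF, so the copy is not empty.
out=$(mktemp)
stat -c %s /proc/version   # prints 0
cp /proc/version "$out"
wc -c < "$out"             # prints a non-zero byte count
```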
I think you've missed that pseudo-files backed by `seq_file` have zero size. `tar` simply believes the result of `stat.st_size` and probably doesn't even open those files. It also does its `read()`s through the `safe_read()` wrapper from `gnulib` (an alternative is to use `mmap()`, but I think it won't work for `seq_file`).

`cat` works because it's dumb: it can just call `read()` until EOF. Anything that does the same should do. You might have to check `coreutils`/`busybox`/etc. for anything more than that which could lead to incomplete files (re-reading parts of a file? not sure what can be the cause here, but maybe look at handling of sparse files).
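The `tar`-vs-`cat` difference is easy to reproduce against any `seq_file`-backed pseudo-file, e.g. `/proc/version` (assuming GNU tar):

```shell
# tar trusts stat.st_size (0), so the archived member is empty;
# reading until EOF, the way cat does, recovers the real content.
work=$(mktemp -d)
tar -cf "$work/v.tar" -C /proc version
tar -xf "$work/v.tar" -C "$work"
wc -c < "$work/version"    # 0: tar stored an empty file
wc -c < /proc/version      # non-zero: the content is really there
```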