When I run the testsuite, gdb.base/bigcore.exp takes a very long time and, worse, almost stalls my machine because it runs out of memory and produces a 16 GB file. I wish this test were not run by default, as it puts far more strain on the machine than any other.

I have seen projects that put all slow / intensive tests in a separate directory, requiring a separate command to run them, e.g. make perf. I'm not sure such a test category currently exists: performance tests are typical candidates for it, but they are acceptably slow.

If I could run all tests but one, that would be great as well, but gmake's wildcard does not seem to support it easily: https://www.gnu.org/software/make/manual/html_node/Wildcards.html

Old issue that does an analogous thing on Frysk: https://sourceware.org/bugzilla/show_bug.cgi?id=1852

Possibly related: Yao Qi's profiling proposal: https://sourceware.org/ml/gdb-patches/2013-08/msg00380.html
OK, there is one: make check-perf runs the perf tests, so that is where I recommend putting bigcore. I was able to exclude that single test with:

make check RUNTESTFLAGS="--ignore bigcore.exp"
Removing bigcore.exp took the testsuite from 18 minutes down to 13 minutes. My machine specs: 4 GB RAM, Intel(R) Core(TM) i5-3210M (Ivy Bridge).
bigcore.exp is not about performance -- see the comments throughout the test's .exp and .c files. It takes under two seconds to run on my laptop (x86_64 Fedora 20). It sounds like your system does not have sparse core file support, or that it got broken for you somehow.
Ah, there is even an opt-out check in the test for OSes that don't support it; I should have read it more carefully. This is Ubuntu 14.04, kernel 3.13.

I think I've found the culprit:

cat /proc/sys/kernel/core_pattern

gives:

|/usr/share/apport/apport %p %s %c %P

which pipes the core through apport. If I do:

echo | sudo tee /proc/sys/kernel/core_pattern

then compile and run `bigcore.c`, the problem is gone and I get a sparse core.

Fedora does not use apport, although it was considered: https://fedoraproject.org/wiki/Features/CrashHandling

I don't know if it is technically feasible for apport to generate the sparse dump.

What pointed me in this direction: https://news.ycombinator.com/item?id=7679307

I propose either of:

- Don't run bigcore.exp on `make check`, and remove the OS checks from the file (they are ugly anyway). Require some extra option to run it, like `make check BIGCORE=true`, or `make check-bigcore`, or something similar.
- Add another check: "is this Linux, and does core_pattern start with | ?" Ugly and brittle; how long until another system breaks this check in a different way? :-)

I'll write a patch if you agree with either of those possibilities, or if you see a better one.
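The second proposal above could be sketched as a small shell helper; core_is_piped is a hypothetical name, and on Linux the string it checks would come from /proc/sys/kernel/core_pattern:

```shell
# Hypothetical helper: a core_pattern starting with '|' pipes the dump
# to a handler (apport, abrt, systemd-coredump) instead of letting the
# kernel write a file, so the resulting core may not be sparse.
core_is_piped () {
    case "$1" in
        "|"*) return 0 ;;  # piped to a crash handler
        *)    return 1 ;;  # written directly by the kernel
    esac
}

# On Linux, the current pattern would be checked with something like:
#   core_is_piped "$(cat /proc/sys/kernel/core_pattern)" && echo "skip bigcore.exp"
```

An equivalent check would of course have to be written in Tcl/expect to live inside the testsuite itself.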
I've proposed adding a warning to the README: https://sourceware.org/ml/gdb-patches/2015-07/msg00932.html
Fedora uses abrt, which introduces a similar problem:

$ cat /proc/sys/kernel/core_pattern
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t %e %i

Another issue is yama.ptrace_scope: if it is set, all attach testing is messed up.

I think this is a problem that needs to be solved, and I'm OK with system-specific checks. Adding them on a per-test basis gets cumbersome, though: there may be only one big core test today, but even other core tests get slowed down. And there is a plethora of attach tests, and we don't want to have to edit and maintain such checks in every one of them.

What I do is perform a collection of sanity checks up front, before the real testing starts.
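For the yama.ptrace_scope part, an up-front sanity check might look like the sketch below; the function name and warning text are made up for illustration, and /proc/sys/kernel/yama/ptrace_scope is the standard Yama sysctl on Linux:

```shell
# Sketch of an up-front attach sanity check: when Yama restricts ptrace,
# PTRACE_ATTACH to non-descendant processes fails, breaking attach tests.
attach_sanity_check () {
    f=/proc/sys/kernel/yama/ptrace_scope
    # If the file is absent, Yama is not active and attach should work.
    if [ -r "$f" ] && [ "$(cat "$f")" -ne 0 ]; then
        echo "warning: yama.ptrace_scope != 0; attach tests may misbehave" >&2
        return 1
    fi
    return 0
}
```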
(In reply to Ciro Santilli from comment #4)
> I don't know if it is technically feasible for apport to generate the sparse
> dump.

Can't see why not.

(In reply to Doug Evans from comment #6)
> Fedora uses abrt which introduces a similar problem.
> $ cat /proc/sys/kernel/core_pattern
> |/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t %e %i

AFAICS from the sources, abrt should be creating sparse cores: https://github.com/abrt/abrt/blob/master/src/hooks/abrt-hook-ccpp.c

Funnily enough, they even have a test specifically to make sure that gdb's bigcore test doesn't run slower with abrt: https://github.com/abrt/abrt/tree/master/tests/runtests/bz591504-sparse-core-files-performance-hit

> I think this is a problem that needs to be solved,
> and I'm ok with system-specific checks.

We can also detect whether the system generates sparse cores, with du --apparent-size (gracefully handling the case of du --apparent-size failing, of course). That is, generate a smaller core with zeros in it, check whether it is sparse, and if not, skip the big test.

> Adding them on a per-test basis gets cumbersome though,
> there may be only one big core test today but
> even other core tests get slowed down.
> And there are a plethora of attach tests and
> we don't want to have to edit and maintain such
> checks in every one of them.

We already have to do:

if {![can_spawn_for_attach]} {
    return 0
}

in "attach" tests. We can put more checks inside that function (and rename it if that makes sense).

> What I do is perform a collection of sanity checks
> up front, before real testing starts.

I think that'd be nice too.
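The du-based probe could be sketched like this; probe_sparse_support is a hypothetical name, and checking a hole-punched temp file rather than an actual generated core is an assumption on my part (the real test would probe the core file itself):

```shell
# Sketch: write one byte at a 1 MiB offset, leaving a hole before it.
# If the filesystem stores the file sparsely, far fewer blocks are
# allocated (du) than the file's apparent size (du --apparent-size).
probe_sparse_support () {
    probe=$(mktemp) || return 1
    dd if=/dev/zero of="$probe" bs=1 count=1 seek=1048575 2>/dev/null
    allocated=$(du -k "$probe" | cut -f1)
    apparent=$(du -k --apparent-size "$probe" | cut -f1)
    rm -f "$probe"
    # Sparse if the allocated size is smaller than the apparent 1024 KiB.
    [ "$allocated" -lt "$apparent" ]
}
```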
Heh, I stumbled on this. I'm trying to (re-)build a buildbot for GDB in an Ubuntu 20.04 VM, and was wondering why my GDB build directory was > 16 GB in size: the core is generated through apport, so it is not sparse. I think it would indeed be a good idea to generate a small core first, check whether it is sparse, and skip the test if not.