[RFC 3/3] Test on solib load and unload

Doug Evans dje@google.com
Thu Sep 19 22:45:00 GMT 2013


Hi.  Comments inline.

Yao Qi writes:
 > diff --git a/gdb/testsuite/gdb.perf/solib.exp b/gdb/testsuite/gdb.perf/solib.exp
 > new file mode 100644
 > index 0000000..8e7eaf8
 > --- /dev/null
 > +++ b/gdb/testsuite/gdb.perf/solib.exp
 > @@ -0,0 +1,86 @@
 > +# Copyright (C) 2013 Free Software Foundation, Inc.
 > +
 > +# This program is free software; you can redistribute it and/or modify
 > +# it under the terms of the GNU General Public License as published by
 > +# the Free Software Foundation; either version 3 of the License, or
 > +# (at your option) any later version.
 > +#
 > +# This program is distributed in the hope that it will be useful,
 > +# but WITHOUT ANY WARRANTY; without even the implied warranty of
 > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 > +# GNU General Public License for more details.
 > +#
 > +# You should have received a copy of the GNU General Public License
 > +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
 > +
 > +# This test case measures the speed of GDB when the inferior's
 > +# shared libraries are loaded and unloaded.
 > +
 > +standard_testfile .c
 > +set executable $testfile
 > +set expfile $testfile.exp
 > +
 > +# make check RUNTESTFLAGS='solib.exp SOLIB_NUMBER=1024'

SOLIB_NUMBER doesn't read very well.
How about NUM_SOLIBS?

 > +if ![info exists SOLIB_NUMBER] {
 > +    set SOLIB_NUMBER 128
 > +}
 > +
 > +for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
 > +
 > +    # Produce source files.
 > +    set libname "solib-lib$i"
 > +    set src [standard_temp_file $libname.c]
 > +    set exe [standard_temp_file $libname]
 > +
 > +    set code "int shr$i (void) {return $i;}"
 > +    set f [open $src "w"]
 > +    puts $f $code
 > +    close $f

IWBN if the test harness provided utilities for generating source
files instead of hardcoding their generation in each test.
Parameters to such a set of functions would include things like the name
of a high-level entry point (what one might pass to dlsym), the number
of functions in the file, the number of classes, etc.
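To make the idea concrete, here is a minimal sketch of what such a harness
utility might look like.  The function name (gen_solib_source) and its
parameters are hypothetical, not part of any existing testsuite API:

```python
# Hypothetical sketch of a harness-provided source generator; nothing
# here is an existing GDB testsuite API.  The entry point is the name
# one might pass to dlsym; the helpers give the file some bulk.

def gen_solib_source(path, entry_point, num_functions):
    """Write a C source file with num_functions trivial helper
    functions and one high-level entry point that calls them all."""
    with open(path, "w") as f:
        for i in range(num_functions):
            f.write("int helper_%d (void) { return %d; }\n" % (i, i))
        # The entry point references every helper so none is dead code.
        f.write("int %s (void)\n{\n  int sum = 0;\n" % entry_point)
        for i in range(num_functions):
            f.write("  sum += helper_%d ();\n" % i)
        f.write("  return sum;\n}\n")

gen_solib_source("solib-lib0.c", "shr0", 4)
```

The .exp file would then only name the entry point and the desired size,
and the harness would own the file format.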

 > +
 > +    # Compile.
 > +    if { [gdb_compile_shlib $src $exe {debug}] != "" } {
 > +	untested "Couldn't compile $src."
 > +	return -1
 > +    }
 > +
 > +    # Delete object files to save some space.
 > +    file delete [standard_temp_file  "solib-lib$i.c.o"]
 > +}
 > +
 > +if { [prepare_for_testing ${testfile}.exp ${binfile} ${srcfile} {debug shlib_load} ] } {
 > +     return -1
 > +}
 > +
 > +clean_restart $binfile
 > +
 > +if ![runto_main] {
 > +    fail "Can't run to main"
 > +    return -1
 > +}
 > +
 > +set remote_python_file [gdb_remote_download host ${srcdir}/${subdir}/${testfile}.py]
 > +
 > +# Set sys.path for module perftest.
 > +gdb_test_no_output "python import os, sys"
 > +gdb_test_no_output "python sys.path.insert\(0, os.path.abspath\(\"${srcdir}/${subdir}/lib\"\)\)"
 > +
 > +gdb_test_no_output "python exec (open ('${remote_python_file}').read ())"
 > +
 > +gdb_test_no_output "python SolibLoadUnload\($SOLIB_NUMBER\)"
 > +
 > +# Call the convenience function registered by python script.
 > +send_gdb "call \$perftest()\n"
 > +gdb_expect 3000 {
 > +    -re "\"Done\".*${gdb_prompt} $" {
 > +    }
 > +    timeout {}
 > +}
 > +
 > +remote_file host delete ${remote_python_file}
 > +
 > +# Remove these libraries and source files.
 > +
 > +for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
 > +    file delete [standard_temp_file "solib-lib$i"]
 > +    file delete [standard_temp_file "solib-lib$i.c"]
 > +}

I like tests that leave things behind so that if I want to
run things by hand afterwards I can easily do so.
Let "make clean" clean up build artifacts.
[Our testsuite "make clean" rules are always lagging behind, but with some
conventions in the perf testsuite we can make this a tractable problem.
E.g., it's mostly (though not completely) executables that "make clean" lags
behind in cleaning up, but if they all ended with the same suffix, then they
would get cleaned up as easily as "rm -f *.o" cleans up object files.
If one went this route, one would want to do the same with foo.so, of course.
That's not the only way to make this a tractable problem, just a possibility.]

Separately,
we were discussing perf testsuite usage here, and IWBN if there were a mode
where compilation is separated from perf testing.
E.g., and this wouldn't be the default of course,
one could do an initial "make check-perf" that just built the binaries,
and then a second "make check-perf" that used the prebuilt binaries to
collect performance data.
[In between could be various things, like shipping the tests out to
other machines.]
I'm just offering this as an idea.  I can imagine implementing this
in various ways.  Whether we can agree on one ... dunno.
One thought was to reduce the actual perf collection part of .exp scripts
to one line that invokes some function, passing it the name of
the python script or some such.

For example,
we want to be able to run the perf tests in parallel, but we don't want
test data polluted because, for example, several copies of gcc were also
running, compiling other tests, or other tests were running.

 > diff --git a/gdb/testsuite/gdb.perf/solib.py b/gdb/testsuite/gdb.perf/solib.py
 > new file mode 100644
 > index 0000000..7cc9c4a
 > --- /dev/null
 > +++ b/gdb/testsuite/gdb.perf/solib.py
 > @@ -0,0 +1,48 @@
 > +# Copyright (C) 2013 Free Software Foundation, Inc.
 > +
 > +# This program is free software; you can redistribute it and/or modify
 > +# it under the terms of the GNU General Public License as published by
 > +# the Free Software Foundation; either version 3 of the License, or
 > +# (at your option) any later version.
 > +#
 > +# This program is distributed in the hope that it will be useful,
 > +# but WITHOUT ANY WARRANTY; without even the implied warranty of
 > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 > +# GNU General Public License for more details.
 > +#
 > +# You should have received a copy of the GNU General Public License
 > +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
 > +
 > +# This test case measures the speed of GDB when the inferior's
 > +# shared libraries are loaded and unloaded.
 > +
 > +import gdb
 > +import time
 > +
 > +from perftest import perftest
 > +
 > +class SolibLoadUnload(perftest.SingleVariableTestCase):
 > +    def __init__(self, solib_number):
 > +        super (SolibLoadUnload, self).__init__ ("solib")
 > +        self.solib_number = solib_number
 > +
 > +    def execute_test(self):
 > +        num = self.solib_number
 > +        iteration = 5;
 > +
 > +        # Warm up.
 > +        do_test_command = "call do_test (%d)" % num
 > +        gdb.execute (do_test_command)
 > +        gdb.execute (do_test_command)

I often collect data for both cold and hot caches.
It's important to have both sets of data.
[Cold caches is important because that's what users see after a first build
(in a distributed build the files aren't necessarily on one's machine yet).
Hot caches are important because it helps remove one source of variability
from the results.]
Getting cold caches involves doing things like (effectively)
sudo /bin/sh -c "echo 3 >/proc/sys/vm/drop_caches"
but it also involves doing other things that aren't necessarily
relevant elsewhere.  [Obviously doing things like sudo adds wrinkles
to running the test.  With appropriate hooks it's handled in a way that
doesn't affect normal runs.]
Getting hot caches is relatively easy (to a first approximation), but
to also test with cold caches we don't want to hard-code warm-ups in the test.
Thus we want these lines to be moved elsewhere,
and have the test harness provide hooks to control this.
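As an illustration of what "harness-provided hooks" could mean, here is a
small sketch.  The class and variable names (TestCaseWithHooks,
WARMUP_ITERATIONS) are hypothetical; the point is only that the warm-up
count lives in the harness, so a cold-cache run can set it to zero without
touching the test:

```python
# Hypothetical harness base class: the harness, not the test, decides
# how many warm-up passes run before the measured pass.  Setting
# WARMUP_ITERATIONS to 0 gives a cold-cache run.
WARMUP_ITERATIONS = 2

class TestCaseWithHooks(object):
    def run(self):
        # Harness-controlled warm-up passes (unmeasured).
        for _ in range(WARMUP_ITERATIONS):
            self.execute_test()
        # The measured pass.
        self.execute_test()

    def execute_test(self):
        raise NotImplementedError

# A trivial test just to demonstrate the control flow.
class CountingTest(TestCaseWithHooks):
    def __init__(self):
        self.calls = 0

    def execute_test(self):
        self.calls += 1

t = CountingTest()
t.run()
```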

 > +
 > +        while num > 0 and iteration > 0:
 > +            do_test_command = "call do_test (%d)" % num
 > +
 > +            start_time = time.clock()
 > +            gdb.execute (do_test_command)
 > +            elapsed_time = time.clock() - start_time

IWBN (IMO) if the test harness provided utilities to measure things like
wall time, CPU time, memory usage, and whatever other data we want to collect.
[These utilities could, e.g., just farm out to time.clock()
if that was the appropriate thing to do,
but the tests themselves would stick to the test harness API.]
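A sketch of such a measurement utility, assuming a modern Python where
time.perf_counter() and time.process_time() exist (time.clock() was the
contemporary equivalent).  The Measurement class is hypothetical, not an
existing perftest API:

```python
import time

# Hypothetical harness measurement API: tests call measure() and never
# touch timing functions directly, so the harness can later add CPU
# time, memory usage, or other metrics behind the same call.
class Measurement(object):
    def __init__(self):
        self.wall = None
        self.cpu = None

    def measure(self, func):
        """Run func once, recording wall and CPU time."""
        wall0 = time.perf_counter()
        cpu0 = time.process_time()
        func()
        self.wall = time.perf_counter() - wall0
        self.cpu = time.process_time() - cpu0
        return self.wall

m = Measurement()
m.measure(lambda: sum(range(100000)))
```

The test body would then call self.measure(...) around gdb.execute and
record whatever the returned measurement contains.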

 > +
 > +            self.result.record (num, elapsed_time)
 > +
 > +            num = num / 2
 > +            iteration -= 1
 > -- 
 > 1.7.7.6

Thoughts?
