This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



[PATCH 0/N] test-suite improvement - PASS/FAIL: initial patch


Hi,
here at Red Hat (and, after a short conversation, I think most distribution maintainers agree) we have 
trouble evaluating the results of the test suite. The output is not well suited to automatic comparison.

I think the first and easiest way to improve the current state is to add a PASS/FAIL result line for 
each test and to produce a summary file that can be compared automatically.
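To illustrate what such a comparison might look like (the file names and test names below are hypothetical, not part of the patch), two summary files from successive runs could be diffed directly:

```shell
# Hypothetical example: comparing tests.sum files from two test-suite runs.
# Each line has the form "PASS: subdir/test" or "FAIL: subdir/test".
printf 'PASS: posix/tst-a\nFAIL: posix/tst-b\n' > tests.sum.old
printf 'PASS: posix/tst-a\nPASS: posix/tst-b\n' > tests.sum.new

# diff exits non-zero when the results changed between runs.
if diff tests.sum.old tests.sum.new > regressions.txt; then
    echo "no changes"
else
    echo "results changed:"
    cat regressions.txt
fi
```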

There are several ways to achieve this; I present here a really simple solution that, in my opinion, 
requires minimal changes and produces the desired output.

The idea is to run a script that takes the return code of the last command and evaluates it. This patch 
covers more than 1000 test cases, but only tests added via
     tests += mytest

If you apply the patch and run the test suite, it will create a file `tests.sum` containing the 
PASS/FAIL lines for the tests. That was the easy part.
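The evaluation logic can be mirrored in a small shell function (a stand-in for the scripts/evaluate-test.sh added by the patch; the test names here are made up for illustration):

```shell
# Stand-in for scripts/evaluate-test.sh: map a return code to a
# PASS/FAIL line and append it to the summary file (or stdout).
evaluate_test() {
    rc=$1
    name=$2
    out=${3:-/dev/stdout}
    if [ "$rc" -eq 0 ]; then
        result=PASS
    else
        result=FAIL
    fi
    echo "${result}: ${name}" >> "$out"
    return "$rc"
}

# Simulate one passing and one failing test.
true;  evaluate_test $? subdir/mytest  tests.sum
false; evaluate_test $? subdir/mytest2 tests.sum
cat tests.sum
```

Because the function returns the original code, the test's exit status still propagates to make after the summary line is written.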

But there are many tests which require a special recipe, so they are added to the test suite this way:
     tests: mytest2
     mytest2: ... list of deps
         ... recipe

It would be "easy" to add the evaluation command to each of these tests, but I don't think that would 
be a clean solution. I believe a better way to add the evaluation command is to unify all these special 
tests, for example this way:
     tests-sp += mytest2
     mytest2-DEP = ... list of deps
     mytest2-RUN = ... recipe

and create one rule that handles all tests-sp tests. That requires editing parts of the Makefiles in 
the subdirectories.
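Such a generic rule might be sketched as follows (this is only a sketch of the proposal, using GNU make secondary expansion; the `tests-sp`, `%-DEP`, and `%-RUN` variable names are the ones proposed above, and `evaluate-test` is the variable added by the patch):

```make
# Hypothetical single rule covering all tests-sp tests, assuming each
# test defines mytest-DEP (prerequisites) and mytest-RUN (recipe).
.SECONDEXPANSION:
tests-sp-out = $(addprefix $(objpfx),$(addsuffix .out,$(tests-sp)))
$(tests-sp-out): $(objpfx)%.out: $$($$*-DEP)
	$($*-RUN); $(evaluate-test)
```

With this in place, each subdirectory Makefile would only need the three variable assignments instead of a hand-written rule plus evaluation command.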

Do you think this is a good approach? Should I submit patches for the subdirectory Makefiles here, or 
do you think I have taken the wrong path?

Best regards
Tomas Dohnalek

---

	* Makerules: New rule `tests-summary-clean'; cover ABI checks
	with a PASS/FAIL line.
	* Rules: Declare variables `evaluate-test' and `test-name';
	cover all ordinary tests with a PASS/FAIL line.
	* scripts/evaluate-test.sh: New file.

 Makerules                | 13 +++++++++----
 Rules                    |  8 +++++---
 scripts/evaluate-test.sh | 22 ++++++++++++++++++++++
 3 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/Makerules b/Makerules
index 1281b94..8772f1f 100644
--- a/Makerules
+++ b/Makerules
@@ -1123,11 +1123,11 @@ ALL_BUILD_CFLAGS = $(BUILD_CFLAGS) $(BUILD_CPPFLAGS) -D_GNU_SOURCE \
 
 # Support the GNU standard name for this target.
 .PHONY: check
-check: tests
+check: tests-summary-clean tests
 # Special target to run tests which cannot be run unconditionally.
 # Maintainers should use this target.
 .PHONY: xcheck
-xcheck: xtests
+xcheck: tests-summary-clean xtests
 
 all-nonlib = $(strip $(tests) $(xtests) $(test-srcs) $(test-extras) $(others))
 ifneq (,$(all-nonlib))
@@ -1166,7 +1166,8 @@ check-abi-%: $(common-objpfx)config.make %.abilist $(objpfx)%.symlist
 check-abi-%: $(common-objpfx)config.make %.abilist $(common-objpfx)%.symlist
 	$(check-abi)
 define check-abi
-	diff -p -U 0 $(filter %.abilist,$^) $(filter %.symlist,$^)
+	diff -p -U 0 $(filter %.abilist,$^) $(filter %.symlist,$^); \
+	    $(evaluate-test)
 endef
 
 update-abi-%: $(objpfx)%.symlist %.abilist
@@ -1271,7 +1272,11 @@ echo-headers:
 clean: common-clean
 mostlyclean: common-mostlyclean
 
-do-tests-clean:
+tests-summary = $(common-objpfx)tests.sum
+tests-summary-clean:
+	rm -f $(tests-summary)
+
+do-tests-clean: tests-summary-clean
 	-rm -f $(addprefix $(objpfx),$(addsuffix .out,$(tests) $(xtests) \
 						      $(test-srcs)) \
 				     $(addsuffix -bp.out,$(tests) $(xtests) \
diff --git a/Rules b/Rules
index 17d938e..5a569dd 100644
--- a/Rules
+++ b/Rules
@@ -127,6 +127,8 @@ binaries-shared-tests = $(filter-out $(binaries-pie) $(binaries-static), \
 				     $(binaries-all-tests))
 binaries-shared-notests = $(filter-out $(binaries-pie) $(binaries-static), \
 				       $(binaries-all-notests))
+test-name = $(subdir)/$(*F)
+evaluate-test = $(..)scripts/evaluate-test.sh $$? $(test-name) $(tests-summary)
 
 ifneq "$(strip $(binaries-shared-notests))" ""
 $(addprefix $(objpfx),$(binaries-shared-notests)): %: %.o \
@@ -178,11 +180,11 @@ ifneq "$(strip $(tests) $(xtests) $(test-srcs))" ""
 make-test-out = GCONV_PATH=$(common-objpfx)iconvdata LC_ALL=C \
 		$($*-ENV) $(built-program-cmd) $($*-ARGS)
 $(objpfx)%-bp.out: %.input $(objpfx)%-bp
-	$(make-test-out) > $@ < $(word 1,$^)
+	$(make-test-out) > $@ < $(word 1,$^); $(evaluate-test)
 $(objpfx)%.out: %.input $(objpfx)%
-	$(make-test-out) > $@ < $(word 1,$^)
+	$(make-test-out) > $@ < $(word 1,$^); $(evaluate-test)
 $(objpfx)%.out: /dev/null $(objpfx)%	# Make it 2nd arg for canned sequence.
-	$(make-test-out) > $@
+	$(make-test-out) > $@; $(evaluate-test)
 
 endif	# tests
 
diff --git a/scripts/evaluate-test.sh b/scripts/evaluate-test.sh
new file mode 100755
index 0000000..00fae58
--- /dev/null
+++ b/scripts/evaluate-test.sh
@@ -0,0 +1,22 @@
+#!/bin/sh
+# This script is used to evaluate return code of a single testcase
+# and produce appropriate output.
+# usage: evaluate-test.sh test-rc test-name [output-file]
+
+test_rc=$1
+test_name=$2
+
+if [ $# -gt 2 ]; then
+    output=$3
+else
+    output=/dev/stdout
+fi
+
+if [ ${test_rc} -eq 0 ]; then
+    result="PASS"
+else
+    result="FAIL"
+fi
+
+echo "${result}: ${test_name}" >> ${output}
+exit ${test_rc}
