This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.



KFAIL patch updated with KPASS


As per Michael's and Rob's request, I've updated my patch to report KPASS
when a test that was supposed to KFAIL passes (it was reported as XPASS
in the previous version).

Please give it a spin.

Regards to all,
Fernando

-- 
Fernando Nasser
Red Hat Canada Ltd.                     E-Mail:  fnasser@redhat.com
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9
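To show what this looks like from a test case, here is a minimal sketch
of using the new procedure; the bug id "bug123", the target pattern, and
the variables are made up, and the fragment assumes the usual DejaGnu
framework procs are loaded:

```tcl
# Hypothetical test fragment: mark the next test as a known failure
# on Linux targets, citing the (made-up) bug id "bug123".
setup_kfail "bug123" "*-*-linux*"

# Whatever pass/fail comes next is remapped on matching targets:
if { $got == $expected } {
    pass "known-bug test"    ;# logged as KPASS
} else {
    fail "known-bug test"    ;# logged as KFAIL
}
```

On non-matching targets the kfail flag is never set, so the test reports
plain PASS/FAIL as before.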
Index: dejagnu/runtest.exp
===================================================================
RCS file: /cvs/src/src/dejagnu/runtest.exp,v
retrieving revision 1.8
diff -c -p -r1.8 runtest.exp
*** dejagnu/runtest.exp	21 Apr 2002 08:46:47 -0000	1.8
--- dejagnu/runtest.exp	22 Apr 2002 17:24:14 -0000
*************** set psum_file   "latest"	;# file name of
*** 49,56 ****
  
  set exit_status	0		;# exit code returned by this program
  
! set xfail_flag  0
! set xfail_prms	0
  set sum_file	""		;# name of the file that contains the summary log
  set base_dir	""		;# the current working directory
  set logname     ""		;# the users login name
--- 49,58 ----
  
  set exit_status	0		;# exit code returned by this program
  
! set xfail_flag  0		;# indicates that a failure is expected
! set xfail_prms	0		;# GNATS prms id number for this expected failure
! set kfail_flag  0		;# indicates that it is a known failure
! set kfail_prms	0		;# bug id for the description of the known failure 
  set sum_file	""		;# name of the file that contains the summary log
  set base_dir	""		;# the current working directory
  set logname     ""		;# the users login name
Index: dejagnu/doc/dejagnu.texi
===================================================================
RCS file: /cvs/src/src/dejagnu/doc/dejagnu.texi,v
retrieving revision 1.3
diff -c -p -r1.3 dejagnu.texi
*** dejagnu/doc/dejagnu.texi	21 Apr 2002 08:47:02 -0000	1.3
--- dejagnu/doc/dejagnu.texi	22 Apr 2002 17:24:15 -0000
*************** case:
*** 453,467 ****
  @item PASS
  A test has succeeded.  That is, it demonstrated that the assertion is true.
  
- @cindex XFAIL, avoiding for POSIX
- @item XFAIL
- @sc{posix} 1003.3 does not incorporate the notion of expected failures,
- so @code{PASS}, instead of @code{XPASS}, must also be returned for test
- cases which were expected to fail and did not.  This means that
- @code{PASS} is in some sense more ambiguous than if @code{XPASS} is also
- used.  For information on @code{XPASS} and @code{XFAIL}, see
- @ref{Invoking runtest,,Using @code{runtest}}.
- 
  @item FAIL
  @cindex failure, POSIX definition
  A test @emph{has} produced the bug it was intended to capture.  That is,
--- 453,458 ----
*************** it has demonstrated that the assertion i
*** 469,477 ****
  message is based on the test case only.  Other messages are used to
  indicate a failure of the framework.
  
- As with @code{PASS}, @sc{posix} tests must return @code{FAIL} rather
- than @code{XFAIL} even if a failure was expected.
- 
  @item UNRESOLVED
  @cindex ambiguity, required for POSIX
  A test produced indeterminate results.  Usually, this means the test
--- 460,465 ----
*************** real test case yet.
*** 516,523 ****
  @end ftable
  
  @noindent
! The only remaining output message left is intended to test features that
! are specified by the applicable @sc{posix} standard as conditional:
  
  @ftable @code
  @item UNSUPPORTED
--- 504,512 ----
  @end ftable
  
  @noindent
! The only remaining @sc{posix} output message is intended to test
! features that are specified by the applicable @sc{posix} standard as
! conditional:
  
  @ftable @code
  @item UNSUPPORTED
*************** running the test case.  For example, a t
*** 529,542 ****
  @code{gethostname} would never work on a target board running only a
  boot monitor.
  @end ftable
!   
  DejaGnu uses the same output procedures to produce these messages for
  all test suites, and these procedures are already known to conform to
  @sc{posix} 1003.3.  For a DejaGnu test suite to conform to @sc{posix}
! 1003.3, you must avoid the @code{setup_xfail} procedure as described in
! the @code{PASS} section above, and you must be careful to return
  @code{UNRESOLVED} where appropriate, as described in the
  @code{UNRESOLVED} section above.
  
  @node Future Directions
  @section Future directions
--- 518,600 ----
  @code{gethostname} would never work on a target board running only a
  boot monitor.
  @end ftable
! 
  DejaGnu uses the same output procedures to produce these messages for
  all test suites, and these procedures are already known to conform to
  @sc{posix} 1003.3.  For a DejaGnu test suite to conform to @sc{posix}
! 1003.3, you must avoid the @code{setup_xfail} and @code{setup_kfail}
! procedures (see below), and you must be careful to return
  @code{UNRESOLVED} where appropriate, as described in the
  @code{UNRESOLVED} section above.
+   
+ Besides the @sc{posix} messages, DejaGnu provides variations of the
+ @code{PASS} and @code{FAIL} messages that can be helpful to the tool
+ maintainers.  Note, however, that this feature is not @sc{posix}
+ 1003.3 compliant, so its use should be avoided if compliance is necessary.
+ 
+ The additional messages are:
+ 
+ @ftable @code
+ 
+ @item XFAIL
+ A test is expected to fail in some environment(s) due to some bug
+ in the environment that we hope will be fixed someday (but that we
+ can do nothing about, as it is not a bug in the tool being tested).
+ The procedure @code{setup_xfail} is used to indicate that a failure
+ is expected.
+ 
+ @cindex XFAIL, avoiding for POSIX
+ @sc{posix} 1003.3 does not incorporate the notion of expected failures,
+ so @sc{posix} tests must return @code{FAIL} rather
+ than @code{XFAIL} even if a failure was expected.
+ 
+ @item KFAIL
+ A test is known to fail in some environment(s) due to a known bug
+ in the tool being tested (identified by a bug id string).  This
+ exists so that, after a bug is identified and properly registered
+ in a bug tracking database (@sc{gnats}, for instance), the count of
+ failures can be kept at zero.  Having zero as a baseline on all
+ platforms allows the tool developers to immediately detect regressions
+ caused by changes (which may affect some platforms and not others).
+ The connection with a bug tracking database also allows for automatic
+ generation of the BUGS section of man pages or Release Notes, as
+ well as of a "Bugs Fixed in this Release" section (by comparing the
+ set of known failures to that of a previous release).
+ The procedure @code{setup_kfail} is used to indicate that a failure
+ is known to exist.
+ 
+ @cindex KFAIL, avoiding for POSIX
+ As with @code{XFAIL}, @sc{posix} tests must return @code{FAIL} rather
+ than @code{KFAIL} even if a failure was due to a known bug.
+ 
+ 
+ @item XPASS
+ A test was expected to fail with @code{XFAIL} but passed instead.
+ Whatever problem used to exist in the environment may have been
+ corrected.  The test may also be failing to detect the failure due to
+ some environment or output changes, so this possibility must be
+ investigated as well.
+ 
+ @item KPASS
+ A test was expected to fail with @code{KFAIL} but passed instead.
+ Someone may have fixed the bug and failed to unmark the test.
+ As with @code{XPASS}, the test may also be failing to detect the
+ failure due to some environment or output changes, so this possibility
+ must also be checked.
+ 
+ @code{PASS}, instead of @code{XPASS} or @code{KPASS}, must also be
+ returned for test cases which were expected to fail and did not,
+ if @sc{posix} 1003.3 compliance is required.
+ This means that @code{PASS} is in some sense more ambiguous than if
+ @code{XPASS} and @code{KPASS} are also used.  
+ 
+ @end ftable
+ 
+ See also @ref{Invoking runtest,,Using @code{runtest}}.
+ For information on how to mark tests as expected/known to fail by using
+ @code{setup_xfail} and @code{setup_kfail}, see
+ @ref{framework.exp,,Core Internal Procedures}.
+ 
  
  @node Future Directions
  @section Future directions
*************** A pleasant kind of failure: a test was e
*** 616,621 ****
--- 674,687 ----
  This may indicate progress; inspect the test case to determine whether
  you should amend it to stop expecting failure.
  
+ @item KPASS
+ @kindex KPASS
+ @cindex successful test, unexpected
+ @cindex unexpected success
+ A pleasant kind of failure: a test was known to fail, but succeeded.
+ This may indicate progress; inspect the test case to determine whether
+ you should amend it to stop marking the failure as known.
+ 
  @item FAIL
  @kindex FAIL
  @cindex failing test, unexpected
*************** regress; inspect the test case and the f
*** 628,636 ****
  @cindex expected failure
  @cindex failing test, expected
  A test failed, but it was expected to fail.  This result indicates no
! change in a known bug.  If a test fails because the operating system
! where the test runs lacks some facility required by the test, the
! outcome is @code{UNSUPPORTED} instead.
  
  @item UNRESOLVED
  @kindex UNRESOLVED
--- 694,712 ----
  @cindex expected failure
  @cindex failing test, expected
  A test failed, but it was expected to fail.  This result indicates no
! change in a known environment bug.  If a test fails because the operating
! system where the test runs lacks some facility required by the test
! (i.e. failure is due to the lack of a feature, not the existence of a bug),
! the outcome is @code{UNSUPPORTED} instead.
! 
! @item KFAIL
! @kindex KFAIL
! @cindex known failure
! @cindex failing test, known
! A test failed, but it was known to fail.  This result indicates no
! change in a known bug.  If a test fails because of a problem in the
! environment, not in the tool being tested, that is expected to be
! fixed one day, the outcome is @code{XFAIL} instead.
  
  @item UNRESOLVED
  @kindex UNRESOLVED
*************** values}.
*** 764,771 ****
  @cindex test output, displaying all
  Display all test output.  By default, @code{runtest} shows only the
  output of tests that produce unexpected results; that is, tests with
! status @samp{FAIL} (unexpected failure), @samp{XPASS} (unexpected
! success), or @samp{ERROR} (a severe error in the test case itself).
  Specify @samp{--all} to see output for tests with status @samp{PASS}
  (success, as expected) @samp{XFAIL} (failure, as expected), or
  @samp{WARNING} (minor error in the test case itself).
--- 840,847 ----
  @cindex test output, displaying all
  Display all test output.  By default, @code{runtest} shows only the
  output of tests that produce unexpected results; that is, tests with
! status @samp{FAIL} (unexpected failure), @samp{XPASS} or @samp{KPASS}
! (unexpected success), or @samp{ERROR} (a severe error in the test case itself).
  Specify @samp{--all} to see output for tests with status @samp{PASS}
  (success, as expected) @samp{XFAIL} (failure, as expected), or
  @samp{WARNING} (minor error in the test case itself).
*************** recorded by your configuration's choice 
*** 844,851 ****
  change how anything is actually configured unless --build is also
  specified; it affects @emph{only} DejaGnu procedures that compare the
  host string with particular values.  The procedures @code{ishost},
! @code{istarget}, @code{isnative}, and @code{setup_xfail} are affected by
! @samp{--host}. In this usage, @code{host} refers to the machine that the
  tests are to be run on, which may not be the same as the @code{build}
  machine. If @code{--build} is also specified, then @code{--host} refers
  to the machine that the tests wil, be run on, not the machine DejaGnu is
--- 920,928 ----
  change how anything is actually configured unless --build is also
  specified; it affects @emph{only} DejaGnu procedures that compare the
  host string with particular values.  The procedures @code{ishost},
! @code{istarget}, @code{isnative}, @code{setup_xfail} and
! @code{setup_kfail} are affected by @samp{--host}.
! In this usage, @code{host} refers to the machine that the
  tests are to be run on, which may not be the same as the @code{build}
  machine. If @code{--build} is also specified, then @code{--host} refers
  to the machine that the tests wil, be run on, not the machine DejaGnu is
*************** the verbosity level use @code{note}.
*** 1736,1743 ****
  @item pass "@var{string}"
  @cindex test case, declaring success
  Declares a test to have passed.  @code{pass} writes in the 
! log files a message beginning with @samp{PASS} (or @code{XPASS}, if
! failure was expected), appending the argument @var{string}.
  
  @item fail "@var{string}"
  @cindex test case, declaring failure
--- 1813,1821 ----
  @item pass "@var{string}"
  @cindex test case, declaring success
  Declares a test to have passed.  @code{pass} writes in the 
! log files a message beginning with @samp{PASS}
! (or @code{XPASS}/@code{KPASS}, if failure was expected),
! appending the argument @var{string}.
  
  @item fail "@var{string}"
  @cindex test case, declaring failure
*************** common shell wildcard characters to spec
*** 1860,1877 ****
  output; use it as a link to a bug-tracking system such as @sc{gnats}
  (@pxref{Overview,, Overview, gnats.info, Tracking Bugs With GNATS}).
  
  @cindex @code{XFAIL}, producing
  @cindex @code{XPASS}, producing
! Once you use @code{setup_xfail}, the @code{fail} and @code{pass}
! procedures produce the messages @samp{XFAIL} and @samp{XPASS}
! respectively, allowing you to distinguish expected failures (and
! unexpected success!) from other test outcomes.
! 
! @emph{Warning:} you must clear the expected failure after using
! @code{setup_xfail} in a test case.  Any call to @code{pass} or
! @code{fail} clears the expected failure implicitly; if the test has some
! other outcome, e.g. an error, you can call @code{clear_xfail} to clear
! the expected failure explicitly.  Otherwise, the expected-failure
  declaration applies to whatever test runs next, leading to surprising
  results.
  
--- 1938,1981 ----
  output; use it as a link to a bug-tracking system such as @sc{gnats}
  (@pxref{Overview,, Overview, gnats.info, Tracking Bugs With GNATS}).
  
+ See notes under @code{setup_kfail} (below).
+  
+ @item setup_kfail "@var{config} @var{bugid}"
+ @c unlike setup_xfail, the bugid argument is mandatory here, so it
+ @c is not shown in brackets.
+ @cindex test case, known failure
+ @cindex failure, known
+ @cindex known failure
+ Declares that the test is known to fail on a particular set of
+ configurations.  The @var{config} argument must be a list of full
+ three-part @code{configure} target names; in particular, you may not use
+ the shorter nicknames supported by @code{configure} (but you can use the
+ common shell wildcard characters to specify sets of names).  The
+ @var{bugid} argument is mandatory, and is used only in the logging file
+ output; use it as a link to a bug-tracking system such as @sc{gnats}
+ (@pxref{Overview,, Overview, gnats.info, Tracking Bugs With GNATS}).
+ 
  @cindex @code{XFAIL}, producing
+ @cindex @code{KFAIL}, producing
  @cindex @code{XPASS}, producing
! @cindex @code{KPASS}, producing
! Once you use @code{setup_xfail} or @code{setup_kfail}, the @code{fail}
! and @code{pass} procedures produce the messages @samp{XFAIL} or @samp{KFAIL}
! and @samp{XPASS} or @samp{KPASS} respectively, allowing you to distinguish
! expected/known failures (and unexpected success!) from other test outcomes.
! 
! If a test is marked as both expected to fail and known to fail for a
! certain configuration, a @samp{KFAIL} message will be generated.
! As @samp{KFAIL} messages are expected to draw more attention than
! @samp{XFAIL} ones, this will hopefully ensure that the test result
! is not overlooked.
! 
! @emph{Warning:} you must clear the expected/known failure after using
! @code{setup_xfail} or @code{setup_kfail} in a test case.  Any call to
! @code{pass} or @code{fail} clears the expected/known failure implicitly;
! if the test has some other outcome, e.g. an error, you can call
! @code{clear_xfail} to clear the expected failure or @code{clear_kfail}
! to clear the known failure explicitly.  Otherwise, the expected-failure
  declaration applies to whatever test runs next, leading to surprising
  results.
  
*************** list of configuration target names.  It 
*** 1952,1957 ****
--- 2056,2070 ----
  @code{clear_xfail} if a test case ends without calling either
  @code{pass} or @code{fail}, after calling @code{setup_xfail}.
  
+ @item clear_kfail @var{config}
+ @cindex cancelling known failure
+ @cindex known failure, cancelling
+ Cancel a known failure (previously declared with @code{setup_kfail})
+ for a particular set of configurations.  The @var{config} argument is a
+ list of configuration target names.  It is only necessary to call
+ @code{clear_kfail} if a test case ends without calling either
+ @code{pass} or @code{fail}, after calling @code{setup_kfail}.
+ 
  @item verbose @r{[}-log@r{]} @r{[}-n@r{]} @r{[}--@r{]} "@var{string}" @var{number}
  @cindex @code{verbose} builtin function
  Test cases can use this function to issue helpful messages depending on
*************** For troubleshooting, a third kind of out
*** 2627,2635 ****
  @code{runtest} always produces a summary output file
  @file{@var{tool}.sum}.  This summary shows the names of all test files
  run; for each test file, one line of output from each @code{pass}
! command (showing status @samp{PASS} or @samp{XPASS}) or @code{fail}
! command (status @samp{FAIL} or @samp{XFAIL}); trailing summary
! statistics that count passing and failing tests (expected and
  unexpected); and the full pathname and version number of the tool
  tested.  (All possible outcomes, and all errors, are always reflected in
  the summary output file, regardless of whether or not you specify
--- 2740,2748 ----
  @code{runtest} always produces a summary output file
  @file{@var{tool}.sum}.  This summary shows the names of all test files
  run; for each test file, one line of output from each @code{pass}
! command (showing status @samp{PASS}, @samp{XPASS} or @samp{KPASS}) or
! @code{fail} command (status @samp{FAIL}, @samp{XFAIL} or @samp{KFAIL});
! trailing summary statistics that count passing and failing tests (expected and
  unexpected); and the full pathname and version number of the tool
  tested.  (All possible outcomes, and all errors, are always reflected in
  the summary output file, regardless of whether or not you specify
Index: dejagnu/lib/framework.exp
===================================================================
RCS file: /cvs/src/src/dejagnu/lib/framework.exp,v
retrieving revision 1.7
diff -c -p -r1.7 framework.exp
*** dejagnu/lib/framework.exp	21 Apr 2002 08:47:06 -0000	1.7
--- dejagnu/lib/framework.exp	22 Apr 2002 17:24:15 -0000
*************** proc unknown { args } {
*** 251,258 ****
  # Without this, all messages that start with a keyword are written only to the
  # detail log file.  All messages that go to the screen will also appear in the
  # detail log.  This should only be used by the framework itself using pass,
! # fail, xpass, xfail, warning, perror, note, untested, unresolved, or
! # unsupported procedures.
  #
  proc clone_output { message } {
      global sum_file
--- 251,258 ----
  # Without this, all messages that start with a keyword are written only to the
  # detail log file.  All messages that go to the screen will also appear in the
  # detail log.  This should only be used by the framework itself using pass,
! # fail, xpass, xfail, kpass, kfail, warning, perror, note, untested, unresolved,
! # or unsupported procedures.
  #
  proc clone_output { message } {
      global sum_file
*************** proc clone_output { message } {
*** 264,270 ****
  
      regsub "^\[ \t\]*(\[^ \t\]+).*$" "$message" "\\1" firstword;
      case "$firstword" in {
! 	{"PASS:" "XFAIL:" "UNRESOLVED:" "UNSUPPORTED:" "UNTESTED:"} {
  	    if $all_flag {
  		send_user "$message\n"
  		return "$message"
--- 264,270 ----
  
      regsub "^\[ \t\]*(\[^ \t\]+).*$" "$message" "\\1" firstword;
      case "$firstword" in {
! 	{"PASS:" "XFAIL:" "KFAIL:" "UNRESOLVED:" "UNSUPPORTED:" "UNTESTED:"} {
  	    if $all_flag {
  		send_user "$message\n"
  		return "$message"
*************** proc log_summary { args } {
*** 364,370 ****
  	if { $testcnt > 0 } {
  	    set totlcnt 0;
  	    # total all the testcases reported
! 	    foreach x { FAIL PASS XFAIL XPASS UNTESTED UNRESOLVED UNSUPPORTED } {
  		incr totlcnt test_counts($x,$which);
  	    }
  	    set testcnt test_counts(total,$which);
--- 364,370 ----
  	if { $testcnt > 0 } {
  	    set totlcnt 0;
  	    # total all the testcases reported
! 	    foreach x { FAIL PASS XFAIL KFAIL XPASS KPASS UNTESTED UNRESOLVED UNSUPPORTED } {
  		incr totlcnt test_counts($x,$which);
  	    }
  	    set testcnt test_counts(total,$which);
*************** proc log_summary { args } {
*** 388,394 ****
  	    }
  	}
      }
!     foreach x { PASS FAIL XPASS XFAIL UNRESOLVED UNTESTED UNSUPPORTED } {
  	set val $test_counts($x,$which);
  	if { $val > 0 } {
  	    set mess "# of $test_counts($x,name)";
--- 388,394 ----
  	    }
  	}
      }
!     foreach x { PASS FAIL XPASS XFAIL KPASS KFAIL UNRESOLVED UNTESTED UNSUPPORTED } {
  	set val $test_counts($x,$which);
  	if { $val > 0 } {
  	    set mess "# of $test_counts($x,name)";
*************** proc setup_xfail { args } {
*** 441,446 ****
--- 441,483 ----
  }
  
  
+ #
+ # Set up a flag to indicate that a test is a known failure
+ #
+ # A bug report ID _MUST_ be specified, and is the first argument.
+ # It must also be a string without '-' characters so we can be sure
+ # someone did not just forget it and we end up using a target
+ # triplet as the bug id.
+ #
+ # Multiple target triplet patterns can be specified for targets
+ # for which the test is known to fail.
+ #
+ #
+ proc setup_kfail { args } {
+     global kfail_flag
+     global kfail_prms
+     
+     set kfail_prms 0
+     set argc [ llength $args ]
+     for { set i 0 } { $i < $argc } { incr i } {
+ 	set sub_arg [ lindex $args $i ]
+ 	# is a prms number. we assume this is a string with no '-' characters
+ 	if [regexp "^\[^\-\]+$" $sub_arg] { 
+ 	    set kfail_prms $sub_arg
+ 	    continue
+ 	}
+ 	if [istarget $sub_arg] {
+ 	    set kfail_flag 1
+ 	    continue
+ 	}
+     }
+ 
+     if {$kfail_prms == 0} {
+ 	perror "Attempt to set a kfail without specifying a bug tracking id"
+     }
+ }
+ 
+ 
  # check to see if a conditional xfail is triggered
  #	message {targets} {include} {exclude}
  #              
*************** proc clear_xfail { args } {
*** 557,562 ****
--- 594,621 ----
  }
  
  #
+ # Clear the kfail flag for a particular target
+ #
+ proc clear_kfail { args } {
+     global kfail_flag
+     global kfail_prms
+     
+     set argc [ llength $args ]
+     for { set i 0 } { $i < $argc } { incr i } {
+ 	set sub_arg [ lindex $args $i ]
+ 	case $sub_arg in {
+ 	    "*-*-*" {			# is a configuration triplet
+ 		if [istarget $sub_arg] {
+ 		    set kfail_flag 0
+ 		    set kfail_prms 0
+ 		}
+ 		continue
+ 	    }
+ 	}
+     }
+ }
+ 
+ #
  # Record that a test has passed or failed (perhaps unexpectedly)
  #
  # This is an internal procedure, only used in this file.
*************** proc record_test { type message args } {
*** 565,570 ****
--- 624,630 ----
      global exit_status
      global prms_id bug_id
      global xfail_flag xfail_prms
+     global kfail_flag kfail_prms
      global errcnt warncnt
      global warning_threshold perror_threshold
      global pf_prefix
*************** proc record_test { type message args } {
*** 612,621 ****
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    }
  	}
  	UNTESTED {
! 	    # The only reason we look at the xfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
--- 672,694 ----
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    }
  	}
+ 	KPASS {
+ 	    set exit_status 1
+ 	    if { $kfail_prms != 0 } {
+ 		set message [concat $message "\t(PRMS $kfail_prms)"]
+ 	    }
+ 	}
+ 	KFAIL {
+ 	    if { $kfail_prms != 0 } {
+ 		set message [concat $message "\t(PRMS: $kfail_prms)"]
+ 	    }
+ 	}
  	UNTESTED {
! 	    # The only reason we look at the xfail/kfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $kfail_flag && $kfail_prms != 0 } {
! 		set message [concat $message "\t(PRMS $kfail_prms)"]
! 	    } elseif { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
*************** proc record_test { type message args } {
*** 623,640 ****
  	}
  	UNRESOLVED {
  	    set exit_status 1
! 	    # The only reason we look at the xfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
  	    }
  	}
  	UNSUPPORTED {
! 	    # The only reason we look at the xfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
--- 696,717 ----
  	}
  	UNRESOLVED {
  	    set exit_status 1
! 	    # The only reason we look at the xfail/kfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $kfail_flag && $kfail_prms != 0 } {
! 		set message [concat $message "\t(PRMS $kfail_prms)"]
! 	    } elseif { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
  	    }
  	}
  	UNSUPPORTED {
! 	    # The only reason we look at the xfail/kfail stuff is to pick up
  	    # `xfail_prms'.
! 	    if { $kfail_flag && $kfail_prms != 0 } {
! 		set message [concat $message "\t(PRMS $kfail_prms)"]
! 	    } elseif { $xfail_flag && $xfail_prms != 0 } {
  		set message [concat $message "\t(PRMS $xfail_prms)"]
  	    } elseif $prms_id {
  		set message [concat $message "\t(PRMS $prms_id)"]
*************** proc record_test { type message args } {
*** 675,688 ****
      set warncnt 0
      set errcnt 0
      set xfail_flag 0
      set xfail_prms 0
  }
  
  #
  # Record that a test has passed
  #
  proc pass { message } {
!     global xfail_flag compiler_conditional_xfail_data
  
      # if we have a conditional xfail setup, then see if our compiler flags match
      if [ info exists compiler_conditional_xfail_data ] {
--- 752,767 ----
      set warncnt 0
      set errcnt 0
      set xfail_flag 0
+     set kfail_flag 0
      set xfail_prms 0
+     set kfail_prms 0
  }
  
  #
  # Record that a test has passed
  #
  proc pass { message } {
!     global xfail_flag kfail_flag compiler_conditional_xfail_data
  
      # if we have a conditional xfail setup, then see if our compiler flags match
      if [ info exists compiler_conditional_xfail_data ] {
*************** proc pass { message } {
*** 692,698 ****
  	unset compiler_conditional_xfail_data
      }
      
!     if $xfail_flag {
  	record_test XPASS $message
      } else {
  	record_test PASS $message
--- 771,779 ----
  	unset compiler_conditional_xfail_data
      }
      
!     if $kfail_flag {
! 	record_test KPASS $message
!     } elseif $xfail_flag {
  	record_test XPASS $message
      } else {
  	record_test PASS $message
*************** proc pass { message } {
*** 703,709 ****
  # Record that a test has failed
  #
  proc fail { message } {
!     global xfail_flag compiler_conditional_xfail_data
  
      # if we have a conditional xfail setup, then see if our compiler flags match
      if [ info exists compiler_conditional_xfail_data ] {
--- 784,790 ----
  # Record that a test has failed
  #
  proc fail { message } {
!     global xfail_flag kfail_flag compiler_conditional_xfail_data
  
      # if we have a conditional xfail setup, then see if our compiler flags match
      if [ info exists compiler_conditional_xfail_data ] {
*************** proc fail { message } {
*** 713,719 ****
  	unset compiler_conditional_xfail_data
      }
  
!     if $xfail_flag {
  	record_test XFAIL $message
      } else {
  	record_test FAIL $message
--- 794,802 ----
  	unset compiler_conditional_xfail_data
      }
  
!     if $kfail_flag {
! 	record_test KFAIL $message
!     } elseif $xfail_flag {
  	record_test XFAIL $message
      } else {
  	record_test FAIL $message
*************** proc fail { message } {
*** 721,738 ****
  }
  
  #
! # Record that a test has passed unexpectedly
  #
  proc xpass { message } {
      record_test XPASS $message
  }
  
  #
! # Record that a test has failed unexpectedly
  #
  proc xfail { message } {
      record_test XFAIL $message
  }
  
  #
  # Set warning threshold
--- 804,841 ----
  }
  
  #
! # Record that a test that was expected to fail has passed unexpectedly
  #
  proc xpass { message } {
      record_test XPASS $message
  }
  
  #
! # Record that a test that was expected to fail did indeed fail
  #
  proc xfail { message } {
      record_test XFAIL $message
  }
+ 
+ #
+ # Record that a test for a known bug has passed unexpectedly
+ #
+ proc kpass { bugid message } {
+     global kfail_flag kfail_prms
+     set kfail_flag 1
+     set kfail_prms $bugid
+     record_test KPASS $message
+ }
+ 
+ #
+ # Record that a test has failed due to a known bug
+ #
+ proc kfail { bugid message } {
+     global kfail_flag kfail_prms
+     set kfail_flag 1
+     set kfail_prms $bugid
+     record_test KFAIL $message
+ }
  
  #
  # Set warning threshold
*************** proc init_testcounts { } {
*** 845,850 ****
--- 948,955 ----
      set test_counts(FAIL,name) "unexpected failures"
      set test_counts(XFAIL,name) "expected failures"
      set test_counts(XPASS,name) "unexpected successes"
+     set test_counts(KFAIL,name) "known failures"
+     set test_counts(KPASS,name) "unknown successes"
      set test_counts(WARNING,name) "warnings"
      set test_counts(ERROR,name) "errors"
      set test_counts(UNSUPPORTED,name) "unsupported tests"
Index: dejagnu/testsuite/lib/libsup.exp
===================================================================
RCS file: /cvs/src/src/dejagnu/testsuite/lib/libsup.exp,v
retrieving revision 1.2
diff -c -p -r1.2 libsup.exp
*** dejagnu/testsuite/lib/libsup.exp	21 Apr 2002 08:47:08 -0000	1.2
--- dejagnu/testsuite/lib/libsup.exp	22 Apr 2002 17:24:15 -0000
*************** proc make_defaults_file { defs } {
*** 54,61 ****
--- 54,63 ----
      puts ${fd} "set errcnt 0"
      puts ${fd} "set passcnt 0"
      puts ${fd} "set xpasscnt 0"
+     puts ${fd} "set kpasscnt 0"
      puts ${fd} "set failcnt 0"
      puts ${fd} "set xfailcnt 0"
+     puts ${fd} "set kfailcnt 0"
      puts ${fd} "set prms_id 0"
      puts ${fd} "set bug_id 0"
      puts ${fd} "set exit_status 0"
*************** proc make_defaults_file { defs } {
*** 64,69 ****
--- 66,73 ----
      puts ${fd} "set unsupportedcnt 0"
      puts ${fd} "set xfail_flag 0"
      puts ${fd} "set xfail_prms 0"
+     puts ${fd} "set kfail_flag 0"
+     puts ${fd} "set kfail_prms 0"
      puts ${fd} "set mail_logs 0"
      puts ${fd} "set multipass_name 0"
      catch "close $fd"
Index: dejagnu/testsuite/runtest.all/stats-sub.exp
===================================================================
RCS file: /cvs/src/src/dejagnu/testsuite/runtest.all/stats-sub.exp,v
retrieving revision 1.2
diff -c -p -r1.2 stats-sub.exp
*** dejagnu/testsuite/runtest.all/stats-sub.exp	21 Apr 2002 08:47:09 -0000	1.2
--- dejagnu/testsuite/runtest.all/stats-sub.exp	22 Apr 2002 17:24:15 -0000
*************** switch $STATS_TEST {
*** 29,34 ****
--- 29,36 ----
      fail { fail "fail test" }
      xpass { xpass "xpass test" }
      xfail { xfail "xfail test" }
+     kpass { kpass "somebug" "kpass test" }
+     kfail { kfail "somebug" "kfail test" }
      untested { untested "untested test" }
      unresolved { unresolved "unresolved test" }
      unsupported { unsupported "unsupported test" }
Index: dejagnu/testsuite/runtest.all/stats.exp
===================================================================
RCS file: /cvs/src/src/dejagnu/testsuite/runtest.all/stats.exp,v
retrieving revision 1.2
diff -c -p -r1.2 stats.exp
*** dejagnu/testsuite/runtest.all/stats.exp	21 Apr 2002 08:47:09 -0000	1.2
--- dejagnu/testsuite/runtest.all/stats.exp	22 Apr 2002 17:24:15 -0000
*************** set tests {
*** 36,41 ****
--- 36,43 ----
      { fail "unexpected failures\[ \t\]+1\n" }
      { xpass "unexpected successes\[ \t\]+1\n" }
      { xfail "expected failures\[ \t\]+1\n" }
+     { kpass "unknown successes\[ \t\]+1\n" }
+     { kfail "known failures\[ \t\]+1\n" }
      { untested "untested testcases\[ \t\]+1\n" }
      { unresolved "unresolved testcases\[ \t\]+1\n" }
      { unsupported "unsupported tests\[ \t\]+1\n" }
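For reviewers, the status precedence that the new pass/fail logic
implements (the kfail flag is checked before the xfail flag) can be
summarized in this stand-alone Tcl sketch; it is not the patch code
itself, just an illustration of the mapping:

```tcl
# Stand-alone sketch (not the patch code) of how pass/fail map a
# raw outcome to the reported status once the flags are set.
proc report { outcome xfail kfail } {
    if { $outcome == "pass" } {
        if { $kfail } { return "KPASS" }
        if { $xfail } { return "XPASS" }
        return "PASS"
    } else {
        if { $kfail } { return "KFAIL" }
        if { $xfail } { return "XFAIL" }
        return "FAIL"
    }
}
```

A test marked both expected and known to fail thus reports KFAIL or
KPASS, matching the documentation change above.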
