public inbox for ltp@lists.linux.it
* Re: [LTP] [PATCH] Detect test results more accurately when generating HTML
@ 2009-05-26 14:27 Subrata Modak
  2009-05-26 20:14 ` Marc Gauthier
  0 siblings, 1 reply; 18+ messages in thread
From: Subrata Modak @ 2009-05-26 14:27 UTC (permalink / raw)
  To: Rohit Verma; +Cc: Ltp-List

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 12014 bytes --]

Hi,

>On Tue, 2009-05-26 at 09:08 +0530, rohit verma wrote:
>Hi,
> 
> This would be a tedious task. If you look into ltp/pan/pan.c, you can
> find that the parent process is writing the tags (<<<test_start>>>,
> <<<test_output>>>, etc...) and the child process goes on to execute
> the testcase using exec system call. Once the child has invoked the
> testcase through exec, the pan does not have any control on flushing
> the output. If flushing is important then it has to be taken up in
> each of the testcases.
> 
> If you look at a few of the outputs, you can find that the test output
> is appearing before the <<<test_start>>> or in between
> <<<test_start>>>  and <<<test_output>>>, which implies that before the
> parent has written the headers, the child executes and writes its
> output to stdout. I don't know if adding delay in child will solve the
> problem.

How much would you want to delay? Maybe a few microseconds?
Delaying is not a good option, but I am ready to consider it.
See what the reasonable delay is that makes this problem vanish.

Meanwhile, I have generated this small patch. It flushes all data
written to STDOUT before the parent (PAN) writes its next tag.
Moreover, I have moved the fflush() inside the if()'s scope so that it
executes whenever the parent writes from <<<test_start>>>
through <<<test_output>>>. See if this can help solve the problem.

Signed-off-by: Subrata Modak <subrata@linux.vnet.ibm.com>
---

--- ltp-intermediate-20090521/pan/ltp-pan.c.orig	2009-05-26 16:30:39.000000000 +0530
+++ ltp-intermediate-20090521/pan/ltp-pan.c	2009-05-26 19:44:37.000000000 +0530
@@ -1015,6 +1015,7 @@ static pid_t run_child(struct coll_entry
 		errbuf[errlen] = '\0';
 		/* fprintf(stderr, "%s", errbuf); */
 		waitpid(cpid, &status, 0);
+		fflush(stdout); /* Flush any data before writing <<test_end>>*/
 		if (WIFSIGNALED(status)) {
 			termid = WTERMSIG(status);
 			termtype = "signaled";
@@ -1300,14 +1301,14 @@ static void copy_buffered_output(struct 
 
 static void write_test_start(struct tag_pgrp *running, const char *init_status)
 {
-	if (!strcmp(reporttype, "rts"))
+	if (!strcmp(reporttype, "rts")) {
 		printf("%s\ntag=%s stime=%ld\ncmdline=\"%s\"\ncontacts=\"%s\"\nanalysis=%s\ninitiation_status=\"%s\"\n%s\n",
 		       "<<<test_start>>>",
 		       running->cmd->name, running->mystime, running->cmd->cmdline, "",
 		       "exit", init_status,
 		       "<<<test_output>>>");
-
 	fflush(stdout);
+	}
 }
 
 

---
Regards--
Subrata

> Let me know your views on the same.
> 
> Regards,
> Rohit
> 
> 
> 
> On Mon, May 25, 2009 at 9:42 PM, Subrata Modak
> <subrata@linux.vnet.ibm.com> wrote:
> > On Fri, 2009-05-22 at 14:55 +0530, rohit verma wrote:
> >> Hi,
> >>
> >>
> >> There is no problem with the genhtml.pl script. The real problem is in
> >> the pan driver (ltp/pan/pan.c). This seems to be a problem of race
> >> condition after going through the code.
> >>
> >
> > I would rather get this fixed inside pan.c. See if there is
> > a way so that the concerned test is provisioned by pan:
> >
> > 1) Only after <<test_output>>
> > 2) Pan must flush out everything for writing tags before the actual test
> > is provisioned
> > 3) The test must have written/flushed everything before Pan started
> > writing <<test_end>>
> >
> > Regards--
> > Subrata
> >
> >>
> >> My suggestion was to use the buffered mode as default. This can be
> >> done with the -O option followed by a directory name where the
> >> buffered data can be stored. But, there is a problem - the -O option
> >> has no effect unless the -x option with an argument greater than 1 is
> >> used.
> >>
> >>
> >> In your test runs you can add the options " -O <dirname> -x 2" and
> >> find out if the problem with logging still persists.
> >>
> >>
> >> Regards,
> >> rohit
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Fri, May 22, 2009 at 2:43 PM, Marc Gauthier <marc@tensilica.com>
> >> wrote:
> >>         Hi Rohit,
> >>
> >>         I guess we've each observed output that the other has
> >>         not  :-)
> >>
> >>         Per your (1):  I've seen a few cases where lines between
> >>         test_end and test_start do belong to the preceding one, though
> >>         they are indeed the exception, not the rule.  Looking at the
> >>         name to assign to previous seems to catch the proper ones in
> >>         all the cases I've seen (default test lists don't repeat the
> >>         same test twice in a row).
> >>
> >>         Per your (2):  Haven't seen that, interesting.  That's a bit
> >>         harder to sort out, but the script could at least catch the
> >>         lines that report PASS/FAIL/BROK etc.  And perhaps lines that
> >>         don't look like a collection of var=value.  Hmmm.... maybe all
> >>         it needs is another regexp test, see patch below (against the
> >>         genhtml.pl file I posted, which btw was against its latest
> >>         version, 1.3).
> >>
> >>         I haven't taken time to look at why things get out of order in
> >>         the first place, which seems like the correct thing to fix
> >>         (well, I looked enough to notice that fflush(stdout) was
> >>         really there :-).  Though making the script handle it doesn't
> >>         hurt, and I've simply been trying to get an accurate look at
> >>         test results, this was the quickest way to do it.
> >>
> >>         -Marc
> >>
> >>         --- genhtml-v4.pl       2009-05-21 01:08:14.747493000 -0700
> >>         +++ genhtml.pl  2009-05-22 02:07:47.756226000 -0700
> >>         @@ -116,18 +116,18 @@
> >>                         }
> >>
> >>                         #  Read test result parameters and test output
> >>         -               while ($line !~ /$end_tag/) {
> >>         +               while (1) {
> >>         +                       get_line(\$line) or last TEST;
> >>         +                       last if $line =~ /$end_tag/;
> >>                                 ($read_output = 1, next) if $line
> >>         =~ /$output_tag/;
> >>                                 ($read_output = 0, next) if $line
> >>         =~ /$execution_tag/;
> >>         -                       if ($read_output) {
> >>         +                       if ($read_output or $line !~ /^(\s*\w
> >>         +=(".*"|\S*))+\s*$/) {
> >>                                     push @output_lines, $line;
> >>                                 } else {
> >>                                     while ($line =~ s/^\s*(\w+)=(".*"|
> >>         \S*)//) {
> >>                                         $values{$1} = $2;
> >>                                     }
> >>                                 }
> >>         -               } continue {
> >>         -                       get_line(\$line) or last TEST;
> >>                         }
> >>
> >>                         #  Treat lines that follow <<<end_test>>> as
> >>         output lines
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>                 ______________________________________________________
> >>
> >>                 From: rohit verma [mailto:rohit.170309@gmail.com]
> >>                 Sent: Friday, May 22, 2009 00:13
> >>                 To: subrata@linux.vnet.ibm.com
> >>                 Cc: Marc Gauthier; LTP List
> >>                 Subject: Re: [LTP] [PATCH] Detect test results more
> >>                 accurately when generating HTML
> >>
> >>
> >>
> >>
> >>                 Hi,
> >>
> >>
> >>                 The suggestion to - "Process test output lines that
> >>                 appear between <<<test_end>>>
> >>                 and the following <<<test_start>>>.  Assign to the
> >>                 preceding test where the name matches, else to the
> >>                 following test." does not solve the problem
> >>                 completely.
> >>
> >>
> >>                 My observation is:
> >>
> >>
> >>                 1. Lines between <<<test_end>>> and <<<test_start>>>
> >>                 belongs to the following test case and not to the
> >>                 preceding one.
> >>                 2. Also, at times I have observed the test output is
> >>                 spread in such a way that few lines of the test output
> >>                 fall between <<<test_end>>> and <<<test_start>>> and
> >>                 rest of lines appear between the <<<test_start>>> and
> >>                 <<<test_output>>> of the corresponding testcase.
> >>
> >>
> >>                 Regards,
> >>                 rohit
> >>
> >>                 On Fri, May 22, 2009 at 12:22 PM, Subrata Modak
> >>                 <subrata@linux.vnet.ibm.com> wrote:
> >>                         On Thu, 2009-05-21 at 01:13 -0700, Marc
> >>                         Gauthier wrote:
> >>                         > Hi, my first patch to this list.  Actually
> >>                         not a patch,
> >>                         > but the files themselves, to avoid
> >>                         line-wrapping issues
> >>                         > and because the diff is twice the size of
> >>                         the new file.
> >>                         >
> >>                         > -Marc
> >>
> >>                         Thanks Marc.
> >>
> >>                         Rohit,
> >>
> >>                         Can you test this and see if it solves the
> >>                         problem that you have been discussing for so
> >>                         long?
> >>
> >>                         Regards--
> >>                         Subrata
> >>
> >>                         >
> >>                         >
> >>                         > -----------
> >>                         > Detect test results more accurately when
> >>                         generating HTML
> >>                         >
> >>                         > Process test output lines that appear
> >>                         between <<<test_end>>>
> >>                         > and the following <<<test_start>>>.  Assign
> >>                         to the preceding
> >>                         > test where the name matches, else to the
> >>                         following test.
> >>                         >
> >>                         > If a single test has multiple types of
> >>                         results (e.g. both
> >>                         > FAIL and WARN), report only the most
> >>                         significant one, to
> >>                         > avoid mis-computing the total number of PASS
> >>                         tests or
> >>                         > total counts that don't add up to the number
> >>                         of tests.
> >>                         >
> >>                         > If a test's output has no explicit result
> >>                         (PASS, FAIL, etc),
> >>                         > look at the exit value to determine whether
> >>                         it passed.
> >>                         >
> >>                         > Setting the SHOW_UNRESOLVED environment
> >>                         variable to 1
> >>                         > classifies as UNRESOLVED any test with no
> >>                         explicit result
> >>                         > and a zero exit code.
> >>                         >
> >>                         > Setting the SUMMARY_OUTPUT environment
> >>                         variable to 1
> >>                         > causes only one line of output per test to
> >>                         be shown, for a
> >>                         > tighter page that allows quickly scanning
> >>                         the results.
> >>                         >
> >>                         > Show percentage of each result type in
> >>                         summary section.
> >>                         >
> >>                         > Simplify parsing a bit.
> >>                         >
> >>                         > Signed-off-by:  Marc Gauthier
> >>                         <marc@tensilica.com>
> >>                         >
> >>                         >
> 


[-- Attachment #2: Type: text/plain, Size: 432 bytes --]

------------------------------------------------------------------------------
Register Now for Creativity and Technology (CaT), June 3rd, NYC. CaT
is a gathering of tech-side developers & brand creativity professionals. Meet
the minds behind Google Creative Lab, Visual Complexity, Processing, & 
iPhoneDevCamp as they present alongside digital heavyweights like Barbarian
Group, R/GA, & Big Spaceship. http://www.creativitycat.com 

[-- Attachment #3: Type: text/plain, Size: 155 bytes --]

_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 18+ messages in thread
* [LTP] [PATCH] Detect test results more accurately when generating HTML
@ 2009-05-21  8:13 Marc Gauthier
  2009-05-22  6:52 ` Subrata Modak
  0 siblings, 1 reply; 18+ messages in thread
From: Marc Gauthier @ 2009-05-21  8:13 UTC (permalink / raw)
  To: LTP List

[-- Attachment #1: Type: text/plain, Size: 1242 bytes --]

Hi, my first patch to this list.  Actually not a patch,
but the files themselves, to avoid line-wrapping issues
and because the diff is twice the size of the new file.

-Marc


-----------
Detect test results more accurately when generating HTML

Process test output lines that appear between <<<test_end>>>
and the following <<<test_start>>>.  Assign to the preceding
test where the name matches, else to the following test.

If a single test has multiple types of results (e.g. both
FAIL and WARN), report only the most significant one, to
avoid mis-computing the total number of PASS tests or
total counts that don't add up to the number of tests.

If a test's output has no explicit result (PASS, FAIL, etc),
look at the exit value to determine whether it passed.

Setting the SHOW_UNRESOLVED environment variable to 1
classifies as UNRESOLVED any test with no explicit result
and a zero exit code.

Setting the SUMMARY_OUTPUT environment variable to 1
causes only one line of output per test to be shown, for a
tighter page that allows quickly scanning the results.

Show percentage of each result type in summary section.

Simplify parsing a bit.

Signed-off-by:  Marc Gauthier <marc@tensilica.com>


[-- Attachment #2: html_report_header.txt --]
[-- Type: text/plain, Size: 2889 bytes --]

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-US">
<head>
  <title>Linux Test Project - Results</title>
</head>
<body>

<h1>LTP Output/Log </h1>
<table border="1" cellspacing="3">
  <tbody>
    <tr>
      <td bgcolor="#66ff66"><b>&nbsp;PASSED&nbsp;</b></td>
      <td bgcolor="#ff0000"><b>&nbsp;FAILED&nbsp;</b></td>
      <td bgcolor="Fuchsia"><b>&nbsp;WARNING&nbsp;</b></td>
      <td bgcolor="Yellow"><b>&nbsp;BROKEN&nbsp;</b></td>
      <td bgcolor="#8dc997"><b>&nbsp;RETIRED&nbsp;</b></td>
      <td bgcolor="Aqua"><b>&nbsp;CONFIG-ERROR&nbsp;</b></td>
      <td bgcolor="#c0c0ff"><b>&nbsp;UNRESOLVED&nbsp;</b></td>
    </tr>
  </tbody>
</table>
<br>
<b>Meaning of the following KEYWORDS in test results/logs:</b>
<hr>
<li><b>PASS -</b> Indicates that the test case had the expected result and passed</li>
<li><b>FAIL -</b> Indicates that the test case had an unexpected result and failed.</li>
<li><b>BROK -</b> Indicates that the remaining test cases are broken and will not execute correctly, because some precondition was not met, such as a resource not being available.</li>
<li><b>CONF -</b> Indicates that the test case was not written to run on the current hardware or software configuration, such as machine type or kernel version.</li>
<li><b>RETR -</b> Indicates that the test case has been retired and should not be executed any longer.</li>
<li><b>WARN -</b> Indicates that the test case experienced an unexpected or undesirable event that should not affect the test itself, such as being unable to clean up resources after the test finished.</li>
<li><b>INFO -</b> Specifies useful information about the status of the test that does not affect the result and does not indicate a problem.</li>
<hr>

<br>
<li><a href="#_1">Click Here for Detailed Report</a></li>
<li><a href="#_2">Click Here for Summary Report</a></li>
<br>

<h2 id="_1">Detailed Report</h2>
<div>
<table border="1" cellspacing="3"> 
<tbody>
    <tr>
      <td bgcolor="Silver"><strong>No</strong></td>
      <td bgcolor="Silver"><strong>Test-Name</strong></td>
      <td bgcolor="Silver"><strong>Start-Time</strong></td>
      <td bgcolor="Silver"><strong>Command-Line</strong></td>
      <td bgcolor="Silver"><strong>Contacts</strong></td>
      <td bgcolor="Silver"><strong>Analysis</strong></td>
      <td bgcolor="Silver"><strong>Initiation-Status</strong></td>
      <td bgcolor="Silver"><strong>Test-Output</strong></td>
      <td bgcolor="Silver"><strong>Duration</strong></td>
      <td bgcolor="Silver"><strong>Termination-type</strong></td>
      <td bgcolor="Silver"><strong>Termination-id</strong></td>
      <td bgcolor="Silver"><strong>Core-File</strong></td>
      <td bgcolor="Silver"><strong>cutime</strong></td>
      <td bgcolor="Silver"><strong>cstime</strong></td>
    </tr>

[-- Attachment #3: genhtml.pl --]
[-- Type: application/octet-stream, Size: 10691 bytes --]

#!/usr/bin/perl
#****************************************************************************#
# Copyright (c) International Business Machines  Corp., 2001                 #
#                                                                            #
# This program is free software;  you can redistribute it and/or modify      #
# it under the terms of the GNU General Public License as published by       #
# the Free Software Foundation; either version 2 of the License, or          #
# (at your option) any later version.                                        #
#                                                                            #
# This program is distributed in the hope that it will be useful,            #
# but WITHOUT ANY WARRANTY;  without even the implied warranty of            #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See                  #
# the GNU General Public License for more details.                           #
#                                                                            #
# You should have received a copy of the GNU General Public License          #
# along with this program;  if not, write to the Free Software               #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA    #
#                                                                            #
#****************************************************************************#

#****************************************************************************
#
# File:        genhtml.pl
#
# Description: This is a parser that reads the text output generated by
#              pan and converts it to HTML, highlighting each test
#              result's background for easy identification of the
#              pass/fail status of testcases
#
# Author:      Subrata Modak: subrata@linux.vnet.ibm.com
#
# History:     May 20 2009 - Modified - Marc Gauthier <marc@alumni.uwaterloo.ca>
#               - catch test output between test_end and test_start tags
#               - look at exit value if no explicit test result reported
#               - allow reporting tests without explicit result as unresolved
#               - simplify parsing a bit
#
#****************************************************************************


#  Constants
my %result_priority = (FAIL=>6, BROK=>5, WARN=>4, RETR=>3, CONF=>2, PASS=>1, UNRESOLVED=>0);
my %colours = (	FAIL => "#ff0000",		# red
		BROK => "Yellow",
		WARN => "Fuchsia",
		RETR => "#8dc997",		# darker green
		CONF => "Aqua",
		PASS => "#66ff66",		# light green
		UNRESOLVED => "#c0c0ff" );	# light blue

#  Globals
my $test_counter = 0;
my $eof = 0;
my %total_count = (FAIL=>0, BROK=>0, WARN=>0, RETR=>0, CONF=>0, PASS=>0, UNRESOLVED=>0);

#  Script arguments
my $header_file   = shift (@ARGV) || syntax();
my $start_tag     = shift (@ARGV) || syntax();
my $end_tag       = shift (@ARGV) || syntax();
my $output_tag    = shift (@ARGV) || syntax();
my $execution_tag = shift (@ARGV) || syntax();

if ($start_tag eq "" || $end_tag eq "" || $output_tag eq "" || $execution_tag eq "") {
    syntax();
}

sub syntax {
    print "syntax: genhtml header_file start_tag end_tag output_tag execution_tag file(s)\n";
    exit (1);
}

#  Get a line from <FILE>, with chomp and sticky EOF.
sub get_line {
    my ($lineref) = @_;
    $$lineref = undef;
    return undef if $eof;
    if (!defined($$lineref = <FILE>)) {
	$eof = 1;
	return undef;
    }
    chomp $$lineref;
    return 1;
}

#  Display HTML header
open (FILE, "<$header_file") || die "Cannot open file: $header_file";
my $header = "";
while ($headerline = <FILE>) {
       $header .= $headerline;
}
my $start_time = $ENV{TEST_START_TIME} || "[unknown date/time]";
$header =~ s/LTP\ Output\/Log/LTP\ Output\/Log\ (Report\ Generated\ on\ $start_time)/;
print $header;
close (FILE);


#  Parse and display test results

foreach my $file (@ARGV) {

	open (FILE, $file) || die "Cannot open file: $file\n";

	my $line = "";

	TEST: while (defined($line)) {

		my %values = ();
		my @output_lines = ();
		my $read_output = 0;

		#  Find start of test
		while ($line !~ /$start_tag/) {
			#  Capture lines present between end_tag and start_tag that apply to the subsequent test.
			push @output_lines, $line unless $test_counter == 0;
			get_line(\$line) or last TEST;
		}

		#  Read test result parameters and test output
		while ($line !~ /$end_tag/) {
			($read_output = 1, next) if $line =~ /$output_tag/;
			($read_output = 0, next) if $line =~ /$execution_tag/;
			if ($read_output) {
			    push @output_lines, $line;
			} else {
			    while ($line =~ s/^\s*(\w+)=(".*"|\S*)//) {
				$values{$1} = $2;
			    }
			}
		} continue {
			get_line(\$line) or last TEST;
		}

		#  Treat lines that follow <<<test_end>>> as output lines
		#  if they start with the same test name (tag).
		#  Otherwise they're likely part of the next test.
		while (1) {
			get_line(\$line) or last;
			last unless $line !~ /$start_tag/ and $line =~ /^\s(\w+)\s/ and $1 eq $values{tag};
			push @output_lines, $line;
		}

		#  Parse output lines for test results.
		#
		my $result = "UNRESOLVED";	# single result of test
		my %result_count = ();
		my $picked_line = "";		# one representative line picked out of the output
		foreach my $output_line (@output_lines) {
		    my $line_res = "UNRESOLVED";
		    $line_res = $1 if $output_line =~ /\ (FAIL|BROK|WARN|RETR|CONF|PASS)\ /;
		    $result_count{$line_res}++;
		    #  If a test has multiple results, only tally the "most important" one.
		    if ($result_priority{$line_res} >= $result_priority{$result}) {
			$result = $line_res;
			$picked_line = $output_line;	# pick last line of highest priority
		    }
		}

		#  If no explicit result found, look at termination status to determine result.
		#
		if ($result eq "UNRESOLVED") {
		    my $efail = "BROK"; #"FAIL"		# for now, report exit status failures as BROK
		    if ($values{termination_type} eq "signaled") {
			$result = $efail;		# termination_id is the signal number
		    } elsif ($values{termination_id}) {
			if (($values{termination_id} & 1) != 0) {
			    $result = $efail;		# bit 0 set == failure
			} elsif (($values{termination_id} & 2) != 0) {
			    $result = "BROK";		# bit 1 set == broken
			} elsif ($values{termination_id} == 4) {
			    $result = "WARN";
			} else {
			    $result = $efail;		# any other non-zero value assumed to mean failure
			}
		    } else {
			#  Exited with a zero exit code.  Do we assume PASS, or mark it UNRESOLVED?
			#  (We could grep the output for other hint keywords, but that's a bit hokey.)
			$result = "PASS" unless $ENV{SHOW_UNRESOLVED};
		    }
		}

		#  Cumulate this test's result.
		$total_count{$result}++;

		##  Previous cumulation method; tests exhibiting multiple result
		##  types are counted many times.
		#
		#foreach my $res (keys %result_count) {
		#    $total_count{$res}++ if $result_count{$res};
		#}

		#  All test data scanned.  Now output HTML (row) for this test.
		#
		my $get_proper_time = localtime ($values{stime});
		my $background_colour = $colours{$result};
		my $test_output = join("", map("$_ \n", @output_lines));
		$test_output = $picked_line if $ENV{SUMMARY_OUTPUT};
		$test_counter++;
		print	"<tr bgcolor=$background_colour><td><p><strong>$test_counter</strong></p></td>\n"
			. "<td><p><strong>$values{tag}</strong></p></td>\n"
			. "<td><p><pre><strong>$get_proper_time</strong></pre></p></td>\n"
			. "<td><p><strong> $values{cmdline} </strong></p></td>\n"
			. "<td><p><strong>$values{contacts}</strong></p></td>\n"
			. "<td><p><strong>$values{analysis}</strong></p></td>\n"
			. "<td><p><strong>$values{initiation_status}</strong></p></td>\n"
			. "<td><pre><strong>$test_output</strong></pre></td>"
			. "<td><p><strong>$values{duration}</strong></p></td>\n"
			. "<td><p><strong>$values{termination_type}</strong></p></td>\n"
			. "<td><p><strong>$values{termination_id}</strong></p></td>\n"
			. "<td><p><strong>$values{corefile}</strong></p></td>\n"
			. "<td><p><strong>$values{cutime}</strong></p></td>\n"
                        . "<td><p><strong>$values{cstime}</strong></p></td></tr>\n"
			; 
	}

	close (FILE);
}


#  Display summary

print "</tbody></table></div> \n\n<h2 id=\"_2\">Summary Report</h2>\n\n<div>\n\n<table border=\"1\" cellspacing=\"3\"><tbody>\n<tr>\n<td ";
if ($ENV{LTP_EXIT_VALUE} == 1 ) {
    print "bgcolor=\"#ff0000\"> <strong>Test Summary</strong></td><td bgcolor=\"#ff0000\"><strong>Pan reported some Tests FAIL</strong></td></tr>\n";
}
else {
    print "bgcolor=\"#66ff66\"> <strong>Test Summary</strong></td><td bgcolor=\"#66ff66\"><strong>Pan reported all Tests PASS</strong></td></tr>\n";
}

print "<tr><td><strong>LTP Version</strong> </td><td><strong> $ENV{LTP_VERSION}     </strong></td></tr>\n";
print "<tr><td><strong>Start Time</strong>  </td><td><strong> $ENV{TEST_START_TIME} </strong></td></tr>\n";
print "<tr><td><strong>End Time</strong>    </td><td><strong> $ENV{TEST_END_TIME}   </strong></td></tr>\n";
print "<tr><td><strong>Log Result</strong>  </td><td><a href=\"file://$ENV{TEST_LOGS_DIRECTORY}/\">      <strong>$ENV{TEST_LOGS_DIRECTORY}</strong></a></td></tr>\n";
print "<tr><td><strong>Output/Failed Result</strong></td><td><a href=\"file://$ENV{TEST_OUTPUT_DIRECTORY}/\"> <strong>$ENV{TEST_OUTPUT_DIRECTORY}</strong></a></td></tr>\n";
print "<tr><td><strong>Total Tests</strong></td><td><strong> $test_counter </strong></td></tr>\n";
#  Previous method of counting PASS:
#$total_count{PASS} = $test_counter - $total_count{FAIL} - $total_count{BROK} - $total_count{WARN} - $total_count{RETR} - $total_count{CONF} - $total_count{UNRESOLVED};
sub show_result_summary {
    my ($restype, $what) = @_;
    printf "<tr><td><strong>$what</strong></td>"
	      ."<td><strong> $total_count{$restype} </strong></td>"
	      ."<td><strong> %8.2f%% </strong></td></tr>\n",
	($total_count{$restype}*100.0) / $test_counter;
}
show_result_summary("PASS", "Test PASS");
show_result_summary("FAIL", "Test FAIL");
show_result_summary("BROK", "Test BROK");
show_result_summary("WARN", "Test WARN");
show_result_summary("RETR", "Test RETR");
show_result_summary("CONF", "Test CONF");
show_result_summary("UNRESOLVED", "Unresolved") if $ENV{SHOW_UNRESOLVED};
print "<tr><td><strong>Kernel Version</strong></td><td><strong> $ENV{KERNEL_VERSION}  </strong></td></tr>\n";
print "<tr><td><strong>Machine Architecture</strong></td><td><strong> $ENV{MACHINE_ARCH} </strong></td></tr>\n";
$hostname = `uname -n`;             chop($hostname); 
print "<tr><td><strong>Hostname</strong>  </td> <td><strong> $hostname </strong></td></tr>\n";
print "</tbody></table></div></body></html>\n";



^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2009-06-09 18:24 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-05-26 14:27 [LTP] [PATCH] Detect test results more accurately when generating HTML Subrata Modak
2009-05-26 20:14 ` Marc Gauthier
2009-05-27  6:44   ` rohit verma
2009-05-27 12:05     ` Marc Gauthier
2009-05-27 14:24       ` Subrata Modak
2009-05-27 14:58         ` Marc Gauthier
2009-05-29 12:54           ` Subrata Modak
2009-06-01 12:47             ` rohit verma
2009-06-02 12:33               ` rohit verma
2009-06-04  9:27                 ` Subrata Modak
2009-06-09 18:24                 ` Subrata Modak
2009-05-27 14:23     ` Subrata Modak
  -- strict thread matches above, loose matches on Subject: below --
2009-05-21  8:13 Marc Gauthier
2009-05-22  6:52 ` Subrata Modak
2009-05-22  7:13   ` rohit verma
2009-05-22  9:13     ` Marc Gauthier
2009-05-22  9:25       ` rohit verma
2009-05-25 16:12         ` Subrata Modak
