From: Carsten Haitzler <carsten.haitzler@foss.arm.com>
To: Leo Yan <leo.yan@linaro.org>
Cc: linux-kernel@vger.kernel.org, coresight@lists.linaro.org,
	suzuki.poulose@arm.com, mathieu.poirier@linaro.org,
	mike.leach@linaro.org, linux-perf-users@vger.kernel.org,
	acme@kernel.org
Subject: Re: [PATCH v5 02/14] perf test: Add CoreSight shell lib shared code for future tests
Date: Wed, 10 Aug 2022 09:40:43 +0100	[thread overview]
Message-ID: <efadadfb-4aa7-9786-c297-8f073b4e97be@foss.arm.com> (raw)
In-Reply-To: <20220806094055.GB124146@leoy-ThinkPad-X240s>



On 8/6/22 10:40, Leo Yan wrote:
> On Thu, Jul 28, 2022 at 03:52:44PM +0100, carsten.haitzler@foss.arm.com wrote:
>> From: "Carsten Haitzler (Rasterman)" <raster@rasterman.com>
>>
>> This adds a library of shell "code" to be shared and used by future
>> tests that target quality testing for Arm CoreSight support in perf
>> and the Linux kernel.
>>
>> Signed-off-by: Carsten Haitzler <carsten.haitzler@arm.com>
>> ---
>>   tools/perf/tests/shell/lib/coresight.sh | 132 ++++++++++++++++++++++++
>>   1 file changed, 132 insertions(+)
>>   create mode 100644 tools/perf/tests/shell/lib/coresight.sh
>>
>> diff --git a/tools/perf/tests/shell/lib/coresight.sh b/tools/perf/tests/shell/lib/coresight.sh
>> new file mode 100644
> 
> Now one thing is tricky.  Since we scan sub-directories, the scripts
> under the folder "tools/perf/tests/shell/lib/" are not added into the
> test list; this is because the scripts under this folder do not have
> the executable (X) permission:
> 
> -rw-rw-r-- 1 leoy leoy 4675 Aug  6 17:03 coresight.sh
> -rw-rw-r-- 1 leoy leoy  329 Jul 27 09:37 probe.sh
> -rw-rw-r-- 1 leoy leoy  812 Jul 27 09:37 probe_vfs_getname.sh
> 
> I verified with command "perf list" and it works as expected.

Correct. The code takes advantage of this and skips anything that is not 
+x, as those files are assumed to be "library files".
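
For illustration, a minimal sketch of that skip logic (hypothetical - the 
actual scan loop lives in the shell test infrastructure patch and may 
differ in detail):

    # walk the shell test tree; anything not executable is treated as a
    # sourced library (like lib/coresight.sh), not a runnable test
    for script in $(find tests/shell -name '*.sh' | sort); do
        if [ ! -x "$script" ]; then
            continue
        fi
        echo "would run: $script"
    done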

>> index 000000000000..45a1477256b6
>> --- /dev/null
>> +++ b/tools/perf/tests/shell/lib/coresight.sh
>> @@ -0,0 +1,132 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +# Carsten Haitzler <carsten.haitzler@arm.com>, 2021
>> +
>> +# This is sourced from a driver script so no need for #!/bin... etc. at the
>> +# top - the assumption below is that it runs as part of sourcing after the
>> +# test sets up some basic env vars to say what it is.
>> +
>> +# This currently works with ETMv4 / ETF and not any other packet types at this
>> +# point. It will need changes if that changes.
>> +
>> +# perf record options for the perf tests to use
>> +PERFRECMEM="-m ,16M"
>> +PERFRECOPT="$PERFRECMEM -e cs_etm//u"
>> +
>> +TOOLS=$(dirname $0)
>> +DIR="$TOOLS/$TEST"
>> +BIN="$DIR/$TEST"
>> +# If the test tool/binary does not exist or is not executable then skip the test
>> +if ! test -x "$BIN"; then exit 2; fi
>> +DATD="."
>> +# If the data dir env is set then make the data dir use that instead of ./
>> +if test -n "$PERF_TEST_CORESIGHT_DATADIR"; then
>> +	DATD="$PERF_TEST_CORESIGHT_DATADIR";
>> +fi
>> +# If the stat dir env is set then make the stat dir use that instead of ./
>> +STATD="."
>> +if test -n "$PERF_TEST_CORESIGHT_STATDIR"; then
>> +	STATD="$PERF_TEST_CORESIGHT_STATDIR";
>> +fi
>> +
>> +# Called if the test fails - error code 1
>> +err() {
>> +	echo "$1"
>> +	exit 1
>> +}
>> +
>> +# Check that a counted statistic from our perf output meets the given minimum
>> +check_val_min() {
>> +	STATF="$4"
>> +	if test "$2" -lt "$3"; then
>> +		echo ", FAILED" >> "$STATF"
>> +		err "Sanity check number of $1 is too low ($2 < $3)"
>> +	fi
>> +}
>> +
>> +perf_dump_aux_verify() {
>> +	# Some basic checking that the AUX chunk contains some sensible data
>> +	# to see that we are recording something and at least a minimum
>> +	# amount of it. We should almost always see Fx packets in just about
>> +	# anything but certainly we will see some trace info and async
>> +	# packets
>> +	DUMP="$DATD/perf-tmp-aux-dump.txt"
>> +	perf report --stdio --dump -i "$1" | \
>> +		grep -o -e I_ATOM_F -e I_ASYNC -e I_TRACE_INFO > "$DUMP"
>> +	# Simply count how many of these packets we find to see that we are
>> +	# producing a reasonable amount of data - exact checks are not sane
>> +	# as this is a lossy process where we may lose some blocks and the
>> +	# compiler may produce different code depending on the compiler and
>> +	# optimization options, so this is rough just to see if we're
>> +	# either missing almost all the data or all of it
>> +	ATOM_FX_NUM=`grep I_ATOM_F "$DUMP" | wc -l`
>> +	ASYNC_NUM=`grep I_ASYNC "$DUMP" | wc -l`
>> +	TRACE_INFO_NUM=`grep I_TRACE_INFO "$DUMP" | wc -l`
>> +	rm -f "$DUMP"
>> +
>> +	# Arguments provide minimums for a pass
>> +	CHECK_FX_MIN="$2"
>> +	CHECK_ASYNC_MIN="$3"
>> +	CHECK_TRACE_INFO_MIN="$4"
>> +
>> +	# Write out statistics so that over time you can track the results to
>> +	# see if there is a pattern - for example whether runs become less
>> +	# "noisy" and produce more consistent amounts of data, and whether any
>> +	# techniques to minimize data loss are having an effect over time or
>> +	# not
>> +	STATF="$STATD/stats-$TEST-$DATV.csv"
>> +	if ! test -f "$STATF"; then
>> +		echo "ATOM Fx Count, Minimum, ASYNC Count, Minimum, TRACE INFO Count, Minimum" > "$STATF"
>> +	fi
>> +	echo -n "$ATOM_FX_NUM, $CHECK_FX_MIN, $ASYNC_NUM, $CHECK_ASYNC_MIN, $TRACE_INFO_NUM, $CHECK_TRACE_INFO_MIN" >> "$STATF"
>> +
>> +	# Actually check to see if we passed or failed.
>> +	check_val_min "ATOM_FX" "$ATOM_FX_NUM" "$CHECK_FX_MIN" "$STATF"
>> +	check_val_min "ASYNC" "$ASYNC_NUM" "$CHECK_ASYNC_MIN" "$STATF"
>> +	check_val_min "TRACE_INFO" "$TRACE_INFO_NUM" "$CHECK_TRACE_INFO_MIN" "$STATF"
>> +	echo ", Ok" >> "$STATF"
>> +}
>> +
>> +perf_dump_aux_tid_verify() {
>> +	# A specifically crafted test will produce a list of Thread IDs on
>> +	# stdout that need to be checked to see that they have had trace
>> +	# info collected in AUX blocks in the perf data. This will go
>> +	# through all the TIDs that are listed as CID=0xabcdef and see
>> +	# that all the Thread IDs the test tool reports are in the perf
>> +	# data AUX chunks
>> +
>> +	# The TID test tools will print one TID per line to stdout for each
>> +	# thread being tested
>> +	TIDS=`cat "$2"`
>> +	# Scan the perf report to find the TIDs that are actually CID in hex
>> +	# and build a list of the ones found
>> +	FOUND_TIDS=`perf report --stdio --dump -i "$1" | \
>> +			grep -o "CID=0x[0-9a-z]\+" | sed 's/CID=//g' | \
>> +			uniq | sort | uniq`
>> +	# No CID=xxx found - maybe your kernel is reporting these as
>> +	# VMID=xxx so look there
>> +	if test -z "$FOUND_TIDS"; then
>> +		FOUND_TIDS=`perf report --stdio --dump -i "$1" | \
>> +				grep -o "VMID=0x[0-9a-z]\+" | sed 's/VMID=//g' | \
>> +				uniq | sort | uniq`
>> +	fi
> 
> Just a note: in theory we could check the perf metadata and decide
> whether to use VMID or CID as the thread ID in the trace data.  But the
> perf metadata doesn't give this directly and we would need to parse the
> "TRCIDR2" field, which would introduce complexity.
> 
> The current approach is simple, so let's keep it.

A simple approach is at least easier to maintain here, so we're in 
agreement. :)
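
For reference, a rough usage sketch of a new test that sources this lib - 
the test name, args and file paths below are made up purely for 
illustration; each real test sets its own:

    #!/bin/sh
    # Illustrative caller of the shared CoreSight helpers
    TEST="asm_pure_loop"          # hypothetical test tool name
    . "$(dirname $0)/../lib/coresight.sh"
    ARGS="10"
    DATV="out"
    DATA="$DATD/perf-$TEST-$DATV.data"

    perf record $PERFRECOPT -o "$DATA" "$BIN" $ARGS

    # require at least 10 ATOM_Fx, 1 ASYNC and 1 TRACE_INFO packet in
    # the AUX trace, and append the counts to the stats CSV
    perf_dump_aux_verify "$DATA" 10 1 1

    exit 0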

>> +
>> +	# Iterate over the list of TIDs that the test says it has and find
>> +	# them in the TIDs found in the perf report
>> +	MISSING=""
>> +	for TID2 in $TIDS; do
>> +		FOUND=""
>> +		for TIDHEX in $FOUND_TIDS; do
>> +			TID=`printf "%i" $TIDHEX`
>> +			if test "$TID" -eq "$TID2"; then
>> +				FOUND="y"
>> +				break
>> +			fi
>> +		done
>> +		if test -z "$FOUND"; then
>> +			MISSING="$MISSING $TID2"
>> +		fi
>> +	done
>> +	if test -n "$MISSING"; then
>> +		err "Thread IDs $MISSING not found in perf AUX data"
>> +	fi
>> +}
> 
> The patch LGTM:
> 
> Reviewed-by: Leo Yan <leo.yan@linaro.org>
> 
>> -- 
>> 2.32.0
>>


Thread overview: 36+ messages
2022-07-28 14:52 [PATCH v5 00/14] A patch series improving data quality of perf test for CoreSight carsten.haitzler
2022-07-28 14:52 ` [PATCH v5 01/14] perf test: Refactor shell tests allowing subdirs carsten.haitzler
2022-08-06  8:37   ` Leo Yan
2022-08-10  8:38     ` Carsten Haitzler
2022-07-28 14:52 ` [PATCH v5 02/14] perf test: Add CoreSight shell lib shared code for future tests carsten.haitzler
2022-08-06  9:40   ` Leo Yan
2022-08-10  8:40     ` Carsten Haitzler [this message]
2022-07-28 14:52 ` [PATCH v5 03/14] perf test: Add build infra for perf test tools for CoreSight tests carsten.haitzler
2022-08-07  3:59   ` Leo Yan
2022-08-10 17:37     ` Carsten Haitzler
2022-07-28 14:52 ` [PATCH v5 04/14] perf test: Add asm pureloop test tool carsten.haitzler
2022-08-07  4:03   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 05/14] perf test: Add asm pureloop test shell script carsten.haitzler
2022-08-07  4:35   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 06/14] perf test: Add git ignore for perf data generated by the CoreSight tests carsten.haitzler
2022-08-07  4:35   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 07/14] perf test: Add memcpy thread test tool carsten.haitzler
2022-08-07  4:49   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 08/14] perf test: Add memcpy thread test shell script carsten.haitzler
2022-08-07  4:12   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 09/14] perf test: Add thread loop test tool carsten.haitzler
2022-08-07  5:13   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 10/14] perf test: Add thread loop test shell scripts carsten.haitzler
2022-08-07  5:17   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 11/14] perf test: Add unroll thread test tool carsten.haitzler
2022-08-07  5:25   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 12/14] perf test: Add unroll thread test shell script carsten.haitzler
2022-08-07  5:44   ` Leo Yan
2022-08-10 17:55     ` Carsten Haitzler
2022-07-28 14:52 ` [PATCH v5 13/14] perf test: Add git ignore for tmp and output files of CoreSight tests carsten.haitzler
2022-08-07  5:48   ` Leo Yan
2022-07-28 14:52 ` [PATCH v5 14/14] perf test: Add relevant documentation about CoreSight testing carsten.haitzler
2022-08-07  7:03   ` Leo Yan
2022-08-10 17:59     ` Carsten Haitzler
2022-08-11 13:03       ` Mike Leach
2022-08-11 16:10 ` [PATCH v5 00/14] A patch series improving data quality of perf test for CoreSight Mike Leach
