linux-arm-kernel.lists.infradead.org archive mirror
From: James Clark <james.clark@linaro.org>
To: Leo Yan <leo.yan@arm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-perf-users@vger.kernel.org, Will Deacon <will@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Namhyung Kim <namhyung@kernel.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Ian Rogers <irogers@google.com>,
	Adrian Hunter <adrian.hunter@intel.com>
Subject: Re: [PATCH 10/12] perf arm_spe: Refactor arm_spe__get_metadata_by_cpu()
Date: Fri, 20 Jun 2025 11:45:40 +0100	[thread overview]
Message-ID: <f5f70b05-dd8b-4e46-a6ee-e2710a9cf1ee@linaro.org> (raw)
In-Reply-To: <20250613-arm_spe_support_hitm_overhead_v1_public-v1-10-6faecf0a8775@arm.com>



On 13/06/2025 4:53 pm, Leo Yan wrote:
> Handle "CPU=-1" (per-thread mode) in the arm_spe__get_metadata_by_cpu()
> function. As a result, the function is more general and will be invoked
> by a subsequent change.
> 
> Signed-off-by: Leo Yan <leo.yan@arm.com>
> ---
>   tools/perf/util/arm-spe.c | 30 ++++++++++++++----------------
>   1 file changed, 14 insertions(+), 16 deletions(-)
> 
> diff --git a/tools/perf/util/arm-spe.c b/tools/perf/util/arm-spe.c
> index 2ab38d21d52f73617451a6a79f9d5ae931a34f49..8e93b0d151a98714d0c5e5f6ceec386a2aa63ad0 100644
> --- a/tools/perf/util/arm-spe.c
> +++ b/tools/perf/util/arm-spe.c
> @@ -324,6 +324,19 @@ static u64 *arm_spe__get_metadata_by_cpu(struct arm_spe *spe, u64 cpu)
>   	if (!spe->metadata)
>   		return NULL;
>   
> +	/* CPU ID is -1 for per-thread mode */
> +	if (cpu < 0) {
> +		/*
> +		 * On a heterogeneous system, a CPU ID of -1 cannot
> +		 * identify the metadata.
> +		 */
> +		if (!spe->is_homogeneous)
> +			return NULL;
> +
> +		/* In homogeneous system, simply use CPU0's metadata */
> +		return spe->metadata[0];
> +	}
> +
>   	for (i = 0; i < spe->metadata_nr_cpu; i++)
>   		if (spe->metadata[i][ARM_SPE_CPU] == cpu)
>   			return spe->metadata[i];
> @@ -924,22 +937,7 @@ static bool arm_spe__synth_ds(struct arm_spe_queue *speq,
>   		cpuid = perf_env__cpuid(spe->session->evlist->env);
>   		midr = strtol(cpuid, NULL, 16);
>   	} else {
> -		/* CPU ID is -1 for per-thread mode */
> -		if (speq->cpu < 0) {
> -			/*
> -			 * On the heterogeneous system, due to CPU ID is -1,
> -			 * cannot confirm the data source packet is supported.
> -			 */
> -			if (!spe->is_homogeneous)
> -				return false;
> -
> -			/* In homogeneous system, simply use CPU0's metadata */
> -			if (spe->metadata)
> -				metadata = spe->metadata[0];
> -		} else {
> -			metadata = arm_spe__get_metadata_by_cpu(spe, speq->cpu);
> -		}
> -
> +		metadata = arm_spe__get_metadata_by_cpu(spe, speq->cpu);
>   		if (!metadata)
>   			return false;
>   
> 

Reviewed-by: James Clark <james.clark@linaro.org>




Thread overview: 28+ messages
2025-06-13 15:53 [PATCH 00/12] perf arm-spe: Support new events in FEAT_SPEv1p4 Leo Yan
2025-06-13 15:53 ` [PATCH 01/12] drivers/perf: arm_spe: Store event reserved bits in driver data Leo Yan
2025-06-19 11:28   ` James Clark
2025-06-19 16:22     ` Leo Yan
2025-06-13 15:53 ` [PATCH 02/12] drivers/perf: arm_spe: Expose events capability Leo Yan
2025-06-19 11:32   ` James Clark
2025-06-19 16:24     ` Leo Yan
2025-06-13 15:53 ` [PATCH 03/12] perf arm_spe: Correct setting remote access Leo Yan
2025-06-19 13:53   ` James Clark
2025-06-19 16:45     ` Leo Yan
2025-06-13 15:53 ` [PATCH 04/12] perf arm_spe: Directly propagate raw event Leo Yan
2025-06-19 14:13   ` James Clark
2025-06-13 15:53 ` [PATCH 05/12] perf arm_spe: Decode event types for new features Leo Yan
2025-06-19 14:20   ` James Clark
2025-06-13 15:53 ` [PATCH 06/12] perf arm_spe: Add "events" entry in meta data Leo Yan
2025-06-19 15:46   ` James Clark
2025-06-13 15:53 ` [PATCH 07/12] perf arm_spe: Refine memory level filling Leo Yan
2025-06-20 10:27   ` James Clark
2025-06-13 15:53 ` [PATCH 08/12] perf arm_spe: Separate setting of memory levels for loads and stores Leo Yan
2025-06-20 10:30   ` James Clark
2025-06-13 15:53 ` [PATCH 09/12] perf arm_spe: Fill memory levels for FEAT_SPEv1p4 Leo Yan
2025-06-20 10:37   ` James Clark
2025-06-13 15:53 ` [PATCH 10/12] perf arm_spe: Refactor arm_spe__get_metadata_by_cpu() Leo Yan
2025-06-20 10:45   ` James Clark [this message]
2025-06-13 15:53 ` [PATCH 11/12] perf arm_spe: Set HITM flag Leo Yan
2025-06-20 10:51   ` James Clark
2025-06-13 15:53 ` [PATCH 12/12] perf arm_spe: Allow parsing both data source and events Leo Yan
2025-06-20 10:55   ` James Clark
