From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 7 Feb 2017 18:40:54 +0000
From: Mark Rutland
To: Neil Leeder, Will Deacon
Cc: Catalin Marinas, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, linux-arm-msm@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	Mark Langsdorf, Mark Salter, Jon Masters, Timur Tabi,
	cov@codeaurora.org
Subject: Re: [PATCH v10] perf: add qcom l2 cache perf events driver
Message-ID: <20170207184054.GE26173@leverpostej>
References: <1486491244-4143-1-git-send-email-nleeder@codeaurora.org>
In-Reply-To: <1486491244-4143-1-git-send-email-nleeder@codeaurora.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.21 (2010-09-15)
List-ID: <linux-kernel.vger.kernel.org>

Hi Neil,

On Tue, Feb 07, 2017 at 01:14:04PM -0500, Neil Leeder wrote:
> Adds perf events support for L2 cache PMU.
>
> The L2 cache PMU driver is named 'l2cache_0' and can be used
> with perf events to profile L2 events such as cache hits
> and misses on Qualcomm Technologies processors.
>
> Signed-off-by: Neil Leeder

Thanks for respinning this. This looks good to me now:

Reviewed-by: Mark Rutland

Will and I should be able to pick this up shortly.

There's one minor thing I'd like to clean up below, but we can sort
that out when applying -- there's no need to respin.
> +static struct cluster_pmu *l2_cache_associate_cpu_with_cluster(
> +		struct l2cache_pmu *l2cache_pmu, int cpu)
> +{
> +	u64 mpidr;
> +	int cpu_cluster_id;
> +	struct cluster_pmu *cluster;
> +
> +	/*
> +	 * This assumes that the cluster_id is in MPIDR[aff1] for
> +	 * single-threaded cores, and MPIDR[aff2] for multi-threaded
> +	 * cores. This logic will have to be updated if this changes.
> +	 */
> +	mpidr = read_cpuid_mpidr();
> +	if (mpidr & MPIDR_MT_BITMASK)
> +		cpu_cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 2);
> +	else
> +		cpu_cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 1);
> +
> +	list_for_each_entry(cluster, &l2cache_pmu->clusters, next) {
> +		if (cluster->cluster_id == cpu_cluster_id) {
> +			dev_info(&l2cache_pmu->pdev->dev,
> +				 "CPU%d associated with cluster %d\n", cpu,
> +				 cluster->cluster_id);
> +			cpumask_set_cpu(cpu, &cluster->cluster_cpus);
> +			*per_cpu_ptr(l2cache_pmu->pmu_cluster, cpu) = cluster;
> +			return cluster;
> +		}
> +	}

To minimise nesting, I'd like to fix this up as:

	list_for_each_entry(cluster, &l2cache_pmu->clusters, next) {
		if (cluster->cluster_id != cpu_cluster_id)
			continue;

		dev_info(&l2cache_pmu->pdev->dev,
			 "CPU%d associated with cluster %d\n", cpu,
			 cluster->cluster_id);
		cpumask_set_cpu(cpu, &cluster->cluster_cpus);
		*per_cpu_ptr(l2cache_pmu->pmu_cluster, cpu) = cluster;
		return cluster;
	}

Regardless, this is fine by me.

Thanks,
Mark.