From: Mark Rutland
To: David Carrillo-Cisneros
Cc: linux-kernel@vger.kernel.org, "x86@kernel.org", Ingo Molnar, Thomas Gleixner, Andi Kleen, Kan Liang, Peter Zijlstra, Borislav Petkov, Srinivas Pandruvada, Dave Hansen, Vikas Shivappa, Arnaldo Carvalho de Melo, Vince Weaver, Paul Turner, Stephane Eranian
Subject: Re: [PATCH 2/2] perf/core: Remove perf_cpu_context::unique_pmu
Date: Wed, 18 Jan 2017 12:25:15 +0000
Message-ID: <20170118122515.GD3231@leverpostej>
References: <20170117173840.10614-1-davidcc@google.com> <20170117173840.10614-3-davidcc@google.com>
In-Reply-To: <20170117173840.10614-3-davidcc@google.com>

On Tue, Jan 17, 2017 at 09:38:40AM -0800, David Carrillo-Cisneros wrote:
> cpuctx->unique_pmu was originally introduced as a way to identify cpuctxs
> with shared pmus, in order to avoid visiting the same cpuctx more than
> once in a for_each_pmu loop.
>
> cpuctx->unique_pmu == cpuctx->pmu in non-software task contexts, since
> they have only one pmu per cpuctx. Since perf_pmu_sched_task() is only
> called in hw contexts, this patch replaces cpuctx->unique_pmu with
> cpuctx->pmu there.
>
> The change above, together with the previous patch in this series,
> removes the remaining uses of cpuctx->unique_pmu, so we remove it
> altogether.
>
> Signed-off-by: David Carrillo-Cisneros

I don't have any HW with a PMU that needs a sched_task callback, so I'm
unable to give this a full test, but it looks sane to me. FWIW:

Acked-by: Mark Rutland

Thanks,
Mark.