From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1760452AbcAKQez (ORCPT );
	Mon, 11 Jan 2016 11:34:55 -0500
Received: from casper.infradead.org ([85.118.1.10]:33115 "EHLO
	casper.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1758890AbcAKQeu (ORCPT );
	Mon, 11 Jan 2016 11:34:50 -0500
Message-Id: <20160111163229.054874403@infradead.org>
User-Agent: quilt/0.61-1
Date: Mon, 11 Jan 2016 17:25:04 +0100
From: Peter Zijlstra
To: mingo@kernel.org, alexander.shishkin@linux.intel.com, eranian@google.com
Cc: linux-kernel@vger.kernel.org, vince@deater.net, dvyukov@google.com,
	andi@firstfloor.org, jolsa@redhat.com, peterz@infradead.org
Subject: [RFC][PATCH 06/12] perf: Use task_ctx_sched_out()
References: <20160111162458.427203780@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline; filename=peterz-perf-fixes-7.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

We have a function that does exactly what we want here, use it. This
reduces the amount of cpuctx->task_ctx muckery.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2545,8 +2545,7 @@ static void perf_event_context_sched_out
 
 	if (do_switch) {
 		raw_spin_lock(&ctx->lock);
-		ctx_sched_out(ctx, cpuctx, EVENT_ALL);
-		cpuctx->task_ctx = NULL;
+		task_ctx_sched_out(cpuctx, ctx);
 		raw_spin_unlock(&ctx->lock);
 	}
 }