From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1765419AbdAJK02 (ORCPT ); Tue, 10 Jan 2017 05:26:28 -0500
Received: from mail-pf0-f174.google.com ([209.85.192.174]:33572 "EHLO
	mail-pf0-f174.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1760412AbdAJKZr (ORCPT );
	Tue, 10 Jan 2017 05:25:47 -0500
From: David Carrillo-Cisneros
To: linux-kernel@vger.kernel.org
Cc: "x86@kernel.org", Ingo Molnar, Thomas Gleixner, Andi Kleen, Kan Liang,
	Peter Zijlstra, Borislav Petkov, Srinivas Pandruvada, Dave Hansen,
	Vikas Shivappa, Mark Rutland, Arnaldo Carvalho de Melo, Vince Weaver,
	Paul Turner, Stephane Eranian, David Carrillo-Cisneros
Subject: [RFC 5/6] perf/core: rotation no longer necessary. Behavior has changed. Beware
Date: Tue, 10 Jan 2017 02:25:01 -0800
Message-Id: <20170110102502.106187-6-davidcc@google.com>
X-Mailer: git-send-email 2.11.0.390.gc69c2f50cf-goog
In-Reply-To: <20170110102502.106187-1-davidcc@google.com>
References: <20170110102502.106187-1-davidcc@google.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

The sched in/out path now updates timestamps and "rotates"
ctx->inactive_groups itself, which changes the speed at which rotation
happens. Previously, one event group rotated per timer interrupt; now q
groups rotate per timer interrupt, where q is the number of groups
scheduled onto the PMU on each sched-in.
Signed-off-by: David Carrillo-Cisneros
---
 kernel/events/core.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index c7715b2627a9..f5d9c13b485f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3642,19 +3642,6 @@ static void perf_adjust_freq_unthr_context(struct perf_event_context *ctx,
 	raw_spin_unlock(&ctx->lock);
 }
 
-/*
- * Round-robin a context's events:
- */
-static void rotate_ctx(struct perf_event_context *ctx)
-{
-	/*
-	 * Rotate the first entry last of non-pinned groups. Rotation might be
-	 * disabled by the inheritance code.
-	 */
-	if (!ctx->rotate_disable)
-		list_rotate_left(&ctx->flexible_groups);
-}
-
 static int perf_rotate_context(struct perf_cpu_context *cpuctx)
 {
 	struct perf_event_context *ctx = NULL;
@@ -3681,10 +3668,11 @@ static int perf_rotate_context(struct perf_cpu_context *cpuctx)
 	if (ctx)
 		ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);
 
-	rotate_ctx(&cpuctx->ctx);
-	if (ctx)
-		rotate_ctx(ctx);
-
+	/*
+	 * A sched out will insert event groups at end of inactive_groups,
+	 * a sched in will schedule events at the beginning of inactive_groups.
+	 * This causes a rotation.
+	 */
 	perf_event_sched_in(cpuctx, ctx, current);
 
 	perf_pmu_enable(cpuctx->ctx.pmu);
-- 
2.11.0.390.gc69c2f50cf-goog