Date: Thu, 28 Jan 2016 10:59:25 +0000
From: Mel Gorman
To: Matt Fleming
Cc: Peter Zijlstra, Ingo Molnar, Mike Galbraith, LKML
Subject: Re: [PATCH] sched: Make schedstats a runtime tunable that is disabled by default v2
Message-ID: <20160128105925.GM3162@techsingularity.net>
In-Reply-To: <20160128103208.GB2571@codeblueprint.co.uk>
References: <1453908566-15211-1-git-send-email-mgorman@techsingularity.net> <20160128103208.GB2571@codeblueprint.co.uk>

On Thu, Jan 28, 2016 at 10:32:08AM +0000, Matt Fleming wrote:
> On Wed, 27 Jan, at 03:29:26PM, Mel Gorman wrote:
> > +#ifdef CONFIG_SCHEDSTATS
> > +void set_schedstats(bool enabled)
> > +{
> > +	if (enabled)
> > +		static_branch_enable(&sched_schedstats);
> > +	else
> > +		static_branch_disable(&sched_schedstats);
> > +}
> 
> This function should probably be 'static'; it has no users outside of
> this file.
> 

Yes.

> > @@ -313,17 +317,19 @@ do { \
> >  #define P(n) SEQ_printf(m, "  .%-30s: %d\n", #n, rq->n);
> >  #define P64(n) SEQ_printf(m, "  .%-30s: %Ld\n", #n, rq->n);
> > 
> > -	P(yld_count);
> > +	if (schedstat_enabled()) {
> > +		P(yld_count);
> > 
> > -	P(sched_count);
> > -	P(sched_goidle);
> > +		P(sched_count);
> > +		P(sched_goidle);
> >  #ifdef CONFIG_SMP
> > -	P64(avg_idle);
> > -	P64(max_idle_balance_cost);
> > +		P64(avg_idle);
> > +		P64(max_idle_balance_cost);
> 
> These two fields are still updated without any kind of
> schedstat_enabled() guard. We probably shouldn't refuse to print them
> if we're maintaining these counters, right?
> 

Right.
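Something like this, perhaps (untested sketch against the quoted hunk,
reusing the P/P64 macros defined above; the hunk is truncated, so the
exact placement relative to the #ifdef is a guess):

	if (schedstat_enabled()) {
		P(yld_count);
		P(sched_count);
		P(sched_goidle);
	}
#ifdef CONFIG_SMP
	/* Updated unconditionally, so printed unconditionally. */
	P64(avg_idle);
	P64(max_idle_balance_cost);
#endif

i.e. only the counters that are genuinely schedstat-only stay behind
the static branch, while the always-maintained fields remain visible.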
> >  #undef P
> >  #undef P64
> > @@ -569,38 +575,38 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
> >  	nr_switches = p->nvcsw + p->nivcsw;
> > 
> >  #ifdef CONFIG_SCHEDSTATS
> > -	PN(se.statistics.sum_sleep_runtime);
> > -	PN(se.statistics.wait_start);
> > -	PN(se.statistics.sleep_start);
> > -	PN(se.statistics.block_start);
> > -	PN(se.statistics.sleep_max);
> > -	PN(se.statistics.block_max);
> > -	PN(se.statistics.exec_max);
> > -	PN(se.statistics.slice_max);
> > -	PN(se.statistics.wait_max);
> > -	PN(se.statistics.wait_sum);
> > -	P(se.statistics.wait_count);
> > -	PN(se.statistics.iowait_sum);
> > -	P(se.statistics.iowait_count);
> > -	P(se.nr_migrations);
> > -	P(se.statistics.nr_migrations_cold);
> > -	P(se.statistics.nr_failed_migrations_affine);
> > -	P(se.statistics.nr_failed_migrations_running);
> > -	P(se.statistics.nr_failed_migrations_hot);
> > -	P(se.statistics.nr_forced_migrations);
> > -	P(se.statistics.nr_wakeups);
> > -	P(se.statistics.nr_wakeups_sync);
> > -	P(se.statistics.nr_wakeups_migrate);
> > -	P(se.statistics.nr_wakeups_local);
> > -	P(se.statistics.nr_wakeups_remote);
> > -	P(se.statistics.nr_wakeups_affine);
> > -	P(se.statistics.nr_wakeups_affine_attempts);
> > -	P(se.statistics.nr_wakeups_passive);
> > -	P(se.statistics.nr_wakeups_idle);
> > -
> > -	{
> > +	if (schedstat_enabled()) {
> >  		u64 avg_atom, avg_per_cpu;
> > 
> > +		PN(se.statistics.sum_sleep_runtime);
> > +		PN(se.statistics.wait_start);
> > +		PN(se.statistics.sleep_start);
> > +		PN(se.statistics.block_start);
> > +		PN(se.statistics.sleep_max);
> > +		PN(se.statistics.block_max);
> > +		PN(se.statistics.exec_max);
> > +		PN(se.statistics.slice_max);
> > +		PN(se.statistics.wait_max);
> > +		PN(se.statistics.wait_sum);
> > +		P(se.statistics.wait_count);
> > +		PN(se.statistics.iowait_sum);
> > +		P(se.statistics.iowait_count);
> > +		P(se.nr_migrations);
> 
> Ditto for se.nr_migrations. It has no schedstat_enabled() wrapper.
> 

Yes.

> > @@ -801,8 +793,8 @@ static void update_stats_enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  		update_stats_wait_start(cfs_rq, se);
> >  }
> > 
> > -static inline void
> > -update_stats_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > +static void
> > +update_stats_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> >  {
> >  	/*
> >  	 * Mark the end of the wait period if dequeueing a
> 
> You dropped the 'inline' from this function. Since there is only one
> caller, I'm guessing that was unintentional?

It wasn't, really. The patch increased the function size enough that I
uninlined it and let the compiler make the decision. In this case it
should inline automatically, but I can leave the 'inline' in.

-- 
Mel Gorman
SUSE Labs
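P.S. A quick user-space illustration of "letting the compiler make the
decision" (hypothetical names, not kernel code): at -O2, gcc will inline
a small static function with a single caller whether or not it carries
the 'inline' keyword; the keyword is only a hint.

	/* inline_demo.c -- build with "gcc -O2 -S inline_demo.c" and
	 * note that, typically, neither a call to bump() nor an
	 * out-of-line body for it survives in the generated assembly,
	 * even though the function is not marked inline. */
	#include <stdio.h>

	static void bump(int *counter, int delta)
	{
		*counter += delta;
	}

	int main(void)
	{
		int count = 0;

		bump(&count, 3);	/* the sole call site */
		printf("%d\n", count);
		return 0;
	}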