From: Namhyung Kim
To: Alex Shi
Cc: rob@landley.net, mingo@redhat.com, peterz@infradead.org,
	gregkh@linuxfoundation.org, andre.przywara@amd.com, rjw@sisk.pl,
	paul.gortmaker@windriver.com, akpm@linux-foundation.org,
	paulmck@linux.vnet.ibm.com, linux-kernel@vger.kernel.org,
	pjt@google.com, vincent.guittot@linaro.org
Subject: Re: [PATCH 06/18] sched: set initial load avg of new forked task as its load weight
References: <1355127754-8444-1-git-send-email-alex.shi@intel.com>
	<1355127754-8444-7-git-send-email-alex.shi@intel.com>
Date: Fri, 21 Dec 2012 13:33:15 +0900
In-Reply-To: <1355127754-8444-7-git-send-email-alex.shi@intel.com> (Alex Shi's message of "Mon, 10 Dec 2012 16:22:22 +0800")
Message-ID: <87a9t8f2ic.fsf@sejong.aot.lge.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1 (gnu/linux)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 10 Dec 2012 16:22:22 +0800, Alex Shi wrote:
> New task has no runnable sum at its first runnable time, that make
> burst forking just select few idle cpus to put tasks.
> Set initial load avg of new forked task as its load weight to resolve
> this issue.
>
> Signed-off-by: Alex Shi
> ---
>  include/linux/sched.h |  1 +
>  kernel/sched/core.c   |  2 +-
>  kernel/sched/fair.c   | 13 +++++++++++--
>  3 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 5dafac3..093f9cd 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1058,6 +1058,7 @@ struct sched_domain;
>  #else
>  #define ENQUEUE_WAKING		0
>  #endif
> +#define ENQUEUE_NEWTASK		8
>
>  #define DEQUEUE_SLEEP		1
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e6533e1..96fa5f1 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1648,7 +1648,7 @@ void wake_up_new_task(struct task_struct *p)
>  #endif
>
>  	rq = __task_rq_lock(p);
> -	activate_task(rq, p, 0);
> +	activate_task(rq, p, ENQUEUE_NEWTASK);
>  	p->on_rq = 1;
>  	trace_sched_wakeup_new(p, true);
>  	check_preempt_curr(rq, p, WF_FORK);
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1faf89f..61c8d24 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1277,8 +1277,9 @@ static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>  /* Add the load generated by se into cfs_rq's child load-average */
>  static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>  					   struct sched_entity *se,
> -					   int wakeup)
> +					   int flags)
>  {
> +	int wakeup = flags & ENQUEUE_WAKEUP;
>  	/*
>  	 * We track migrations using entity decay_count <= 0, on a wake-up
>  	 * migration we use a negative decay count to track the remote decays
> @@ -1312,6 +1313,12 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>  		update_entity_load_avg(se, 0);
>  	}
>
> +	/*
> +	 * set the initial load avg of new task same as its load
> +	 * in order to avoid brust fork make few cpu too heavier
> +	 */
> +	if (flags & ENQUEUE_NEWTASK)
> +		se->avg.load_avg_contrib = se->load.weight;
>  	cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
>  	/* we force update consideration on load-balancer moves */
>  	update_cfs_rq_blocked_load(cfs_rq, !wakeup);
> @@ -1476,7 +1483,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  	 */
>  	update_curr(cfs_rq);
>  	account_entity_enqueue(cfs_rq, se);
> -	enqueue_entity_load_avg(cfs_rq, se, flags & ENQUEUE_WAKEUP);
> +	enqueue_entity_load_avg(cfs_rq, se, flags &
> +				(ENQUEUE_WAKEUP | ENQUEUE_NEWTASK));

It seems that just passing 'flags' is enough.

>
>  	if (flags & ENQUEUE_WAKEUP) {
>  		place_entity(cfs_rq, se, 0);
> @@ -2586,6 +2594,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  		cfs_rq->h_nr_running++;
>
>  		flags = ENQUEUE_WAKEUP;
> +		flags &= ~ENQUEUE_NEWTASK;

Why is this needed?

Thanks,
Namhyung

>  	}
>
>  	for_each_sched_entity(se) {