From: Ingo Molnar <mingo@kernel.org>
To: Rakib Mullick <rakib.mullick@gmail.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>,
Peter Zijlstra <peterz@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
Frederic Weisbecker <fweisbec@gmail.com>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 0/2] sched: move content out of core files for load average
Date: Sun, 21 Apr 2013 10:54:47 +0200
Message-ID: <20130421085447.GB31470@gmail.com>
In-Reply-To: <CADZ9YHiy59tASG3uMXqdOv=RY79SMSzZb47oBh2uZGqfbCR+YQ@mail.gmail.com>
* Rakib Mullick <rakib.mullick@gmail.com> wrote:
> On Fri, Apr 19, 2013 at 2:25 PM, Ingo Molnar <mingo@kernel.org> wrote:
> >
> > * Paul Gortmaker <paul.gortmaker@windriver.com> wrote:
> >
> >> On 13-04-18 07:14 AM, Peter Zijlstra wrote:
> >> > On Mon, 2013-04-15 at 11:33 +0200, Ingo Molnar wrote:
> >> >> * Paul Gortmaker <paul.gortmaker@windriver.com> wrote:
> >> >>
> >> >>> Recent activity has had a focus on moving functionally related blocks of stuff
> >> >>> out of sched/core.c into stand-alone files. The code relating to load average
> >> >>> calculations has grown significantly enough recently to warrant placing it in a
> >> >>> separate file.
> >> >>>
> >> >>> Here we do that, and in doing so, we shed ~20k of code from sched/core.c (~10%).
> >> >>>
> >> >>> A couple small static functions in the core sched.h header were also localized
> >> >>> to their singular user in sched/fair.c at the same time, with the goal to also
> >> >>> reduce the amount of "broadcast" content in that sched.h file.
> >> >>
> >> >> Nice!
> >> >>
> >> >> Peter, is this (and the naming of the new file) fine with you too?
> >> >
> >> > Yes and no.. that is I do like the change, but I don't like the
> >> > filename. We have _waaaay_ too many different things we call load_avg.
> >> >
> >> > That said, I'm having a somewhat hard time coming up with a coherent
> >> > alternative :/
> >>
> >> Several of the relocated functions start their name with "calc_load..."
> >> Does "calc_load.c" sound any better?
> >
> > Peter has a point about load_avg being somewhat of a misnomer: that's not your
> > fault in any way, we created overlapping naming within the scheduler and are now
> > hurting from it.
> >
> > Here are the main scheduler 'load' concepts we have right now:
> >
> > - The externally visible 'average load' value extracted by tools like 'top' via
> > /proc/loadavg and handled by fs/proc/loadavg.c. Internally the naming is all
> > over the map: the fields that are updated are named 'avenrun[]', most other
> > variables and methods are named calc_load_*(), and a few callbacks are named
> > *_cpu_load_*().
> >
> > - rq->cpu_load, a weighted, vectored scheduler-internal notion of task load
> > average with multiple run length averages. Only exposed by debug interfaces but
> > otherwise relied on by the scheduler for SMP load balancing.
> >
> > - se->avg - per entity (per task) load average. This is integrated differently
> > from the cpu_load - but work is ongoing to possibly integrate it with the
> > rq->cpu_load metric. This metric is used for CPU internal execution time
> > allocation and timeslicing, based on nice value priorities and cgroup
> > weights and constraints.
> >
> > Work is ongoing to integrate rq->cpu_load and se->avg - eventually they will
> > become one metric.
> >
> > It might eventually make sense to integrate the 'average load' calculation as well
> > with all this - as they really have a similar purpose: the avenrun[] vector of
> > averages is conceptually similar to the rq->cpu_load[] vector of averages.
> >
> > So I'd suggest to side-step all that existing confusion and simply name the new
> > file kernel/sched/proc.c - our external /proc scheduler ABI towards userspace.
> > This is similar to the already existing kernel/irq/proc.c pattern.
> >
> Well, kernel/sched/stat.c also exposes a scheduler ABI to userspace.

Schedstats is more like a debugging API, used by a small number of tools,
so I don't think it's particularly confusing.
Thanks,
Ingo
Thread overview: 18+ messages
2013-04-13 0:04 [RFC PATCH 0/2] sched: move content out of core files for load average Paul Gortmaker
2013-04-13 0:04 ` [PATCH 1/2] sched: fork load calculation code from sched/core --> sched/load_avg Paul Gortmaker
2013-04-13 0:04 ` [PATCH 2/2] sched: move update_load_[add/sub/set] from sched.h to fair.c Paul Gortmaker
2013-04-13 4:30 ` [RFC PATCH 0/2] sched: move content out of core files for load average Rakib Mullick
2013-04-14 0:06 ` Paul Gortmaker
2013-04-15 9:33 ` Ingo Molnar
2013-04-18 11:14 ` Peter Zijlstra
2013-04-18 15:54 ` Paul Gortmaker
2013-04-18 17:06 ` Rakib Mullick
2013-04-18 23:13 ` Paul Gortmaker
2013-04-18 23:43 ` Paul Turner
2013-04-19 2:17 ` Charles Wang
2013-04-19 6:13 ` Rakib Mullick
2013-04-19 8:25 ` Ingo Molnar
2013-04-19 10:51 ` Peter Zijlstra
2013-04-19 10:58 ` Ingo Molnar
2013-04-19 17:05 ` Rakib Mullick
2013-04-21 8:54 ` Ingo Molnar [this message]