From: Peter Williams <pwil3058@bigpond.net.au>
To: Con Kolivas <kernel@kolivas.org>,
"Marc E. Fiuczynski" <mef@CS.Princeton.EDU>
Cc: Jens Axboe <axboe@suse.de>,
Valdis.Kletnieks@vt.edu,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Chris Han <xiphux@gmail.com>
Subject: Re: [ANNOUNCE][RFC] plugsched-2.0 patches ...
Date: Sat, 22 Jan 2005 08:20:08 +1100
Message-ID: <41F17208.5000709@bigpond.net.au>
In-Reply-To: <41F13120.60108@kolivas.org>
Con Kolivas wrote:
> Marc E. Fiuczynski wrote:
>
>> Paraphrasing Jens Axboe:
>>
>>> I don't think you can compare [plugsched with the plugio framework].
>>> Yes they are both schedulers, but that's about where the 'similarity'
>>> stops. The CPU scheduler must be really fast, overhead must be kept
>>> to a minimum. For a disk scheduler, we can afford to burn cpu cycles
>>> to increase the io performance. The extra abstraction required to
>>> fully modularize the cpu scheduler would come at a non-zero cost as
>>> well, but I bet it would have a larger impact there. I doubt you
>>> could measure the difference in the disk scheduler.
>>
>> Modularization is usually done through a level of indirection (function
>> pointers). I have a can of "indirection be gone" almost ready to spray
>> over the plugsched framework that would reduce the overhead to zero at
>> runtime. I'd be happy to finish that work if it makes it more palatable
>> to integrate a plugsched framework into the kernel?
>
> The indirection was a minor point. It was suggested by wli that on
> modern cpus this would not be a demonstrable hit in performance. Having
> said that, I'm sure Peter would be happy to have another developer. I
> know how tiring and lonely it can feel maintaining such a monster.
Indeed, the more hands the lighter the load.
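As a purely illustrative aside, Marc's "indirection be gone" idea amounts
to selecting one scheduler's entry points at build time so that the
function-pointer table disappears from the hot path. A minimal sketch,
where CONFIG_PLUGSCHED_RUNTIME_SELECT and the ingosched_* names are
hypothetical and not the actual plugsched interface:

	struct task_struct;

	/* The modular (indirect) form: one table of function pointers. */
	struct sched_drv_ops {
		struct task_struct *(*pick_next)(int cpu);
		void (*enqueue)(struct task_struct *p, int cpu);
	};

	#ifdef CONFIG_PLUGSCHED_RUNTIME_SELECT
	/* Indirect dispatch: flexible, but every call goes through a pointer. */
	extern struct sched_drv_ops *sched_drv;
	#define sched_pick_next(cpu)	(sched_drv->pick_next(cpu))
	#define sched_enqueue(p, cpu)	(sched_drv->enqueue((p), (cpu)))
	#else
	/* Direct dispatch: the chosen scheduler's functions are called
	 * directly, so the indirection (and its cost) is gone at run time. */
	struct task_struct *ingosched_pick_next(int cpu);
	void ingosched_enqueue(struct task_struct *p, int cpu);
	#define sched_pick_next(cpu)	ingosched_pick_next(cpu)
	#define sched_enqueue(p, cpu)	ingosched_enqueue((p), (cpu))
	#endif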
Another issue (besides indirection) that I think needs to be addressed at
some stage is freeing up the memory occupied by the code of the schedulers
that were unlucky not to be picked. Something like what __init offers,
only more selective.
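To make the __init analogy concrete, here is a rough sketch of what a more
selective variant could look like. __init, .init.text and free_initmem()
are the real kernel mechanism being imitated; the per-scheduler section
names and free_unused_sched_text() below are hypothetical:

	struct task_struct;

	/* Each compiled-in scheduler's code goes into its own discardable
	 * text section, much as __init places boot-only code in .init.text. */
	#define __sched_ingo		__attribute__((__section__(".sched.ingo.text")))
	#define __sched_staircase	__attribute__((__section__(".sched.staircase.text")))

	void __sched_staircase staircase_enqueue(struct task_struct *p)
	{
		/* ... staircase scheduler code, discardable if not chosen ... */
	}

	/* Once the boot-time choice is final, release the pages holding the
	 * losing schedulers' sections, in the spirit of free_initmem(). */
	void free_unused_sched_text(const char *chosen)
	{
		/* walk the per-scheduler sections recorded by the linker
		 * script and return the unused ones to the page allocator
		 * (hand-waved here) */
	}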
The option of allowing more than one CPU per run queue is another
direction that needs addressing. This could allow a better balance between
the good scheduling fairness obtained by using a single run queue and the
better scalability obtained by using separate run queues.
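A sketch of the shared run queue idea, using hypothetical types rather
than the actual 2.6 runqueue code, just to show where the
fairness/scalability knob would sit:

	#include <linux/spinlock.h>
	#include <linux/list.h>
	#include <linux/threads.h>

	/* One runqueue shared by a small group of CPUs.  cpus_per_rq == 1
	 * reproduces today's per-CPU queues (best scalability); NR_CPUS
	 * gives a single global queue (best fairness, worst lock
	 * contention); values in between trade one against the other. */
	struct shared_runqueue {
		spinlock_t		lock;		/* contended by cpus_per_rq CPUs */
		unsigned long		nr_running;
		struct list_head	queue;		/* shared list of runnable tasks */
	};

	static struct shared_runqueue shared_rq[NR_CPUS];
	static int cpus_per_rq = 2;			/* e.g. one queue per sibling pair */

	static inline struct shared_runqueue *task_rq_of(int cpu)
	{
		return &shared_rq[cpu / cpus_per_rq];
	}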
Peter
--
Peter Williams pwil3058@bigpond.net.au
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
Thread overview: 11+ messages
2005-01-20 1:23 [ANNOUNCE][RFC] plugsched-2.0 patches Peter Williams
2005-01-20 1:58 ` Kasper Sandberg
2005-01-20 16:14 ` Marc E. Fiuczynski
2005-01-20 17:51 ` Valdis.Kletnieks
2005-01-21 14:11 ` Jens Axboe
2005-01-21 16:29 ` Marc E. Fiuczynski
2005-01-21 16:43 ` Con Kolivas
2005-01-21 21:20 ` Peter Williams [this message]
2005-01-21 2:38 ` Peter Williams
2005-01-21 2:50 ` Marc E. Fiuczynski
2005-01-21 15:16 ` [ckrm-tech] " Shailabh Nagar