public inbox for linux-kernel@vger.kernel.org
From: Jens Axboe <axboe@suse.de>
To: Tejun Heo <htejun@gmail.com>
Cc: Linda Walsh <lkml@tlinx.org>,
	Linux-Kernel <linux-kernel@vger.kernel.org>
Subject: Re: Block I/O Schedulers: Can they be made selectable/device? @runtime?
Date: Wed, 29 Mar 2006 09:21:25 +0200	[thread overview]
Message-ID: <20060329072124.GR8186@suse.de> (raw)
In-Reply-To: <442A08AA.80305@gmail.com>

On Wed, Mar 29 2006, Tejun Heo wrote:
> Linda Walsh wrote:
> >Is it still the case that block I/O schedulers (AS, CFQ, etc.)
> >are only selectable at boot time?
> >
> >How difficult would it be to allow multiple, concurrent I/O
> >schedulers running on different block devices?
> >
> >How close is the kernel to "being there"?  I.e. if someone has a
> >"regular" hard disk and a high-end solid state disk, can
> >Linux allow whichever algorithm is best for the hardware?
> >(or applications if they are run on separate block devices)?
> >
> 
> Hello, Linda, Jens.
> 
> Actually, I've been thinking about related stuff for some time. E.g. it 
> doesn't make much sense to use any scheduler other than noop for SSDs, 
> and it also doesn't make much sense to plug requests for milliseconds on 
> such devices. So, what I'm currently thinking is...
> 
> * Give LLDD a chance to say that it doesn't need fancy scheduling.

Something I've been meaning to do for ages as well. I figure the
simplest way is to define a small set of profiles, a la

enum {
        BLK_QUEUE_TYPE_HD,
        BLK_QUEUE_TYPE_SS,
        BLK_QUEUE_TYPE_CDROM,
};

Make BLK_QUEUE_TYPE_HD the default setting, and then let setting it
look something like:

        q = blk_init_queue(rfn, lock);
        blk_set_queue_type(q, BLK_QUEUE_TYPE_SS);
        ...

and be done with it.

> * Automagically tune plugging time. We can maintain a running average of 
> request turn-around time and use a fraction of it to plug the device. This 
> should give good enough merging behavior while not adding excessive 
> delay to seek time.

Sounds like too much work for little (or zero) benefit. The current
heuristics are a little rough, but if you can show a tangible benefit
from actually measuring and calculating this stuff, then we can talk :-)

> * Don't leave devices with queue depth > 1 idle. For queued 
> devices, we can push the first request out fast, such that the head moves 
> into proximity of what would probably follow. So, don't plug the first 
> request, plug from the second.

It's a trade-off: if the next I/O is mergeable, it will still be a loss.
But generally I like the idea!

-- 
Jens Axboe


Thread overview: 7+ messages
2006-03-26  6:41 Block I/O Schedulers: Can they be made selectable/device? @runtime? Linda Walsh
2006-03-26  7:06 ` Valdis.Kletnieks
2006-03-27  3:20   ` Linda Walsh
2006-03-27  6:19     ` Randy.Dunlap
2006-03-27  8:40       ` [PATCH] 2.6.16 Block I/O Schedulers - document runtime selection Valdis.Kletnieks
2006-03-29  4:10 ` Block I/O Schedulers: Can they be made selectable/device? @runtime? Tejun Heo
2006-03-29  7:21   ` Jens Axboe [this message]
