From: Jens Axboe <axboe@kernel.dk>
To: Yuehai Xu <yuehaixu@gmail.com>
Cc: linux-kernel@vger.kernel.org, cmm@us.ibm.com,
rwheeler@redhat.com, vgoyal@redhat.com, czoccolo@gmail.com,
yhxu@wayne.edu
Subject: Re: Who does determine the number of requests that can be serving simultaneously in a storage?
Date: Fri, 07 Jan 2011 09:21:02 +0100 [thread overview]
Message-ID: <4D26CCEE.4070508@kernel.dk> (raw)
In-Reply-To: <AANLkTimHE35jUOY6Lhd7OnATSD6BeB4poUosaQ+bR61i@mail.gmail.com>
On 2011-01-07 04:21, Yuehai Xu wrote:
> Hi all,
>
> We know that multiple requests can be served simultaneously by a
> storage device because of NCQ. My question is: who determines the
> exact number of requests being serviced by an HDD or SSD? Since the
> capability of different storage devices (HDD/SSD) to serve multiple
> requests differs, how does the OS know the exact number of requests
> that can be served simultaneously?
>
> I have failed to figure out the answer. I know the dispatch routine
> in the I/O schedulers is elevator_dispatch_fn, which is invoked in
> two places: __elv_next_request() and elv_drain_elevator(). I cannot
> work out the exact condition that triggers elv_drain_elevator(); from
> the source code, I know it should dispatch all requests in the
> pending queue to the "request_queue", from which requests are
> selected for dispatch to the device driver.
>
> As for __elv_next_request(), it is actually invoked by
> blk_peek_request(), which is in turn invoked by blk_fetch_request().
> From their comments, I understand that a single request should be
> fetched from the "request_queue" and dispatched to the corresponding
> device driver. However, I notice that blk_fetch_request() is invoked
> in a number of places, fetching requests in a loop with different
> stop conditions. Which condition is the one that actually controls
> the number of requests that can be served at the same time? The OS
> would of course not dispatch more requests than the storage can
> serve; for example, an SSD might serve 32 requests simultaneously
> while an HDD might serve only 4. But how does the OS handle this?
The driver has to take care of this. Since requests are pulled by the
driver, it knows when to stop asking for more work.
BTW, your depth of 4 for the HDD seems a bit odd. Typically all SATA
drives share the same queue depth, limited by what NCQ provides (32).
--
Jens Axboe
Thread overview: 8+ messages
2011-01-07 3:21 Who does determine the number of requests that can be serving simultaneously in a storage? Yuehai Xu
2011-01-07 5:16 ` Yuehai Xu
2011-01-07 8:21 ` Jens Axboe [this message]
2011-01-07 13:00 ` Yuehai Xu
2011-01-07 13:10 ` Jens Axboe
2011-01-07 13:23 ` Yuehai Xu
2011-01-07 15:30 ` Jens Axboe
2011-01-07 16:45 ` Yuehai Xu