From: Tejun Heo <tj@kernel.org>
To: Ming Lei <ming.lei@canonical.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"Justin M. Forbes" <jforbes@fedoraproject.org>,
	Jeff Moyer <jmoyer@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	"v4.0" <stable@vger.kernel.org>
Subject: Re: [PATCH 2/2] block: loop: avoiding too many pending per work I/O
Date: Tue, 5 May 2015 12:55:41 -0400	[thread overview]
Message-ID: <20150505165541.GV1971@htj.duckdns.org> (raw)
In-Reply-To: <CACVXFVOKWdoQoZOnLyvES60Pray4Xwh5OH_PS7vLLyb51hikCw@mail.gmail.com>

Hello, Ming.

On Tue, May 05, 2015 at 10:46:10PM +0800, Ming Lei wrote:
> On Tue, May 5, 2015 at 9:59 PM, Tejun Heo <tj@kernel.org> wrote:
> > It's a bit weird to hard-code this to 16, as it effectively becomes a
> > hidden bottleneck for concurrency.  For cases where 16 isn't a good
> > value, hunting down what's going on can be painful because the limit
> > isn't visible anywhere.  I still think the right knob to control
> > concurrency is nr_requests for the loop device.  You said that for
> > linear IOs it's better to have nr_requests higher than the concurrency,
> > but can you elaborate why?
> 
> I mean, in the case of sequential IO, the requests are more likely to
> hit the page cache, so handling each one is quite quick.  It is then
> often more efficient to handle them all in one context (for example,
> one by one from the IO queue) than to scatter them across different
> contexts (scheduled to different worker threads).  That behavior can
> be obtained by setting a bigger nr_requests (queue depth).
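
Concretely, that single-context model would look something like the
sketch below (the field and helper names here are illustrative, not
necessarily the actual loop.c ones): one worker drains every queued
command from the same thread, which stays cheap as long as the IOs
keep hitting the page cache.

/* Illustrative sketch: drain all queued commands from one worker
 * context instead of scheduling one work item per command. */
static void loop_drain_work(struct work_struct *work)
{
	struct loop_device *lo = container_of(work, struct loop_device,
					      drain_work);

	spin_lock_irq(&lo->lo_lock);
	while (!list_empty(&lo->pending)) {
		struct loop_cmd *cmd = list_first_entry(&lo->pending,
							struct loop_cmd,
							list);

		list_del_init(&cmd->list);
		spin_unlock_irq(&lo->lo_lock);

		/* Quick when sequential IO hits the page cache. */
		loop_handle_cmd(cmd);

		spin_lock_irq(&lo->lo_lock);
	}
	spin_unlock_irq(&lo->lo_lock);
}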

Ah, so it's about queueing latency.  Blocking the issuer on the
get_request side to achieve the same level of concurrency would incur a
much longer latency before the next IO can be dispatched.  The
arbitrary 16 still bothers me, but it's fine for now; we do need to
revisit the whole thing, including the WQ_HIGHPRI usage.  That may have
made sense when a single thread serviced all IOs, but with high
concurrency I don't think it's a good idea.
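
For reference, the cap being debated is just the max_active argument of
the loop device's per-device workqueue, roughly along these lines (a
sketch of the idea, not the exact patch text):

/* Sketch: bound the number of concurrently executing work items at 16;
 * WQ_HIGHPRI is the flag questioned above. */
lo->wq = alloc_workqueue("kloopd%d",
			 WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND,
			 16,	/* max_active: the hidden concurrency limit */
			 lo->lo_number);

Expressing the limit through nr_requests instead would at least make it
visible and tunable from userspace.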

Please feel free to add

 Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun
