From: Ming Lei <ming.lei@redhat.com>
To: Kashyap Desai <kashyap.desai@broadcom.com>
Cc: Bart Van Assche <bvanassche@acm.org>,
linux-scsi@vger.kernel.org,
"Martin K . Petersen" <martin.petersen@oracle.com>,
James Bottomley <James.Bottomley@hansenpartnership.com>,
Jens Axboe <axboe@kernel.dk>,
"Ewan D . Milne" <emilne@redhat.com>,
Omar Sandoval <osandov@fb.com>, Christoph Hellwig <hch@lst.de>,
Hannes Reinecke <hare@suse.de>,
Laurence Oberman <loberman@redhat.com>,
Bart Van Assche <bart.vanassche@wdc.com>,
Sathya Prakash Veerichetty <sathya.prakash@broadcom.com>
Subject: Re: [RFC PATCH V4 2/2] scsi: core: don't limit per-LUN queue depth for SSD
Date: Tue, 5 Nov 2019 08:23:34 +0800 [thread overview]
Message-ID: <20191105002334.GA11436@ming.t460p> (raw)
In-Reply-To: <fde89689599da4da91330061e5920d8e@mail.gmail.com>
On Mon, Nov 04, 2019 at 03:00:47PM +0530, Kashyap Desai wrote:
> > On Fri, Oct 25, 2019 at 03:34:16PM +0530, Kashyap Desai wrote:
> > > > >
> > > > > >
> > > > > > > Can we get a supporting API from the block layer (through SML)?
> > > > > > > Something similar to "atomic_read(&hctx->nr_active)" which can
> > > > > > > be derived from sdev->request_queue->hctx?
> > > > > > > At least for those drivers with nr_hw_queues = 1 it will be
> > > > > > > useful, and we can avoid the sdev->device_busy dependency.
> > > > > >
> > > > > > If you mean adding a new atomic counter, we would just be moving
> > > > > > .device_busy into blk-mq, and that can become a new bottleneck.
> > > > >
> > > > > How about the below? We define and use the below API instead of
> > > > > "atomic_read(&scp->device->device_busy)", and it is giving the
> > > > > expected value. I have not yet measured the performance impact on
> > > > > a max-IOPS profile.
> > > > >
> > > > > static inline unsigned long sdev_nr_inflight_request(struct request_queue *q)
> > > > > {
> > > > > 	struct blk_mq_hw_ctx *hctx;
> > > > > 	unsigned long nr_requests = 0;
> > > > > 	int i;
> > > > >
> > > > > 	queue_for_each_hw_ctx(q, hctx, i)
> > > > > 		nr_requests += atomic_read(&hctx->nr_active);
> > > > >
> > > > > 	return nr_requests;
> > > > > }
> > > >
> > > > There is still a difference between the above and .device_busy in the
> > > > case of the "none" scheduler, because .nr_active is actually accounted
> > > > when allocating the request instead of when getting the driver tag (or
> > > > before calling .queue_rq).
> > >
> > > This will be fine as long as we get the outstanding count from
> > > allocation time itself.
> >
> > Fine, but keep that in mind.
> >
> > > >
> > > > Also, the above only works when there is more than one active LUN.
> > >
> > > I am not able to understand this part. We have tested on a setup that
> > > has only one active LUN and it works. Can you help me understand this?
> >
> > Please see blk_mq_rq_ctx_init():
> >
> > 	if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) {
> > 		rq_flags = RQF_MQ_INFLIGHT;
> > 		...
> > 	}
> >
> > blk_mq_init_allocated_queue
> >   blk_mq_add_queue_tag_set
> >     blk_mq_update_tag_set_depth(true)
> >       queue_set_hctx_shared(q, shared)
> >
>
> Ming, thanks for the pointers. Now I am able to follow you. A single
> active LUN does not really require shared tags, so the block layer starts
> using BLK_MQ_F_TAG_SHARED only once there is more than one active LUN.
> This limitation should be fine for the megaraid_sas and mpt3sas drivers.
> BTW, how about using the BLK_MQ_F_TAG_SHARED flag for the single active
> LUN case as well?
I guess it won't be accepted, given that .nr_active is used for fair driver
tag allocation among all LUNs, and the counting does have a cost.
> It would help us to remove the single-LUN limitation, so that any other
> driver module can benefit from the same API.
>
> I think we have to provide the "sdev_nr_inflight_request" API from either
> the block layer or the SCSI mid layer.
> For this RFC, we need the additional API discussed in this thread so that
> the megaraid_sas and mpt3sas drivers do not break key functionality that
> has a dependency on sdev->device_busy.
I will take a close look at this usage, and see if there is a better way.
Thanks,
Ming
Thread overview: 24+ messages
2019-10-09 9:32 [PATCH V4 0/2] scsi: avoid atomic operations in IO path Ming Lei
2019-10-09 9:32 ` [PATCH V4 1/2] scsi: core: avoid host-wide host_busy counter for scsi_mq Ming Lei
2019-10-09 16:14 ` Bart Van Assche
2019-10-23 8:52 ` John Garry
2019-10-24 0:58 ` Ming Lei
2019-10-24 9:19 ` John Garry
2019-10-24 21:24 ` Ming Lei
2019-10-25 8:58 ` John Garry
2019-10-25 9:43 ` Ming Lei
2019-10-25 10:13 ` John Garry
2019-10-25 21:53 ` Ming Lei
2019-10-28 9:42 ` John Garry
2019-10-09 9:32 ` [RFC PATCH V4 2/2] scsi: core: don't limit per-LUN queue depth for SSD Ming Lei
2019-10-09 16:05 ` Bart Van Assche
2019-10-10 0:43 ` Ming Lei
2019-10-17 18:30 ` Kashyap Desai
2019-10-23 1:28 ` Ming Lei
2019-10-23 7:46 ` Kashyap Desai
2019-10-24 1:09 ` Ming Lei
2019-10-25 10:04 ` Kashyap Desai
2019-10-25 21:58 ` Ming Lei
2019-11-04 9:30 ` Kashyap Desai
2019-11-05 0:23 ` Ming Lei [this message]
2019-10-23 0:30 ` [scsi] cc2f854c79: suspend_stress.fail kernel test robot