From: Ming Lei <ming.lei@redhat.com>
To: Bart Van Assche <bvanassche@acm.org>
Cc: Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
"Martin K . Petersen" <martin.petersen@oracle.com>,
Christoph Hellwig <hch@lst.de>,
ming.lei@redhat.com
Subject: Re: [PATCH v4 0/3] Support disabling fair tag sharing
Date: Wed, 25 Oct 2023 09:33:40 +0800 [thread overview]
Message-ID: <ZThwdPaeAFmhp58L@fedora> (raw)
In-Reply-To: <5d37f5ed-130a-4e75-b9a7-f77aeb4c7c89@acm.org>
On Tue, Oct 24, 2023 at 09:41:50AM -0700, Bart Van Assche wrote:
> On 10/23/23 19:28, Ming Lei wrote:
> > On Mon, Oct 23, 2023 at 01:36:32PM -0700, Bart Van Assche wrote:
> > > Performance of UFS devices is reduced significantly by the fair tag sharing
> > > algorithm. This is because UFS devices have multiple logical units and a
> > > limited queue depth (32 for UFS 3.1 devices) and also because it takes time to
> > > give tags back after activity on a request queue has stopped. This patch series
> > > addresses this issue by introducing a flag that allows block drivers to
> > > disable fair sharing.
> > >
> > > Please consider this patch series for the next merge window.
> >
> > In a previous post [1], you mentioned that the issue is caused by the
> > non-I/O queue of the WLUN, but this version no longer mentions that scenario.
> >
> > IMO, it isn't reasonable to count a non-I/O LUN toward tag fairness, so
> > one solution could be to exclude non-I/O queues from fair tag
> > sharing. Disabling fair tag sharing for the whole tagset could be
> > overkill.
> >
> > And if you mean normal I/O LUNs, can you share more details about the
> > performance drop? Such as the test case, how many I/O LUNs are involved, and
> > how the drop is observed, because it isn't simple any more once multiple
> > LUNs' performance has to be considered.
> >
> > [1] https://lore.kernel.org/linux-block/20231018180056.2151711-1-bvanassche@acm.org/
>
> Hi Ming,
>
> Submitting I/O to a WLUN is only one example of a use case that
> activates the fair sharing algorithm for UFS devices. Another use
> case is simultaneous activity for multiple data LUNs. Conventional
> UFS devices typically have four data LUNs and zoned UFS devices
> typically have five data LUNs. From an Android device with a zoned UFS
> device:
>
> $ adb shell ls /sys/class/scsi_device
> 0:0:0:0
> 0:0:0:1
> 0:0:0:2
> 0:0:0:3
> 0:0:0:4
> 0:0:0:49456
> 0:0:0:49476
> 0:0:0:49488
>
> The first five are data logical units. The last three are WLUNs.
>
> For a block size of 4 KiB, I see 144 K IOPS for queue depth 31 and
> 107 K IOPS for queue depth 15 (queue depth is reduced from 31 to 15
> if I/O is being submitted to two LUNs simultaneously). In other words,
> disabling fair sharing results in up to 35% higher IOPS for small reads
> when two logical units are active simultaneously. I think that's
> a very significant performance difference.
Yeah, performance does drop when the queue depth is cut in half, provided
the queue depth is low enough to begin with.
However, testing performance on a single LUN isn't enough: what is the
performance effect when running I/O on 2, or all 5, data LUNs concurrently?
SATA should have a similar issue, so a more generic improvement may be to
bypass fair tag sharing whenever the queue depth is low (such as < 32),
if it turns out that fair tag sharing doesn't work well at low queue
depth.
Also, the 'fairness' could instead be enforced via each SCSI LUN's queue
depth, which can be adjusted dynamically.
Thanks,
Ming
Thread overview: 15+ messages
2023-10-23 20:36 [PATCH v4 0/3] Support disabling fair tag sharing Bart Van Assche
2023-10-23 20:36 ` [PATCH v4 1/3] block: Introduce flag BLK_MQ_F_DISABLE_FAIR_TAG_SHARING Bart Van Assche
2023-10-23 20:36 ` [PATCH v4 2/3] scsi: core: Support disabling fair tag sharing Bart Van Assche
2023-10-23 20:36 ` [PATCH v4 3/3] scsi: ufs: Disable " Bart Van Assche
2023-10-24 5:36 ` Avri Altman
2023-10-24 2:28 ` [PATCH v4 0/3] Support disabling " Ming Lei
2023-10-24 16:41 ` Bart Van Assche
2023-10-25 1:33 ` Ming Lei [this message]
2023-10-25 18:50 ` Avri Altman
2023-10-26 16:37 ` Bart Van Assche
2023-10-25 19:01 ` Bart Van Assche
2023-10-25 23:37 ` Ming Lei
2023-10-26 16:29 ` Bart Van Assche
2023-10-31 2:01 ` Yu Kuai
2023-10-31 16:25 ` Bart Van Assche