From: Sumanesh Samanta <sumanesh.samanta@broadcom.com>
To: Bart Van Assche <bvanassche@acm.org>,
axboe@kernel.dk, linux-block@vger.kernel.org, jejb@linux.ibm.com,
martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
ming.lei@redhat.com,
Sathya Prakash Veerichetty <sathya.prakash@broadcom.com>,
chaitra.basappa@broadcom.com,
Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>,
Kashyap Desai <kashyap.desai@broadcom.com>,
Sumit Saxena <sumit.saxena@broadcom.com>,
Shivasharan Srikanteshwara
<shivasharan.srikanteshwara@broadcom.com>,
emilne@redhat.com, hch@lst.de, hare@suse.de,
bart.vanassche@wdc.com
Subject: RE: [PATCH 1/1] scsi core: limit overhead of device_busy counter for SSDs
Date: Tue, 19 Nov 2019 16:35:59 -0700
Message-ID: <e4a7540785d14eea7ccf0f7bd02c05f4@mail.gmail.com>
In-Reply-To: <8357148d-e819-4a3c-9834-25080e036781@acm.org>
Hi Bart,
Thanks for pointing this out.
Yes, the purpose of my patch is exactly the same as that of Ming's patch
you referred to, although it achieves that purpose in a different way.
If the earlier patch makes it upstream, then my patch is not needed.
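For reference, the per-CPU scheme described in my patch (quoted below)
boils down to roughly the following sketch. This is illustrative only;
the struct and helper names are hypothetical and are not the actual
patch code:

#include <linux/percpu.h>
#include <linux/cpumask.h>

struct example_sdev {
	/* Allocated with alloc_percpu(int); replaces the shared
	 * atomic device_busy counter. */
	int __percpu *device_busy;
};

/* Fast path: each CPU updates only its own counter, so there is no
 * cross-CPU cache-line contention. */
static inline void example_busy_inc(struct example_sdev *sdev)
{
	this_cpu_inc(*sdev->device_busy);
}

static inline void example_busy_dec(struct example_sdev *sdev)
{
	this_cpu_dec(*sdev->device_busy);
}

/* Slow path: walk all CPUs and sum the per-CPU counters. A single
 * CPU's count can go negative when a completion runs on a different
 * CPU than the submission, so only the sum is meaningful. */
static int example_busy_total(struct example_sdev *sdev)
{
	int cpu, total = 0;

	for_each_possible_cpu(cpu)
		total += *per_cpu_ptr(sdev->device_busy, cpu);
	return total;
}

The fast path never touches a shared cache line; only the slow-path
total pays the cost of walking all CPUs.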
Thanks,
Sumanesh
-----Original Message-----
From: Bart Van Assche [mailto:bvanassche@acm.org]
Sent: Tuesday, November 19, 2019 4:22 PM
To: Sumanesh Samanta; axboe@kernel.dk; linux-block@vger.kernel.org;
jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org;
ming.lei@redhat.com; sathya.prakash@broadcom.com;
chaitra.basappa@broadcom.com; suganath-prabu.subramani@broadcom.com;
kashyap.desai@broadcom.com; sumit.saxena@broadcom.com;
shivasharan.srikanteshwara@broadcom.com; emilne@redhat.com; hch@lst.de;
hare@suse.de; bart.vanassche@wdc.com
Subject: Re: [PATCH 1/1] scsi core: limit overhead of device_busy counter
for SSDs
On 11/19/19 12:07 PM, Sumanesh Samanta wrote:
> From: root <sumanesh.samanta@broadcom.com>
>
> Recently a patch was delivered to remove the host_busy counter from
> the SCSI mid layer. That counter was a major bottleneck, and removing
> it helped improve SCSI stack performance.
> With that patch, the bottleneck moved to the scsi_device device_busy
> counter. The performance impact of this counter is seen most in cases
> where a single device can produce very high IOPS, for example h/w
> RAID devices where the OS sees one device but there are many drives
> behind it, making it capable of very high IOPS. The effect is also
> visible when cores from multiple NUMA nodes send IO to the same
> device or the same controller.
> The device_busy counter is not needed by controllers that can manage
> as many IOs as are submitted to them. Rotating media still uses it
> for merging IO, but for non-rotating SSD drives it becomes a major
> bottleneck, as described above.
>
> A few weeks back, a patch was provided to address the device_busy
> counter as well, but unfortunately it had some issues:
> 1. A functional issue was discovered:
> https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/VFKDTG4XC4VHWX5KKDJJI7P36EIGK526/
> 2. There was some concern about existing drivers that use the
> device_busy counter.
>
> This patch is an attempt to address both of the above issues.
> For this patch to be effective, LLDs need to set a specific flag,
> use_per_cpu_device_busy, in the scsi_host_template. For other drivers
> (which do not set the flag), this patch is a no-op and should not
> affect their performance or functionality at all.
>
> Also, this patch does not fundamentally change any logic or
> functionality of the code. All it does is replace device_busy with a
> per-CPU counter. In the fast path, each CPU increments/decrements its
> own counter. In the relatively slow path, callers use the
> scsi_device_busy function to get the total number of IOs outstanding
> on a device. The only functional aspect it changes is that, for
> non-rotating media, the number of IOs to a device is not restricted.
> Controllers that can handle that can set the use_per_cpu_device_busy
> flag in scsi_host_template to take advantage of this patch. Other
> controllers need not modify any code and will work as usual.
> Since the patch does not modify any other functional aspects, it
> should not have any side effects even for drivers that do set the
> use_per_cpu_device_busy flag.
Hi Sumanesh,
Can you have a look at the following patch series and see whether it
perhaps has the same purpose as your patch?
https://lore.kernel.org/linux-scsi/20191118103117.978-1-ming.lei@redhat.com/
Thanks,
Bart.
Thread overview: 10+ messages
2019-11-19 20:07 [PATCH 0/1] : limit overhead of device_busy counter for SSDs Sumanesh Samanta
2019-11-19 20:07 ` [PATCH 1/1] scsi core: " Sumanesh Samanta
2019-11-19 21:01 ` Ewan D. Milne
2019-11-19 23:22 ` Bart Van Assche
2019-11-19 23:35 ` Sumanesh Samanta [this message]
2019-11-20 7:29 ` Ming Lei
2019-11-20 19:59 ` Sumanesh Samanta
2019-11-21 1:34 ` Ming Lei
2019-11-21 1:59 ` Sumanesh Samanta
2019-11-21 2:29 ` Ming Lei