From: Ira Weiny <ira.weiny@intel.com>
To: Chaitanya Kulkarni <kch@nvidia.com>, <dan.j.williams@intel.com>,
<vishal.l.verma@intel.com>, <dave.jiang@intel.com>,
<ira.weiny@intel.com>
Cc: <nvdimm@lists.linux.dev>, Chaitanya Kulkarni <kch@nvidia.com>
Subject: Re: [PATCH 1/1] pmem: allow user to set QUEUE_FLAG_NOWAIT
Date: Fri, 12 May 2023 10:14:38 -0700 [thread overview]
Message-ID: <645e73feb7ff6_aee562944d@iweiny-mobl.notmuch> (raw)
In-Reply-To: <20230512104302.8527-2-kch@nvidia.com>
Chaitanya Kulkarni wrote:
> Allow the user to optionally set QUEUE_FLAG_NOWAIT via a module
> parameter, retaining the default behaviour otherwise. Also, update the
> respective allocation flags in the write path. The following are
> performance numbers with the io_uring fio engine for random reads; note
> that the device was fully populated with a randwrite workload before
> taking these numbers :-
I'm not seeing a comparison with and without the option you propose. I
assume there is some performance improvement you are trying to show?
>
> * linux-block (for-next) # grep IOPS pmem*fio | column -t
>
> nowait-off-1.fio: read: IOPS=3968k, BW=15.1GiB/s
> nowait-off-2.fio: read: IOPS=4084k, BW=15.6GiB/s
> nowait-off-3.fio: read: IOPS=3995k, BW=15.2GiB/s
>
> nowait-on-1.fio: read: IOPS=5909k, BW=22.5GiB/s
> nowait-on-2.fio: read: IOPS=5997k, BW=22.9GiB/s
> nowait-on-3.fio: read: IOPS=6006k, BW=22.9GiB/s
>
> * linux-block (for-next) # grep cpu pmem*fio | column -t
>
> nowait-off-1.fio: cpu : usr=6.38%, sys=31.37%, ctx=220427659
> nowait-off-2.fio: cpu : usr=6.19%, sys=31.45%, ctx=229825635
> nowait-off-3.fio: cpu : usr=6.17%, sys=31.22%, ctx=221896158
>
> nowait-on-1.fio: cpu : usr=10.56%, sys=87.82%, ctx=24730
> nowait-on-2.fio: cpu : usr=9.92%, sys=88.36%, ctx=23427
> nowait-on-3.fio: cpu : usr=9.85%, sys=89.04%, ctx=23237
>
> * linux-block (for-next) # grep slat pmem*fio | column -t
>
> nowait-off-1.fio: slat (nsec): min=431, max=50423k, avg=9424.06
> nowait-off-2.fio: slat (nsec): min=420, max=35992k, avg=9193.94
> nowait-off-3.fio: slat (nsec): min=430, max=40737k, avg=9244.24
>
> nowait-on-1.fio: slat (nsec): min=1232, max=40098k, avg=7518.60
> nowait-on-2.fio: slat (nsec): min=1303, max=52107k, avg=7423.37
> nowait-on-3.fio: slat (nsec): min=1123, max=40193k, avg=7409.08
>
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
> drivers/nvdimm/pmem.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index ceea55f621cc..38defe84de4c 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -31,6 +31,10 @@
> #include "pfn.h"
> #include "nd.h"
>
> +static bool g_nowait;
> +module_param_named(nowait, g_nowait, bool, 0444);
> +MODULE_PARM_DESC(nowait, "set QUEUE_FLAG_NOWAIT. Default: False");
Module parameters should be avoided. Since I'm not clear on the
performance benefit, I can't comment on alternatives. But I strongly
suspect that this choice is not going to be desirable for all devices in
all cases.
Ira
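To illustrate the scope problem: a hypothetical usage sketch of the proposed knob, assuming the patch is applied and that `nd_pmem` (built from drivers/nvdimm/pmem.c) is the module being loaded. The sysfs path follows from the 0444 permissions in the module_param_named() call above:

```shell
# Hypothetical: load the pmem driver with the proposed parameter enabled.
modprobe nd_pmem nowait=1

# Mode 0444 makes the value world-readable but not writable at runtime,
# so it can only be inspected here, not toggled per device:
cat /sys/module/nd_pmem/parameters/nowait
```

The module-wide, load-time-only scope of such a parameter is the crux of the objection: every pmem device on the system gets the same setting.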
> +
> static struct device *to_dev(struct pmem_device *pmem)
> {
> /*
> @@ -543,6 +547,8 @@ static int pmem_attach_disk(struct device *dev,
> blk_queue_max_hw_sectors(q, UINT_MAX);
> blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
> + if (g_nowait)
> + blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
> if (pmem->pfn_flags & PFN_MAP)
> blk_queue_flag_set(QUEUE_FLAG_DAX, q);
>
> --
> 2.40.0
>
Thread overview: 11+ messages (latest: 2023-05-12 17:14 UTC)
2023-05-12 10:43 [PATCH 0/1] pmem: allow user to set QUEUE_FLAG_NOWAIT Chaitanya Kulkarni
2023-05-12 10:43 ` [PATCH 1/1] " Chaitanya Kulkarni
2023-05-12 17:14 ` Ira Weiny [this message]
2023-05-13 0:56 ` Chaitanya Kulkarni
2023-05-12 18:54 ` Dan Williams
2023-05-13 0:58 ` Chaitanya Kulkarni
2023-05-15 19:54 ` Jane Chu
2023-05-15 23:53 ` Dan Williams
2023-05-16 17:58 ` Jane Chu
2023-05-12 13:29 ` [PATCH 0/1] " Christoph Hellwig
2023-05-13 0:54 ` Chaitanya Kulkarni