From: Bart Van Assche <bvanassche@acm.org>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, Christoph Hellwig <hch@lst.de>,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Bart Van Assche <bvanassche@acm.org>,
	Ming Lei <ming.lei@redhat.com>,
	Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	Martijn Coenen <maco@android.com>
Subject: [PATCH 2/2] loop: Add the default_queue_depth kernel module parameter
Date: Mon, 2 Aug 2021 17:02:00 -0700
Message-ID: <20210803000200.4125318-3-bvanassche@acm.org>
In-Reply-To: <20210803000200.4125318-1-bvanassche@acm.org>

Recent versions of Android use the zram driver on top of the loop driver.
There is a mismatch between the loop driver's default queue depth (128) and
the queue depth of the storage device in my test setup (32). That mismatch
results in higher write latencies than necessary. Address this issue by
making the loop driver's default queue depth configurable through a kernel
module parameter. Compared to configuring the queue depth by writing into
the nr_requests sysfs attribute, this approach does not involve calling
synchronize_rcu() to modify the queue depth.
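
With this patch applied, the default can be set at module load time or,
because the parameter is writable (mode 0644), changed at runtime for loop
devices created afterwards. A usage sketch, assuming the loop driver is
built as a module:

  # Set the default when loading the driver.
  modprobe loop default_queue_depth=32

  # Or change it at runtime; only newly created loop devices are affected.
  echo 32 > /sys/module/loop/parameters/default_queue_depth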
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martijn Coenen <maco@android.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
drivers/block/loop.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 9fca3ab3988d..0f1f1ecd941a 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2098,6 +2098,9 @@ module_param(max_loop, int, 0444);
 MODULE_PARM_DESC(max_loop, "Maximum number of loop devices");
 module_param(max_part, int, 0444);
 MODULE_PARM_DESC(max_part, "Maximum number of partitions per loop device");
+static uint32_t default_queue_depth = 128;
+module_param(default_queue_depth, uint, 0644);
+MODULE_PARM_DESC(default_queue_depth, "Default loop device queue depth");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_BLOCKDEV_MAJOR(LOOP_MAJOR);
 
@@ -2330,7 +2333,7 @@ static int loop_add(int i)
 	err = -ENOMEM;
 	lo->tag_set.ops = &loop_mq_ops;
 	lo->tag_set.nr_hw_queues = 1;
-	lo->tag_set.queue_depth = 128;
+	lo->tag_set.queue_depth = max(default_queue_depth, 2U);
 	lo->tag_set.numa_node = NUMA_NO_NODE;
 	lo->tag_set.cmd_size = sizeof(struct loop_cmd);
 	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING |
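
For reference, a stricter variant of this change would reject out-of-range
values when the parameter is written, instead of silently clamping them in
loop_add(). A minimal sketch of that alternative using the stock
module_param_cb() machinery, reusing the default_queue_depth variable added
above; the helper name and the upper bound are illustrative, not part of
this patch:

/* Illustrative alternative, not part of this patch: validate the queue
 * depth when the module parameter is written rather than in loop_add().
 */
static int loop_param_set_queue_depth(const char *val,
				      const struct kernel_param *kp)
{
	unsigned int depth;
	int ret;

	/* Parse the value written to the module parameter. */
	ret = kstrtouint(val, 0, &depth);
	if (ret < 0)
		return ret;
	/* loop_add() insists on a depth of at least 2; 4096 is arbitrary. */
	if (depth < 2 || depth > 4096)
		return -EINVAL;
	return param_set_uint(val, kp);
}

static const struct kernel_param_ops loop_queue_depth_ops = {
	.set = loop_param_set_queue_depth,
	.get = param_get_uint,
};

module_param_cb(default_queue_depth, &loop_queue_depth_ops,
		&default_queue_depth, 0644);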