From: Anand Jain <anand.jain@oracle.com>
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v3] btrfs: cleanup btrfs_async_submit_limit to return the final limit value
Date: Tue, 7 Nov 2017 10:17:49 +0800 [thread overview]
Message-ID: <20171107021749.15788-1-anand.jain@oracle.com> (raw)
In-Reply-To: <20171031125946.26844-1-anand.jain@oracle.com>
We signal IO progress once the number of pending async submissions falls
below 2/3 of the limit obtained from btrfs_async_submit_limit(), which
makes the write process wait and lets the kernel threads make uncontested
progress during async submission. In general the device/transport queue
depth is 256, and btrfs_async_submit_limit() returns 256 per device, which
was originally introduced by [1]. But 256 at the device level covers all
types of IO (read/write, sync/async), so it was possible that all 256
slots could be occupied by async writes; a later patch [2] therefore used
only 2/3 of 256, which seemed to work well.
[1] cb03c743c648 ("Btrfs: Change the congestion functions to meter the number of async submits as well")
[2] 4854ddd0ed0a ("Btrfs: Wait for kernel threads to make progress during async submission")
This is a cleanup patch with no functional changes. Since we only ever use
2/3 of the limit (256), btrfs_async_submit_limit() now returns 256 * 2/3
directly, so callers no longer need to scale the value themselves (see the
sketch below).
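A minimal sketch of the before/after computation, for illustration only;
old_threshold(), new_threshold() and nr_devices are hypothetical names
standing in for min(thread_pool_size, open_devices) and the code in the
diff below:

/*
 * Illustrative sketch, not part of the patch. nr_devices stands in for
 * min(info->thread_pool_size, info->fs_devices->open_devices).
 */
static unsigned long old_threshold(unsigned long nr_devices)
{
	unsigned long limit = 256 * nr_devices;	/* old btrfs_async_submit_limit() */

	return limit * 2 / 3;			/* scaling repeated in every caller */
}

static unsigned long new_threshold(unsigned long nr_devices)
{
	/* 256 * 2/3 truncates to 170 per device; the compiler folds the constant */
	return (256 * 2/3) * nr_devices;	/* new btrfs_async_submit_limit() */
}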
Signed-off-by: Anand Jain <anand.jain@oracle.com>
---
v2: add more change log.
v3: don't pre-compute 256 * 2/3 by hand; the compiler constant-folds it
anyway, so keep it open coded. Also drop the comment.
fs/btrfs/disk-io.c | 4 ++--
fs/btrfs/volumes.c | 1 -
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index dfdab849037b..6e27259e965b 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -861,7 +861,8 @@ unsigned long btrfs_async_submit_limit(struct btrfs_fs_info *info)
unsigned long limit = min_t(unsigned long,
info->thread_pool_size,
info->fs_devices->open_devices);
- return 256 * limit;
+
+ return (256 * 2/3) * limit;
}
static void run_one_async_start(struct btrfs_work *work)
@@ -887,7 +888,6 @@ static void run_one_async_done(struct btrfs_work *work)
fs_info = async->fs_info;
limit = btrfs_async_submit_limit(fs_info);
- limit = limit * 2 / 3;
/*
* atomic_dec_return implies a barrier for waitqueue_active
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index b39737568c22..61cefa37b56a 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -376,7 +376,6 @@ static noinline void run_scheduled_bios(struct btrfs_device *device)
bdi = device->bdev->bd_bdi;
limit = btrfs_async_submit_limit(fs_info);
- limit = limit * 2 / 3;
loop:
spin_lock(&device->io_lock);
--
2.13.1