From: Roman Mamedov <rm@romanrm.net>
To: Chris Murphy <lists@colorremedies.com>
Cc: Linux FS Devel <linux-fsdevel@vger.kernel.org>,
Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: dev loop ~23% slower?
Date: Mon, 17 Feb 2020 10:26:10 +0500
Message-ID: <20200217102610.6e92da97@natsu>
In-Reply-To: <CAJCQCtSUzj4V__vo5LxrF1Jv2MgUNux=d8JwXq6H_VN=sYunvA@mail.gmail.com>
On Sun, 16 Feb 2020 20:18:05 -0700
Chris Murphy <lists@colorremedies.com> wrote:
> I don't think file system overhead accounts for much more than a couple
> percent of this, so I'm curious where the slowdown might be
> happening. The "hosting" Btrfs file system is not busy at all at the
> time of the loop-mounted filesystem's scrub. I did issue 'echo 3 >
> /proc/sys/vm/drop_caches' before scrubbing the loop-mounted image;
> otherwise I get ~1.72GiB/s scrubs, which exceeds the performance of the
> SSD (which is in the realm of 550MiB/s max).
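(For reference, the procedure described above presumably amounts to something
like the following; the image path and mount point here are placeholders:)

# drop the page cache so the scrub actually reads from the SSD, not RAM
echo 3 > /proc/sys/vm/drop_caches
# loop-mount the Btrfs image that lives on the host Btrfs filesystem
mount -o loop /mnt/host/image.btrfs /mnt/loopfs
# scrub the loop-mounted filesystem; -B stays in the foreground and prints the rate
btrfs scrub start -B /mnt/loopfs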
Try comparing the plain dd read speed of that FS image against the dd read
speed from the underlying device of the host filesystem. With scrubs you might
be measuring the same metric, but it's a rather elaborate way to do so -- and
the dd comparison also excludes any influence from the loop device driver, or
at least lets you figure out the extent of it.
For me on 5.4.20:
dd if=zerofile iflag=direct of=/dev/null bs=1M
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.68213 s, 583 MB/s
dd if=/dev/mapper/cryptohome iflag=direct of=/dev/null bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.12917 s, 686 MB/s
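To isolate the loop driver itself rather than the filesystem, one more
comparison (just a sketch -- the image path below is a placeholder) is to
attach the image read-only with losetup and dd from the resulting loop device:

# attach the image read-only; losetup prints the device it allocated, e.g. /dev/loop0
LOOPDEV=$(losetup --find --show --read-only /path/to/image.btrfs)
# read through the loop driver, bypassing the page cache
dd if="$LOOPDEV" iflag=direct of=/dev/null bs=1M count=2048
# detach when done
losetup -d "$LOOPDEV"

Comparing that against the dd of the image file and of the raw device should
show how much of the gap comes from the loop driver and how much from the
filesystem.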
Personally I am not really surprised by this difference; of course going
through a filesystem introduces overhead compared to reading directly from
the block device it sits on. Briefly testing the same on XFS, though, it does
seem to have less of it: about 6% instead of 15% here.
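(For reference, the 15% figure is simply 1 - 583/686 ≈ 0.15 from the two dd
runs above.)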
--
With respect,
Roman