From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Supercilious Dude <supercilious.dude@gmail.com>
Cc: "linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
Linux FS Devel <linux-fsdevel@vger.kernel.org>,
linux-block@vger.kernel.org
Subject: Re: Is it possible that certain physical disk doesn't implement flush correctly?
Date: Sat, 30 Mar 2019 21:09:16 +0800
Message-ID: <948a62d3-aa3e-418e-00df-d73d4dbfb5a6@gmx.com>
In-Reply-To: <CAGmvKk5wk9RFhBr20X850MjFDkudzrZvjWVJxGg4GkhtrDfKUw@mail.gmail.com>
On 2019/3/30 9:04 PM, Supercilious Dude wrote:
> On Sat, 30 Mar 2019 at 13:00, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>> I'm proposing to measure the execution time of flush/fsync, not write.
>>
>> And if flush takes 0ms, it means it doesn't really write cached data
>> onto disk.
>>
>
> That is correct. The controller ignores your flush requests on the
> virtual disk by design. When the data hits the controller it is
> considered "stored" - the physical disk(s) storing the virtual disk is
> an implementation detail. The performance characteristics of these
> controllers are needed to make big arrays work in a useful manner. My
> controller is connected to 4 HP 2600 enclosures with 12 drives each.
> Waiting for a flush on a single disk before continuing work on the
> remaining 47 disks would be catastrophic for performance.
If the controller is doing so, it must have its own backup power, or at
least complete the flush once the data reaches its fast cache.

In the cached case, given enough samples we could still find some clue
in the flush execution times.

That aside, for enterprise-level usage this is OK. But for
consumer-level storage I'm not so sure, especially for HDDs, and maybe
NVMe devices too.

So my question still stands.
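For reference, a minimal sketch of the measurement I have in mind, in
Python for illustration (the probe file path, block size, and round
count are arbitrary choices; os.fsync is what triggers the cache flush
through the filesystem, assuming the fs issues FLUSH on fsync):

```python
import os
import time
import statistics

def time_fsync(path, block=b"x" * 4096, rounds=8):
    """Write one block, then time how long fsync takes to return.

    On a device that honors flush, an HDD should show latencies in the
    millisecond range; a near-zero median suggests the flush request is
    being ignored or completed from a (hopefully power-protected) cache.
    """
    samples = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(rounds):
            os.write(fd, block)
            start = time.perf_counter()
            os.fsync(fd)  # flush dirty data and ask the device to flush its cache
            samples.append((time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
        os.unlink(path)
    return statistics.median(samples)  # median fsync latency in milliseconds

median_ms = time_fsync("/tmp/flush_probe.bin")
print(f"median fsync latency: {median_ms:.3f} ms")
```

The median is used rather than the mean so one cold first flush does not
dominate the result; the absolute numbers still depend on the filesystem
and mount options, so this only gives a clue, not proof.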
Thanks,
Qu
>
> Regards
>
Thread overview: 18+ messages
2019-03-30 12:31 Is it possible that certain physical disk doesn't implement flush correctly? Qu Wenruo
2019-03-30 12:57 ` Supercilious Dude
2019-03-30 13:00 ` Qu Wenruo
2019-03-30 13:04 ` Supercilious Dude
2019-03-30 13:09   ` Qu Wenruo
2019-03-30 13:14 ` Supercilious Dude
2019-03-30 13:24 ` Qu Wenruo
2019-03-31 22:45 ` J. Bruce Fields
2019-03-31 23:07 ` Alberto Bursi
2019-03-31 11:27 ` Alberto Bursi
2019-03-31 12:00 ` Qu Wenruo
2019-03-31 13:36 ` Hannes Reinecke
2019-03-31 14:17 ` Qu Wenruo
2019-03-31 14:37 ` Hannes Reinecke
2019-03-31 14:40 ` Qu Wenruo
2019-03-31 12:21 ` Andrei Borzenkov
2019-04-01 11:55 ` Austin S. Hemmelgarn
2019-04-01 12:04 ` Austin S. Hemmelgarn