public inbox for linux-block@vger.kernel.org
From: Chris Mason <clm@fb.com>
To: Bryan Gurney <bgurney@redhat.com>
Cc: Ric Wheeler <ricwheeler@gmail.com>,
	Lukas Czerner <lczerner@redhat.com>,
	Jan Tulak <jtulak@redhat.com>, Jens Axboe <axboe@kernel.dk>,
	linux-block <linux-block@vger.kernel.org>,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	Nikolay Borisov <nborisov@suse.com>,
	"Dennis Zhou" <dennisz@fb.com>
Subject: Re: Testing devices for discard support properly
Date: Tue, 7 May 2019 21:24:41 +0000	[thread overview]
Message-ID: <31794121-DEDA-4269-8B72-50EB4D0BCABE@fb.com> (raw)
In-Reply-To: <CAHhmqcQw69S3Fn=Nej7MezCOZ3_ZNi64p+PFLSV+b91e1gTjZA@mail.gmail.com>

On 7 May 2019, at 16:09, Bryan Gurney wrote:

> I found an example in my trace of the "two bands of latency" behavior.
> Consider these three segments of trace data during the writes:
>

[ ... ]

> There's an average latency of 14 milliseconds for these 128 kilobyte
> writes.  At 0.218288794 seconds, we can see a sudden appearance of 1.7
> millisecond latency times, much lower than the average.
>
> Then we see an alternation of 1.7 millisecond completions and 14
> millisecond completions, with these two "latency groups" increasing,
> up to about 14 milliseconds and 25 milliseconds at 0.241287187 seconds
> into the trace.
>
> At 0.317351888 seconds, we see the pattern start again, with a sudden
> appearance of 1.89 millisecond latency write completions, among 14.7
> millisecond latency write completions.
>
> If you graph it, it looks like a "triangle wave" pulse, with a
> duration of about 23 milliseconds, that repeats after about 100
> milliseconds.  In a way, it's like a "heartbeat".  This wouldn't be as
> easy to detect with a simple "average" or "percentile" reading.
>
> This was during a simple sequential write at a queue depth of 32, but
> what happens with a write after a discard in the same region of
> sectors?  This behavior could change, depending on different drive
> models, and/or drive controller algorithms.
>

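[Editorial note: the "two bands of latency" pattern described above is easy to miss with a single average, but simple to surface once you split completions into bands. The sketch below is illustrative only; the synthetic numbers imitate the ~14 ms and ~1.7 ms bands from the trace, and the helper name `latency_bands` is hypothetical, not from any real tool.]

```python
# Sketch: why a plain average hides the "two bands" of write latency, and
# a crude way to recover them.  latency_bands() is a minimal 1-D two-means
# clustering pass: guess a cut point, partition, recompute each band's
# mean, repeat.  Synthetic data only -- not real drive measurements.
import statistics

def latency_bands(latencies_ms, passes=10):
    """Split completion latencies into a low and a high band; return
    each band's mean latency in milliseconds."""
    lo, hi = min(latencies_ms), max(latencies_ms)
    for _ in range(passes):
        cut = (lo + hi) / 2
        low = [l for l in latencies_ms if l <= cut]
        high = [l for l in latencies_ms if l > cut]
        if not low or not high:
            break  # everything fell on one side; bands degenerate
        lo, hi = statistics.mean(low), statistics.mean(high)
    return lo, hi

# Imitate the trace: 90% of completions near 14 ms, 10% near 1.7 ms.
trace = [14.0] * 90 + [1.7] * 10
print(statistics.mean(trace))   # -> 12.77: the fast band vanishes
print(latency_bands(trace))     # -> roughly (1.7, 14.0): both bands visible
```

Graphing the per-completion latencies (as described above) shows the triangle-wave pulse directly; the band split is just a cheap numeric check for the same effect.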
I think these are all really interesting, and they definitely support the 
idea of a series of tests we can run to make sure a drive implements 
discard in the ways we generally expect.

But with that said, I think a more important discussion for filesystem 
developers is how we protect the rest of the filesystem from high 
latencies caused by discards.  For reads and writes, we've been doing 
this for a long time: IO schedulers have all kinds of checks and 
balances for REQ_META or REQ_SYNC, and we throttle dirty pages and 
readahead and dance around request batching, and so on.

But for discards, we just open the floodgates and hope it works out.  At 
some point we're going to have to figure out how to queue and throttle 
discards as well as we do reads and writes.  That's tricky because 
the FS needs to coordinate when we're allowed to discard something, 
needs to know when the discard is done, and we all have different 
schemes for keeping track.
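
[Editorial note: one shape such throttling could take is a token bucket that caps discarded bytes per second and queues the rest, analogous to what the block layer already does for writeback. The sketch below is a userspace illustration under that assumption; the class name `DiscardThrottle` and its methods are hypothetical, not an existing kernel or library API.]

```python
# Minimal token-bucket sketch of discard throttling.  The budget refills
# at a fixed byte rate; discards past the budget wait in a FIFO queue,
# which is how the floodgates would get closed.  Illustrative only.
from collections import deque

class DiscardThrottle:
    def __init__(self, bytes_per_sec):
        self.rate = bytes_per_sec
        self.tokens = bytes_per_sec      # start with one second's budget
        self.queue = deque()             # pending (start_sector, nbytes)

    def refill(self, elapsed_sec):
        """Replenish the byte budget; never bank more than one second."""
        self.tokens = min(self.rate, self.tokens + elapsed_sec * self.rate)

    def submit(self, start_sector, nbytes):
        """Queue a discard request instead of issuing it immediately."""
        self.queue.append((start_sector, nbytes))

    def dispatch(self):
        """Issue queued discards in order while the budget lasts;
        return the list of (start_sector, nbytes) actually issued."""
        issued = []
        while self.queue and self.queue[0][1] <= self.tokens:
            start, nbytes = self.queue.popleft()
            self.tokens -= nbytes
            issued.append((start, nbytes))
        return issued
```

The open question the paragraph above raises still applies: the filesystem would have to be told when each queued discard actually completes before it can reuse or re-trim those sectors, and every filesystem tracks that differently.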

-chris


Thread overview: 30+ messages
2019-05-06 20:56 Testing devices for discard support properly Ric Wheeler
2019-05-07  7:10 ` Lukas Czerner
2019-05-07  8:48   ` Jan Tulak
2019-05-07  9:40     ` Lukas Czerner
2019-05-07 12:57       ` Ric Wheeler
2019-05-07 15:35         ` Bryan Gurney
2019-05-07 15:44           ` Ric Wheeler
2019-05-07 20:09             ` Bryan Gurney
2019-05-07 21:24               ` Chris Mason [this message]
2019-06-03 20:01                 ` Ric Wheeler
2019-05-07  8:21 ` Nikolay Borisov
2019-05-07 22:04 ` Dave Chinner
2019-05-08  0:07   ` Ric Wheeler
2019-05-08  1:14     ` Dave Chinner
2019-05-08 15:05       ` Ric Wheeler
2019-05-08 17:03         ` Martin K. Petersen
2019-05-08 17:09           ` Ric Wheeler
2019-05-08 17:25             ` Martin K. Petersen
2019-05-08 18:12               ` Ric Wheeler
2019-05-09 16:02                 ` Bryan Gurney
2019-05-09 17:27                   ` Ric Wheeler
2019-05-09 20:35                     ` Bryan Gurney
2019-05-08 21:58             ` Dave Chinner
2019-05-09  2:29               ` Martin K. Petersen
2019-05-09  3:20                 ` Dave Chinner
2019-05-09  4:35                   ` Martin K. Petersen
2019-05-08 16:16   ` Martin K. Petersen
2019-05-08 22:31     ` Dave Chinner
2019-05-09  3:55       ` Martin K. Petersen
2019-05-09 13:40         ` Ric Wheeler
