From: Tejun Heo <tj@kernel.org>
To: Theodore Ts'o <tytso@mit.edu>
Cc: linux-ext4@vger.kernel.org
Subject: Re: Saw your commit: Use mutex_lock_io() for journal->j_checkpoint_mutex
Date: Tue, 21 Feb 2017 15:45:08 -0500 [thread overview]
Message-ID: <20170221204508.GB8260@htj.duckdns.org> (raw)
In-Reply-To: <20170221202324.7plxcp22risagxqu@thunk.org>

Hello, Ted.

> If this happens, it almost certainly means that the journal is too
> small. This was something that grad student I was mentoring found
> when we were benchmarking our SMR-friendly jbd2 changes. There's a
> footnote to this effect in the Fast 2017 paper[1]
>
> [1] https://www.usenix.org/conference/fast17/technical-sessions/presentation/aghayev
> (if you want early access to the paper let me know; it's currently
> available to registered FAST 2017 attendees and will be opened up
> at the start of the FAST 2017 conference next week)
>
> The short version is that on average, with a 5 second commit window
> and a 30 second dirty writeback timeout, if you assume the worst case
> of 100% of the metadata blocks being already in the buffer cache (so
> they don't need to be read from disk), in 5 seconds the journal thread
> could potentially spew 150*5 == 750MB in a journal transaction. But
> that data won't be written back until 30 seconds later. So if you are
> continuously deleting files for 30 seconds, the journal should have
> room for at least around 4500 megs worth of sequential writing. Now,
> that's an extreme worst case. In reality there will be some disk
> reads, not to mention the metadata writebacks, which will be random.
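(Sanity-checking that arithmetic with a quick sketch -- the ~150 MB/s
sequential-throughput figure and the 5s/30s timeouts are the assumptions
from your paragraph above, not measured values:)

```python
# Worst-case ext4 journal sizing, per the scenario quoted above.
# Assumes all metadata is already cached, so the journal thread can
# write at full sequential speed with no competing reads.

seq_write_mb_per_s = 150   # rough sequential throughput of one HDD
commit_interval_s = 5      # jbd2 default commit window
writeback_delay_s = 30     # dirty writeback timeout

# One transaction can absorb a full commit window of sequential writes.
per_transaction_mb = seq_write_mb_per_s * commit_interval_s        # 750 MB

# Checkpointing can lag by the writeback delay, so this many
# transactions may pile up before any journal space is reclaimed.
outstanding_transactions = writeback_delay_s // commit_interval_s  # 6

worst_case_journal_mb = per_transaction_mb * outstanding_transactions
print(worst_case_journal_mb)  # 4500 -- vs. the old 128 MiB maximum
```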
I see. Yeah, that's close to what we were seeing. We had a
malfunctioning workload which was deleting an extremely high number of
files, locking up the filesystem and thus other things on the host.
This was clear misbehavior on the workload's part, but debugging it
took longer than necessary because the waits didn't get accounted as
iowait; hence the patch.
> The bottom line is that 128MiB, which was the previous maximum journal
> size, is simply way too small. So in the latest e2fsprogs 1.43.x
> release, the default has been changed so that for a sufficiently large
> disk, the default journal size is 1 gig.
>
> If you are using faster media (say, SSD or PCIe-attached flash), and
> you expect to have workloads that are extreme with respect to huge
> amounts of metadata changes, an even bigger journal might be called
> for. (And these are the workloads where the lazy journalling that we
> studied in the FAST paper is helpful, even on conventional HDDs.)
>
> Anyway, you might want to pass onto the system administrators (or the
> SRE's, as applicable :-) that if they were hitting this case often,
> they should seriously consider increasing the size of their ext4
> journal.
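(For anyone passing this on to their admins: a rough sketch of how the
journal can be inspected and regrown with e2fsprogs. The device name is
a placeholder, and the filesystem must be unmounted and clean before the
journal is touched -- this is the general approach, not a tested recipe:)

```shell
# Inspect the current journal configuration (placeholder device name).
dumpe2fs -h /dev/sdX1 | grep -i journal

# Growing the journal means removing it and recreating it at the new
# size, on an unmounted, clean filesystem.
umount /dev/sdX1
tune2fs -O ^has_journal /dev/sdX1
e2fsck -f /dev/sdX1
tune2fs -J size=1024 /dev/sdX1   # new 1 GiB journal, matching the new defaults
```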
Thanks a lot for the explanation!
--
tejun
Thread overview: 2+ messages
2017-02-21 20:23 Saw your commit: Use mutex_lock_io() for journal->j_checkpoint_mutex Theodore Ts'o
2017-02-21 20:45 ` Tejun Heo [this message]