From: Matthew Wilcox <willy@infradead.org>
To: Waiman Long <longman@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
	Jonathan Corbet <corbet@lwn.net>,
	Luis Chamberlain <mcgrof@kernel.org>,
	Kees Cook <keescook@chromium.org>,
	Iurii Zaikin <yzaikin@google.com>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-doc@vger.kernel.org,
	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>,
	Eric Biggers <ebiggers@google.com>,
	Dave Chinner <david@fromorbit.com>,
	Eric Sandeen <sandeen@redhat.com>
Subject: Re: [PATCH 00/11] fs/dcache: Limit # of negative dentries
Date: Sat, 14 Mar 2020 20:46:40 -0700
Message-ID: <20200315034640.GV22433@bombadil.infradead.org>
In-Reply-To: <20200226161404.14136-1-longman@redhat.com>

On Wed, Feb 26, 2020 at 11:13:53AM -0500, Waiman Long wrote:
> As there is no limit on negative dentries, it is possible that a sizeable
> portion of system memory can be tied up in dentry cache slabs. Dentry slabs
> are generally reclaimable if the dentries are in the LRUs. Still, having
> too much memory used up by dentries can be problematic:
> 
>  1) When a filesystem with too many negative dentries is being unmounted,
>     the process of draining the dentries associated with the filesystem
>     can take some time. To users, the system may seem to hang for
>     a while.  The long wait may also cause unexpected timeout errors or
>     other warnings.  This can happen when a long-running container with
>     many negative dentries is being destroyed, for instance.
> 
>  2) Tying up too much memory in unused negative dentries means there
>     is less memory available for other uses. Even though the kernel is
>     able to reclaim unused dentries when running out of free memory,
>     doing so still introduces additional latency to the application,
>     reducing its performance.

There's a third problem, which is that having a lot of negative dentries
can clog the hash chains.  I tried to quantify this, and found a weird result:

root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.402s
user	0m4.361s
sys	0m1.230s
root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.572s
user	0m4.337s
sys	0m1.407s
root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.607s
user	0m4.522s
sys	0m1.342s
root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.599s
user	0m4.472s
sys	0m1.369s
root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.574s
user	0m4.498s
sys	0m1.300s

Pretty consistent system time, between about 1.3 and 1.4 seconds.

root@bobo-kvm:~# grep dentry /proc/slabinfo 
dentry             20394  21735    192   21    1 : tunables    0    0    0 : slabdata   1035   1035      0
root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.515s
user	0m4.353s
sys	0m1.359s

At this point, we have 20k dentries allocated.

Now, pollute the dcache with names that don't exist:

root@bobo-kvm:~# for i in `seq 1 100000`; do cat /dev/null$i >/dev/zero; done 2>/dev/null
root@bobo-kvm:~# grep dentry /proc/slabinfo 
dentry             20605  21735    192   21    1 : tunables    0    0    0 : slabdata   1035   1035      0

Huh.  The number of dentries has barely moved, despite 100,000 misses.
Still, maybe the bad dentries have pushed out the good ones.

root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m6.644s
user	0m4.921s
sys	0m1.946s
root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m6.676s
user	0m5.004s
sys	0m1.909s
root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m6.662s
user	0m4.980s
sys	0m1.916s
root@bobo-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m6.714s
user	0m4.973s
sys	0m1.986s

Well, we certainly made it suck.  System time is up to a pretty consistent
1.9-2.0 seconds, or about 50% worse.  We've also made user time worse,
somehow.
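
To spell out the mechanism: negative dentries sit on the same hash chains
as positive ones, so a lookup that lands on a crowded bucket has to step
over them even though they can never match.  Here's a toy userspace model
of the effect -- not kernel code (the real chain walk is __d_lookup() in
fs/dcache.c), with sizes made up:

/*
 * Toy model of hash-chain pollution.  Entries that can never match
 * still have to be stepped over on the way to the one that does.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 16			/* tiny on purpose, to force collisions */

struct entry {
	char *name;
	int positive;			/* 0 = "negative": name known not to exist */
	struct entry *next;		/* hash chain link */
};

static struct entry *table[NBUCKETS];
static unsigned long steps;		/* chain links walked */

static unsigned int hash(const char *name)
{
	unsigned int h = 5381;

	while (*name)
		h = h * 33 + (unsigned char)*name++;
	return h % NBUCKETS;
}

static void insert(const char *name, int positive)
{
	struct entry *e = malloc(sizeof(*e));
	unsigned int b = hash(name);

	e->name = strdup(name);
	e->positive = positive;
	e->next = table[b];		/* new entries go in at the head */
	table[b] = e;
}

static struct entry *lookup(const char *name)
{
	struct entry *e;

	for (e = table[hash(name)]; e; e = e->next) {
		steps++;
		if (!strcmp(e->name, name))
			return e;
	}
	return NULL;
}

int main(void)
{
	char name[32];
	int i;

	insert("null", 1);
	steps = 0;
	lookup("null");
	printf("clean chain: %lu steps\n", steps);

	/* Pollute: 1000 "negative" entries share the same 16 buckets. */
	for (i = 0; i < 1000; i++) {
		snprintf(name, sizeof(name), "null%d", i);
		insert(name, 0);
	}
	steps = 0;
	lookup("null");
	printf("polluted chain: %lu steps\n", steps);
	return 0;
}

Note that new entries going in at the head means the old, useful entry
ends up behind all the junk, which is more or less what the dcache does
too.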

Anyhow, I should write a proper C program to measure this.  But I thought
I'd share this raw data with you now to demonstrate that dcache pollution
is a real problem today, even on a machine with only 2GB of memory.
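
Something along these lines, maybe -- a rough, untested sketch, with the
iteration counts picked out of thin air:

/*
 * Rough sketch, untested: time stat() on a name that exists, before and
 * after filling the dcache with negative dentries for names that don't.
 */
#include <stdio.h>
#include <time.h>
#include <sys/stat.h>

#define HITS	1000000			/* lookups of the existing name */
#define MISSES	100000			/* negative dentries to create */

static double timed_stats(const char *path, long count)
{
	struct timespec start, end;
	struct stat st;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < count; i++)
		stat(path, &st);	/* dcache hit after the first call */
	clock_gettime(CLOCK_MONOTONIC, &end);

	return (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
	char name[64];
	struct stat st;
	long i;

	printf("before pollution: %f sec\n", timed_stats("/dev/null", HITS));

	/* Each failed lookup of a nonexistent name leaves a negative dentry. */
	for (i = 0; i < MISSES; i++) {
		snprintf(name, sizeof(name), "/dev/null%ld", i);
		stat(name, &st);
	}

	printf("after pollution:  %f sec\n", timed_stats("/dev/null", HITS));
	return 0;
}

Checking /proc/sys/fs/dentry-state before and after would confirm that
the negative dentries actually stuck around; recent kernels report a
negative-dentry count there.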
