linux-fsdevel.vger.kernel.org archive mirror
From: Jan Kara <jack@suse.cz>
To: Waiman Long <longman@redhat.com>
Cc: Jan Kara <jack@suse.cz>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	"Wangkai (Kevin,C)" <wangkai86@huawei.com>,
	"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>,
	"walters@verbum.org" <walters@verbum.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"Renjinyong (Renjinyong,
	Business Support Dept)" <renjinyong@huawei.com>
Subject: Re: [PATCH] fs/dcache: dentries should free after files unlinked or directories removed
Date: Mon, 4 Sep 2017 14:59:30 +0200	[thread overview]
Message-ID: <20170904125930.GD1761@quack2.suse.cz> (raw)
In-Reply-To: <5bbcab8d-226a-7382-f466-07d826325635@redhat.com>

On Thu 31-08-17 12:27:27, Waiman Long wrote:
> On 08/31/2017 03:53 AM, Jan Kara wrote:
> > On Sun 27-08-17 11:05:34, Waiman Long wrote:
> >>
> >> It is certainly true that the current scheme of unlimited negative
> >> dentry creation is not a problem under most cases. However, there are
> >> scenarios where it can be a problem.
> >>
> >> A customer discovered the negative dentry issue because of a bug in
> >> their application code. They fixed their code to solve the problem.
> >> However, they wondered if this could be used as one vector of a DoS
> >> attack on a Linux system by having a rogue program continuously
> >> generate a massive number of negative dentries. It is the thought of this
> >> malicious use of the negative dentry behavior that prompted me to create
> >> and send out a patch to limit the number of negative dentries allowable
> >> in a system.
> > Well, and how is this fundamentally different from a user consuming
> > resources by other means (positive dentries, inode cache, page cache, anon
> > memory etc.)? Sure you can force slab reclaim to work hard but you have
> > many other ways a local user can do that. So if you can demonstrate
> > that it is too easy to DoS a system in some way, we can talk about
> > mitigating the attack. But just the ability of making the system busy does
> > not seem serious to me.
> 
> Positive dentries are limited by the total number of files in the file
> system. Negative dentries, on the other hand, have no such limit. There
> are ways to limit other kinds of resource usage, such as capping a user's
> memory with a memory cgroup, or using filesystem quotas for disk space and
> the number of files created or owned. However, I am not aware of any
> control mechanism that can limit the number of negative dentries generated
> by a given user. That makes negative dentries somewhat different from
> the other resource types that you are talking about.

So I agree they are somewhat different but not fundamentally different -
e.g. the total number of files in the file system can easily be so high that
dentries + inodes cannot fit into RAM and thus you are in a very similar
situation as with negative dentries. That's actually one of the reasons why
people were trying to bend memcgs to account slab cache as well. But it
didn't end anywhere AFAIK.

The reason why I'm objecting is that a limit on the number of negative
dentries would be yet another tuning knob, useful only in very specific
cases, and most sysadmins will have no clue how to set it properly (even I
wouldn't have a good idea).

> >> Besides, Kevin had shown that keeping the dentry cache from growing too
> >> big was good for file lookup performance too.
> > Well, that rather speaks for better data structure for dentry lookup (e.g.
> > growing hash tables) rather than for limiting negative dentries? I can
> > imagine there are workloads which would benefit from that as well?
> 
> Current dentry lookup is through a hash table. The lookup performance
> will depend on the number of hash slots as well as the number of
> entries queued in each slot. So in general, lookup performance
> deteriorates the more entries you put into a given slot. That is true no
> matter how many slots you have allocated.

Agreed, but with rhashtables the number of slots grows dynamically with the
number of entries...

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


Thread overview: 14+ messages
2017-08-07  9:54 [PATCH] fs/dcache: dentries should free after files unlinked or directories removed Wangkai
2017-08-25  7:06 ` Wangkai (Kevin,C)
2017-08-25 14:47   ` Jan Kara
2017-08-25 15:10     ` Colin Walters
2017-08-25 16:43       ` Linus Torvalds
2017-08-26  6:56     ` Wangkai (Kevin,C)
2017-08-26 16:18       ` Linus Torvalds
2017-08-27 15:05         ` Waiman Long
2017-08-31  7:53           ` Jan Kara
2017-08-31 16:27             ` Waiman Long
2017-09-04 12:59               ` Jan Kara [this message]
2017-09-04 15:48                 ` Linus Torvalds
2017-09-04 21:45                 ` Waiman Long
2017-08-28  6:31         ` Wangkai (Kevin,C)
