From: Shakeel Butt <shakeelb@google.com>
To: Jan Kara <jack@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>,
Amir Goldstein <amir73il@gmail.com>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
Greg Thelen <gthelen@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Mel Gorman <mgorman@suse.de>, Vlastimil Babka <vbabka@suse.cz>,
linux-fsdevel <linux-fsdevel@vger.kernel.org>,
Linux MM <linux-mm@kvack.org>, Cgroups <cgroups@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 3/3] fs: fsnotify: account fsnotify metadata to kmemcg
Date: Thu, 22 Feb 2018 11:00:25 -0800 [thread overview]
Message-ID: <CALvZod4m7naivyVDtFrGmDKeqaWrWuXynVhw32DVLB935RQJYA@mail.gmail.com> (raw)
In-Reply-To: <20180222144844.g4p2diu3cnbr7sx3@quack2.suse.cz>
On Thu, Feb 22, 2018 at 6:48 AM, Jan Kara <jack@suse.cz> wrote:
> On Thu 22-02-18 14:49:44, Michal Hocko wrote:
>> On Tue 20-02-18 19:01:01, Shakeel Butt wrote:
>> > A lot of memory can be consumed by the events generated for huge or
>> > unlimited queues if the listener is slow or absent. This can cause
>> > system-level memory pressure or OOMs. So, it is better to account the
>> > fsnotify kmem caches to the memcg of the listener.
>>
>> How much memory are we talking about here?
>
> 32 bytes per event (on 64-bit), which is small, but the number of events
> is not limited in any way (if the creator uses FAN_UNLIMITED_QUEUE and
> has CAP_SYS_ADMIN). In the thread [1] a developer from Alibaba asked for
> this feature, so among cloud users there is apparently some demand for a
> way to limit the memory usage of such applications...
>
>> > There are seven fsnotify kmem caches, and among them allocations from
>> > dnotify_struct_cache, dnotify_mark_cache, fanotify_mark_cache and
>> > inotify_inode_mark_cachep happen in the context of a syscall from the
>> > listener. So, SLAB_ACCOUNT is enough for these caches.
>> >
>> > The objects from fsnotify_mark_connector_cachep are not accounted as
>> > they are small compared to the notification marks or events, and it
>> > is unclear whom to account the connector to since it is shared by all
>> > events attached to the inode.
>> >
>> > The allocations from the event caches happen in the context of the
>> > event producer. For such caches we need to remote-charge the
>> > allocations to the listener's memcg. Thus we save a memcg reference
>> > in the listener's fsnotify_group structure.
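
To sketch the shape of that (illustrative only; the names below stand in
for the allocation interface added in patches 1-2 of this series):

	/* The listener's memcg is pinned when the group is created and
	 * later used to remote-charge event allocations made from the
	 * producer's context. */
	struct fsnotify_group {
		/* ... existing fields ... */
		struct mem_cgroup *memcg; /* memcg to charge events to */
	};

	/* Group creation, which runs in the listener's context. */
	group->memcg = get_mem_cgroup_from_mm(current->mm);

	/* Event allocation, which runs in the producer's context;
	 * charge the listener's memcg instead of the producer's. */
	event = kmem_cache_alloc_memcg(fanotify_event_cachep, GFP_KERNEL,
				       group->memcg);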
>>
>> Is it typical that the listener lives in a different memcg, and if so,
>> cannot this allow one memcg to OOM/DoS the one with the listener?
>
> We have been through these discussions already in [1] back in November :).
> I can understand the wish to limit the memory usage of an application
> using unlimited fanotify queues. And yes, it may mean that it becomes
> easier for an attacker to get that application oom-killed (currently a
> malicious app would have to drive the whole system OOM, which presumably
> takes more effort since there is more memory to consume). But then I
> expect this is exactly what the admin prefers when limiting the memory
> usage of a fanotify listener.
>
Just one clarification: currently the kernel does not trigger the
oom-killer for allocations that hit the memcg limit in the context of a
syscall, but rather returns ENOMEM (after attempting memcg reclaim). Jan
has already posted a patch to handle those ENOMEMs.