From: Tejun Heo <tj@kernel.org>
To: Michal Hocko <mhocko@suse.cz>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Andrew Morton <akpm@linux-foundation.org>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
Anton Vorontsov <anton.vorontsov@linaro.org>
Subject: Re: [PATCH 1/3] memcg: limit the number of thresholds per-memcg
Date: Wed, 7 Aug 2013 09:58:18 -0400
Message-ID: <20130807135818.GG27006@htj.dyndns.org>
In-Reply-To: <20130807134654.GJ8184@dhcp22.suse.cz>
Hello,
On Wed, Aug 07, 2013 at 03:46:54PM +0200, Michal Hocko wrote:
> OK, I have obviously misunderstood your concern mentioned in the other
> email. Could you be more specific what is the DoS scenario which was
> your concern, then?
So, let's say the file is write-accessible to a !priv user who is
under reasonable resource limits. Normally this shouldn't affect priv
system tools which are monitoring the same event, as the user
shouldn't be able to deplete resources as long as the resource control
mechanisms are configured and functioning properly; however, the
memory usage event puts all event listeners into a single contiguous
table, which a !priv user can easily expand to a size where the table
can no longer be enlarged. If a priv system tool or another user then
tries to register an event, it'll fail. IOW, it creates a shared
resource which isn't properly provisioned and can be trivially filled
up, making it an easy DoS target.
Putting an extra limit on it isn't an actual solution, but it could be
an improvement, I think. It at least makes it explicit that this is a
limited global resource.
Thanks.
--
tejun
Thread overview: 19+ messages
2013-08-07 11:28 [PATCH 1/3] memcg: limit the number of thresholds per-memcg Michal Hocko
2013-08-07 11:28 ` [PATCH 2/3] memcg: Limit the number of events registered on oom_control Michal Hocko
2013-08-07 13:08 ` Tejun Heo
2013-08-07 13:11 ` Tejun Heo
2013-08-07 13:37 ` Michal Hocko
2013-08-07 13:47 ` Tejun Heo
2013-08-07 13:57 ` Michal Hocko
2013-08-07 14:01 ` Tejun Heo
2013-08-07 14:47 ` Michal Hocko
2013-08-07 17:30 ` Michal Hocko
2013-08-09 0:46 ` Tejun Heo
2013-08-07 11:28 ` [PATCH 3/3] vmpressure: limit the number of registered events Michal Hocko
2013-08-07 13:22 ` [PATCH 1/3] memcg: limit the number of thresholds per-memcg Tejun Heo
2013-08-07 13:46 ` Michal Hocko
2013-08-07 13:58 ` Tejun Heo [this message]
2013-08-07 14:37 ` Michal Hocko
2013-08-07 22:05 ` Kirill A. Shutemov
2013-08-08 14:43 ` Michal Hocko
2013-08-09 0:50 ` Tejun Heo