From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Petr Cermak <petrcermak@chromium.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Andrew Morton <akpm@linux-foundation.org>,
Bjorn Helgaas <bhelgaas@google.com>,
Primiano Tucci <primiano@chromium.org>,
Hugh Dickins <hughd@google.com>
Subject: Re: [PATCH v2 2/2] task_mmu: Add user-space support for resetting mm->hiwater_rss (peak RSS)
Date: Thu, 15 Jan 2015 01:39:54 +0200 [thread overview]
Message-ID: <20150114233954.GB14615@node.dhcp.inet.fi> (raw)
In-Reply-To: <20150114152225.GB31484@google.com>
On Wed, Jan 14, 2015 at 03:22:25PM +0000, Petr Cermak wrote:
> On Wed, Jan 07, 2015 at 07:24:52PM +0200, Kirill A. Shutemov wrote:
> > And how is it not an ABI break?
> I don't think this is an ABI break because the current behaviour is not
> changed unless you write "5" to /proc/pid/clear_refs. If you do, you are
> explicitly requesting the new functionality.
>
> > We have had a never-lowering VmHWM for 9+ years. How can you know that
> > nobody expects this behaviour?
> This is why we sent an RFC [1] several weeks ago. We expect this to be
> used mainly by performance-related tools (e.g. profilers) and from the
> comments in the code [2] VmHWM seems to be a best-effort counter. If this
> is strictly a no-go, I can only think of the following two alternatives:
>
> 1. Add an extra resettable field to /proc/pid/status (e.g.
> resettable_hiwater_rss). While this doesn't violate the current
> definition of VmHWM, it adds an extra line to /proc/pid/status,
> which I think is a much bigger issue.
> 2. Introduce a new procfs file in task_mmu (e.g.
> /proc/pid/profiler_stats), but this feels like overengineering.
BTW, we have memory.max_usage_in_bytes in the memory cgroup, and it is
resettable. Wouldn't it be enough for your profiling use case?
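The memcg counter suggested here works roughly like this (a sketch assuming a v1 memory controller mounted at /sys/fs/cgroup/memory; the "profiled" group name is made up for the example):

```shell
# memory.max_usage_in_bytes records the cgroup's peak memory usage and
# is reset by writing 0 to it.
cg=/sys/fs/cgroup/memory/profiled        # hypothetical cgroup for the workload
mkdir -p "$cg"
echo $$ > "$cg/cgroup.procs"             # move the current shell into it

cat "$cg/memory.max_usage_in_bytes"      # peak usage since creation/last reset
echo 0 > "$cg/memory.max_usage_in_bytes" # reset; peak restarts from current usage
cat "$cg/memory.max_usage_in_bytes"
```

Note the granularity difference: this counter is per-cgroup rather than per-task, whereas VmHWM tracks a single mm.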
--
Kirill A. Shutemov
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
Thread overview: 16+ messages
2015-01-07 17:06 [PATCH v2 0/2] task_mmu: Add user-space support for resetting mm->hiwater_rss (peak RSS) Petr Cermak
2015-01-07 17:06 ` [PATCH v2 1/2] task_mmu: Reduce excessive indentation in clear_refs_write Petr Cermak
2015-01-07 17:06 ` [PATCH v2 2/2] task_mmu: Add user-space support for resetting mm->hiwater_rss (peak RSS) Petr Cermak
2015-01-07 17:24 ` Kirill A. Shutemov
2015-01-14 15:22 ` Petr Cermak
2015-01-14 23:36 ` Kirill A. Shutemov
2015-01-21 22:58 ` David Rientjes
2015-01-22 0:22 ` Primiano Tucci
2015-01-22 23:27 ` David Rientjes
2015-01-23 0:28 ` Primiano Tucci
2015-01-27 0:00 ` David Rientjes
2015-02-03 3:26 ` Petr Cermak
2015-02-03 15:51 ` Minchan Kim
2015-02-03 20:16 ` David Rientjes
2015-01-14 23:39 ` Kirill A. Shutemov [this message]
2015-01-15 16:46 ` Petr Cermak
To reply, use the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20150114233954.GB14615@node.dhcp.inet.fi \
--to=kirill@shutemov.name \
--cc=akpm@linux-foundation.org \
--cc=bhelgaas@google.com \
--cc=hughd@google.com \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-mm@kvack.org \
--cc=petrcermak@chromium.org \
--cc=primiano@chromium.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html