From: Alex Shi <alex.shi@linux.alibaba.com>
To: David Rientjes <rientjes@google.com>
Cc: David Hildenbrand <david@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>, Michal Hocko <mhocko@suse.com>,
Hugh Dickins <hughd@google.com>,
Andrea Arcangeli <aarcange@redhat.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Song Liu <songliubraving@fb.com>,
Matthew Wilcox <willy@infradead.org>,
Minchan Kim <minchan@kernel.org>,
Chris Kennelly <ckennelly@google.com>,
linux-mm@kvack.org, linux-api@vger.kernel.org
Subject: Re: [RFC] Hugepage collapse in process context
Date: Thu, 4 Mar 2021 18:52:24 +0800
Message-ID: <b289ea7b-a4f8-824e-d4b2-1b69079f5f5f@linux.alibaba.com>
In-Reply-To: <544df052-f9f3-f068-f69e-343cc69d994b@google.com>
On 2021/3/2 4:56 AM, David Rientjes wrote:
> On Wed, 24 Feb 2021, Alex Shi wrote:
>
>>> Agreed, and happy to see that there's a general consensus for the
>>> direction. Benefit of a new madvise mode is that it can be used for
>>> madvise() as well if you are interested in only a single range of your own
>>> memory and then it doesn't need to reconcile with any of the already
>>> overloaded semantics of MADV_HUGEPAGE.
>>
>> It's a good idea to let a process manage its own THP policy, but current
>> applications will miss the benefit without changes, and changes are
>> expensive for end users. So in addition to this work, could a per-memcg
>> collapse benefit apps for free? We often deploy apps in cgroups on
>> servers now.
>>
>
> Hi Alex,
>
> I'm not sure that I understand: this MADV_COLLAPSE would be possible for
> process_madvise() as well and by passing a vectored set of ranges so a
> process can do this on behalf of other processes (it's the only way that
> we could theoretically move khugepaged to userspace, although that's not
> an explicit end goal).
>
Forgive my ignorance, but I still can't figure out how a process_madvise()
caller would fill the iovec for another process's memory on a typical
system.
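
(For illustration only, a minimal user-space sketch of what such a caller
might look like: it parses /proc/<pid>/maps to build the iovec, then issues
a single vectored process_madvise() call. The MADV_COLLAPSE value, the
collapse_pid() helper name, and the blanket selection of every mapping are
assumptions for the sketch, not anything this thread has settled on.)

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/uio.h>		/* struct iovec */
#include <unistd.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25	/* hypothetical advice value, not yet upstream */
#endif

/* Ask the kernel to collapse another process's mappings into hugepages. */
static int collapse_pid(pid_t pid)
{
	struct iovec iov[64];
	unsigned long start, end;
	char path[64], line[256];
	size_t vlen = 0;
	FILE *maps;
	int pidfd;
	long ret;

	/* Build the iovec from the target's own view of its mappings. */
	snprintf(path, sizeof(path), "/proc/%d/maps", (int)pid);
	maps = fopen(path, "r");
	if (!maps)
		return -1;
	while (vlen < 64 && fgets(line, sizeof(line), maps)) {
		if (sscanf(line, "%lx-%lx", &start, &end) != 2)
			continue;
		/* A real caller would filter, e.g. to anonymous ranges. */
		iov[vlen].iov_base = (void *)start;
		iov[vlen].iov_len = end - start;
		vlen++;
	}
	fclose(maps);

	pidfd = syscall(SYS_pidfd_open, pid, 0);
	if (pidfd < 0)
		return -1;

	/* One vectored call covers every collected range. */
	ret = syscall(SYS_process_madvise, pidfd, iov, vlen,
		      MADV_COLLAPSE, 0);
	close(pidfd);
	return ret < 0 ? -1 : 0;
}

Note that process_madvise() also requires the caller to hold CAP_SYS_NICE
and PTRACE_MODE_READ access to the target, so an arbitrary process cannot
do this to arbitrary peers.
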
>
> How would you see this working with memcg involved? I had thought this
> was entirely orthogonal to any cgroup.
>
You're right, it is independent of cgroups, and that is better. A per-cgroup
khugepaged could be an alternative approach, but it would require a cgroup
and could not target a specific process.
Thanks
Alex