From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Mel Gorman <mgorman@suse.de>
Cc: lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, Matthew Wilcox <willy@linux.intel.com>
Subject: Re: [Lsf-pc] [LSF/MM ATTEND] Memory management -- THP, hugetlb, scalability
Date: Fri, 10 Jan 2014 19:42:04 +0200
Message-ID: <20140110174204.GA5228@node.dhcp.inet.fi>
In-Reply-To: <20140108151321.GI27046@suse.de>
On Wed, Jan 08, 2014 at 03:13:21PM +0000, Mel Gorman wrote:
> On Fri, Jan 03, 2014 at 02:25:09PM +0200, Kirill A. Shutemov wrote:
> > Hi,
> >
> > I would like to attend LSF/MM summit. I'm interested in discussion about
> > huge pages, scalability of memory management subsystem and persistent
> > memory.
> >
> > Last year I did some work to fix THP-related regressions and improve
> > scalability. I also work on THP for file-backed pages.
> >
> > Depending on project status, I probably want to bring transparent huge
> > pagecache as a topic.
> >
>
> I think transparent huge pagecache is likely to crop up for more than one
> reason. There is the TLB issue and the motivation that i-TLB pressure is
> a problem in some specialised cases. Whatever the merits of that case,
> transparent hugepage cache has been raised as a potential solution for
> some VM scalability problems. I recognise that dealing with large numbers
> of struct pages is now a problem on larger machines (although I have not
> seen quantified data on the problem nor do I have access to a machine large
> enough to measure it myself) but I'm wary of transparent hugepage cache
> being treated as a primary solution for VM scalability problems. Lacking
> performance data I have no suggestions on what these alternative solutions
> might look like.

Yes, performance data is critical. I'll try to bring some.

The only alternative I see is some kind of THP implemented at the
filesystem level. That could work reasonably well for tmpfs/shm, but it
looks ad hoc, and in the long term I believe transparent huge pagecache
is the way to go.

A sibling topic is THP for XIP (see Matthew's patchset). People want to
manage persistent memory in 2M chunks where possible, and THP (though
without struct page in this case) is the obvious solution.
--
Kirill A. Shutemov
Thread overview: 7+ messages
2014-01-03 12:25 [LSF/MM ATTEND] Memory management -- THP, hugetlb, scalability Kirill A. Shutemov
2014-01-08 15:13 ` [Lsf-pc] " Mel Gorman
2014-01-10 17:42 ` Kirill A. Shutemov [this message]
2014-01-10 22:51 ` Matthew Wilcox
2014-01-10 22:59 ` Kirill A. Shutemov
2014-01-11 1:49 ` Matthew Wilcox
2014-01-11 2:55 ` Kirill A. Shutemov