From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Matthew Wilcox <willy@linux.intel.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [LSF/MM TOPIC] Support for 1GB THP
Date: Tue, 1 Mar 2016 15:20:36 +0300	[thread overview]
Message-ID: <20160301122036.GB19559@node.shutemov.name> (raw)
In-Reply-To: <20160301070911.GD3730@linux.intel.com>

On Tue, Mar 01, 2016 at 02:09:11AM -0500, Matthew Wilcox wrote:
> 
> There are a few issues around 1GB THP support that I've come up against
> while working on DAX support that I think may be interesting to discuss
> in person.
> 
>  - Do we want to add support for 1GB THP for anonymous pages?  DAX support
>    is driving the initial 1GB THP support, but would anonymous VMAs also
>    benefit from 1GB support?  I'm not volunteering to do this work, but
>    it might make an interesting conversation if we can identify some users
>    who think performance would be better if they had 1GB THP support.

At this point I don't think it would have many users. Too much hassle with
non-obvious benefits.

>  - Latency of a major page fault.  According to various public reviews,
>    main memory bandwidth is about 30GB/s on a Core i7-5960X with 4
>    DDR4 channels.  I think people are probably fairly unhappy about
>    doing only 30 page faults per second.  So maybe we need a more complex
>    scheme to handle major faults where we insert a temporary 2MB mapping,
>    prepare the other 2MB pages in the background, then merge them into
>    a 1GB mapping when they're completed.
> 
>  - Cache pressure from 1GB page support.  If we're using NT stores, they
>    bypass the cache, and all should be good.  But if there are
>    architectures that support THP and not NT stores, zeroing a page is
>    just going to obliterate their caches.

At some point I tested NT stores for clearing 2M THP and it didn't show
much benefit. I guess that could depend on microarchitecture and we
probably should re-test this with new CPU generations.
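
For anyone who wants to re-test: a userspace sketch of the NT-store clear
using SSE2 intrinsics. The kernel's actual clear_page() paths are
per-architecture assembly, so this only models the idea for benchmarking
against a plain memset:

```c
#include <emmintrin.h>	/* SSE2: _mm_stream_si128, _mm_sfence */
#include <stddef.h>

/* Clear a buffer with non-temporal stores so the zeroes bypass the
 * cache instead of evicting useful lines.  buf must be 16-byte
 * aligned and len a multiple of 16. */
static void clear_nt(void *buf, size_t len)
{
	__m128i zero = _mm_setzero_si128();
	__m128i *p = buf;
	size_t i;

	for (i = 0; i < len / sizeof(*p); i++)
		_mm_stream_si128(&p[i], zero);
	_mm_sfence();	/* order the NT stores before later accesses */
}
```

Time this against memset(buf, 0, len) on a 2MB buffer while another
thread works on a cache-resident data set; the interesting number is the
slowdown of the other thread, not just the clear itself.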

> Other topics that might interest people from a VM/FS point of view:
> 
>  - Uses for (or replacement of) the radix tree.  We're currently
>    looking at using the radix tree with DAX in order to reduce the number
>    of calls into the filesystem.  That's leading to various enhancements
>    to the radix tree, such as support for a lock bit for exceptional
>    entries (Neil Brown), and support for multi-order entries (me).
>    Is the (enhanced) radix tree the right data structure to be using
>    for this brave new world of huge pages in the page cache, or should
>    we be looking at some other data structure like an RB-tree?

I'm interested in multi-order entries for the THP page cache. It's not
required for huge tmpfs, but would be nice to have.

> 
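
To illustrate what a multi-order entry buys us: one entry at a naturally
aligned index covers 2^order consecutive slots, the way a single 2MB page
would cover 512 slots of a 4kB-granular page cache. A toy model
(hypothetical structure, not the kernel's radix tree API; a linear scan
stands in for the real tree walk):

```c
#include <stddef.h>

/* One multi-order entry: covers indices [index, index + (1 << order)). */
struct morder_entry {
	unsigned long index;	/* naturally aligned start index */
	unsigned int order;	/* entry spans 1 << order slots */
	void *data;
};

/* Return the data of the entry covering idx, or NULL if none does.
 * With multi-order entries a 2MB page needs one entry, not 512. */
static void *morder_lookup(struct morder_entry *ents, size_t n,
			   unsigned long idx)
{
	size_t i;

	for (i = 0; i < n; i++) {
		unsigned long span = 1UL << ents[i].order;

		if (idx >= ents[i].index && idx < ents[i].index + span)
			return ents[i].data;
	}
	return NULL;
}
```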
>  - Can we get rid of PAGE_CACHE_SIZE now?  Finally?  Pretty please?

+1 :)

-- 
 Kirill A. Shutemov

