linux-mm.kvack.org archive mirror
From: John Hubbard <jhubbard@nvidia.com>
To: lsf-pc <lsf-pc@lists.linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: [LSF/MM ATTEND] gup/dma, file-backed memory, and THP
Date: Thu, 21 Feb 2019 18:44:27 -0800
Message-ID: <213b47b1-a63e-06d6-e3ae-fa16e5a23a69@nvidia.com>

Hi,

I'd like to attend LSF/MM, in particular the following topics and areas:

-- The get_user_pages()+DMA problem. Here, the page tracking technique
seems fairly well settled. I've posted an RFC for the page tracking [1], 
and also posted the first two put_user_pages() patches as non-RFC [2].

However, the interactions with filesystems are still under active
discussion. In particular: what to do when clear_page_dirty_for_io()
lands on a page that has an active get_user_pages() caller.
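For context, the core pattern under discussion has roughly the following
shape (a kernel-side pseudocode sketch of a driver's gup + DMA path, not
taken from the patches themselves; dma_into_pages() is an illustrative
stand-in for a driver's real DMA setup):

```c
/*
 * Pseudocode sketch: pin user pages, DMA into them, release them.
 * The proposed change is to release via put_user_pages_dirty() (which
 * can participate in gup-specific page tracking) rather than a loop
 * of set_page_dirty() + put_page().
 */
static int do_dma_to_user_buffer(unsigned long uaddr, int npages,
				 struct page **pages)
{
	int pinned;

	pinned = get_user_pages(uaddr, npages, FOLL_WRITE, pages, NULL);
	if (pinned <= 0)
		return pinned ? pinned : -EFAULT;

	/*
	 * DMA into the pinned pages. If the filesystem runs
	 * clear_page_dirty_for_io() on one of these pages while the
	 * DMA is in flight, writeback can race with the device --
	 * that is the unresolved filesystem interaction.
	 */
	dma_into_pages(pages, pinned);	/* illustrative helper */

	/* New-style release: mark dirty and drop the gup references. */
	put_user_pages_dirty(pages, pinned);
	return 0;
}
```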

I think there are viable solutions and we're getting there, and I
*really* hope we can actually converge on an approach at this
conference. Although it's very awkward that Dave Chinner can't make it!

-- Amir Goldstein proposed a "Sharing file backed pages" TOPIC, and 
this is very closely related to the gup/dma issues above, so I want
to be there for that. More generally, get_user_pages() and filesystem
interactions are exactly what I've been involved in lately.

This next one was partially covered in Zi Yan's ATTEND request. But 
his focus was somewhat different, so I want to come at it from a
different perspective, which is:

-- THP and huge pages in general. It turns out that some high-thread-count
devices (GPUs, of course, but also various AI chips and FPGA solutions)
do much, much better with 2 MB pages. In fact, it's so important that
we've been expecting to be forced to use hugetlbfs, in order to be
guaranteed pages of that size. However, it would be better if the system
could instead reliably and efficiently provide huge pages, to the point
that a GPU-like device could more or less always get a 2 MB page when 
it needs it.

Also, perhaps a minor point: there don't seem to be any kernel-side
allocators for huge pages, but device drivers would need one in order
to use huge pages directly.

[1] https://lore.kernel.org/r/20190204052135.25784-1-jhubbard@nvidia.com

[2] https://lore.kernel.org/r/20190208075649.3025-1-jhubbard@nvidia.com

thanks,
-- 
John Hubbard
NVIDIA


