From: Mike Kravetz <mike.kravetz@oracle.com>
To: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org, Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	David Rientjes <rientjes@google.com>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	James Houghton <jthoughton@google.com>,
	Muchun Song <songmuchun@bytedance.com>
Subject: Re: A mapcount riddle
Date: Thu, 26 Jan 2023 09:51:12 -0800
Message-ID: <Y9K9kDscpq/Tj/SE@monkey>
In-Reply-To: <Y9JE8XicIUs/D7dp@dhcp22.suse.cz>

On 01/26/23 10:16, Michal Hocko wrote:
> On Wed 25-01-23 09:59:15, Mike Kravetz wrote:
> > On 01/25/23 09:24, Michal Hocko wrote:
> > > On Tue 24-01-23 12:56:24, Mike Kravetz wrote:
> > > > At first thought this seems bad.  However, I believe this has been the
> > > > behavior since hugetlb PMD sharing was introduced in 2006 and I am
> > > > unaware of any reported issues.  I did an audit of code looking at
> > > > mapcount.  In addition to the above issue with smaps, there appears
> > > > to be an issue with 'migrate_pages' where shared pages could be migrated
> > > > without appropriate privilege.
> > > > 
> > > > 	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
> > > > 	if (flags & (MPOL_MF_MOVE_ALL) ||
> > > > 	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
> > > > 		if (isolate_hugetlb(page, qp->pagelist) &&
> > > > 			(flags & MPOL_MF_STRICT))
> > > > 			/*
> > > > 			 * Failed to isolate page but allow migrating pages
> > > > 			 * which have been queued.
> > > > 			 */
> > > > 			ret = 1;
> > > > 	}
> > > 
> > > Could you elaborate what is problematic about that? The whole pmd
> > > sharing is a cooperative thing. So if one of the processes decides to
> > > migrate the page then why should that be a problem for others sharing
> > > that page via the page table? Am I missing something obvious?
> > 
> > Nothing obvious.  It is just that the semantics seem to be that you can
> > only move shared pages if you have CAP_SYS_NICE.
> 
> Correct
> 
> > Certainly cooperation
> > is implied for shared PMDs, but I would guess that most applications are
> > not even aware they are sharing PMDs.
> 
> How come? They have to explicitly map those hugetlb pages to the same
> address. Or is it common that the mapping just lands there by accident?

Mapping to the same address is not required for PMD sharing.  What is
required is that PUD_SIZE aligned offsets within the mapped object
(file) are mapped to PUD_SIZE aligned virtual addresses in each process.
That may not be clear as it is difficult to describe.  The bottom line
is that the addresses do not need to match.
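
To make that concrete, below is a minimal userspace sketch.  The
hugetlbfs path, the fixed addresses, and the x86-64 assumption of 2MB
huge pages (giving a 1GB PUD_SIZE) are illustrative choices, not
details from this thread:

	/*
	 * Two PUD_SIZE aligned mappings of the same hugetlbfs file.
	 * The virtual addresses differ, but file offset 0 lands on a
	 * PUD_SIZE boundary in both mappings, so the kernel may share
	 * the PMD pages covering the range.  The same applies when the
	 * second mapping is created by a different process.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define PUD_SIZE	(1UL << 30)	/* 1GB: 512 * 2MB huge pages */

	int main(void)
	{
		int fd = open("/dev/hugepages/example", O_CREAT | O_RDWR, 0600);

		if (fd < 0 || ftruncate(fd, PUD_SIZE))
			return 1;

		/* PUD_SIZE aligned, but different, virtual addresses */
		void *a = mmap((void *)(4 * PUD_SIZE), PUD_SIZE,
			       PROT_READ | PROT_WRITE,
			       MAP_SHARED | MAP_FIXED, fd, 0);
		void *b = mmap((void *)(8 * PUD_SIZE), PUD_SIZE,
			       PROT_READ | PROT_WRITE,
			       MAP_SHARED | MAP_FIXED, fd, 0);

		printf("a=%p b=%p\n", a, b);
		return 0;
	}

Both mappings span a full PUD_SIZE aligned range, which is what makes
them candidates for sharing; shrink either below PUD_SIZE and no PMDs
can be shared.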

However, I am aware of one DB that maps large hugetlb shared areas at
the same virtual address in many processes for application convenience.
PMD sharing was not the reason for mapping at the same virtual address,
and people developing that DB were not necessarily aware that PMDs were
being shared.  I also worked on a performance issue with another application
making use of large hugetlb mappings that was unaware PMD sharing was
happening in their environment.  Since PMD sharing is not documented
anywhere (except source code), I suspect applications are not aware if
they happen to make use of shared PMDs.  That is the reason for my
statement above.

> > Consider a group of processes sharing a hugetlb mapping.  If the mapping
> > is PUD_SIZE - huge_page_size, there is no sharing of PMDs and a process
> > without CAP_SYS_NICE can not migrate the shared pages.  However, if nothing
> > else changes and the mapping size is PUD_SIZE (and appropriately aligned)
> > the PMDs are shared.  Should we allow a process to migrate shared pages
> > without CAP_SYS_NICE in this case?
> 
> I am not sure I follow. I have likely got lost in the above. So the
> move_pages interface requires CAP_SYS_NICE to allow moving shared pages.
> pmd shared hugetlb pages fail the "I am shared" detection so even
> processes without CAP_SYS_NICE are allowed to migrate those. This is not
> ideal because somebody unprivileged (with access to the address
> space) could impose additional latencies.

Correct.  That is one of the things I will/want to fix.

> The question is whether this really matters for workloads that opt in
> to pmd sharing. It is my understanding that those are in cooperative mode
> so an adversarial player is not part of the threat model. Or am I wrong in
> that assumption?

Yes, the argument can be made that processes sharing a large hugetlb
object are cooperative and should trust each other.  My plan is to simply
make the code follow the documented behavior.  I would rather not have
different user visible behavior for mappings using shared PMDs.  And
the code changes are rather trivial.
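
For illustration only, here is one shape such a change could take in
the queue_pages_hugetlb() test quoted earlier.  hugetlb_pmd_shared() is
a hypothetical helper standing in for whatever check determines that
the page is mapped through a shared PMD (pte being the hugetlb page
table entry pointer already available in that function):

	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
	if (flags & (MPOL_MF_MOVE_ALL) ||
	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1 &&
	     !hugetlb_pmd_shared(pte))) {
		/*
		 * A page mapped only via a shared PMD has
		 * page_mapcount() == 1 even though several processes
		 * map it, so the extra check keeps unprivileged
		 * callers from migrating it.
		 */
		if (isolate_hugetlb(page, qp->pagelist) &&
			(flags & MPOL_MF_STRICT))
			/*
			 * Failed to isolate page but allow migrating pages
			 * which have been queued.
			 */
			ret = 1;
	}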

>             I haven't checked very closely but wouldn't be mprotect a
> bigger problem? I do not remember any special casing for hugetlb pmd
> sharing there.

It is not an issue for mprotect.  Any change in protection disables PMD
sharing: hugetlb_change_protection() unshares any shared PMDs (via
huge_pmd_unshare()) as it walks the range.
-- 
Mike Kravetz


