From: Andrea Arcangeli <aarcange@redhat.com>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Zi Yan <zi.yan@cs.rutgers.edu>, Minchan Kim <minchan@kernel.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
kirill.shutemov@linux.intel.com, akpm@linux-foundation.org,
vbabka@suse.cz, mgorman@techsingularity.net,
n-horiguchi@ah.jp.nec.com, khandual@linux.vnet.ibm.com,
Zi Yan <ziy@nvidia.com>
Subject: Re: [PATCH v3 03/14] mm: use pmd lock instead of racy checks in zap_pmd_range()
Date: Mon, 13 Feb 2017 15:40:42 +0100
Message-ID: <20170213144042.GD25530@redhat.com>
In-Reply-To: <20170213105906.GA16419@node.shutemov.name>
Hello!
On Mon, Feb 13, 2017 at 01:59:06PM +0300, Kirill A. Shutemov wrote:
> On Sun, Feb 12, 2017 at 06:25:09PM -0600, Zi Yan wrote:
> > Since in mm/compaction.c, the kernel does not down_read(mmap_sem) during memory
> > compaction. Namely, base page migrations do not hold down_read(mmap_sem),
> > so in zap_pte_range(), the kernel needs to hold PTE page table locks.
> > Am I right about this?
> >
> > If yes. IMHO, ultimately, when we need to compact 2MB pages to form 1GB pages,
> > in zap_pmd_range(), pmd locks have to be taken to make that kind of compactions
> > possible.
> >
> > Do you agree?
Compaction skips compound pages on the LRU entirely, because by design
they're guaranteed to be HPAGE_PMD_ORDER in size. So yes, compaction is
not effective in helping compound page allocations of order >
HPAGE_PMD_ORDER; for those you need the CMA allocation APIs instead of
plain alloc_pages.
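For illustration, allocating such a chunk through CMA could look
roughly like this (a minimal sketch: "cma_area" is an assumed
struct cma * reserved at boot via cma_declare_contiguous, not an
existing global):

	#include <linux/cma.h>
	#include <linux/sizes.h>

	/* 1GB physically contiguous: count in pages, alignment given
	 * as an order; plain alloc_pages() can't serve orders above
	 * MAX_ORDER-1. */
	struct page *page = cma_alloc(cma_area, SZ_1G >> PAGE_SHIFT,
				      get_order(SZ_1G));
	if (!page)
		return -ENOMEM;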
That only leaves order 10 not covered, which happens to match
HPAGE_PMD_ORDER on x86 32-bit non-PAE. In fact MAX_ORDER should be
trimmed to 10 for x86-64 and x86 32-bit PAE mode; we're probably
wasting a bit of CPU maintaining order-10 buddies for no good reason on
x86-64, but that's a separate issue unrelated to THP pmd updates.

There's no issue with x86 pmd updates in compaction because we don't
compact compound pages in the first place, and 1GB THP pages are not
feasible regardless of MAX_ORDER being 10 or 11.
> I *think* we can get away with speculative (without ptl) check in
> zap_pmd_range() if we make page fault the only place that can turn
> pmd_none() into something else.
>
> It means all other sides that change pmd must not clear it intermittently
> during pmd change, unless run under down_write(mmap_sem).
>
> I found two such problematic places in kernel:
>
> - change_huge_pmd(.prot_numa=1);
>
> - madvise_free_huge_pmd();
>
> Both clear pmd before setting up a modified version. Both under
> down_read(mmap_sem).
>
> The migration path also would need to establish migration pmd atomically
> to make this work.
Pagetable updates are always atomic; the issue here is not atomicity
nor the lock prefix, but simply "not temporarily showing zero pmds"
while holding only the pmd_lock and mmap_sem for reading (i.e. not
holding it for writing).
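Concretely, both of the patterns below are made of individually atomic
stores; only the second avoids the transient zero (an illustrative
fragment, not a literal diff):

	pmd_t pmd;

	/*
	 * BAD under down_read(mmap_sem): clear-then-set opens a window
	 * in which lockless readers, such as the speculative checks in
	 * zap_pmd_range(), can observe pmd_none().
	 */
	pmd = pmdp_huge_get_and_clear(mm, addr, pmdp);
	set_pmd_at(mm, addr, pmdp, pmd_wrprotect(pmd));

	/*
	 * OK: read under the pmd_lock, then a single atomic overwrite;
	 * the pmd is never zero at any point in time.
	 */
	pmd = *pmdp;
	set_pmd_at(mm, addr, pmdp, pmd_wrprotect(pmd));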
>
> Once all these cases will be fixed, zap_pmd_range() would only be able to
> race with page fault if it called from MADV_DONTNEED.
> This case is not a problem.
Yes, this case is handled fine by pmd_trans_unstable and
pmd_none_or_trans_huge_or_clear_bad: it's a controlled race with
userland, with undefined result when it triggers. We just have to be
consistent with the invariants and not let the kernel get confused
about it, so no problem there.
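For reference, this is roughly the speculative pattern in question
(paraphrased from the zap_pmd_range() loop in mm/memory.c, not an
exact quote):

	/*
	 * Unlocked check: it can race with concurrent MADV_DONTNEED or
	 * THP page faults (both run under down_read(mmap_sem)), which
	 * is fine as long as nothing transiently zeroes the pmd.
	 */
	if (pmd_trans_huge(*pmd) && next - addr == HPAGE_PMD_SIZE) {
		if (zap_huge_pmd(tlb, vma, pmd, addr))
			goto next;
		/* fall through: the huge pmd was split under us */
	}
	if (pmd_none_or_trans_huge_or_clear_bad(pmd))
		goto next;	/* none or unstable: skip the pte walk */
	next = zap_pte_range(tlb, vma, pmd, addr, next, details);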
> Andrea, does it sound reasonable to you?
Yes, pmdp_invalidate does exactly that: it won't show a zero pmd,
instead it does:

	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
change_huge_pmd under mmap_sem for reading is a relatively recent
introduction; in the older THP code that could never happen, because it
always ran under the mmap_sem for writing. It wasn't updated to cover
this race condition when it started to be taken for reading to arm
NUMA hinting faults in task work.
madvise_free_huge_pmd, introduced with MADV_FREE, is also a newer
addition not present in the older THP code.
Whenever the mmap_sem is taken only for reading, the pmd shouldn't be
zeroed out at any point in time; instead the code should do what
split_huge_page does via pmdp_invalidate. It's not hard to atomically
update the pmd with a new value:
#ifndef __HAVE_ARCH_PMDP_INVALIDATE
void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
		     pmd_t *pmdp)
{
	/* The pmd is stable under the pmd_lock: just read it... */
	pmd_t entry = *pmdp;

	/* ...and overwrite it in place with a not-present copy: one
	 * atomic store, so the pmd is never observed as pmd_none(). */
	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
}
#endif
set_pmd_at will do: it's atomic as far as C is concerned (not
guaranteed by the C standard, but we always depend on gcc being smart
enough to emit an atomic update in all pgtable updates; nothing special
here).
Simply removing pmdp_huge_get_and_clear_full and calling set_pmd_at
directly should do the trick. The pmd can't change while the pmd_lock
is held, so we just have to read it, update it, and overwrite it
atomically with set_pmd_at. It will actually speed up the code by
removing an unnecessary clear.
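Applied, say, to the change_huge_pmd prot_numa path, the idea looks
roughly like this (a minimal sketch reusing change_huge_pmd's own
mm/addr/pmd/newprot parameters; not a literal diff):

	pmd_t entry;

	/* the pmd can't change while the pmd_lock is held: read it, */
	entry = *pmd;
	/* apply the new protection, */
	entry = pmd_modify(entry, newprot);
	/*
	 * and overwrite it with one atomic store: no clear step, hence
	 * no transient pmd_none() window and one less pgtable write.
	 */
	set_pmd_at(mm, addr, pmd, entry);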
The only reason for doing pmdp_huge_get_and_clear_full in the pte cases
is to avoid losing the dirty bit updated in hardware. That is a
non-issue for anon memory, where all THP anon memory is always dirty:
it can't be swapped natively, and it can never be freed and dropped
unless it's split first (at which point it's no longer a huge pmd).
The problem with the dirty bit happens in this sequence:

	pmd_t pmd = *pmdp;	/* dirty bit is not set in pmd */
	/* hardware sets the dirty bit in *pmdp here */
	set_pmd_at(..., pmd);	/* dirty bit lost */

pmdp_huge_get_and_clear_full prevents the above, so it's preferable,
but it's unusable outside of mmap_sem held for writing.
tmpfs THP should do the same, as the swap API isn't capable of natively
creating large chunks.
If we were to track the dirty bit, what we could do is introduce an
xchg-based pmdp_invalidate that, instead of setting the pmd to zero,
sets it to not-present or to a migration entry.
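As a rough x86-style sketch of that idea (the helper name, and using
xchg() directly on the raw pmd word, are assumptions for illustration,
not existing generic code):

	static pmd_t pmdp_invalidate_xchg(struct vm_area_struct *vma,
					  unsigned long address, pmd_t *pmdp)
	{
		pmd_t new = pmd_mknotpresent(*pmdp);
		pmd_t old;

		/*
		 * xchg returns the previous value, so a dirty bit set
		 * by hardware between the read above and this store is
		 * captured in "old" instead of being silently lost,
		 * and the pmd is never zero at any point in time.
		 */
		old = native_make_pmd(xchg(&pmdp->pmd, native_pmd_val(new)));
		flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
		return old;
	}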
In short, either we drop pmdp_huge_get_and_clear_full entirely and only
ever use set_pmd_at, like the old THP code did, or we replace it with a
variant that can be called with only the mmap_sem for reading and the
pmd_lock, one that doesn't clear the pmd but still prevents the
hardware from setting the dirty bit while we update it. Whichever we
pick has to be used consistently: if we don't, there's no point wasting
CPU on those xchg, since other code paths would eventually lose dirty
bits anyway.
Thanks,
Andrea