From: "Kirill A. Shutemov"
Subject: [PATCHv11 07/37] thp, mlock: do not allow huge pages in mlocked area
Date: Fri, 18 Sep 2015 18:01:10 +0300
Message-Id: <1442588500-77331-8-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1442588500-77331-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1442588500-77331-1-git-send-email-kirill.shutemov@linux.intel.com>
To: Andrew Morton, Andrea Arcangeli, Hugh Dickins
Cc: Dave Hansen, Mel Gorman, Rik van Riel, Vlastimil Babka, Christoph Lameter,
    Naoya Horiguchi, Steve Capper, "Aneesh Kumar K.V", Johannes Weiner,
    Michal Hocko, Jerome Marchand, Sasha Levin, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, "Kirill A. Shutemov"

With the new refcounting a THP can belong to several VMAs. This makes it
tricky to track THP pages when they are partially mlocked. It can lead to
leaking mlocked pages into non-VM_LOCKED VMAs and other problems.

With this patch we split all pages on mlock and avoid faulting in or
collapsing new THP in VM_LOCKED VMAs.

I tried an alternative approach: do not mark THP pages mlocked and keep
them on the normal LRUs. This way vmscan could try to split huge pages
under memory pressure and free up the subpages which don't belong to
VM_LOCKED VMAs. But this is a user-visible change: it screws up the
Mlocked accounting reported in meminfo, so I had to set that approach
aside. We can bring something better later, but this should be good
enough for now.
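As a rough illustration (an editorial sketch, not part of the patch), the
user-visible effect can be exercised from user space with plain
mmap()/madvise()/mlock(). The 4M size, the MADV_HUGEPAGE hint and the
assumption of 2M huge pages on x86-64 are illustrative only, and
RLIMIT_MEMLOCK must be large enough for the mlock() calls to succeed:

	/*
	 * Illustration only -- not part of the patch.  Exercises the two
	 * paths changed here: (1) faulting inside an already VM_LOCKED
	 * VMA, where do_huge_pmd_anonymous_page() now falls back to small
	 * pages, and (2) mlocking a range that may already be backed by
	 * THP, where populate_vma_page_range() now adds FOLL_SPLIT so any
	 * huge page is split before it is counted as Mlocked.  The effect
	 * is visible as AnonHugePages in /proc/self/smaps (not checked
	 * here).
	 */
	#define _GNU_SOURCE
	#include <string.h>
	#include <sys/mman.h>

	#define LEN	(4UL << 20)	/* two 2M huge pages on x86-64 */

	int main(void)
	{
		char *p;

		/* Case 1: region is locked before it is populated; all
		 * faults (including those triggered by mlock() itself)
		 * fall back to small pages. */
		p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		madvise(p, LEN, MADV_HUGEPAGE);
		mlock(p, LEN);
		memset(p, 0, LEN);
		munmap(p, LEN);

		/* Case 2: populate first (may get THP), then mlock();
		 * any huge pages are split by mlock() instead of being
		 * mlocked as a whole. */
		p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		madvise(p, LEN, MADV_HUGEPAGE);
		memset(p, 0, LEN);
		mlock(p, LEN);
		munmap(p, LEN);

		return 0;
	}

In both cases the locked range should end up backed by small pages only,
so the Mlocked counter in /proc/meminfo stays exact.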
Signed-off-by: Kirill A. Shutemov
Tested-by: Sasha Levin
Tested-by: Aneesh Kumar K.V
Acked-by: Jerome Marchand
---
 mm/gup.c         |  3 ++-
 mm/huge_memory.c |  5 ++++-
 mm/memory.c      |  3 ++-
 mm/mlock.c       | 51 +++++++++++++++++++--------------------------------
 4 files changed, 27 insertions(+), 35 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 6880085d3790..20fa606b4e76 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -927,7 +927,8 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	gup_flags = FOLL_TOUCH | FOLL_POPULATE | FOLL_MLOCK;
 	if (vma->vm_flags & VM_LOCKONFAULT)
 		gup_flags &= ~FOLL_POPULATE;
-
+	if (vma->vm_flags & VM_LOCKED)
+		gup_flags |= FOLL_SPLIT;
 	/*
 	 * We want to touch writable mappings with a write fault in order
 	 * to break COW, except for shared mappings because these don't COW
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9bcfb0637b0e..1e87739ceefc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -873,6 +873,8 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
 		return VM_FAULT_FALLBACK;
+	if (vma->vm_flags & VM_LOCKED)
+		return VM_FAULT_FALLBACK;
 	if (unlikely(anon_vma_prepare(vma)))
 		return VM_FAULT_OOM;
 	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
@@ -2621,7 +2623,8 @@ static bool hugepage_vma_check(struct vm_area_struct *vma)
 	if ((!(vma->vm_flags & VM_HUGEPAGE) && !khugepaged_always()) ||
 	    (vma->vm_flags & VM_NOHUGEPAGE))
 		return false;
-
+	if (vma->vm_flags & VM_LOCKED)
+		return false;
 	if (!vma->anon_vma || vma->vm_ops)
 		return false;
 	if (is_vma_temporary_stack(vma))
diff --git a/mm/memory.c b/mm/memory.c
index 2b80837368d5..fd9bf1febf06 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2161,7 +2161,8 @@ static int wp_page_copy(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	pte_unmap_unlock(page_table, ptl);
 	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
-	if (old_page) {
+	/* THP pages are never mlocked */
+	if (old_page && !PageTransCompound(old_page)) {
 		/*
 		 * Don't let another task, with possibly unlocked vma,
 		 * keep the mlocked page.
diff --git a/mm/mlock.c b/mm/mlock.c
index 339d9e0949b6..ef5fafd934b6 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -443,39 +443,26 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 		page = follow_page_mask(vma, start, FOLL_GET | FOLL_DUMP,
 				&page_mask);
 
-		if (page && !IS_ERR(page)) {
-			if (PageTransHuge(page)) {
-				lock_page(page);
-				/*
-				 * Any THP page found by follow_page_mask() may
-				 * have gotten split before reaching
-				 * munlock_vma_page(), so we need to recompute
-				 * the page_mask here.
-				 */
-				page_mask = munlock_vma_page(page);
-				unlock_page(page);
-				put_page(page); /* follow_page_mask() */
-			} else {
-				/*
-				 * Non-huge pages are handled in batches via
-				 * pagevec. The pin from follow_page_mask()
-				 * prevents them from collapsing by THP.
-				 */
-				pagevec_add(&pvec, page);
-				zone = page_zone(page);
-				zoneid = page_zone_id(page);
+		if (page && !IS_ERR(page) && !PageTransCompound(page)) {
+			/*
+			 * Non-huge pages are handled in batches via
+			 * pagevec. The pin from follow_page_mask()
+			 * prevents them from collapsing by THP.
+			 */
+			pagevec_add(&pvec, page);
+			zone = page_zone(page);
+			zoneid = page_zone_id(page);
 
-				/*
-				 * Try to fill the rest of pagevec using fast
-				 * pte walk. This will also update start to
-				 * the next page to process. Then munlock the
-				 * pagevec.
-				 */
-				start = __munlock_pagevec_fill(&pvec, vma,
-						zoneid, start, end);
-				__munlock_pagevec(&pvec, zone);
-				goto next;
-			}
+			/*
+			 * Try to fill the rest of pagevec using fast
+			 * pte walk. This will also update start to
+			 * the next page to process. Then munlock the
+			 * pagevec.
+			 */
+			start = __munlock_pagevec_fill(&pvec, vma,
+					zoneid, start, end);
+			__munlock_pagevec(&pvec, zone);
+			goto next;
 		}
 		/* It's a bug to munlock in the middle of a THP page */
 		VM_BUG_ON((start >> PAGE_SHIFT) & page_mask);
-- 
2.5.1