From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mga02.intel.com (mga02.intel.com [134.134.136.20])
	by ozlabs.org (Postfix) with ESMTP id EBF352C00B6
	for ; Fri, 10 Aug 2012 01:04:21 +1000 (EST)
From: "Kirill A. Shutemov"
To: linux-mm@kvack.org
Subject: [PATCH v2 1/6] THP: Use real address for NUMA policy
Date: Thu, 9 Aug 2012 18:02:58 +0300
Message-Id: <1344524583-1096-2-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1344524583-1096-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1344524583-1096-1-git-send-email-kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org, linux-sh@vger.kernel.org, Jan Beulich,
	"H. Peter Anvin", sparclinux@vger.kernel.org, Andrea Arcangeli,
	Andi Kleen, Robert Richter, x86@kernel.org, Hugh Dickins,
	Ingo Molnar, Mel Gorman, Alex Shi, Thomas Gleixner,
	KAMEZAWA Hiroyuki, Tim Chen, linux-kernel@vger.kernel.org,
	Andy Lutomirski, Johannes Weiner, Andrew Morton,
	linuxppc-dev@lists.ozlabs.org, "Kirill A. Shutemov"
List-Id: Linux on PowerPC Developers Mail List

From: Andi Kleen

Use the fault address, not the rounded-down hpage address, for NUMA
policy purposes. In some circumstances this can give a more exact
NUMA policy.

Signed-off-by: Andi Kleen
Signed-off-by: Kirill A. Shutemov
---
 mm/huge_memory.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 57c4b93..70737ec 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -681,11 +681,11 @@ static inline gfp_t alloc_hugepage_gfpmask(int defrag, gfp_t extra_gfp)
 
 static inline struct page *alloc_hugepage_vma(int defrag,
 					      struct vm_area_struct *vma,
-					      unsigned long haddr, int nd,
+					      unsigned long address, int nd,
 					      gfp_t extra_gfp)
 {
 	return alloc_pages_vma(alloc_hugepage_gfpmask(defrag, extra_gfp),
-			       HPAGE_PMD_ORDER, vma, haddr, nd);
+			       HPAGE_PMD_ORDER, vma, address, nd);
 }
 
 #ifndef CONFIG_NUMA
@@ -710,7 +710,7 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (unlikely(khugepaged_enter(vma)))
 		return VM_FAULT_OOM;
 	page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
-				  vma, haddr, numa_node_id(), 0);
+				  vma, address, numa_node_id(), 0);
 	if (unlikely(!page)) {
 		count_vm_event(THP_FAULT_FALLBACK);
 		goto out;
@@ -944,7 +944,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (transparent_hugepage_enabled(vma) &&
 	    !transparent_hugepage_debug_cow())
 		new_page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
-					      vma, haddr, numa_node_id(), 0);
+					      vma, address, numa_node_id(), 0);
 	else
 		new_page = NULL;
 
-- 
1.7.7.6
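
For readers following along, here is a minimal user-space sketch of the
rounding this patch avoids. It is a hypothetical illustration, not part of
the patch: the constants assume x86-64 with 2 MB transparent huge pages,
and the example address is arbitrary. haddr is the hugepage-aligned value
the old code passed to alloc_hugepage_vma().

#include <stdio.h>

/* Assumed values for illustration: 2 MB huge pages as on x86-64. */
#define HPAGE_PMD_SHIFT	21
#define HPAGE_PMD_SIZE	(1UL << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	/* Example fault address, chosen arbitrarily. */
	unsigned long address = 0x7f12345678abUL;
	/* The hugepage-aligned address the code passed before this patch. */
	unsigned long haddr = address & HPAGE_PMD_MASK;

	/*
	 * Any policy decision keyed on the address sees only the rounded
	 * value when haddr is used, so information about where inside the
	 * 2 MB range the fault actually hit is lost.
	 */
	printf("fault address: %#lx\n", address);
	printf("hpage-aligned: %#lx\n", haddr);
	return 0;
}

Since alloc_pages_vma() consults the VMA's memory policy based on the
address it is handed, passing the raw fault address preserves that
fault-granularity information, which is what the description above means
by a "more exact" NUMA policy.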