From: zi.yan@sent.com
Subject: [PATCH v1 01/12] mm: mempolicy: add queue_pages_node_check()
Date: Mon, 26 Sep 2016 11:22:23 -0400
Message-Id: <20160926152234.14809-2-zi.yan@sent.com>
In-Reply-To: <20160926152234.14809-1-zi.yan@sent.com>
References: <20160926152234.14809-1-zi.yan@sent.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: benh@kernel.crashing.org, mgorman@techsingularity.net,
    kirill.shutemov@linux.intel.com, akpm@linux-foundation.org,
    dave.hansen@linux.intel.com, n-horiguchi@ah.jp.nec.com

From: Naoya Horiguchi

Introduce a separate check routine for the MPOL_MF_INVERT flag. This
patch is just a cleanup; there is no behavioral change.

Signed-off-by: Naoya Horiguchi
---
 mm/mempolicy.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2da72a5..dc8e913 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -475,6 +475,15 @@ struct queue_pages {
 	struct vm_area_struct *prev;
 };
 
+static inline bool queue_pages_node_check(struct page *page,
+					  struct queue_pages *qp)
+{
+	int nid = page_to_nid(page);
+	unsigned long flags = qp->flags;
+
+	return node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT);
+}
+
 /*
  * Scan through pages checking if pages follow certain conditions,
  * and move them to the pagelist if they do.
@@ -528,8 +537,7 @@ retry:
 		 */
 		if (PageReserved(page))
 			continue;
-		nid = page_to_nid(page);
-		if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
+		if (queue_pages_node_check(page, qp))
 			continue;
 		if (PageTransCompound(page)) {
 			get_page(page);
@@ -561,7 +569,6 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 #ifdef CONFIG_HUGETLB_PAGE
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
-	int nid;
 	struct page *page;
 	spinlock_t *ptl;
 	pte_t entry;
@@ -571,8 +578,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	if (!pte_present(entry))
 		goto unlock;
 	page = pte_page(entry);
-	nid = page_to_nid(page);
-	if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
+	if (queue_pages_node_check(page, qp))
 		goto unlock;
 	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
-- 
2.9.3
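
For reference, the program below is a minimal user-space sketch of the
check that queue_pages_node_check() factors out. It is not kernel code:
the nodemask is modeled as a plain unsigned long bitmask and
MPOL_MF_INVERT is given a placeholder bit value purely for illustration;
the real definitions live in mm/mempolicy.c and include/linux/nodemask.h.

#include <stdbool.h>
#include <stdio.h>

/* Placeholder bit for illustration only; not the kernel's definition. */
#define MPOL_MF_INVERT	0x8

/* Mirrors queue_pages_node_check(): returns true when the caller should
 * skip the page, i.e. when node membership matches the INVERT setting.
 */
static bool skip_page(int nid, unsigned long nmask, unsigned long flags)
{
	return !!(nmask & (1UL << nid)) == !!(flags & MPOL_MF_INVERT);
}

int main(void)
{
	unsigned long nmask = (1UL << 0) | (1UL << 2);	/* nodes 0 and 2 */

	printf("node 0, no invert: %s\n", skip_page(0, nmask, 0) ? "skip" : "queue");
	printf("node 1, no invert: %s\n", skip_page(1, nmask, 0) ? "skip" : "queue");
	printf("node 0, invert:    %s\n", skip_page(0, nmask, MPOL_MF_INVERT) ? "skip" : "queue");
	printf("node 1, invert:    %s\n", skip_page(1, nmask, MPOL_MF_INVERT) ? "skip" : "queue");
	return 0;
}

In other words, without MPOL_MF_INVERT only pages whose node is in the
nodemask are considered for queueing, and with MPOL_MF_INVERT set the
selection is inverted so that only pages outside the nodemask are
considered; the helper returns true exactly when the page should be
skipped.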