From: "Huang, Ying"
To: Peter Zijlstra
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
 Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman, Ingo Molnar,
 Dave Hansen, Dan Williams, Fengguang Wu
Subject: [RFC 06/10] autonuma, memory tiering: Skip to scan fastest memory
Date: Fri, 1 Nov 2019 15:57:23 +0800
Message-Id: <20191101075727.26683-7-ying.huang@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191101075727.26683-1-ying.huang@intel.com>
References: <20191101075727.26683-1-ying.huang@intel.com>

From: Huang Ying

In memory tiering NUMA balancing mode, the hot pages of the workload in
the fastest memory node cannot be promoted anywhere further, so there is
no need to identify them by changing their PTE mappings to PROT_NONE.
Skipping the scan for those pages avoids the corresponding hint page
faults as well.
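To make the new check concrete before the diff: in tiering-only mode, a
page is worth scanning only if it has somewhere faster to go. Below is a
minimal user-space sketch of that predicate. next_promotion_node() and
NUMA_BALANCING_NORMAL mirror the names used by this series, but the flag
value and the two-node DRAM/PMEM topology here are mocked purely for
illustration.

/*
 * Minimal, self-contained sketch of the skip condition added below.
 * The flag value and topology are mocked; only the shape of the
 * predicate matches the kernel hunks in this patch.
 */
#include <stdio.h>

#define NUMA_BALANCING_NORMAL	0x1	/* classic access-locality balancing */

static int sysctl_numa_balancing_mode;	/* tiering-only: NORMAL bit clear */

/* Mock topology: node 1 (PMEM) promotes to node 0 (DRAM, top tier). */
static int next_promotion_node(int nid)
{
	return nid == 1 ? 0 : -1;
}

/* The predicate added to change_huge_pmd() and change_pte_range(). */
static int skip_numa_scan(int page_nid)
{
	/* Normal balancing wants hint faults on every node. */
	if (sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL)
		return 0;
	/* Tiering only: a top-tier page has no promotion target. */
	return next_promotion_node(page_nid) == -1;
}

int main(void)
{
	printf("DRAM page (node 0): %s\n", skip_numa_scan(0) ? "skip" : "scan");
	printf("PMEM page (node 1): %s\n", skip_numa_scan(1) ? "skip" : "scan");
	return 0;
}

With the NORMAL bit clear this prints "skip" for the DRAM page and
"scan" for the PMEM page, which corresponds to the "goto unlock" and
"continue" paths in the hunks below.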
The patch improves the score of the pmbench memory accessing benchmark
with an 80:20 read/write ratio and normal access address distribution by
4.6% on a 2-socket Intel server with Optane DC Persistent Memory. The
autonuma hint faults for the DRAM node are reduced to almost 0 in the
test.

Known problem: the autonuma statistics, such as per-node memory accesses
and the local/remote ratio, will be skewed. In particular, the automatic
adjustment of the NUMA scanning period will not work reasonably, so we
cannot rely on it. Fortunately, there are no CPUs in the PMEM NUMA
nodes, so tasks will not be moved there because of the skewed
statistics.

Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Dave Hansen
Cc: Dan Williams
Cc: Fengguang Wu
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 mm/huge_memory.c | 30 +++++++++++++++++++++---------
 mm/mprotect.c    | 14 +++++++++++++-
 2 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 885642c82aaa..61e241ce20fa 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -1937,17 +1938,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 #endif

-	/*
-	 * Avoid trapping faults against the zero page. The read-only
-	 * data is likely to be read-cached on the local CPU and
-	 * local/remote hits to the zero page are not interesting.
-	 */
-	if (prot_numa && is_huge_zero_pmd(*pmd))
-		goto unlock;
+	if (prot_numa) {
+		struct page *page;
+		/*
+		 * Avoid trapping faults against the zero page. The read-only
+		 * data is likely to be read-cached on the local CPU and
+		 * local/remote hits to the zero page are not interesting.
+		 */
+		if (is_huge_zero_pmd(*pmd))
+			goto unlock;

-	if (prot_numa && pmd_protnone(*pmd))
-		goto unlock;
+		if (pmd_protnone(*pmd))
+			goto unlock;

+		page = pmd_page(*pmd);
+		/*
+		 * Skip if normal numa balancing is disabled and no
+		 * faster memory node to promote to
+		 */
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+		    next_promotion_node(page_to_nid(page)) == -1)
+			goto unlock;
+	}
 	/*
 	 * In case prot_numa, we are under down_read(mmap_sem). It's critical
 	 * to not clear pmd intermittently to avoid race with MADV_DONTNEED
diff --git a/mm/mprotect.c b/mm/mprotect.c
index d69b9913388e..0636f2e5e05b 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -79,6 +80,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	 */
 	if (prot_numa) {
 		struct page *page;
+		int nid;

 		/* Avoid TLB flush if possible */
 		if (pte_protnone(oldpte))
@@ -105,7 +107,17 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		 * Don't mess with PTEs if page is already on the node
 		 * a single-threaded process is running on.
 		 */
-		if (target_node == page_to_nid(page))
+		nid = page_to_nid(page);
+		if (target_node == nid)
+			continue;
+
+		/*
+		 * Skip scanning if normal numa
+		 * balancing is disabled and no faster
+		 * memory node to promote to
+		 */
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+		    next_promotion_node(nid) == -1)
 			continue;
 		}

-- 
2.23.0