From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: akpm@linux-foundation.org, Andrea Arcangeli
Cc: Zi Yan, Yang Shi, Ralph Campbell, John Hubbard, William Kucharski,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Shutemov" Subject: [PATCH 2/6] khugepaged: Do not stop collapse if less than half PTEs are referenced Date: Mon, 13 Apr 2020 13:04:43 +0300 Message-Id: <20200413100447.20073-3-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200413100447.20073-1-kirill.shutemov@linux.intel.com> References: <20200413100447.20073-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: __collapse_huge_page_swapin() checks the number of referenced PTE to decide if the memory range is hot enough to justify swapin. We have few problems with the approach: - It is way too late: we can do the check much earlier and safe time. khugepaged_scan_pmd() already knows if we have any pages to swap in and number of referenced page. - It stops collapse altogether if there's not enough referenced pages, not only swappingin. Fix it by making the right check early. We also can avoid additional page table scanning if khugepaged_scan_pmd() haven't found any swap entries. Signed-off-by: Kirill A. Shutemov Fixes: 0db501f7a34c ("mm, thp: convert from optimistic swapin collapsing = to conservative") --- mm/khugepaged.c | 25 ++++++++++--------------- 1 file changed, 10 insertions(+), 15 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 99bab7e4d05b..5968ec5ddd6b 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -902,11 +902,6 @@ static bool __collapse_huge_page_swapin(struct mm_st= ruct *mm, .pgoff =3D linear_page_index(vma, address), }; =20 - /* we only decide to swapin, if there is enough young ptes */ - if (referenced < HPAGE_PMD_NR/2) { - trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0); - return false; - } vmf.pte =3D pte_offset_map(pmd, address); for (; vmf.address < address + HPAGE_PMD_NR*PAGE_SIZE; vmf.pte++, vmf.address +=3D PAGE_SIZE) { @@ -946,7 +941,7 @@ static bool __collapse_huge_page_swapin(struct mm_str= uct *mm, static void collapse_huge_page(struct mm_struct *mm, unsigned long address, struct page **hpage, - int node, int referenced) + int node, int referenced, int unmapped) { pmd_t *pmd, _pmd; pte_t *pte; @@ -1003,7 +998,8 @@ static void collapse_huge_page(struct mm_struct *mm, * If it fails, we release mmap_sem and jump out_nolock. * Continuing to collapse causes inconsistency. 
 	 */
-	if (!__collapse_huge_page_swapin(mm, vma, address, pmd, referenced)) {
+	if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
+						     pmd, referenced)) {
 		mem_cgroup_cancel_charge(new_page, memcg, true);
 		up_read(&mm->mmap_sem);
 		goto out_nolock;
@@ -1214,22 +1210,21 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		    mmu_notifier_test_young(vma->vm_mm, address))
 			referenced++;
 	}
-	if (writable) {
-		if (referenced) {
+	if (!writable) {
+		result = SCAN_PAGE_RO;
+	} else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
+		result = SCAN_LACK_REFERENCED_PAGE;
+	} else {
 		result = SCAN_SUCCEED;
 		ret = 1;
-		} else {
-			result = SCAN_LACK_REFERENCED_PAGE;
-		}
-	} else {
-		result = SCAN_PAGE_RO;
 	}
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
 		node = khugepaged_find_target_node();
 		/* collapse_huge_page will return with the mmap_sem released */
-		collapse_huge_page(mm, address, hpage, node, referenced);
+		collapse_huge_page(mm, address, hpage, node,
+				   referenced, unmapped);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
-- 
2.26.0
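
For reference (not part of the original posting): a minimal,
userspace-compilable sketch of the decision logic this patch
consolidates in khugepaged_scan_pmd(). Only the enum names,
HPAGE_PMD_NR, and the branch condition mirror the kernel code above;
scan_decision(), its parameters, and the main() driver are hypothetical
stand-ins for illustration.

#include <stdbool.h>
#include <stdio.h>

#define HPAGE_PMD_NR 512	/* PTEs per PMD-sized huge page on x86-64 */

enum scan_result {
	SCAN_SUCCEED,
	SCAN_PAGE_RO,
	SCAN_LACK_REFERENCED_PAGE,
};

/*
 * Hypothetical stand-in for the check the patch moves into
 * khugepaged_scan_pmd(): the swap-in cost (referenced < HPAGE_PMD_NR/2)
 * is only weighed when there is actually something to swap in
 * (unmapped > 0), and the decision is made during the scan instead of
 * deep inside __collapse_huge_page_swapin().
 */
static enum scan_result scan_decision(bool writable, int referenced,
				      int unmapped)
{
	if (!writable)
		return SCAN_PAGE_RO;
	if (!referenced || (unmapped && referenced < HPAGE_PMD_NR / 2))
		return SCAN_LACK_REFERENCED_PAGE;
	return SCAN_SUCCEED;
}

int main(void)
{
	/*
	 * Fully mapped, lightly referenced range: collapses after the
	 * patch; before it, the late swap-in check aborted the collapse
	 * even though there was nothing to swap in.
	 */
	printf("%d\n", scan_decision(true, 1, 0) == SCAN_SUCCEED);

	/*
	 * Range with swap entries but few referenced PTEs: the swap-in
	 * is judged not worth it, now before any huge page is allocated.
	 */
	printf("%d\n", scan_decision(true, 10, 5) ==
		       SCAN_LACK_REFERENCED_PAGE);
	return 0;
}

Both checks print 1, matching the two behavioral changes described in
the commit message.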