Subject: Re: [PATCH 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Gerald Schaefer
Cc: akpm@linux-foundation.org, mike.kravetz@oracle.com, catalin.marinas@arm.com,
 will@kernel.org, tsbogend@alpha.franken.de, James.Bottomley@HansenPartnership.com,
 deller@gmx.de, mpe@ellerman.id.au, benh@kernel.crashing.org, paulus@samba.org,
 hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com, svens@linux.ibm.com, ysato@users.sourceforge.jp,
 dalias@libc.org, davem@davemloft.net, arnd@arndb.de,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
 linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org
Date: Sat, 30 Apr 2022 11:22:33 +0800
In-Reply-To: <20220429220214.4cfc5539@thinkpad>
References: <20220429220214.4cfc5539@thinkpad>

On 4/30/2022 4:02 AM, Gerald Schaefer wrote:
> On Fri, 29 Apr 2022 16:14:43 +0800
> Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
>
>> On some architectures (like ARM64), it can support CONT-PTE/PMD size
>> hugetlb, which means it can support
not only PMD/PUD size hugetlb: 2M and 1G, but also CONT-PTE/PMD size:
>> 64K and 32M if a 4K page size is specified.
>>
>> When unmapping a hugetlb page, we will get the relevant page table
>> entry by huge_pte_offset() only once to nuke it. This is correct
>> for PMD or PUD size hugetlb, since they always contain only one
>> pmd entry or pud entry in the page table.
>>
>> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
>> since they can contain several contiguous pte or pmd entries with
>> the same page table attributes, so we will nuke only one pte or pmd
>> entry for this CONT-PTE/PMD size hugetlb page.
>>
>> And now we only use try_to_unmap() to unmap a poisoned hugetlb page,
>> which means we will unmap only one pte entry for a CONT-PTE or
>> CONT-PMD size poisoned hugetlb page, and we can still access the
>> other subpages of that poisoned hugetlb page, which may cause
>> serious issues.
>>
>> So we should change to use huge_ptep_clear_flush() to nuke the
>> hugetlb page table entries to fix this issue, since it already
>> handles CONT-PTE and CONT-PMD size hugetlb.
>>
>> Note we already use set_huge_swap_pte_at() to set a poisoned
>> swap entry for a poisoned hugetlb page.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>  mm/rmap.c | 34 +++++++++++++++++-----------------
>>  1 file changed, 17 insertions(+), 17 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 7cf2408..1e168d7 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1564,28 +1564,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>  				break;
>>  			}
>>  		}
>> +		pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
>
> Unlike in your patch 2/3, I do not see that this (huge) pteval would later
> be used again with set_huge_pte_at() instead of set_pte_at().
> Not sure if
> this (huge) pteval could end up at a set_pte_at() later, but if yes, then
> this would be broken on s390, and you'd need to use set_huge_pte_at()
> instead of set_pte_at() like in your patch 2/3.

IIUC, as I said in the commit message, we only unmap a poisoned hugetlb
page via try_to_unmap(), and the poisoned hugetlb page is remapped with
a poisoned entry by set_huge_swap_pte_at() in try_to_unmap_one(). So I
think there is no need to change set_pte_at() to set_huge_pte_at() for
the other cases, since a hugetlb page will not hit those cases:

	if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
		pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
		if (folio_test_hugetlb(folio)) {
			hugetlb_count_sub(folio_nr_pages(folio), mm);
			set_huge_swap_pte_at(mm, address, pvmw.pte, pteval,
					     vma_mmu_pagesize(vma));
		} else {
			dec_mm_counter(mm, mm_counter(&folio->page));
			set_pte_at(mm, address, pvmw.pte, pteval);
		}
	}

> Please note that huge_ptep_get functions do not return valid PTEs on s390,
> and such PTEs must never be set directly with set_pte_at(), but only with
> set_huge_pte_at().
>
> Background is that, for hugetlb pages, we are of course not really dealing
> with PTEs at this level, but rather PMDs or PUDs, depending on hugetlb size.
> On s390, the layout is quite different for PTEs and PMDs / PUDs, and
> unfortunately the hugetlb code is not properly reflecting this by using
> PMD or PUD types, like the THP code does.
>
> So, as a work-around, on s390, the huge_ptep_xxx functions will return
> only fake PTEs, which must be converted again to a proper PMD or PUD
> before writing them to the page table, which is what happens in
> set_huge_pte_at(), but not in set_pte_at().

Thanks for your explanation. As I said above, I think we've already
handled the hugetlb case with set_huge_swap_pte_at() in
try_to_unmap_one().