Date: Wed, 7 Feb 2024 11:21:17 +0000
From: Matthew Wilcox
To: Will Deacon
Cc: Nanyong Sun, Catalin Marinas, mike.kravetz@oracle.com,
	muchun.song@linux.dev, akpm@linux-foundation.org,
	anshuman.khandual@arm.com, wangkefeng.wang@huawei.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize
References: <20240113094436.2506396-1-sunnanyong@huawei.com>
	<20240207111252.GA22167@willie-the-truck>
In-Reply-To: <20240207111252.GA22167@willie-the-truck>

On Wed, Feb 07, 2024 at 11:12:52AM +0000, Will Deacon wrote:
> On Sat, Jan 27, 2024 at 01:04:15PM +0800, Nanyong Sun wrote:
> > 
> > On 2024/1/26 2:06, Catalin Marinas wrote:
> > > On Sat, Jan 13, 2024 at 05:44:33PM +0800, Nanyong Sun wrote:
> > > > HVO was previously disabled on arm64 [1] due to the lack of the
> > > > necessary BBM (break-before-make) logic when changing page tables.
> > > > This set of patches fixes that by adding the necessary BBM sequence
> > > > when changing page tables, and by supporting vmemmap page fault
> > > > handling to fix up kernel address translation faults when the
> > > > vmemmap is concurrently accessed.
> > > I'm not keen on this approach. I'm not even sure it's safe. In the
> > > second patch, you take the init_mm.page_table_lock on the fault path,
> > > but are we sure it is unlocked when the fault is taken?
> > I think this situation is impossible. In the implementation of the
> > second patch, when the page table is being modified (the time window
> > in which a page fault may occur), vmemmap_update_pte() already holds
> > init_mm.page_table_lock and does not release it until the page table
> > update is done. Another thread therefore cannot hold
> > init_mm.page_table_lock and trigger a page fault at the same time.
> > If I have missed anything in my reasoning, please correct me. Thank you.
> 
> It still strikes me as incredibly fragile to handle the fault, and
> trying to reason about all the users of 'struct page' is impossible.
> For example, can the fault happen from irq context?

The pte lock cannot be taken in irq context (which I think is what
you're asking?)  While it is not possible to reason about all users of
struct page, we are somewhat relieved of that work by noting that this
is only for hugetlbfs, so we don't need to reason about slab, page
tables, netmem or zsmalloc.

> If we want to optimise the vmemmap mapping for arm64, I think we need
> to consider approaches which avoid the possibility of the fault
> altogether. It's more complicated to implement, but I think it would
> be a lot more robust.
> 
> Andrew -- please can you drop these from -next?
> 
> Thanks,
> 
> Will
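
For anyone following along, the break-before-make sequence the cover
letter refers to looks, in outline, like the following. This is only a
sketch of the generic BBM pattern, not code from the series, and the
helper name is made up:

	/*
	 * Illustrative only: break-before-make update of a live kernel
	 * PTE, as the arm64 architecture requires when changing an
	 * existing translation.
	 */
	static void bbm_set_pte(pte_t *ptep, unsigned long addr, pte_t new)
	{
		/* Break: remove the old entry... */
		pte_clear(&init_mm, addr, ptep);

		/* ...and make its disappearance visible to all CPUs. */
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

		/*
		 * Any concurrent access to this vmemmap address now
		 * faults; this window is why the series adds a kernel
		 * page fault handler.
		 */

		/* Make: install the new entry. */
		set_pte_at(&init_mm, addr, ptep, new);
	}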
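
The locking scheme Nanyong describes then serialises that window
against the fault handler. Again a sketch with invented names (only
init_mm.page_table_lock and vmemmap_update_pte() come from the thread):

	static void vmemmap_update_pte(pte_t *ptep, unsigned long addr,
				       pte_t new)
	{
		spin_lock(&init_mm.page_table_lock);
		bbm_set_pte(ptep, addr, new);	/* break + make, as above */
		spin_unlock(&init_mm.page_table_lock);
	}

	/* Called from the kernel fault path for vmemmap addresses. */
	static bool vmemmap_handle_fault(unsigned long addr)
	{
		pte_t *ptep = virt_to_kpte(addr);
		bool valid;

		/*
		 * Taking the lock blocks until any in-flight BBM update
		 * has finished, so the new entry is in place by the time
		 * we re-read the PTE.
		 */
		spin_lock(&init_mm.page_table_lock);
		valid = ptep && pte_valid(READ_ONCE(*ptep));
		spin_unlock(&init_mm.page_table_lock);

		/* Valid again: the faulting access can simply be retried. */
		return valid;
	}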