From mboxrd@z Thu Jan 1 00:00:00 1970
From: alexs@kernel.org
To: Will Deacon, "Aneesh Kumar K . V", Nick Piggin, Peter Zijlstra,
	Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
	Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson, Stafford Horne,
	Michael Ellerman, Naveen N Rao, Paul Walmsley, Albert Ou,
	Thomas Gleixner, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H . Peter Anvin", Andy Lutomirski, Bibo Mao, Baolin Wang,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, Heiko Carstens, Vasily Gorbik,
	Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
	"Aneesh Kumar K . V", Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
	linux-s390@vger.kernel.org
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
	Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
	Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
	Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
	Matthew Wilcox, Alex Shi, Andrew Morton
Subject: [RFC PATCH 05/18] mm/thp: use ptdesc in do_huge_pmd_anonymous_page
Date: Tue, 30 Jul 2024 14:46:59 +0800
Message-ID: <20240730064712.3714387-6-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240730064712.3714387-1-alexs@kernel.org>
References: <20240730064712.3714387-1-alexs@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Alex Shi

Since we have the ptdesc struct now, better to use it to replace
pgtable_t, aka 'struct page *'. It is also a preparation for returning
a ptdesc pointer from the pte_alloc_one series of functions.

Signed-off-by: Alex Shi
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Andrew Morton
---
 mm/huge_memory.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0ee104093121..d86108d81a99 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1087,16 +1087,16 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
 			!mm_forbids_zeropage(vma->vm_mm) &&
 			transparent_hugepage_use_zero_page()) {
-		pgtable_t pgtable;
+		struct ptdesc *ptdesc;
 		struct folio *zero_folio;
 		vm_fault_t ret;
 
-		pgtable = pte_alloc_one(vma->vm_mm);
-		if (unlikely(!pgtable))
+		ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+		if (unlikely(!ptdesc))
 			return VM_FAULT_OOM;
 		zero_folio = mm_get_huge_zero_folio(vma->vm_mm);
 		if (unlikely(!zero_folio)) {
-			pte_free(vma->vm_mm, pgtable);
+			pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 			count_vm_event(THP_FAULT_FALLBACK);
 			return VM_FAULT_FALLBACK;
 		}
@@ -1106,21 +1106,21 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 			ret = check_stable_address_space(vma->vm_mm);
 			if (ret) {
 				spin_unlock(vmf->ptl);
-				pte_free(vma->vm_mm, pgtable);
+				pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 			} else if (userfaultfd_missing(vma)) {
 				spin_unlock(vmf->ptl);
-				pte_free(vma->vm_mm, pgtable);
+				pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 				ret = handle_userfault(vmf, VM_UFFD_MISSING);
 				VM_BUG_ON(ret & VM_FAULT_FALLBACK);
 			} else {
-				set_huge_zero_folio(pgtable, vma->vm_mm, vma,
+				set_huge_zero_folio(ptdesc_page(ptdesc), vma->vm_mm, vma,
 						haddr, vmf->pmd, zero_folio);
 				update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 				spin_unlock(vmf->ptl);
 			}
 		} else {
 			spin_unlock(vmf->ptl);
-			pte_free(vma->vm_mm, pgtable);
+			pte_free(vma->vm_mm, ptdesc_page(ptdesc));
 		}
 		return ret;
 	}
-- 
2.43.0
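
For reference, and not part of the patch itself: a minimal sketch of the
conversion pattern the hunks above rely on. It assumes only the existing
page_ptdesc()/ptdesc_page() helpers from <linux/mm_types.h> and the
pgtable_t-returning pte_alloc_one(); the helper names alloc_pte_ptdesc()
and free_pte_ptdesc() are made up here purely for illustration.

	#include <linux/mm.h>
	#include <asm/pgalloc.h>

	/*
	 * pte_alloc_one() still returns pgtable_t (struct page *), so
	 * convert to struct ptdesc * with page_ptdesc() right at the
	 * allocation site, and convert back with ptdesc_page() whenever
	 * an API such as pte_free() still expects a page.
	 */
	static struct ptdesc *alloc_pte_ptdesc(struct mm_struct *mm)
	{
		struct ptdesc *ptdesc = page_ptdesc(pte_alloc_one(mm));

		if (unlikely(!ptdesc))	/* allocation failed */
			return NULL;
		return ptdesc;
	}

	static void free_pte_ptdesc(struct mm_struct *mm, struct ptdesc *ptdesc)
	{
		/* Hand the underlying page back to the pgtable_t API. */
		pte_free(mm, ptdesc_page(ptdesc));
	}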