From: James Houghton <jthoughton@google.com>
Date: Tue, 13 Dec 2022 15:18:41 -0500
Subject: Re: [RFC PATCH v2 11/47] hugetlb: add hugetlb_pmd_alloc and hugetlb_pte_alloc
To: Mike Kravetz
Cc: Muchun Song, Peter Xu, David Hildenbrand, David Rientjes,
 Axel Rasmussen, Mina Almasry, "Zach O'Keefe", Manish Mishra,
 Naoya Horiguchi, "Dr . David Alan Gilbert", "Matthew Wilcox (Oracle)",
 Vlastimil Babka, Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20221021163703.3218176-1-jthoughton@google.com>
 <20221021163703.3218176-12-jthoughton@google.com>

On Tue, Dec 13, 2022 at 2:32 PM Mike Kravetz wrote:
>
> On 10/21/22 16:36, James Houghton wrote:
> > These functions are used to allocate new PTEs below the hstate PTE. This
> > will be used by hugetlb_walk_step, which implements stepping forwards in
> > a HugeTLB high-granularity page table walk.
> >
> > The reasons that we don't use the standard pmd_alloc/pte_alloc*
> > functions are:
> >  1) This prevents us from accidentally overwriting swap entries or
> >     attempting to use swap entries as present non-leaf PTEs (see
> >     pmd_alloc(); we assume that !pte_none means pte_present and
> >     non-leaf).
> >  2) Locking hugetlb PTEs can be different from regular PTEs. (Although,
> >     as implemented right now, locking is the same.)
> >  3) We can maintain compatibility with CONFIG_HIGHPTE. That is, HugeTLB
> >     HGM won't use HIGHPTE, but the kernel can still be built with it,
> >     and other mm code will use it.
> >
> > When GENERAL_HUGETLB supports P4D-based hugepages, we will need to
> > implement hugetlb_pud_alloc to implement hugetlb_walk_step.
> >
> > Signed-off-by: James Houghton
> > ---
> >  include/linux/hugetlb.h |  5 +++
> >  mm/hugetlb.c            | 94 +++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 99 insertions(+)
> >
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index d30322108b34..003255b0e40f 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -119,6 +119,11 @@ void hugetlb_pte_copy(struct hugetlb_pte *dest, const struct hugetlb_pte *src)
> >
> >  bool hugetlb_pte_present_leaf(const struct hugetlb_pte *hpte, pte_t pte);
> >
> > +pmd_t *hugetlb_pmd_alloc(struct mm_struct *mm, struct hugetlb_pte *hpte,
> > +		unsigned long addr);
> > +pte_t *hugetlb_pte_alloc(struct mm_struct *mm, struct hugetlb_pte *hpte,
> > +		unsigned long addr);
> > +
> >  struct hugepage_subpool {
> >  	spinlock_t lock;
> >  	long count;
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index a0e46d35dabc..e3733388adee 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -341,6 +341,100 @@ static bool has_same_uncharge_info(struct file_region *rg,
> >  #endif
> >  }
> >
> > +pmd_t *hugetlb_pmd_alloc(struct mm_struct *mm, struct hugetlb_pte *hpte,
> > +		unsigned long addr)
>
> A little confused as there are no users yet ... Is hpte the 'hstate PTE'
> that we are trying to allocate ptes under? For example, in the case of
> a hugetlb_pmd_alloc caller hpte would be a PUD or CONT_PMD size pte?

The hpte is the level above the level we're trying to allocate (not
necessarily the 'hstate PTE'). I'll make that clear in the comments for
both functions. So consider allocating 4K PTEs for a 1G HugeTLB page:
- With the hstate 'PTE' (PUD), we make a hugetlb_pte with that PUD
  (let's call it 'hpte').
- We call hugetlb_pmd_alloc(hpte), which will leave 'hpte' the same,
  but the pud_t that hpte->ptep points to is no longer a leaf.
- We call hugetlb_walk_step(hpte) to step down a level to get a PMD,
  changing hpte. The hpte->ptep is now pointing to a blank pmd_t.
- We call hugetlb_pte_alloc(hpte) to allocate a bunch of PTEs and
  populate the pmd_t.
- We call hugetlb_walk_step(hpte) to step down again (see the sketch
  below).

This is basically what hugetlb_hgm_walk does (in the next patch). We
only change 'hpte' when we do a step, and that is when we populate
'shift'. The 'sz' parameter for hugetlb_walk_step is what architectures
can use to populate hpte->shift appropriately (ignored for x86). For
arm64, we can use 'sz' to populate hpte->shift with what the caller
wants when we are free to choose (for example, if all the PTEs are
none, we can do CONT_PTE_SHIFT). See [1]'s implementation of
hugetlb_walk_step for what I *think* is correct for arm64.

[1] https://github.com/48ca/linux/commit/bf3b8742e95c58c2431c80c5bed5cb5cb95885af
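For illustration, here is a minimal sketch of the sequence listed above.
The hugetlb_pmd_alloc/hugetlb_pte_alloc prototypes and HUGETLB_LEVEL_PUD
come from the quoted patch; the hugetlb_pte_populate and hugetlb_walk_step
calls, their exact signatures, and the error handling are assumptions made
only for this example, not the real hugetlb_hgm_walk implementation:

/*
 * Illustrative sketch only: split a 1G (PUD-level) mapping down to 4K
 * PTEs by following the steps described above. hugetlb_pte_populate()
 * and hugetlb_walk_step() are assumed helpers from this series and
 * their signatures may differ; locking and most error handling are
 * omitted.
 */
static int hgm_split_pud_to_4k_sketch(struct mm_struct *mm, pud_t *pudp,
				      unsigned long addr)
{
	struct hugetlb_pte hpte;
	pmd_t *pmdp;
	pte_t *ptep;

	/* Start from the hstate 'PTE' (here: the PUD) as a hugetlb_pte. */
	hugetlb_pte_populate(&hpte, (pte_t *)pudp, PUD_SHIFT,
			     HUGETLB_LEVEL_PUD);

	/* Allocate a PMD table under the PUD; 'hpte' itself is unchanged. */
	pmdp = hugetlb_pmd_alloc(mm, &hpte, addr);
	if (IS_ERR(pmdp))
		return PTR_ERR(pmdp);

	/* Step down one level: hpte->ptep now points at a blank pmd_t. */
	hugetlb_walk_step(mm, &hpte, addr, PAGE_SIZE);

	/* Allocate the PTE table that populates that pmd_t. */
	ptep = hugetlb_pte_alloc(mm, &hpte, addr);
	if (IS_ERR(ptep))
		return PTR_ERR(ptep);

	/* Step down again: 'hpte' now describes a 4K pte_t. */
	hugetlb_walk_step(mm, &hpte, addr, PAGE_SIZE);
	return 0;
}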
>
> > +{
> > +	spinlock_t *ptl = hugetlb_pte_lockptr(mm, hpte);
> > +	pmd_t *new;
> > +	pud_t *pudp;
> > +	pud_t pud;
> > +
> > +	if (hpte->level != HUGETLB_LEVEL_PUD)
> > +		return ERR_PTR(-EINVAL);
>
> Ah yes, it is PUD level. However, I guess CONT_PMD would also be valid
> on arm64?

The level is always PGD, P4D, PUD, PMD, or PTE. CONT_PTE is on
HUGETLB_LEVEL_PTE, and CONT_PMD is on HUGETLB_LEVEL_PMD. These functions
are meant to be used by all architectures in their implementations of
'hugetlb_walk_step'; that's why they're not static, actually. I'll make
that clear in the commit description.

>
> > +
> > +	pudp = (pud_t *)hpte->ptep;
> > +retry:
> > +	pud = *pudp;
>
> We might want to consider a READ_ONCE here. I am not an expert on such
> things, but recall a similar issue pointed out in the now obsolete
> commit 27ceae9833843.

Agreed. Will try to change all PTE reading to use READ_ONCE, though
they can be easy to miss... :(

Thanks very much for the reviews so far, Mike!

- James

>
> --
> Mike Kravetz
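As a follow-up to the READ_ONCE discussion above, here is a hedged sketch
of what that change could look like in the quoted retry loop of
hugetlb_pmd_alloc; only the read itself changes, and the rest of the
function (not shown in the quoted snippet) is assumed to stay the same:

	pudp = (pud_t *)hpte->ptep;
retry:
	/*
	 * Snapshot the PUD exactly once per iteration so the compiler
	 * cannot reload *pudp and observe two different values while it
	 * is being examined (the concern raised above).
	 */
	pud = READ_ONCE(*pudp);
	/* ... remainder of the allocation/retry logic unchanged ... */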