Date: Wed, 21 Feb 2024 19:49:22 +0800
From: Peter Xu
To: Jason Gunthorpe
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton,
	David Hildenbrand, "Kirill A . Shutemov", Yang Shi,
	linux-riscv@lists.infradead.org, Andrew Morton,
	"Aneesh Kumar K . V", Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones,
	linuxppc-dev@lists.ozlabs.org, Mike Kravetz, Muchun Song,
	linux-arm-kernel@lists.infradead.org, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox
Subject: Re: [PATCH v2 10/13] mm/gup: Handle huge pud for follow_pud_mask()
Message-ID:
References: <20240103091423.400294-1-peterx@redhat.com>
 <20240103091423.400294-11-peterx@redhat.com>
 <20240115184900.GV734935@nvidia.com>
In-Reply-To: <20240115184900.GV734935@nvidia.com>

On Mon, Jan 15, 2024 at 02:49:00PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:20PM +0800, peterx@redhat.com wrote:
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 63845b3ec44f..760406180222 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
> >  	return NULL;
> >  }
> >
> > +#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
> > +static struct page *follow_huge_pud(struct vm_area_struct *vma,
> > +				    unsigned long addr, pud_t *pudp,
> > +				    int flags, struct follow_page_context *ctx)
> > +{
> > +	struct mm_struct *mm = vma->vm_mm;
> > +	struct page *page;
> > +	pud_t pud = *pudp;
> > +	unsigned long pfn = pud_pfn(pud);
> > +	int ret;
> > +
> > +	assert_spin_locked(pud_lockptr(mm, pudp));
> > +
> > +	if ((flags & FOLL_WRITE) && !pud_write(pud))
> > +		return NULL;
> > +
> > +	if (!pud_present(pud))
> > +		return NULL;
> > +
> > +	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
> > +
> > +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> > +	if (pud_devmap(pud)) {
>
> Can this use IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) ?

Sure.

> > +		/*
> > +		 * device mapped pages can only be returned if the caller
> > +		 * will manage the page reference count.
> > +		 *
> > +		 * At least one of FOLL_GET | FOLL_PIN must be set, so
> > +		 * assert that here:
> > +		 */
> > +		if (!(flags & (FOLL_GET | FOLL_PIN)))
> > +			return ERR_PTR(-EEXIST);
> > +
> > +		if (flags & FOLL_TOUCH)
> > +			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
> > +
> > +		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
> > +		if (!ctx->pgmap)
> > +			return ERR_PTR(-EFAULT);
> > +	}
> > +#endif	/* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
> > +	page = pfn_to_page(pfn);
> > +
> > +	if (!pud_devmap(pud) && !pud_write(pud) &&
> > +	    gup_must_unshare(vma, flags, page))
> > +		return ERR_PTR(-EMLINK);
> > +
> > +	ret = try_grab_page(page, flags);
> > +	if (ret)
> > +		page = ERR_PTR(ret);
> > +	else
> > +		ctx->page_mask = HPAGE_PUD_NR - 1;
> > +
> > +	return page;
> > +}
> > +#else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
> > +static struct page *follow_huge_pud(struct vm_area_struct *vma,
> > +				    unsigned long addr, pud_t *pudp,
> > +				    int flags, struct follow_page_context *ctx)
> > +{
> > +	return NULL;
> > +}
> > +#endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
> > +
> >  static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
> >  			  pte_t *pte, unsigned int flags)
> >  {
> > @@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
> >
> >  	pudp = pud_offset(p4dp, address);
> >  	pud = READ_ONCE(*pudp);
> > -	if (pud_none(pud))
> > +	if (pud_none(pud) || !pud_present(pud))
> >  		return no_page_table(vma, flags, address);
>
> Isn't 'pud_none() || !pud_present()' redundant? A none pud is
> non-present, by definition?

Hmm yes, seems redundant.  Let me drop it.

> > -	if (pud_devmap(pud)) {
> > +	if (pud_huge(pud)) {
> >  		ptl = pud_lock(mm, pudp);
> > -		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
> > +		page = follow_huge_pud(vma, address, pudp, flags, ctx);
> >  		spin_unlock(ptl);
> >  		if (page)
> >  			return page;
>
> Otherwise it looks OK to me
>
> Reviewed-by: Jason Gunthorpe

Thanks!
--
Peter Xu

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel