Date: Wed, 15 Oct 2025 18:27:52 +0100
From: Matthew Wilcox
To: Loïc Molinari
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Jani Nikula,
Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Boris Brezillon, Rob Herring, Steven Price, Liviu Dudau, Melissa Wen, Maíra Canal, Hugh Dickins, Baolin Wang, Andrew Morton, Al Viro, Mikołaj Wasiak, Christian Brauner, Nitin Gote, Andi Shyti, Jonathan Corbet, Christopher Healy, Bagas Sanjaya, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, linux-mm@kvack.org, linux-doc@vger.kernel.org, kernel@collabora.com
Subject: Re: [PATCH v4 03/13] drm/shmem-helper: Map huge pages in fault handlers
Message-ID: (elided)
References: <20251015153018.43735-1-loic.molinari@collabora.com> <20251015153018.43735-4-loic.molinari@collabora.com>
In-Reply-To: <20251015153018.43735-4-loic.molinari@collabora.com>

On Wed, Oct 15, 2025 at 05:30:07PM +0200, Loïc Molinari wrote:

This looks fine, no need to resend to fix this, but if you'd written the previous patch slightly differently, you'd have reduced the amount of code you moved around in this patch, which would have made it easier to review.

> +	/* Map a range of pages around the faulty address. */
> +	do {
> +		pfn = page_to_pfn(pages[start_pgoff]);
> +		ret = vmf_insert_pfn(vma, addr, pfn);
> +		addr += PAGE_SIZE;
> +	} while (++start_pgoff <= end_pgoff && ret == VM_FAULT_NOPAGE);

It looks to me like we have an opportunity to do better here by adding a vmf_insert_pfns() interface. I don't think we should delay your patch series to add it, but let's not forget to do that; it can have very good performance effects on ARM to use contptes.
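To make the suggestion concrete, such a batched interface might look roughly like this. This is only a sketch: vmf_insert_pfns() does not exist today, and the name, signature, and semantics here are all hypothetical. The naive loop below is just the semantic model; a real implementation would presumably take the page-table lock once for the whole range and use set_ptes() so that arm64 can apply the contiguous-PTE (contpte) hint, which is where the performance win comes from.

```c
/*
 * HYPOTHETICAL: a batched variant of vmf_insert_pfn(), inserting 'nr'
 * consecutive PTEs starting at 'addr'. Semantically equivalent to the
 * caller's loop; a real version would batch under one PTL via set_ptes().
 */
vm_fault_t vmf_insert_pfns(struct vm_area_struct *vma, unsigned long addr,
			   const unsigned long *pfns, unsigned long nr)
{
	vm_fault_t ret = VM_FAULT_NOPAGE;
	unsigned long i;

	for (i = 0; i < nr; i++) {
		ret = vmf_insert_pfn(vma, addr + i * PAGE_SIZE, pfns[i]);
		if (ret != VM_FAULT_NOPAGE)
			break;
	}
	return ret;
}
```

The fault-handler loop in the patch would then collapse to a single call covering start_pgoff through end_pgoff, assuming the caller has gathered the PFNs into an array first.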
> @@ -617,8 +645,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
[...]
>
> -	ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
> +	if (drm_gem_shmem_map_pmd(vmf, vmf->address, pages[page_offset])) {
> +		ret = VM_FAULT_NOPAGE;
> +		goto out;
> 	}

Does this actually work?