From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
 David Hildenbrand, Andrew Morton, Alistair Popple, Lorenzo Stoakes,
 "Liam R. Howlett", Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
 Michal Hocko, Zi Yan, Baolin Wang, Nico Pache, Ryan Roberts, Dev Jain,
 Dan Williams, Oscar Salvador
Subject: [PATCH v2 3/3] mm/huge_memory: don't mark refcounted folios special in vmf_insert_folio_pud()
Date: Wed, 11 Jun 2025 14:06:54 +0200
Message-ID: <20250611120654.545963-4-david@redhat.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250611120654.545963-1-david@redhat.com>
References: <20250611120654.545963-1-david@redhat.com>
MIME-Version: 1.0
Marking PUDs that map a "normal" refcounted folio as special is against
our rules documented for vm_normal_page().
Fortunately, there are not that many pud_special() checks that can be
misled, and right now they are rather harmless: e.g., so far none bases
the decision whether to grab a folio reference on that bit. Well, and
GUP-fast will fall back to GUP-slow. All in all, it seems there are no
big implications so far.

Getting this right will become more important as we introduce
folio_normal_page_pud() and start using it in more places where we
currently special-case based on other VMA flags.

Fix it just like we fixed vmf_insert_folio_pmd(): add folio_mk_pud()
to mimic what we do with folio_mk_pmd().

Fixes: dbe54153296d ("mm/huge_memory: add vmf_insert_folio_pud()")
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h | 19 ++++++++++++++++-
 mm/huge_memory.c   | 51 +++++++++++++++++++++++++---------------------
 2 files changed, 46 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fa538feaa8d95..912b6d40a12d6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1816,7 +1816,24 @@ static inline pmd_t folio_mk_pmd(struct folio *folio, pgprot_t pgprot)
 {
         return pmd_mkhuge(pfn_pmd(folio_pfn(folio), pgprot));
 }
-#endif
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+/**
+ * folio_mk_pud - Create a PUD for this folio
+ * @folio: The folio to create a PUD for
+ * @pgprot: The page protection bits to use
+ *
+ * Create a page table entry for the first page of this folio.
+ * This is suitable for passing to set_pud_at().
+ *
+ * Return: A page table entry suitable for mapping this folio.
+ */
+static inline pud_t folio_mk_pud(struct folio *folio, pgprot_t pgprot)
+{
+        return pud_mkhuge(pfn_pud(folio_pfn(folio), pgprot));
+}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif /* CONFIG_MMU */
 
 static inline bool folio_has_pincount(const struct folio *folio)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7e3e9028873e5..4734de1dc0ae4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1535,15 +1535,18 @@ static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
         return pud;
 }
 
-static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
-                pud_t *pud, pfn_t pfn, pgprot_t prot, bool write)
+static void insert_pud(struct vm_area_struct *vma, unsigned long addr,
+                pud_t *pud, struct folio_or_pfn fop, pgprot_t prot, bool write)
 {
         struct mm_struct *mm = vma->vm_mm;
         pud_t entry;
 
         if (!pud_none(*pud)) {
+                const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
+                                          pfn_t_to_pfn(fop.pfn);
+
                 if (write) {
-                        if (WARN_ON_ONCE(pud_pfn(*pud) != pfn_t_to_pfn(pfn)))
+                        if (WARN_ON_ONCE(pud_pfn(*pud) != pfn))
                                 return;
                         entry = pud_mkyoung(*pud);
                         entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
@@ -1553,11 +1556,19 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
                 return;
         }
 
-        entry = pud_mkhuge(pfn_t_pud(pfn, prot));
-        if (pfn_t_devmap(pfn))
-                entry = pud_mkdevmap(entry);
-        else
-                entry = pud_mkspecial(entry);
+        if (fop.is_folio) {
+                entry = folio_mk_pud(fop.folio, vma->vm_page_prot);
+
+                folio_get(fop.folio);
+                folio_add_file_rmap_pud(fop.folio, &fop.folio->page, vma);
+                add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PUD_NR);
+        } else {
+                entry = pud_mkhuge(pfn_t_pud(fop.pfn, prot));
+                if (pfn_t_devmap(fop.pfn))
+                        entry = pud_mkdevmap(entry);
+                else
+                        entry = pud_mkspecial(entry);
+        }
         if (write) {
                 entry = pud_mkyoung(pud_mkdirty(entry));
                 entry = maybe_pud_mkwrite(entry, vma);
@@ -1581,6 +1592,9 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
         unsigned long addr = vmf->address & PUD_MASK;
         struct vm_area_struct *vma = vmf->vma;
         pgprot_t pgprot = vma->vm_page_prot;
+        struct folio_or_pfn fop = {
+                .pfn = pfn,
+        };
         spinlock_t *ptl;
 
         /*
@@ -1600,7 +1614,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
         pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
 
         ptl = pud_lock(vma->vm_mm, vmf->pud);
-        insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
+        insert_pud(vma, addr, vmf->pud, fop, pgprot, write);
         spin_unlock(ptl);
 
         return VM_FAULT_NOPAGE;
@@ -1622,6 +1636,10 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
         unsigned long addr = vmf->address & PUD_MASK;
         pud_t *pud = vmf->pud;
         struct mm_struct *mm = vma->vm_mm;
+        struct folio_or_pfn fop = {
+                .folio = folio,
+                .is_folio = true,
+        };
         spinlock_t *ptl;
 
         if (addr < vma->vm_start || addr >= vma->vm_end)
@@ -1631,20 +1649,7 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
                 return VM_FAULT_SIGBUS;
 
         ptl = pud_lock(mm, pud);
-
-        /*
-         * If there is already an entry present we assume the folio is
-         * already mapped, hence no need to take another reference. We
-         * still call insert_pfn_pud() though in case the mapping needs
-         * upgrading to writeable.
-         */
-        if (pud_none(*vmf->pud)) {
-                folio_get(folio);
-                folio_add_file_rmap_pud(folio, &folio->page, vma);
-                add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);
-        }
-        insert_pfn_pud(vma, addr, vmf->pud, pfn_to_pfn_t(folio_pfn(folio)),
-                        vma->vm_page_prot, write);
+        insert_pud(vma, addr, vmf->pud, fop, vma->vm_page_prot, write);
         spin_unlock(ptl);
 
         return VM_FAULT_NOPAGE;
-- 
2.49.0