From: David Hildenbrand <david@redhat.com>
To: Alistair Popple <apopple@nvidia.com>,
akpm@linux-foundation.org, dan.j.williams@intel.com,
linux-mm@kvack.org
Cc: lina@asahilina.net, zhang.lyra@gmail.com,
gerald.schaefer@linux.ibm.com, vishal.l.verma@intel.com,
dave.jiang@intel.com, logang@deltatee.com, bhelgaas@google.com,
jack@suse.cz, jgg@ziepe.ca, catalin.marinas@arm.com,
will@kernel.org, mpe@ellerman.id.au, npiggin@gmail.com,
dave.hansen@linux.intel.com, ira.weiny@intel.com,
willy@infradead.org, djwong@kernel.org, tytso@mit.edu,
linmiaohe@huawei.com, peterx@redhat.com,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linuxppc-dev@lists.ozlabs.org, nvdimm@lists.linux.dev,
linux-cxl@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com
Subject: Re: [PATCH v4 12/25] mm/memory: Enhance insert_page_into_pte_locked() to create writable mappings
Date: Fri, 20 Dec 2024 20:01:02 +0100
Message-ID: <d4d32e17-d8e2-4447-bd33-af41e89a528f@redhat.com>
In-Reply-To: <25a23433cb70f0fe6af92042eb71e962fcbf092b.1734407924.git-series.apopple@nvidia.com>
On 17.12.24 06:12, Alistair Popple wrote:
> In preparation for using insert_page() for DAX, enhance
> insert_page_into_pte_locked() to handle establishing writable
> mappings. Recall that DAX returns VM_FAULT_NOPAGE after installing a
> PTE which bypasses the typical set_pte_range() in finish_fault.
>
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> Suggested-by: Dan Williams <dan.j.williams@intel.com>
>
> ---
>
> Changes since v2:
>
> - New patch split out from "mm/memory: Add dax_insert_pfn"
> ---
> mm/memory.c | 45 +++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 37 insertions(+), 8 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 06bb29e..cd82952 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2126,19 +2126,47 @@ static int validate_page_before_insert(struct vm_area_struct *vma,
> }
>
> static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
> - unsigned long addr, struct page *page, pgprot_t prot)
> + unsigned long addr, struct page *page,
> + pgprot_t prot, bool mkwrite)
> {
> struct folio *folio = page_folio(page);
> + pte_t entry = ptep_get(pte);
> pte_t pteval;
>
> - if (!pte_none(ptep_get(pte)))
> - return -EBUSY;
> + if (!pte_none(entry)) {
> + if (!mkwrite)
> + return -EBUSY;
> +
> + /*
> + * For read faults on private mappings the PFN passed in may not
> + * match the PFN we have mapped if the mapped PFN is a writeable
> + * COW page. In the mkwrite case we are creating a writable PTE
> + * for a shared mapping and we expect the PFNs to match. If they
> + * don't match, we are likely racing with block allocation and
> + * mapping invalidation so just skip the update.
> + */
Would it make sense to instead have here:

/* See insert_pfn(). */

But ...
> + if (pte_pfn(entry) != page_to_pfn(page)) {
> + WARN_ON_ONCE(!is_zero_pfn(pte_pfn(entry)));
> + return -EFAULT;
> + }
> + entry = maybe_mkwrite(entry, vma);
> + entry = pte_mkyoung(entry);
> + if (ptep_set_access_flags(vma, addr, pte, entry, 1))
> + update_mmu_cache(vma, addr, pte);
... I am not sure we want the above at all. Someone already inserted a
page, which is refcounted + mapcounted.

Now you ignore that and pretend the second insertion "worked"? No, that
feels wrong; I suspect you will run into refcount+mapcount issues.

If there is already something mapped, inserting must fail IMHO. If you
want to upgrade write permissions on an existing mapping, a different
interface should be used.
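Completely untested, just to sketch the direction I have in mind (the
helper name is made up): an "upgrade" path would only touch the access
bits of the PTE that is already there and never take a new folio
reference:

static int upgrade_pte_write_locked(struct vm_area_struct *vma, pte_t *pte,
                                    unsigned long addr, struct page *page)
{
        pte_t entry = ptep_get(pte);

        /* Must already map the page we expect; bail out otherwise. */
        if (pte_none(entry) || pte_pfn(entry) != page_to_pfn(page))
                return -EFAULT;

        /* Only upgrade access bits -- no folio_get()/rmap like an insert. */
        entry = pte_mkyoung(entry);
        entry = maybe_mkwrite(pte_mkdirty(entry), vma);
        if (ptep_set_access_flags(vma, addr, pte, entry, 1))
                update_mmu_cache(vma, addr, pte);

        return 0;
}

That way insert_page_into_pte_locked() stays a pure insert and the
refcount+mapcount handling stays obvious.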
> + return 0;
> + }
> +
> /* Ok, finally just insert the thing.. */
> pteval = mk_pte(page, prot);
> if (unlikely(is_zero_folio(folio))) {
> pteval = pte_mkspecial(pteval);
> } else {
> folio_get(folio);
> + entry = mk_pte(page, prot);
> + if (mkwrite) {
> + entry = pte_mkyoung(entry);
> + entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> + }
> inc_mm_counter(vma->vm_mm, mm_counter_file(folio));
> folio_add_file_rmap_pte(folio, page, vma);
> }
> @@ -2147,7 +2175,7 @@ static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
> }
>
> static int insert_page(struct vm_area_struct *vma, unsigned long addr,
> - struct page *page, pgprot_t prot)
> + struct page *page, pgprot_t prot, bool mkwrite)
> {
> int retval;
> pte_t *pte;
> @@ -2160,7 +2188,8 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
> pte = get_locked_pte(vma->vm_mm, addr, &ptl);
> if (!pte)
> goto out;
> - retval = insert_page_into_pte_locked(vma, pte, addr, page, prot);
> + retval = insert_page_into_pte_locked(vma, pte, addr, page, prot,
> + mkwrite);
The alignment looks odd. You can likely also just put it on a single line.
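E.g. (assuming it still stays within the line-length limit):

        retval = insert_page_into_pte_locked(vma, pte, addr, page, prot, mkwrite);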
--
Cheers,
David / dhildenb