From: David Hildenbrand <david@redhat.com>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>, Jann Horn <jannh@google.com>,
	"Liam R . Howlett" <Liam.Howlett@oracle.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Matthew Wilcox <willy@infradead.org>,
	Pedro Falcato <pfalcato@suse.de>, Rik van Riel <riel@surriel.com>,
	Harry Yoo <harry.yoo@oracle.com>, Zi Yan <ziy@nvidia.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Nico Pache <npache@redhat.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	Jakub Matena <matenajakub@gmail.com>,
	Wei Yang <richard.weiyang@gmail.com>,
	Barry Song <baohua@kernel.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 01/11] mm/mremap: introduce more mergeable mremap via MREMAP_RELOCATE_ANON
Date: Tue, 17 Jun 2025 14:07:38 +0200	[thread overview]
Message-ID: <018c0663-dffb-49d0-895c-63bc9e5f9aec@redhat.com> (raw)
In-Reply-To: <d51af1de-110a-4cde-9091-98e15367dda3@lucifer.local>

> 
>>
>>> +	/* The above check should imply these. */
>>> +	VM_WARN_ON_ONCE(folio_mapcount(folio) > folio_nr_pages(folio));
>>> +	VM_WARN_ON_ONCE(!PageAnonExclusive(folio_page(folio, 0)));
>>
>> This can trigger in one nasty case, where we can lose the PAE bit during
>> swapin (refault from the swapcache while the folio is under writeback, and
>> the device does not allow for modifying the data while under writeback).
> 
> Ugh god wasn't aware of that. So maybe drop this second one?

Yes.

> 
>>
>>> +
>>> +	/*
>>> +	 * A pinned folio implies that it will be used for a duration longer
>>> +	 * than that over which the mmap_lock is held, meaning that another part
>>> +	 * of the kernel may be making use of this folio.
>>> +	 *
>>> +	 * Since we are about to manipulate index & mapping fields, we cannot
>>> +	 * safely proceed because whatever has pinned this folio may then
>>> +	 * incorrectly assume these do not change.
>>> +	 */
>>> +	if (folio_maybe_dma_pinned(folio))
>>> +		goto out;
>>
>> As discussed, this can race with GUP-fast. So *maybe* we can just allow for
>> moving these.
> 
> I'm guessing you mean as discussed below? :P Or in the cover letter I've not
> read yet? :P

The latter .. IIRC :P It was late ...

> 
> Yeah, to be honest you shouldn't be fiddling with index, mapping anyway except
> via rmap logic.
> 
> I will audit access of these fields just to be safe.
> 

[...]

>>> +
>>> +	state.ptep = ptep_start;
>>> +	for (; !pte_done(&state); pte_next(&state, nr_pages)) {
>>> +		pte_t pte = ptep_get(state.ptep);
>>> +
>>> +		if (pte_none(pte) || !pte_present(pte)) {
>>> +			nr_pages = 1;
>>
>> What if we have
>>
>> (a) A migration entry (possibly we might fail migration and simply remap the
>> original folio)
>>
>> (b) A swap entry with a folio in the swapcache that we can refault.
>>
>> I don't think we can simply skip these ...
> 
> Good point... will investigate these cases.

Migration entries are really nasty ... we probably have to wait for the 
migration entry to become a present pte again.

Swap entries ... we could look up any folio in the swapcache and adjust that.

> 
>>
>>> +			continue;
>>> +		}
>>> +
>>> +		nr_pages = relocate_anon_pte(pmc, &state, undo);
>>> +		if (!nr_pages) {
>>> +			ret = false;
>>> +			goto out;
>>> +		}
>>> +	}
>>> +
>>> +	ret = true;
>>> +out:
>>> +	pte_unmap_unlock(ptep_start, state.ptl);
>>> +	return ret;
>>> +}
>>> +
>>> +static bool __relocate_anon_folios(struct pagetable_move_control *pmc, bool undo)
>>> +{
>>> +	pud_t *pudp;
>>> +	pmd_t *pmdp;
>>> +	unsigned long extent;
>>> +	struct mm_struct *mm = current->mm;
>>> +
>>> +	if (!pmc->len_in)
>>> +		return true;
>>> +
>>> +	for (; !pmc_done(pmc); pmc_next(pmc, extent)) {
>>> +		pmd_t pmd;
>>> +		pud_t pud;
>>> +
>>> +		extent = get_extent(NORMAL_PUD, pmc);
>>> +
>>> +		pudp = get_old_pud(mm, pmc->old_addr);
>>> +		if (!pudp)
>>> +			continue;
>>> +		pud = pudp_get(pudp);
>>> +
>>> +		if (pud_trans_huge(pud) || pud_devmap(pud))
>>> +			return false;
>>
>> We don't support PUD-size THP, why do we have to fail here?
> 
> This is just to be in line with other 'magical future where we have PUD THP'
> stuff in mremap.c.
> 
> A later commit that permits huge folio support actually lets us support these...
> 
>>
>>> +
>>> +		extent = get_extent(NORMAL_PMD, pmc);
>>> +		pmdp = get_old_pmd(mm, pmc->old_addr);
>>> +		if (!pmdp)
>>> +			continue;
>>> +		pmd = pmdp_get(pmdp);
>>> +
>>> +		if (is_swap_pmd(pmd) || pmd_trans_huge(pmd) ||
>>> +		    pmd_devmap(pmd))
>>> +			return false;
>>
>> Okay, this case could likely be handled later (present anon folio or
>> migration entry; everything else, we can skip).
> 
> Hmm, but how? the PMD cannot be traversed in this case?
> 
> 'Present' migration entry? Migration entries are non-present right? :) Or is it
> different at PMD?

"present anon folio" or "migration entry" :)

So the latter meant a PMD migration entry (which is non-present).

[...]

>>>    	pmc.new = new_vma;
>>> +	if (relocate_anon) {
>>> +		lock_new_anon_vma(new_vma);
>>> +		pmc.relocate_locked = new_vma;
>>> +
>>> +		if (!relocate_anon_folios(&pmc, /* undo= */false)) {
>>> +			unsigned long start = new_vma->vm_start;
>>> +			unsigned long size = new_vma->vm_end - start;
>>> +
>>> +			/* Undo if fails. */
>>> +			relocate_anon_folios(&pmc, /* undo= */true);
>>
>> You'd assume this cannot fail, but I think it can: imagine concurrent
>> GUP-fast ...
> 
> Well if we change the racey code to ignore DMA pinned we should be ok right?

Do we completely block migration/swapout, or could they happen 
concurrently? I assume you'd already be blocking them using the rmap 
locks in write mode.

> 
>>
>> I really wish we can find a way to not require the fallback.
> 
> Yeah the fallback is horrible but we really do need it. See the page table move
> fallback code for nightmares also :)
> 
> We could also alternatively:
> 
> - Have some kind of anon_vma fragmentation where some folios in range reference
>    a different anon_vma that we link to the original VMA (quite possibly very
>    broken though).
> 
> - Keep a track of folios somehow and separate them from the page table walk (but
>    then we risk races)
> 
> - Have some way of telling the kernel that such a situation exists with a new
>    object that can be pointed to by folio->mapping, that the rmap code recognise,
>    like essentially an 'anon_vma migration entry' which can fail.
> 
> I already considered combining this operation with the page table move
> operation, but the locking gets horrible and the undo is categorically much
> worse and I'm not sure it's actually workable.

Yeah, I have to further think about that. :(

-- 
Cheers,

David / dhildenb


