* [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier @ 2025-08-03 11:11 Lorenzo Stoakes 2025-08-03 11:11 ` [PATCH 6.17 1/3] mm/mremap: allow multi-VMA move when filesystem uses thp_get_unmapped_area Lorenzo Stoakes ` (4 more replies) 0 siblings, 5 replies; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-03 11:11 UTC (permalink / raw) To: Andrew Morton Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel The multi-VMA move functionality introduced in commit d23cb648e365 ("mm/mremap: permit mremap() move of multiple VMAs") doesn't allow moves of file-backed mappings which specify a custom f_op->get_unmapped_area handler excepting hugetlb and shmem. We expand this to include thp_get_unmapped_area to support file-backed mappings for filesystems which use large folios. Additionally, when the first VMA in a range is not compatible with a multi-VMA move, instead of moving the first VMA and returning an error, this series results in us not moving anything and returning an error immediately. Examining this second change in detail: The semantics of multi-VMA moves in mremap() very clearly indicate that a failure can result in a partial move of VMAs. This is in line with other aggregate operations within the kernel, which share these semantics. There are two classes of failures we're concerned with - eligibility for multi-VMA move, and transient failures that would occur even if the user individually moved each VMA. The latter are either a product of the user using mremap() incorrectly or a failure due to out-of-memory conditions (which, given the allocations involved are small, would likely be fatal in any case), or hitting the mapping limit. Regardless of the cause, transient issues would be fatal anyway, so it isn't really material which VMAs succeeded at being moved or not. However, when it comes to multi-VMA move eligibility, we face another issue - we must allow a single VMA to succeed regardless of this eligibility (as, of course, it is not a multi-VMA move) - but we must then fail multi-VMA operations. The two means by which VMAs may fail the eligibility test are - the VMAs being UFFD-armed, or the VMA being file-backed and providing its own f_op->get_unmapped_area() helper (because this may result in MREMAP_FIXED being disregarded), excepting those known to correctly handle MREMAP_FIXED. It is therefore conceivable that a user could erroneously try to use this functionality in these instances, and would prefer to not perform any move at all should that occur. This series therefore avoids any move of subsequent VMAs should the first be multi-VMA move ineligible and the input span exceed that of the first VMA. We also add detailed test logic to assert that multi VMA move with ineligible VMAs functions as expected. Andrew - I think this should go in as a hot-fix for 6.17, as it would be better to change the semantics here before the functionality appears in a released kernel. Lorenzo Stoakes (3): mm/mremap: allow multi-VMA move when filesystem uses thp_get_unmapped_area mm/mremap: catch invalid multi VMA moves earlier selftests/mm: add test for invalid multi VMA operations mm/mremap.c | 40 ++-- tools/testing/selftests/mm/mremap_test.c | 264 ++++++++++++++++++++++- 2 files changed, 284 insertions(+), 20 deletions(-) -- 2.50.1 ^ permalink raw reply [flat|nested] 14+ messages in thread
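For readers coming to this fresh, a minimal stand-alone userspace sketch of what a multi-VMA move looks like follows (illustrative only, not part of the series; it assumes a kernel containing commit d23cb648e365 plus these patches, and uses mprotect() purely to split one anonymous mapping into three VMAs):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long psz = sysconf(_SC_PAGESIZE);
    size_t len = 3 * psz;

    /* Reserve a destination window; MREMAP_FIXED will replace it. */
    void *dst = mmap(NULL, len, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    void *src = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (src == MAP_FAILED || dst == MAP_FAILED)
        return 1;

    memset(src, 0xaa, len);

    /* Change protection of the middle page so the source splits into
     * three VMAs (rw / r / rw). */
    if (mprotect((char *)src + psz, psz, PROT_READ))
        return 1;

    /* Move the whole three-VMA span in a single call. */
    void *res = mremap(src, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
    if (res == MAP_FAILED) {
        perror("mremap");
        return 1;
    }
    printf("moved 3 VMAs to %p\n", res);
    return 0;
}

On a kernel without multi-VMA move support the same call fails with EFAULT, since mremap() previously refused to cross VMA boundaries.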
* [PATCH 6.17 1/3] mm/mremap: allow multi-VMA move when filesystem uses thp_get_unmapped_area 2025-08-03 11:11 [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes @ 2025-08-03 11:11 ` Lorenzo Stoakes 2025-08-08 13:38 ` Vlastimil Babka 2025-08-03 11:11 ` [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier Lorenzo Stoakes ` (3 subsequent siblings) 4 siblings, 1 reply; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-03 11:11 UTC (permalink / raw) To: Andrew Morton Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel We currently restrict multi-VMA move to avoid filesystems or drivers which provide a custom f_op->get_unmapped_area handler unless it is known to correctly handle MREMAP_FIXED. We do this so we do not get unexpected results when moving from one area to another (for instance, if the handler would align things resulting in the moved VMAs having different gaps than the original mapping). More and more filesystems are moving to using large folios, and typically do so (in part) by setting f_op->get_unmapped_area to thp_get_unmapped_area. When mremap() invokes the filesystem's get_unmapped_area handler with MREMAP_FIXED, it does so via get_unmapped_area(), called in vrm_set_new_addr(). In order to do so, it converts the MREMAP_FIXED flag to a MAP_FIXED flag and passes this to the unmapped area handler. The __get_unmapped_area() function (called by get_unmapped_area()) in turn invokes the filesystem or driver's f_op->get_unmapped_area() handler. Therefore this is a point at which thp_get_unmapped_area() may be called (also, this is the case for anonymous mappings where the size is huge page aligned). thp_get_unmapped_area() calls thp_get_unmapped_area_vmflags() and __thp_get_unmapped_area() in turn (falling back to mm_get_unmapped_area_vmflags() which is known to handle MAP_FIXED correctly). The __thp_get_unmapped_area() function in turn does nothing to change the address hint, nor the MAP_FIXED flag, only adjusting alignment parameters. It then calls mm_get_unmapped_area_vmflags(), and in turn arch-specific unmapped area functions, all of which honour MAP_FIXED correctly. Therefore, we can safely add thp_get_unmapped_area to the known-good handlers. Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> --- mm/mremap.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/mm/mremap.c b/mm/mremap.c index 677a4d744df9..46f9f3160dff 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -1616,7 +1616,7 @@ static void notify_uffd(struct vma_remap_struct *vrm, bool failed) static bool vma_multi_allowed(struct vm_area_struct *vma) { - struct file *file; + struct file *file = vma->vm_file; /* * We can't support moving multiple uffd VMAs as notify requires @@ -1629,15 +1629,17 @@ static bool vma_multi_allowed(struct vm_area_struct *vma) * Custom get unmapped area might result in MREMAP_FIXED not * being obeyed. */ - file = vma->vm_file; - if (file && !vma_is_shmem(vma) && !is_vm_hugetlb_page(vma)) { - const struct file_operations *fop = file->f_op; - - if (fop->get_unmapped_area) - return false; - } + if (!file || !file->f_op->get_unmapped_area) + return true; + /* Known good. */ + if (vma_is_shmem(vma)) + return true; + if (is_vm_hugetlb_page(vma)) + return true; + if (file->f_op->get_unmapped_area == thp_get_unmapped_area) + return true; - return true; + return false; } static int check_prep_vma(struct vma_remap_struct *vrm) -- 2.50.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
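As an aside, opting in on the filesystem side is typically just a matter of pointing the handler at the THP-aware helper. A sketch follows (the "foofs" name is hypothetical and only the handler assignment matters; the struct field, the generic_* helpers and thp_get_unmapped_area() are existing kernel interfaces):

#include <linux/fs.h>
#include <linux/huge_mm.h>

/* Hypothetical large-folio filesystem "foofs". */
static const struct file_operations foofs_file_operations = {
    .read_iter         = generic_file_read_iter,
    .write_iter        = generic_file_write_iter,
    .mmap              = generic_file_mmap,
    /* Route unmapped-area placement through the THP-aware helper. */
    .get_unmapped_area = thp_get_unmapped_area,
};

With the handler wired up this way, patch 1 keeps such mappings eligible for multi-VMA mremap() rather than treating them as unknown custom handlers.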
* Re: [PATCH 6.17 1/3] mm/mremap: allow multi-VMA move when filesystem uses thp_get_unmapped_area 2025-08-03 11:11 ` [PATCH 6.17 1/3] mm/mremap: allow multi-VMA move when filesystem uses thp_get_unmapped_area Lorenzo Stoakes @ 2025-08-08 13:38 ` Vlastimil Babka 0 siblings, 0 replies; 14+ messages in thread From: Vlastimil Babka @ 2025-08-08 13:38 UTC (permalink / raw) To: Lorenzo Stoakes, Andrew Morton Cc: Liam R . Howlett, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel On 8/3/25 13:11, Lorenzo Stoakes wrote: > We currently restrict multi-VMA move to avoid filesystems or drivers which > provide a custom f_op->get_unmapped_area handler unless it is known to > correctly handle MREMAP_FIXED. > > We do this so we do not get unexpected result when moving from one area to > another (for instance, if the handler would align things resulting in the > moved VMAs having different gaps than the original mapping). > > More and more filesystems are moving to using large folios, and typically > do so (in part) by setting f_op->get_unmapped_area to > thp_get_unmapped_area. > > When mremap() invokes the file system's get_unmapped MREMAP_FIXED, it does > so via get_unmapped_area(), called in vrm_set_new_addr(). In order to do > so, it converts the MREMAP_FIXED flag to a MAP_FIXED flag and passes this > to the unmapped area handler. > > The __get_unmapped_area() function (called by get_unmapped_area()) in turn > invokes the filesystem or driver's f_op->get_unmapped_area() handler. > > Therefore this is a point at which thp_get_unmapped_area() may be called > (also, this is the case for anonymous mappings where the size is huge page > aligned). > > thp_get_unmapped_area() calls thp_get_unmapped_area_vmflags() and > __thp_get_unmapped_area() in turn (falling back to > mm_get_unmapped_area_vm_flags() which is known to handle MAP_FIXED > correctly). > > The __thp_get_unmapped_area() function in turn does nothing to change the > address hint, nor the MAP_FIXED flag, only adjusting alignment > parameters. It hten calls mm_get_unmapped_area_vmflags(), and in turn > arch-specific unmapped area functions, all of which honour MAP_FIXED > correctly. > > Therefore, we can safely add thp_get_unmapped_area to the known-good > handlers. > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier 2025-08-03 11:11 [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes 2025-08-03 11:11 ` [PATCH 6.17 1/3] mm/mremap: allow multi-VMA move when filesystem uses thp_get_unmapped_area Lorenzo Stoakes @ 2025-08-03 11:11 ` Lorenzo Stoakes 2025-08-08 14:19 ` Vlastimil Babka ` (3 more replies) 2025-08-03 11:11 ` [PATCH 6.17 3/3] selftests/mm: add test for invalid multi VMA operations Lorenzo Stoakes ` (2 subsequent siblings) 4 siblings, 4 replies; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-03 11:11 UTC (permalink / raw) To: Andrew Morton Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel In remap_move() we must account for both a single VMA case, where we are permitted to move a single VMA regardless of multi-VMA move eligibility, and multiple VMAs which, of course, must be eligible for such an operation. We determine this via vma_multi_allowed(). Currently, if the first VMA is not eligible, but others are, we will move the first then return an error. This is not ideal, as we are performing an operation which we don't need to do which has an impact on the memory mapping. We can very easily determine if this is a multi VMA move prior to the move of the first VMA, by checking vma->vm_end vs. the specified end address. Therefore this patch does so, and as a result eliminates unnecessary logic around tracking whether the first VMA was permitted or not. This is most useful for cases where a user attempts to erroneously move multiple VMAs which are not eligible for non-transient reasons - for instance, UFFD-armed VMAs, or file-backed VMAs backed by a file system or driver which specifies a custom f_op->get_unmapped_area. In the less likely instance of a failure due to transient issues such as out of memory or mapping limits being hit, the issue is already likely fatal and so the fact the operation may be partially complete is acceptable. Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> --- mm/mremap.c | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/mm/mremap.c b/mm/mremap.c index 46f9f3160dff..f61a9ea0b244 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -1816,10 +1816,11 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) unsigned long start = vrm->addr; unsigned long end = vrm->addr + vrm->old_len; unsigned long new_addr = vrm->new_addr; - bool allowed = true, seen_vma = false; unsigned long target_addr = new_addr; unsigned long res = -EFAULT; unsigned long last_end; + bool seen_vma = false; + VMA_ITERATOR(vmi, current->mm, start); /* @@ -1833,9 +1834,6 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) unsigned long len = min(end, vma->vm_end) - addr; unsigned long offset, res_vma; - if (!allowed) - return -EFAULT; - /* No gap permitted at the start of the range. */ if (!seen_vma && start < vma->vm_start) return -EFAULT; @@ -1863,9 +1861,14 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) vrm->new_addr = target_addr + offset; vrm->old_len = vrm->new_len = len; - allowed = vma_multi_allowed(vma); - if (seen_vma && !allowed) - return -EFAULT; + if (!vma_multi_allowed(vma)) { + /* This is not the first VMA, abort immediately. */ + if (seen_vma) + return -EFAULT; + /* This is the first, but there are more, abort. */ + if (vma->vm_end < end) + return -EFAULT; + } res_vma = check_prep_vma(vrm); if (!res_vma) @@ -1874,7 +1877,8 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) return res_vma; if (!seen_vma) { - VM_WARN_ON_ONCE(allowed && res_vma != new_addr); + VM_WARN_ON_ONCE(vma_multi_allowed(vma) && + res_vma != new_addr); res = res_vma; } -- 2.50.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
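A rough caller-side sketch of what the earlier check buys userspace (illustrative only, not from the series; move_or_fall_back() and its vma_len parameter are hypothetical, and a real caller would derive VMA boundaries from /proc/self/maps):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <errno.h>
#include <stddef.h>

/*
 * Because an ineligible first VMA now fails the request before anything
 * is moved, a caller seeing EFAULT can cleanly fall back to per-VMA
 * moves, which are always permitted. A failure part-way through a
 * multi-VMA range can still leave a partial move, as the series
 * documents, and the same applies to this fallback loop.
 */
static void *move_or_fall_back(void *old, size_t len, void *new_addr,
                               size_t vma_len)
{
    void *res = mremap(old, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, new_addr);

    if (res != MAP_FAILED || errno != EFAULT)
        return res;

    for (size_t off = 0; off < len; off += vma_len) {
        res = mremap((char *)old + off, vma_len, vma_len,
                     MREMAP_MAYMOVE | MREMAP_FIXED, (char *)new_addr + off);
        if (res == MAP_FAILED)
            return MAP_FAILED;
    }
    return new_addr;
}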
* Re: [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier 2025-08-03 11:11 ` [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier Lorenzo Stoakes @ 2025-08-08 14:19 ` Vlastimil Babka 2025-08-08 14:34 ` Lorenzo Stoakes 2025-08-08 14:43 ` Lorenzo Stoakes ` (2 subsequent siblings) 3 siblings, 1 reply; 14+ messages in thread From: Vlastimil Babka @ 2025-08-08 14:19 UTC (permalink / raw) To: Lorenzo Stoakes, Andrew Morton Cc: Liam R . Howlett, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel On 8/3/25 13:11, Lorenzo Stoakes wrote: > In remap_move() we must account for both a single VMA case, where we are > permitted to move a single VMA regardless of multi-VMA move eligiblity, and > multiple VMAs which, of course, must be eligible for such an operation. > > We determine this via vma_multi_allowed(). > > Currently, if the first VMA is not eligible, but others are, we will move > the first then return an error. This is not ideal, as we are performing an > operation which we don't need to do which has an impact on the memory > mapping. > > We can very easily determine if this is a multi VMA move prior to the move > of the first VMA, by checking vma->vm_end vs. the specified end address. > > Therefore this patch does so, and as a result eliminates unnecessary logic > around tracking whether the first VMA was permitted or not. > > This is most useful for cases where a user attempts to erroneously move > mutliple VMAs which are not eligible for non-transient reasons - for > instance, UFFD-armed VMAs, or file-backed VMAs backed by a file system or > driver which specifies a custom f_op->get_unmapped_area. > > In the less likely instance of a failure due to transient issues such as > out of memory or mapping limits being hit, the issue is already likely > fatal and so the fact the operation may be partially complete is > acceptable. > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > --- > mm/mremap.c | 20 ++++++++++++-------- > 1 file changed, 12 insertions(+), 8 deletions(-) > > diff --git a/mm/mremap.c b/mm/mremap.c > index 46f9f3160dff..f61a9ea0b244 100644 > --- a/mm/mremap.c > +++ b/mm/mremap.c > @@ -1816,10 +1816,11 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) > unsigned long start = vrm->addr; > unsigned long end = vrm->addr + vrm->old_len; > unsigned long new_addr = vrm->new_addr; > - bool allowed = true, seen_vma = false; > unsigned long target_addr = new_addr; > unsigned long res = -EFAULT; > unsigned long last_end; > + bool seen_vma = false; > + > VMA_ITERATOR(vmi, current->mm, start); > > /* > @@ -1833,9 +1834,6 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) > unsigned long len = min(end, vma->vm_end) - addr; > unsigned long offset, res_vma; > > - if (!allowed) > - return -EFAULT; > - > /* No gap permitted at the start of the range. */ > if (!seen_vma && start < vma->vm_start) > return -EFAULT; > @@ -1863,9 +1861,14 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) > vrm->new_addr = target_addr + offset; > vrm->old_len = vrm->new_len = len; > > - allowed = vma_multi_allowed(vma); > - if (seen_vma && !allowed) > - return -EFAULT; > + if (!vma_multi_allowed(vma)) { > + /* This is not the first VMA, abort immediately. */ > + if (seen_vma) > + return -EFAULT; > + /* This is the first, but there are more, abort. 
*/ > + if (vma->vm_end < end) > + return -EFAULT; Hm there can just also be a gap, and we permit gaps at the end (unlike at the start), right? So we might be denying a multi vma mremap for !vma_multi_allowed() reasons even if it's a single vma and a gap. AFAICS this is not regressing the behavior prior to d23cb648e365 ("mm/mremap: permit mremap() move of multiple VMAs") as such mremap() would be denied anyway by the "/* We can't remap across vm area boundaries */" check in check_prep_vma(). So the question is just if we want this odd corner case to behave like this, and if yes then be more explicit about it perhaps. > + } > > res_vma = check_prep_vma(vrm); > if (!res_vma) > @@ -1874,7 +1877,8 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) > return res_vma; > > if (!seen_vma) { > - VM_WARN_ON_ONCE(allowed && res_vma != new_addr); > + VM_WARN_ON_ONCE(vma_multi_allowed(vma) && > + res_vma != new_addr); > res = res_vma; > } > ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier 2025-08-08 14:19 ` Vlastimil Babka @ 2025-08-08 14:34 ` Lorenzo Stoakes 2025-08-08 14:46 ` Lorenzo Stoakes 0 siblings, 1 reply; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-08 14:34 UTC (permalink / raw) To: Vlastimil Babka Cc: Andrew Morton, Liam R . Howlett, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel On Fri, Aug 08, 2025 at 04:19:09PM +0200, Vlastimil Babka wrote: > On 8/3/25 13:11, Lorenzo Stoakes wrote: [snip] > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > > --- > > mm/mremap.c | 20 ++++++++++++-------- > > 1 file changed, 12 insertions(+), 8 deletions(-) [snip] > > @@ -1863,9 +1861,14 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) > > vrm->new_addr = target_addr + offset; > > vrm->old_len = vrm->new_len = len; > > > > - allowed = vma_multi_allowed(vma); > > - if (seen_vma && !allowed) > > - return -EFAULT; > > + if (!vma_multi_allowed(vma)) { > > + /* This is not the first VMA, abort immediately. */ > > + if (seen_vma) > > + return -EFAULT; > > + /* This is the first, but there are more, abort. */ > > + if (vma->vm_end < end) > > + return -EFAULT; > > Hm there can just also be a gap, and we permit gaps at the end (unlike at > the start), right? I don't think we should allow a single VMA with a gap, it's actually more correct to maintain existing behaviour in this case. > So we might be denying a multi vma mremap for !vma_multi_allowed() > reasons even if it's a single vma and a gap. This is therefore a useful exercise in preventing us from permitting this case I think. > > AFAICS this is not regressing the behavior prior to d23cb648e365 > ("mm/mremap: permit mremap() move of multiple VMAs") as such mremap() would > be denied anyway by the "/* We can't remap across vm area boundaries */" > check in check_prep_vma(). Yup. And this code is _only_ called for MREMAP_FIXED. So nothing else is impacted. > > So the question is just if we want this odd corner case to behave like this, > and if yes then be more explicit about it perhaps. We definitely do IMO. There's no reason to change this behaviour. The end gap thing in multi was more a product of 'why not permit it' but now is more a case of 'it means we don't have to go check or fail partially'. So I think this is fine. > > > + } > > > > res_vma = check_prep_vma(vrm); > > if (!res_vma) > > @@ -1874,7 +1877,8 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) > > return res_vma; > > > > if (!seen_vma) { > > - VM_WARN_ON_ONCE(allowed && res_vma != new_addr); > > + VM_WARN_ON_ONCE(vma_multi_allowed(vma) && > > + res_vma != new_addr); > > res = res_vma; > > } > > > I can update the commit msg accordingly... ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier 2025-08-08 14:34 ` Lorenzo Stoakes @ 2025-08-08 14:46 ` Lorenzo Stoakes 0 siblings, 0 replies; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-08 14:46 UTC (permalink / raw) To: Vlastimil Babka Cc: Andrew Morton, Liam R . Howlett, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel On Fri, Aug 08, 2025 at 03:34:13PM +0100, Lorenzo Stoakes wrote: > On Fri, Aug 08, 2025 at 04:19:09PM +0200, Vlastimil Babka wrote: > > On 8/3/25 13:11, Lorenzo Stoakes wrote: > [snip] > > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > > > --- > > > mm/mremap.c | 20 ++++++++++++-------- > > > 1 file changed, 12 insertions(+), 8 deletions(-) > [snip] > > > @@ -1863,9 +1861,14 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) > > > vrm->new_addr = target_addr + offset; > > > vrm->old_len = vrm->new_len = len; > > > > > > - allowed = vma_multi_allowed(vma); > > > - if (seen_vma && !allowed) > > > - return -EFAULT; > > > + if (!vma_multi_allowed(vma)) { > > > + /* This is not the first VMA, abort immediately. */ > > > + if (seen_vma) > > > + return -EFAULT; > > > + /* This is the first, but there are more, abort. */ > > > + if (vma->vm_end < end) > > > + return -EFAULT; > > > > Hm there can just also be a gap, and we permit gaps at the end (unlike at > > the start), right? > > I don't think we should allow a single VMA with gap, it's actually more > correct to maintain existing behavour in this case. > > > So we might be denying a multi vma mremap for !vma_multi_allowed() > > reasons even if it's a single vma and a gap. > > This is therfore a useful exercise in preventing us from permitting this > case I think. > > > > > AFAICS this is not regressing the behavior prior to d23cb648e365 > > ("mm/mremap: permit mremap() move of multiple VMAs") as such mremap() would > > be denied anyway by the "/* We can't remap across vm area boundaries */" > > check in check_prep_vma(). > > Yup. > > And this code is _only_ called for MREMAP_FIXED. So nothing else is impacted. > > > > > So the question is just if we want this odd corner case to behave like this, > > and if yes then be more explicit about it perhaps. > > We definitely do IMO. There's no reason to change this behaviour. > > The end gap thing in multi was more a product of 'why not permit it' but > now is more a case of 'it means we don't have to go check or fail > partially'. > > So I think this is fine. > > > > > > + } > > > > > > res_vma = check_prep_vma(vrm); > > > if (!res_vma) > > > @@ -1874,7 +1877,8 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) > > > return res_vma; > > > > > > if (!seen_vma) { > > > - VM_WARN_ON_ONCE(allowed && res_vma != new_addr); > > > + VM_WARN_ON_ONCE(vma_multi_allowed(vma) && > > > + res_vma != new_addr); > > > res = res_vma; > > > } > > > > > > > I can update the commit msg accordingly... I have asked Andrew to update with a clear explanation of this (see [0]), and made clear why I feel it's consistent for us to disallow this behaviour for non-eligible VMAs while permitting it for eligible ones. It means we can simply say 'for eligible pure moves, you may specify gaps between or after VMAs spanning 1 or more VMAs'. Cheers, Lorenzo [0]: https://lore.kernel.org/linux-mm/df80b788-0546-4b78-a2fa-64d26e5a35b8@lucifer.local/ ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier 2025-08-03 11:11 ` [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier Lorenzo Stoakes 2025-08-08 14:19 ` Vlastimil Babka @ 2025-08-08 14:43 ` Lorenzo Stoakes 2025-08-08 17:17 ` Vlastimil Babka 2025-08-16 7:52 ` Lorenzo Stoakes 3 siblings, 0 replies; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-08 14:43 UTC (permalink / raw) To: Andrew Morton Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel Andrew, please adjust commit message as follows: On Sun, Aug 03, 2025 at 12:11:22PM +0100, Lorenzo Stoakes wrote: > In remap_move() we must account for both a single VMA case, where we are > permitted to move a single VMA regardless of multi-VMA move eligibility, and > multiple VMAs which, of course, must be eligible for such an operation. > > We determine this via vma_multi_allowed(). > > Currently, if the first VMA is not eligible, but others are, we will move > the first then return an error. This is not ideal, as we are performing an > operation which we don't need to do which has an impact on the memory > mapping. > > We can very easily determine if this is a multi VMA move prior to the move > of the first VMA, by checking vma->vm_end vs. the specified end address. > > Therefore this patch does so, and as a result eliminates unnecessary logic > around tracking whether the first VMA was permitted or not. > > This is most useful for cases where a user attempts to erroneously move > multiple VMAs which are not eligible for non-transient reasons - for > instance, UFFD-armed VMAs, or file-backed VMAs backed by a file system or > driver which specifies a custom f_op->get_unmapped_area. > > In the less likely instance of a failure due to transient issues such as > out of memory or mapping limits being hit, the issue is already likely > fatal and so the fact the operation may be partially complete is > acceptable. > Previously, any attempt to solely move a VMA would require that the span specified reside within the span of that single VMA, with no gaps before or afterwards. After commit d23cb648e365 ("mm/mremap: permit mremap() move of multiple VMAs"), the multi VMA move permitted a gap to exist only after VMAs. This was done to provide maximum flexibility. However, we have consequently permitted this behaviour for the move of a single VMA including those not eligible for multi VMA move. The change introduced here means that we no longer permit non-eligible VMAs to be moved in this way. This is consistent, as it means all eligible VMA moves are treated the same, and all non-eligible moves are treated as they were before. This change does not break previous behaviour, which equally would have disallowed such a move (only in all cases). > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier 2025-08-03 11:11 ` [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier Lorenzo Stoakes 2025-08-08 14:19 ` Vlastimil Babka 2025-08-08 14:43 ` Lorenzo Stoakes @ 2025-08-08 17:17 ` Vlastimil Babka 2025-08-16 7:52 ` Lorenzo Stoakes 3 siblings, 0 replies; 14+ messages in thread From: Vlastimil Babka @ 2025-08-08 17:17 UTC (permalink / raw) To: Lorenzo Stoakes, Andrew Morton Cc: Liam R . Howlett, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel On 8/3/25 13:11, Lorenzo Stoakes wrote: > In remap_move() we must account for both a single VMA case, where we are > permitted to move a single VMA regardless of multi-VMA move eligiblity, and > multiple VMAs which, of course, must be eligible for such an operation. > > We determine this via vma_multi_allowed(). > > Currently, if the first VMA is not eligible, but others are, we will move > the first then return an error. This is not ideal, as we are performing an > operation which we don't need to do which has an impact on the memory > mapping. > > We can very easily determine if this is a multi VMA move prior to the move > of the first VMA, by checking vma->vm_end vs. the specified end address. > > Therefore this patch does so, and as a result eliminates unnecessary logic > around tracking whether the first VMA was permitted or not. > > This is most useful for cases where a user attempts to erroneously move > mutliple VMAs which are not eligible for non-transient reasons - for > instance, UFFD-armed VMAs, or file-backed VMAs backed by a file system or > driver which specifies a custom f_op->get_unmapped_area. > > In the less likely instance of a failure due to transient issues such as > out of memory or mapping limits being hit, the issue is already likely > fatal and so the fact the operation may be partially complete is > acceptable. > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> With the updated commit log, Reviewed-by: Vlastimil Babka <vbabka@suse.cz> ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier 2025-08-03 11:11 ` [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier Lorenzo Stoakes ` (2 preceding siblings ...) 2025-08-08 17:17 ` Vlastimil Babka @ 2025-08-16 7:52 ` Lorenzo Stoakes 3 siblings, 0 replies; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-16 7:52 UTC (permalink / raw) To: Andrew Morton Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel Hi Andrew, Fixing a silly issue that syzbot picked up, I reuse vma incorrectly, very easy fix, fix-patch below. (Vlastimil had a look at this off-list). Cheers, Lorenzo ----8<---- From 87fc8e42946938688d637f694cd6e80552a26667 Mon Sep 17 00:00:00 2001 From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Date: Sat, 16 Aug 2025 08:37:41 +0100 Subject: [PATCH] mm/mremap: do not incorrectly reference invalid VMA in VM_WARN_ON_ONCE() The VMA which is referenced here may have since been merged (which is the entire point of the warning), and yet we still reference it. Fix this by storing whether or not a multi move is permitted ahead of time and have the VM_WARN_ON_ONCE() be predicated on this. Reported-by: syzbot+4e221abf50259362f4f4@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-mm/689ff5f6.050a0220.e29e5.0030.GAE@google.com/ Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> --- mm/mremap.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/mm/mremap.c b/mm/mremap.c index 18aa0b3b828f..33b642076205 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -1837,6 +1837,7 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) unsigned long addr = max(vma->vm_start, start); unsigned long len = min(end, vma->vm_end) - addr; unsigned long offset, res_vma; + bool multi_allowed; /* No gap permitted at the start of the range. */ if (!seen_vma && start < vma->vm_start) @@ -1865,7 +1866,8 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) vrm->new_addr = target_addr + offset; vrm->old_len = vrm->new_len = len; - if (!vma_multi_allowed(vma)) { + multi_allowed = vma_multi_allowed(vma); + if (!multi_allowed) { /* This is not the first VMA, abort immediately. */ if (seen_vma) return -EFAULT; @@ -1881,8 +1883,7 @@ static unsigned long remap_move(struct vma_remap_struct *vrm) return res_vma; if (!seen_vma) { - VM_WARN_ON_ONCE(vma_multi_allowed(vma) && - res_vma != new_addr); + VM_WARN_ON_ONCE(multi_allowed && res_vma != new_addr); res = res_vma; } -- 2.50.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 6.17 3/3] selftests/mm: add test for invalid multi VMA operations 2025-08-03 11:11 [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes 2025-08-03 11:11 ` [PATCH 6.17 1/3] mm/mremap: allow multi-VMA move when filesystem uses thp_get_unmapped_area Lorenzo Stoakes 2025-08-03 11:11 ` [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier Lorenzo Stoakes @ 2025-08-03 11:11 ` Lorenzo Stoakes 2025-08-08 13:19 ` [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes 2025-08-12 4:01 ` Andrew Morton 4 siblings, 0 replies; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-03 11:11 UTC (permalink / raw) To: Andrew Morton Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel We can use UFFD to easily assert invalid multi VMA moves, so do so, asserting expected behaviour when VMAs invalid for a multi VMA operation are encountered. We assert both that such operations are not permitted, and that we do not even attempt to move the first VMA under these circumstances. We also assert that we can still move a single VMA regardless. We then assert that a partial failure can occur if the invalid VMA appears later in the range of multiple VMAs, both at the very next VMA, and also at the end of the range. As part of this change, we are using the is_range_valid() helper more aggressively. Therefore, fix a bug where stale buffered data would hang around on success, causing subsequent calls to is_range_valid() to potentially give invalid results. We simply have to fflush() the stream on success to resolve this issue. Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> --- tools/testing/selftests/mm/mremap_test.c | 264 ++++++++++++++++++++++- 1 file changed, 261 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/mm/mremap_test.c b/tools/testing/selftests/mm/mremap_test.c index fccf9e797a0c..5bd52a951cbd 100644 --- a/tools/testing/selftests/mm/mremap_test.c +++ b/tools/testing/selftests/mm/mremap_test.c @@ -5,10 +5,14 @@ #define _GNU_SOURCE #include <errno.h> +#include <fcntl.h> +#include <linux/userfaultfd.h> #include <stdlib.h> #include <stdio.h> #include <string.h> +#include <sys/ioctl.h> #include <sys/mman.h> +#include <syscall.h> #include <time.h> #include <stdbool.h> @@ -168,6 +172,7 @@ static bool is_range_mapped(FILE *maps_fp, unsigned long start, if (first_val <= start && second_val >= end) { success = true; + fflush(maps_fp); break; } } @@ -175,6 +180,15 @@ static bool is_range_mapped(FILE *maps_fp, unsigned long start, return success; } +/* Check if [ptr, ptr + size) mapped in /proc/self/maps. */ +static bool is_ptr_mapped(FILE *maps_fp, void *ptr, unsigned long size) +{ + unsigned long start = (unsigned long)ptr; + unsigned long end = start + size; + + return is_range_mapped(maps_fp, start, end); +} + /* * Returns the start address of the mapping on success, else returns * NULL on failure. @@ -733,6 +747,249 @@ static void mremap_move_multiple_vmas_split(unsigned int pattern_seed, dont_unmap ? 
" [dontunnmap]" : ""); } +#ifdef __NR_userfaultfd +static void mremap_move_multi_invalid_vmas(FILE *maps_fp, + unsigned long page_size) +{ + char *test_name = "mremap move multiple invalid vmas"; + const size_t size = 10 * page_size; + bool success = true; + char *ptr, *tgt_ptr; + int uffd, err, i; + void *res; + struct uffdio_api api = { + .api = UFFD_API, + .features = UFFD_EVENT_PAGEFAULT, + }; + + uffd = syscall(__NR_userfaultfd, O_NONBLOCK); + if (uffd == -1) { + err = errno; + perror("userfaultfd"); + if (err == EPERM) { + ksft_test_result_skip("%s - missing uffd", test_name); + return; + } + success = false; + goto out; + } + if (ioctl(uffd, UFFDIO_API, &api)) { + perror("ioctl UFFDIO_API"); + success = false; + goto out_close_uffd; + } + + ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANON, -1, 0); + if (ptr == MAP_FAILED) { + perror("mmap"); + success = false; + goto out_close_uffd; + } + + tgt_ptr = mmap(NULL, size, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0); + if (tgt_ptr == MAP_FAILED) { + perror("mmap"); + success = false; + goto out_close_uffd; + } + if (munmap(tgt_ptr, size)) { + perror("munmap"); + success = false; + goto out_unmap; + } + + /* + * Unmap so we end up with: + * + * 0 2 4 6 8 10 offset in buffer + * |*| |*| |*| |*| |*| + * |*| |*| |*| |*| |*| + * + * Additionally, register each with UFFD. + */ + for (i = 0; i < 10; i += 2) { + void *unmap_ptr = &ptr[(i + 1) * page_size]; + unsigned long start = (unsigned long)&ptr[i * page_size]; + struct uffdio_register reg = { + .range = { + .start = start, + .len = page_size, + }, + .mode = UFFDIO_REGISTER_MODE_MISSING, + }; + + if (ioctl(uffd, UFFDIO_REGISTER, ®) == -1) { + perror("ioctl UFFDIO_REGISTER"); + success = false; + goto out_unmap; + } + if (munmap(unmap_ptr, page_size)) { + perror("munmap"); + success = false; + goto out_unmap; + } + } + + /* + * Now try to move the entire range which is invalid for multi VMA move. + * + * This will fail, and no VMA should be moved, as we check this ahead of + * time. + */ + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr); + err = errno; + if (res != MAP_FAILED) { + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n"); + success = false; + goto out_unmap; + } + if (err != EFAULT) { + errno = err; + perror("mrmeap() unexpected error"); + success = false; + goto out_unmap; + } + if (is_ptr_mapped(maps_fp, tgt_ptr, page_size)) { + fprintf(stderr, + "Invalid uffd-armed VMA at start of multi range moved\n"); + success = false; + goto out_unmap; + } + + /* + * Now try to move a single VMA, this should succeed as not multi VMA + * move. + */ + res = mremap(ptr, page_size, page_size, + MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr); + if (res == MAP_FAILED) { + perror("mremap single invalid-multi VMA"); + success = false; + goto out_unmap; + } + + /* + * Unmap the VMA, and remap a non-uffd registered (therefore, multi VMA + * move valid) VMA at the start of ptr range. + */ + if (munmap(tgt_ptr, page_size)) { + perror("munmap"); + success = false; + goto out_unmap; + } + res = mmap(ptr, page_size, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); + if (res == MAP_FAILED) { + perror("mmap"); + success = false; + goto out_unmap; + } + + /* + * Now try to move the entire range, we should succeed in moving the + * first VMA, but no others, and report a failure. 
+ */ + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr); + err = errno; + if (res != MAP_FAILED) { + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n"); + success = false; + goto out_unmap; + } + if (err != EFAULT) { + errno = err; + perror("mrmeap() unexpected error"); + success = false; + goto out_unmap; + } + if (!is_ptr_mapped(maps_fp, tgt_ptr, page_size)) { + fprintf(stderr, "Valid VMA not moved\n"); + success = false; + goto out_unmap; + } + + /* + * Unmap the VMA, and map valid VMA at start of ptr range, and replace + * all existing multi-move invalid VMAs, except the last, with valid + * multi-move VMAs. + */ + if (munmap(tgt_ptr, page_size)) { + perror("munmap"); + success = false; + goto out_unmap; + } + if (munmap(ptr, size - 2 * page_size)) { + perror("munmap"); + success = false; + goto out_unmap; + } + for (i = 0; i < 8; i += 2) { + res = mmap(&ptr[i * page_size], page_size, + PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0); + if (res == MAP_FAILED) { + perror("mmap"); + success = false; + goto out_unmap; + } + } + + /* + * Now try to move the entire range, we should succeed in moving all but + * the last VMA, and report a failure. + */ + res = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tgt_ptr); + err = errno; + if (res != MAP_FAILED) { + fprintf(stderr, "mremap() succeeded for multi VMA uffd armed\n"); + success = false; + goto out_unmap; + } + if (err != EFAULT) { + errno = err; + perror("mrmeap() unexpected error"); + success = false; + goto out_unmap; + } + + for (i = 0; i < 10; i += 2) { + bool is_mapped = is_ptr_mapped(maps_fp, + &tgt_ptr[i * page_size], page_size); + + if (i < 8 && !is_mapped) { + fprintf(stderr, "Valid VMA not moved at %d\n", i); + success = false; + goto out_unmap; + } else if (i == 8 && is_mapped) { + fprintf(stderr, "Invalid VMA moved at %d\n", i); + success = false; + goto out_unmap; + } + } + +out_unmap: + if (munmap(tgt_ptr, size)) + perror("munmap tgt"); + if (munmap(ptr, size)) + perror("munmap src"); +out_close_uffd: + close(uffd); +out: + if (success) + ksft_test_result_pass("%s\n", test_name); + else + ksft_test_result_fail("%s\n", test_name); +} +#else +static void mremap_move_multi_invalid_vmas(FILE *maps_fp, unsigned long page_size) +{ + char *test_name = "mremap move multiple invalid vmas"; + + ksft_test_result_skip("%s - missing uffd", test_name); +} +#endif /* __NR_userfaultfd */ + /* Returns the time taken for the remap on success else returns -1. 
*/ static long long remap_region(struct config c, unsigned int threshold_mb, char *rand_addr) @@ -1074,7 +1331,7 @@ int main(int argc, char **argv) char *rand_addr; size_t rand_size; int num_expand_tests = 2; - int num_misc_tests = 8; + int num_misc_tests = 9; struct test test_cases[MAX_TEST] = {}; struct test perf_test_cases[MAX_PERF_TEST]; int page_size; @@ -1197,8 +1454,6 @@ int main(int argc, char **argv) mremap_expand_merge(maps_fp, page_size); mremap_expand_merge_offset(maps_fp, page_size); - fclose(maps_fp); - mremap_move_within_range(pattern_seed, rand_addr); mremap_move_1mb_from_start(pattern_seed, rand_addr); mremap_shrink_multiple_vmas(page_size, /* inplace= */true); @@ -1207,6 +1462,9 @@ int main(int argc, char **argv) mremap_move_multiple_vmas(pattern_seed, page_size, /* dontunmap= */ true); mremap_move_multiple_vmas_split(pattern_seed, page_size, /* dontunmap= */ false); mremap_move_multiple_vmas_split(pattern_seed, page_size, /* dontunmap= */ true); + mremap_move_multi_invalid_vmas(maps_fp, page_size); + + fclose(maps_fp); if (run_perf_tests) { ksft_print_msg("\n%s\n", -- 2.50.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
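For completeness, a rough way to build and exercise the new case on a patched tree (not part of the patch; this is just the usual kselftest invocation, run as root or with userfaultfd permitted for unprivileged users, otherwise the new case skips itself):

    make -C tools/testing/selftests TARGETS=mm
    ./tools/testing/selftests/mm/mremap_test

The new case reports itself as "mremap move multiple invalid vmas".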
* Re: [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier 2025-08-03 11:11 [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes ` (2 preceding siblings ...) 2025-08-03 11:11 ` [PATCH 6.17 3/3] selftests/mm: add test for invalid multi VMA operations Lorenzo Stoakes @ 2025-08-08 13:19 ` Lorenzo Stoakes 2025-08-12 4:01 ` Andrew Morton 4 siblings, 0 replies; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-08 13:19 UTC (permalink / raw) To: Andrew Morton Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel On Sun, Aug 03, 2025 at 12:11:20PM +0100, Lorenzo Stoakes wrote: > The multi-VMA move functionality introduced in commit d23cb648e365 > ("mm/mremap: permit mremap() move of multiple VMAs") doesn't allow moves of > file-backed mappings which specify a custom f_op->get_unmapped_area handler > excepting hugetlb and shmem. > > We expand this to include thp_get_unmapped_area to support file-backed > mappings for filesystems which use large folios. > > Additionally, when the first VMA in a range is not compatible with a > multi-VMA move, instead of moving the first VMA and returning an error, > this series results in us not moving anything and returning an error > immediately. > > Examining this second change in detail: > > The semantics of multi-VMA moves in mremap() very clearly indicate that a > failure can result in a partial move of VMAs. > > This is in line with other aggregate operations within the kernel, which > share these semantics. > > There are two classes of failures we're concerned with - eligibility for > multi-VMA move, and transient failures that would occur even if the user > individually moved each VMA. > > The latter are either a product of the user using mremap() incorrectly or a > failure due to out-of-memory conditions (which, given the allocations > involved are small, would likely be fatal in any case), or hitting the > mapping limit. Correction here: it's very confusing to refer to user error as a 'transient' failure (it's not), so replace this sentence with: The latter is due to out-of-memory conditions (which, given the allocations involved are small, would likely be fatal in any case), or hitting the mapping limit. > > Regardless of the cause, transient issues would be fatal anyway, so it > isn't really material which VMAs succeeded at being moved or not. > > However, when it comes to multi-VMA move eligibility, we face another > issue - we must allow a single VMA to succeed regardless of this eligibility > (as, of course, it is not a multi-VMA move) - but we must then fail > multi-VMA operations. > > The two means by which VMAs may fail the eligibility test are - the VMAs > being UFFD-armed, or the VMA being file-backed and providing its own > f_op->get_unmapped_area() helper (because this may result in MREMAP_FIXED > being disregarded), excepting those known to correctly handle MREMAP_FIXED. > > It is therefore conceivable that a user could erroneously try to use this > functionality in these instances, and would prefer to not perform any move > at all should that occur. > > This series therefore avoids any move of subsequent VMAs should the first > be multi-VMA move ineligible and the input span exceed that of the first > VMA. > > We also add detailed test logic to assert that multi VMA move with > ineligible VMAs functions as expected. > > > Andrew - I think this should go in as a hot-fix for 6.17, as it would be > better to change the semantics here before the functionality appears in a > released kernel. > > Lorenzo Stoakes (3): > mm/mremap: allow multi-VMA move when filesystem uses > thp_get_unmapped_area > mm/mremap: catch invalid multi VMA moves earlier > selftests/mm: add test for invalid multi VMA operations > > mm/mremap.c | 40 ++-- > tools/testing/selftests/mm/mremap_test.c | 264 ++++++++++++++++++++++- > 2 files changed, 284 insertions(+), 20 deletions(-) > > -- > 2.50.1 Cheers, Lorenzo ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier 2025-08-03 11:11 [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes ` (3 preceding siblings ...) 2025-08-08 13:19 ` [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes @ 2025-08-12 4:01 ` Andrew Morton 2025-08-12 5:32 ` Lorenzo Stoakes 4 siblings, 1 reply; 14+ messages in thread From: Andrew Morton @ 2025-08-12 4:01 UTC (permalink / raw) To: Lorenzo Stoakes Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel On Sun, 3 Aug 2025 12:11:20 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > Subject: [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier I missed this series until just now, because the Subject: innovation fooled me. My pattern recognizer saw "[PATCH 6.17 ..." and said "that's an LTS patch". Because that's precisely what they do. By the million. I can't say I find this change valuable, really. Now wondering which other patches I missed. ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier 2025-08-12 4:01 ` Andrew Morton @ 2025-08-12 5:32 ` Lorenzo Stoakes 0 siblings, 0 replies; 14+ messages in thread From: Lorenzo Stoakes @ 2025-08-12 5:32 UTC (permalink / raw) To: Andrew Morton Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand, Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm, linux-kernel On Mon, Aug 11, 2025 at 09:01:58PM -0700, Andrew Morton wrote: > On Sun, 3 Aug 2025 12:11:20 +0100 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote: > > > Subject: [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier > > I missed this series until just now, because the Subject: innovation > fooled me. > > My pattern recognizer saw "[PATCH 6.17 ..." and said "that's an LTS > patch". Because that's precisely what they do. By the million. > > I can't say I find this change valuable, really. > > Now wondering which other patches I missed. Yes sorry, just wanted to highlight it as hotfix and it seemed sensible (at the time)... ^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2025-08-16 7:53 UTC | newest] Thread overview: 14+ messages (download: mbox.gz follow: Atom feed -- links below jump to the message on this page -- 2025-08-03 11:11 [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes 2025-08-03 11:11 ` [PATCH 6.17 1/3] mm/mremap: allow multi-VMA move when filesystem uses thp_get_unmapped_area Lorenzo Stoakes 2025-08-08 13:38 ` Vlastimil Babka 2025-08-03 11:11 ` [PATCH 6.17 2/3] mm/mremap: catch invalid multi VMA moves earlier Lorenzo Stoakes 2025-08-08 14:19 ` Vlastimil Babka 2025-08-08 14:34 ` Lorenzo Stoakes 2025-08-08 14:46 ` Lorenzo Stoakes 2025-08-08 14:43 ` Lorenzo Stoakes 2025-08-08 17:17 ` Vlastimil Babka 2025-08-16 7:52 ` Lorenzo Stoakes 2025-08-03 11:11 ` [PATCH 6.17 3/3] selftests/mm: add test for invalid multi VMA operations Lorenzo Stoakes 2025-08-08 13:19 ` [PATCH 6.17 0/3] mm/mremap: allow multi-VMA move for huge folio, find ineligible earlier Lorenzo Stoakes 2025-08-12 4:01 ` Andrew Morton 2025-08-12 5:32 ` Lorenzo Stoakes