* cleanup vfree and vunmap
@ 2023-01-19 10:02 Christoph Hellwig
2023-01-19 10:02 ` [PATCH 01/10] vmalloc: reject vmap with VM_FLUSH_RESET_PERMS Christoph Hellwig
` (10 more replies)
0 siblings, 11 replies; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
Hi all,
this little series untangles the vfree and vunmap code paths a bit.
Note that it depends on 'Revert "remoteproc: qcom_q6v5_mss: map/unmap metadata
region before/after use"' in linux-next.
Diffstat:
vmalloc.c | 304 +++++++++++++++++++++++++++-----------------------------------
1 file changed, 134 insertions(+), 170 deletions(-)
* [PATCH 01/10] vmalloc: reject vmap with VM_FLUSH_RESET_PERMS
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:46 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 02/10] mm: remove __vfree Christoph Hellwig
` (9 subsequent siblings)
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
VM_FLUSH_RESET_PERMS is just for use with vmalloc as it is tied to freeing
the underlying pages.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0781c5a8e0e73d..6957d15d526e46 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2883,6 +2883,9 @@ void *vmap(struct page **pages, unsigned int count,
might_sleep();
+ if (WARN_ON_ONCE(flags & VM_FLUSH_RESET_PERMS))
+ return NULL;
+
/*
* Your top guard is someone else's bottom guard. Not having a top
* guard compromises someone else's mappings too.
--
2.39.0
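For context: VM_FLUSH_RESET_PERMS marks memory whose permissions may be
changed via set_memory_*() and whose direct-map aliases must therefore be
reset before the backing pages are freed. That only makes sense when
vfree() also owns and frees the pages; vmap() maps pages the caller owns
and that outlive the mapping. A minimal sketch of the two usage patterns
-- purely illustrative, not taken from the series, with made-up variable
names:

	/* vmalloc-style: the allocation owns its pages, so resetting
	 * the direct map on vfree() is meaningful */
	void *text = __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
			GFP_KERNEL, PAGE_KERNEL, VM_FLUSH_RESET_PERMS,
			NUMA_NO_NODE, __builtin_return_address(0));

	set_memory_ro((unsigned long)text, size >> PAGE_SHIFT);
	set_memory_x((unsigned long)text, size >> PAGE_SHIFT);
	...
	vfree(text);	/* resets the direct map, then frees the pages */

	/* vmap-style: the pages belong to the caller and outlive the
	 * mapping, so the flag is meaningless here; after this patch
	 * the call fails with a one-time warning and returns NULL */
	void *va = vmap(pages, nr_pages, VM_MAP | VM_FLUSH_RESET_PERMS,
			PAGE_KERNEL);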
* [PATCH 02/10] mm: remove __vfree
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
2023-01-19 10:02 ` [PATCH 01/10] vmalloc: reject vmap with VM_FLUSH_RESET_PERMS Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:47 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 03/10] mm: remove __vfree_deferred Christoph Hellwig
` (8 subsequent siblings)
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
__vfree is a subset of vfree that just skips a few checks and is only
used by vfree and an error cleanup path. Fold __vfree into vfree
and switch the only other caller to call vfree() instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 18 ++++++------------
1 file changed, 6 insertions(+), 12 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6957d15d526e46..b989828b45109a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2801,14 +2801,6 @@ void vfree_atomic(const void *addr)
__vfree_deferred(addr);
}
-static void __vfree(const void *addr)
-{
- if (unlikely(in_interrupt()))
- __vfree_deferred(addr);
- else
- __vunmap(addr, 1);
-}
-
/**
* vfree - Release memory allocated by vmalloc()
* @addr: Memory base address
@@ -2836,8 +2828,10 @@ void vfree(const void *addr)
if (!addr)
return;
-
- __vfree(addr);
+ if (unlikely(in_interrupt()))
+ __vfree_deferred(addr);
+ else
+ __vunmap(addr, 1);
}
EXPORT_SYMBOL(vfree);
@@ -3104,7 +3098,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
/*
* If not enough pages were obtained to accomplish an
- * allocation request, free them via __vfree() if any.
+ * allocation request, free them via vfree() if any.
*/
if (area->nr_pages != nr_small_pages) {
warn_alloc(gfp_mask, NULL,
@@ -3144,7 +3138,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
return area->addr;
fail:
- __vfree(area->addr);
+ vfree(area->addr);
return NULL;
}
--
2.39.0
* [PATCH 03/10] mm: remove __vfree_deferred
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
2023-01-19 10:02 ` [PATCH 01/10] vmalloc: reject vmap with VM_FLUSH_RESET_PERMS Christoph Hellwig
2023-01-19 10:02 ` [PATCH 02/10] mm: remove __vfree Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:47 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 04/10] mm: move vmalloc_init and free_work down in vmalloc.c Christoph Hellwig
` (7 subsequent siblings)
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
Fold __vfree_deferred into vfree_atomic, and call vfree_atomic early on
from vfree if called from interrupt context so that the extra low-level
helper can be avoided.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 43 +++++++++++++++++--------------------------
1 file changed, 17 insertions(+), 26 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b989828b45109a..fafb6227f4428f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2769,20 +2769,6 @@ static void __vunmap(const void *addr, int deallocate_pages)
kfree(area);
}
-static inline void __vfree_deferred(const void *addr)
-{
- /*
- * Use raw_cpu_ptr() because this can be called from preemptible
- * context. Preemption is absolutely fine here, because the llist_add()
- * implementation is lockless, so it works even if we are adding to
- * another cpu's list. schedule_work() should be fine with this too.
- */
- struct vfree_deferred *p = raw_cpu_ptr(&vfree_deferred);
-
- if (llist_add((struct llist_node *)addr, &p->list))
- schedule_work(&p->wq);
-}
-
/**
* vfree_atomic - release memory allocated by vmalloc()
* @addr: memory base address
@@ -2792,13 +2778,19 @@ static inline void __vfree_deferred(const void *addr)
*/
void vfree_atomic(const void *addr)
{
- BUG_ON(in_nmi());
+ struct vfree_deferred *p = raw_cpu_ptr(&vfree_deferred);
+ BUG_ON(in_nmi());
kmemleak_free(addr);
- if (!addr)
- return;
- __vfree_deferred(addr);
+ /*
+ * Use raw_cpu_ptr() because this can be called from preemptible
+ * context. Preemption is absolutely fine here, because the llist_add()
+ * implementation is lockless, so it works even if we are adding to
+ * another cpu's list. schedule_work() should be fine with this too.
+ */
+ if (addr && llist_add((struct llist_node *)addr, &p->list))
+ schedule_work(&p->wq);
}
/**
@@ -2820,17 +2812,16 @@ void vfree_atomic(const void *addr)
*/
void vfree(const void *addr)
{
- BUG_ON(in_nmi());
+ if (unlikely(in_interrupt())) {
+ vfree_atomic(addr);
+ return;
+ }
+ BUG_ON(in_nmi());
kmemleak_free(addr);
+ might_sleep();
- might_sleep_if(!in_interrupt());
-
- if (!addr)
- return;
- if (unlikely(in_interrupt()))
- __vfree_deferred(addr);
- else
+ if (addr)
__vunmap(addr, 1);
}
EXPORT_SYMBOL(vfree);
--
2.39.0
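The deferral that vfree_atomic() now open-codes hinges on llist_add()
returning true only when the node lands on a previously empty list, so
the work item is scheduled exactly once per batch of pending frees. A
stand-alone sketch of the same pattern, with a made-up free_object() as
a placeholder for the real freeing step (illustrative, not from the
patch):

	struct deferred_free {
		struct llist_head list;
		struct work_struct wq;
	};

	static void deferred_free_work(struct work_struct *w)
	{
		struct deferred_free *p =
			container_of(w, struct deferred_free, wq);
		struct llist_node *n, *t;

		/* atomically take the whole batch, then free each entry */
		llist_for_each_safe(n, t, llist_del_all(&p->list))
			free_object(n);		/* placeholder */
	}

	static void queue_deferred_free(struct deferred_free *p, void *obj)
	{
		/* the first word of the dead object doubles as its llist_node */
		if (llist_add((struct llist_node *)obj, &p->list))
			schedule_work(&p->wq);	/* list was empty: kick the worker */
	}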
* [PATCH 04/10] mm: move vmalloc_init and free_work down in vmalloc.c
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
` (2 preceding siblings ...)
2023-01-19 10:02 ` [PATCH 03/10] mm: remove __vfree_deferred Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:48 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 05/10] mm: call vfree instead of __vunmap from delayed_vfree_work Christoph Hellwig
` (6 subsequent siblings)
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
Move these two functions around a bit to avoid forward declarations.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 105 +++++++++++++++++++++++++--------------------------
1 file changed, 52 insertions(+), 53 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index fafb6227f4428f..daeb28b54663d5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -89,17 +89,6 @@ struct vfree_deferred {
};
static DEFINE_PER_CPU(struct vfree_deferred, vfree_deferred);
-static void __vunmap(const void *, int);
-
-static void free_work(struct work_struct *w)
-{
- struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
- struct llist_node *t, *llnode;
-
- llist_for_each_safe(llnode, t, llist_del_all(&p->list))
- __vunmap((void *)llnode, 1);
-}
-
/*** Page table manipulation functions ***/
static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
phys_addr_t phys_addr, pgprot_t prot,
@@ -2449,48 +2438,6 @@ static void vmap_init_free_space(void)
}
}
-void __init vmalloc_init(void)
-{
- struct vmap_area *va;
- struct vm_struct *tmp;
- int i;
-
- /*
- * Create the cache for vmap_area objects.
- */
- vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
-
- for_each_possible_cpu(i) {
- struct vmap_block_queue *vbq;
- struct vfree_deferred *p;
-
- vbq = &per_cpu(vmap_block_queue, i);
- spin_lock_init(&vbq->lock);
- INIT_LIST_HEAD(&vbq->free);
- p = &per_cpu(vfree_deferred, i);
- init_llist_head(&p->list);
- INIT_WORK(&p->wq, free_work);
- }
-
- /* Import existing vmlist entries. */
- for (tmp = vmlist; tmp; tmp = tmp->next) {
- va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
- if (WARN_ON_ONCE(!va))
- continue;
-
- va->va_start = (unsigned long)tmp->addr;
- va->va_end = va->va_start + tmp->size;
- va->vm = tmp;
- insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
- }
-
- /*
- * Now we can initialize a free vmap space.
- */
- vmap_init_free_space();
- vmap_initialized = true;
-}
-
static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
struct vmap_area *va, unsigned long flags, const void *caller)
{
@@ -2769,6 +2716,15 @@ static void __vunmap(const void *addr, int deallocate_pages)
kfree(area);
}
+static void delayed_vfree_work(struct work_struct *w)
+{
+ struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
+ struct llist_node *t, *llnode;
+
+ llist_for_each_safe(llnode, t, llist_del_all(&p->list))
+ __vunmap((void *)llnode, 1);
+}
+
/**
* vfree_atomic - release memory allocated by vmalloc()
* @addr: memory base address
@@ -4315,3 +4271,46 @@ static int __init proc_vmalloc_init(void)
module_init(proc_vmalloc_init);
#endif
+
+void __init vmalloc_init(void)
+{
+ struct vmap_area *va;
+ struct vm_struct *tmp;
+ int i;
+
+ /*
+ * Create the cache for vmap_area objects.
+ */
+ vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
+
+ for_each_possible_cpu(i) {
+ struct vmap_block_queue *vbq;
+ struct vfree_deferred *p;
+
+ vbq = &per_cpu(vmap_block_queue, i);
+ spin_lock_init(&vbq->lock);
+ INIT_LIST_HEAD(&vbq->free);
+ p = &per_cpu(vfree_deferred, i);
+ init_llist_head(&p->list);
+ INIT_WORK(&p->wq, delayed_vfree_work);
+ }
+
+ /* Import existing vmlist entries. */
+ for (tmp = vmlist; tmp; tmp = tmp->next) {
+ va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
+ if (WARN_ON_ONCE(!va))
+ continue;
+
+ va->va_start = (unsigned long)tmp->addr;
+ va->va_end = va->va_start + tmp->size;
+ va->vm = tmp;
+ insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
+ }
+
+ /*
+ * Now we can initialize a free vmap space.
+ */
+ vmap_init_free_space();
+ vmap_initialized = true;
+}
+
--
2.39.0
* [PATCH 05/10] mm: call vfree instead of __vunmap from delayed_vfree_work
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
` (3 preceding siblings ...)
2023-01-19 10:02 ` [PATCH 04/10] mm: move vmalloc_init and free_work down in vmalloc.c Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:48 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 06/10] mm: move __remove_vm_area out of va_remove_mappings Christoph Hellwig
` (5 subsequent siblings)
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
This adds an extra, never taken, in_interrupt() branch, but will allow
us to cut down the maze of vfree helpers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index daeb28b54663d5..3c07520b8b821b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2722,7 +2722,7 @@ static void delayed_vfree_work(struct work_struct *w)
struct llist_node *t, *llnode;
llist_for_each_safe(llnode, t, llist_del_all(&p->list))
- __vunmap((void *)llnode, 1);
+ vfree(llnode);
}
/**
--
2.39.0
* [PATCH 06/10] mm: move __remove_vm_area out of va_remove_mappings
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
` (4 preceding siblings ...)
2023-01-19 10:02 ` [PATCH 05/10] mm: call vfree instead of __vunmap from delayed_vfree_work Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:48 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 07/10] mm: use remove_vm_area in __vunmap Christoph Hellwig
` (4 subsequent siblings)
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
__remove_vm_area is the only part of va_remove_mappings that requires
a vmap_area. Move the call out to the caller and only pass the vm_struct
to va_remove_mappings.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3c07520b8b821b..09c6fcfdaeb7c9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2614,18 +2614,15 @@ static inline void set_area_direct_map(const struct vm_struct *area,
set_direct_map(area->pages[i]);
}
-/* Handle removing and resetting vm mappings related to the VA's vm_struct. */
-static void va_remove_mappings(struct vmap_area *va, int deallocate_pages)
+/* Handle removing and resetting vm mappings related to the vm_struct. */
+static void va_remove_mappings(struct vm_struct *area, int deallocate_pages)
{
- struct vm_struct *area = va->vm;
unsigned long start = ULONG_MAX, end = 0;
unsigned int page_order = vm_area_page_order(area);
int flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
int flush_dmap = 0;
int i;
- __remove_vm_area(va);
-
/* If this is not VM_FLUSH_RESET_PERMS memory, no need for the below. */
if (!flush_reset)
return;
@@ -2691,7 +2688,8 @@ static void __vunmap(const void *addr, int deallocate_pages)
kasan_poison_vmalloc(area->addr, get_vm_area_size(area));
- va_remove_mappings(va, deallocate_pages);
+ __remove_vm_area(va);
+ va_remove_mappings(area, deallocate_pages);
if (deallocate_pages) {
int i;
--
2.39.0
* [PATCH 07/10] mm: use remove_vm_area in __vunmap
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
` (5 preceding siblings ...)
2023-01-19 10:02 ` [PATCH 06/10] mm: move __remove_vm_area out of va_remove_mappings Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:49 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 08/10] mm: move debug checks from __vunmap to remove_vm_area Christoph Hellwig
` (3 subsequent siblings)
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
Use the common helper to find and remove a vmap_area instead of open
coding it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 33 ++++++++++++---------------------
1 file changed, 12 insertions(+), 21 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 09c6fcfdaeb7c9..096633ba89965a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2571,20 +2571,6 @@ struct vm_struct *find_vm_area(const void *addr)
return va->vm;
}
-static struct vm_struct *__remove_vm_area(struct vmap_area *va)
-{
- struct vm_struct *vm;
-
- if (!va || !va->vm)
- return NULL;
-
- vm = va->vm;
- kasan_free_module_shadow(vm);
- free_unmap_vmap_area(va);
-
- return vm;
-}
-
/**
* remove_vm_area - find and remove a continuous kernel virtual area
* @addr: base address
@@ -2597,10 +2583,18 @@ static struct vm_struct *__remove_vm_area(struct vmap_area *va)
*/
struct vm_struct *remove_vm_area(const void *addr)
{
+ struct vmap_area *va;
+ struct vm_struct *vm;
+
might_sleep();
- return __remove_vm_area(
- find_unlink_vmap_area((unsigned long) addr));
+ va = find_unlink_vmap_area((unsigned long)addr);
+ if (!va || !va->vm)
+ return NULL;
+ vm = va->vm;
+ kasan_free_module_shadow(vm);
+ free_unmap_vmap_area(va);
+ return vm;
}
static inline void set_area_direct_map(const struct vm_struct *area,
@@ -2666,7 +2660,6 @@ static void va_remove_mappings(struct vm_struct *area, int deallocate_pages)
static void __vunmap(const void *addr, int deallocate_pages)
{
struct vm_struct *area;
- struct vmap_area *va;
if (!addr)
return;
@@ -2675,20 +2668,18 @@ static void __vunmap(const void *addr, int deallocate_pages)
addr))
return;
- va = find_unlink_vmap_area((unsigned long)addr);
- if (unlikely(!va)) {
+ area = remove_vm_area(addr);
+ if (unlikely(!area)) {
WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
addr);
return;
}
- area = va->vm;
debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
kasan_poison_vmalloc(area->addr, get_vm_area_size(area));
- __remove_vm_area(va);
va_remove_mappings(area, deallocate_pages);
if (deallocate_pages) {
--
2.39.0
* [PATCH 08/10] mm: move debug checks from __vunmap to remove_vm_area
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
` (6 preceding siblings ...)
2023-01-19 10:02 ` [PATCH 07/10] mm: use remove_vm_area in __vunmap Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:49 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 09/10] mm: split __vunmap Christoph Hellwig
` (2 subsequent siblings)
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
All these checks apply to the free_vm_area interface as well, so move
them to the common routine.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 096633ba89965a..4cb189bdd51499 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2588,11 +2588,20 @@ struct vm_struct *remove_vm_area(const void *addr)
might_sleep();
+ if (WARN(!PAGE_ALIGNED(addr), "Trying to vfree() bad address (%p)\n",
+ addr))
+ return NULL;
+
va = find_unlink_vmap_area((unsigned long)addr);
if (!va || !va->vm)
return NULL;
vm = va->vm;
+
+ debug_check_no_locks_freed(vm->addr, get_vm_area_size(vm));
+ debug_check_no_obj_freed(vm->addr, get_vm_area_size(vm));
kasan_free_module_shadow(vm);
+ kasan_poison_vmalloc(vm->addr, get_vm_area_size(vm));
+
free_unmap_vmap_area(va);
return vm;
}
@@ -2664,10 +2673,6 @@ static void __vunmap(const void *addr, int deallocate_pages)
if (!addr)
return;
- if (WARN(!PAGE_ALIGNED(addr), "Trying to vfree() bad address (%p)\n",
- addr))
- return;
-
area = remove_vm_area(addr);
if (unlikely(!area)) {
WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
@@ -2675,11 +2680,6 @@ static void __vunmap(const void *addr, int deallocate_pages)
return;
}
- debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
- debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
-
- kasan_poison_vmalloc(area->addr, get_vm_area_size(area));
-
va_remove_mappings(area, deallocate_pages);
if (deallocate_pages) {
--
2.39.0
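For reference, the other user of this common routine, free_vm_area(), is
roughly the following (paraphrased from mm/vmalloc.c):

	void free_vm_area(struct vm_struct *area)
	{
		struct vm_struct *ret;

		ret = remove_vm_area(area->addr);
		BUG_ON(ret != area);
		kfree(area);
	}

so moving the debug checks and the KASAN poisoning into remove_vm_area()
covers that path as well without duplicating any code.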
* [PATCH 09/10] mm: split __vunmap
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
` (7 preceding siblings ...)
2023-01-19 10:02 ` [PATCH 08/10] mm: move debug checks from __vunmap to remove_vm_area Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:50 ` Uladzislau Rezki
2023-01-19 10:02 ` [PATCH 10/10] mm: refactor va_remove_mappings Christoph Hellwig
2023-01-19 16:45 ` cleanup vfree and vunmap Uladzislau Rezki
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
vunmap only needs to find and free the vmap_area and vm_strut, so open
code that there and merge the rest of the code into vfree.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 85 ++++++++++++++++++++++++++--------------------------
1 file changed, 42 insertions(+), 43 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4cb189bdd51499..791d906d7e407c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2666,45 +2666,6 @@ static void va_remove_mappings(struct vm_struct *area, int deallocate_pages)
set_area_direct_map(area, set_direct_map_default_noflush);
}
-static void __vunmap(const void *addr, int deallocate_pages)
-{
- struct vm_struct *area;
-
- if (!addr)
- return;
-
- area = remove_vm_area(addr);
- if (unlikely(!area)) {
- WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
- addr);
- return;
- }
-
- va_remove_mappings(area, deallocate_pages);
-
- if (deallocate_pages) {
- int i;
-
- for (i = 0; i < area->nr_pages; i++) {
- struct page *page = area->pages[i];
-
- BUG_ON(!page);
- mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
- /*
- * High-order allocs for huge vmallocs are split, so
- * can be freed as an array of order-0 allocations
- */
- __free_pages(page, 0);
- cond_resched();
- }
- atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);
-
- kvfree(area->pages);
- }
-
- kfree(area);
-}
-
static void delayed_vfree_work(struct work_struct *w)
{
struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
@@ -2757,6 +2718,9 @@ void vfree_atomic(const void *addr)
*/
void vfree(const void *addr)
{
+ struct vm_struct *vm;
+ int i;
+
if (unlikely(in_interrupt())) {
vfree_atomic(addr);
return;
@@ -2766,8 +2730,32 @@ void vfree(const void *addr)
kmemleak_free(addr);
might_sleep();
- if (addr)
- __vunmap(addr, 1);
+ if (!addr)
+ return;
+
+ vm = remove_vm_area(addr);
+ if (unlikely(!vm)) {
+ WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
+ addr);
+ return;
+ }
+
+ va_remove_mappings(vm, true);
+ for (i = 0; i < vm->nr_pages; i++) {
+ struct page *page = vm->pages[i];
+
+ BUG_ON(!page);
+ mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
+ /*
+ * High-order allocs for huge vmallocs are split, so
+ * can be freed as an array of order-0 allocations
+ */
+ __free_pages(page, 0);
+ cond_resched();
+ }
+ atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
+ kvfree(vm->pages);
+ kfree(vm);
}
EXPORT_SYMBOL(vfree);
@@ -2782,10 +2770,21 @@ EXPORT_SYMBOL(vfree);
*/
void vunmap(const void *addr)
{
+ struct vm_struct *vm;
+
BUG_ON(in_interrupt());
might_sleep();
- if (addr)
- __vunmap(addr, 0);
+
+ if (!addr)
+ return;
+ vm = remove_vm_area(addr);
+ if (unlikely(!vm)) {
+ WARN(1, KERN_ERR "Trying to vunmap() nonexistent vm area (%p)\n",
+ addr);
+ return;
+ }
+ WARN_ON_ONCE(vm->flags & VM_FLUSH_RESET_PERMS);
+ kfree(vm);
}
EXPORT_SYMBOL(vunmap);
--
2.39.0
* [PATCH 10/10] mm: refactor va_remove_mappings
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
` (8 preceding siblings ...)
2023-01-19 10:02 ` [PATCH 09/10] mm: split __vunmap Christoph Hellwig
@ 2023-01-19 10:02 ` Christoph Hellwig
2023-01-19 18:50 ` Uladzislau Rezki
2023-01-19 16:45 ` cleanup vfree and vunmap Uladzislau Rezki
10 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-19 10:02 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki; +Cc: linux-mm
Move the VM_FLUSH_RESET_PERMS check to the caller and rename the function
to better describe what it is doing.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 27 ++++++++-------------------
1 file changed, 8 insertions(+), 19 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 791d906d7e407c..f41be986b01e4e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2617,35 +2617,23 @@ static inline void set_area_direct_map(const struct vm_struct *area,
set_direct_map(area->pages[i]);
}
-/* Handle removing and resetting vm mappings related to the vm_struct. */
-static void va_remove_mappings(struct vm_struct *area, int deallocate_pages)
+/*
+ * Flush the vm mapping and reset the direct map.
+ */
+static void vm_reset_perms(struct vm_struct *area)
{
unsigned long start = ULONG_MAX, end = 0;
unsigned int page_order = vm_area_page_order(area);
- int flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
int flush_dmap = 0;
int i;
- /* If this is not VM_FLUSH_RESET_PERMS memory, no need for the below. */
- if (!flush_reset)
- return;
-
- /*
- * If not deallocating pages, just do the flush of the VM area and
- * return.
- */
- if (!deallocate_pages) {
- vm_unmap_aliases();
- return;
- }
-
/*
- * If execution gets here, flush the vm mapping and reset the direct
- * map. Find the start and end range of the direct mappings to make sure
+ * Find the start and end range of the direct mappings to make sure that
* the vm_unmap_aliases() flush includes the direct map.
*/
for (i = 0; i < area->nr_pages; i += 1U << page_order) {
unsigned long addr = (unsigned long)page_address(area->pages[i]);
+
if (addr) {
unsigned long page_size;
@@ -2740,7 +2728,8 @@ void vfree(const void *addr)
return;
}
- va_remove_mappings(vm, true);
+ if (unlikely(vm->flags & VM_FLUSH_RESET_PERMS))
+ vm_reset_perms(vm);
for (i = 0; i < vm->nr_pages; i++) {
struct page *page = vm->pages[i];
--
2.39.0
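With this last patch applied, the VM_FLUSH_RESET_PERMS handling in
vfree() is a single unlikely branch, and vm_reset_perms() is left with
the fixed three-step sequence that the removed conditionals used to
obscure (paraphrased from the resulting code):

	/*
	 * 1. Make the direct-map aliases invalid so they cannot be
	 *    cached again after the flush.
	 * 2. Flush the TLB over both the vmalloc area and the computed
	 *    direct-map range.
	 * 3. Restore the default direct-map permissions.
	 */
	set_area_direct_map(area, set_direct_map_invalid_noflush);
	_vm_unmap_aliases(start, end, flush_dmap);
	set_area_direct_map(area, set_direct_map_default_noflush);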
* Re: cleanup vfree and vunmap
2023-01-19 10:02 cleanup vfree and vunmap Christoph Hellwig
` (9 preceding siblings ...)
2023-01-19 10:02 ` [PATCH 10/10] mm: refactor va_remove_mappings Christoph Hellwig
@ 2023-01-19 16:45 ` Uladzislau Rezki
10 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 16:45 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
Hello!
> Hi all,
>
> this little series untangles the vfree and vunmap code paths a bit.
>
> Note that it depends on 'Revert "remoteproc: qcom_q6v5_mss: map/unmap metadata
> region before/after use"' in linux-next.
>
> Diffstat:
> vmalloc.c | 304 +++++++++++++++++++++++++++-----------------------------------
> 1 file changed, 134 insertions(+), 170 deletions(-)
>
I will go through this refactoring tomorrow and get back with some comments.
Thanks!
--
Uladzislau Rezki
* Re: [PATCH 01/10] vmalloc: reject vmap with VM_FLUSH_RESET_PERMS
2023-01-19 10:02 ` [PATCH 01/10] vmalloc: reject vmap with VM_FLUSH_RESET_PERMS Christoph Hellwig
@ 2023-01-19 18:46 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:46 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:17AM +0100, Christoph Hellwig wrote:
> VM_FLUSH_RESET_PERMS is just for use with vmalloc as it is tied to freeing
> the underlying pages.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 0781c5a8e0e73d..6957d15d526e46 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2883,6 +2883,9 @@ void *vmap(struct page **pages, unsigned int count,
>
> might_sleep();
>
> + if (WARN_ON_ONCE(flags & VM_FLUSH_RESET_PERMS))
> + return NULL;
> +
> /*
> * Your top guard is someone else's bottom guard. Not having a top
> * guard compromises someone else's mappings too.
> --
> 2.39.0
>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH 02/10] mm: remove __vfree
2023-01-19 10:02 ` [PATCH 02/10] mm: remove __vfree Christoph Hellwig
@ 2023-01-19 18:47 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:47 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:18AM +0100, Christoph Hellwig wrote:
> __vfree is a subset of vfree that just skips a few checks and is only
> used by vfree and an error cleanup path. Fold __vfree into vfree
> and switch the only other caller to call vfree() instead.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 18 ++++++------------
> 1 file changed, 6 insertions(+), 12 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6957d15d526e46..b989828b45109a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2801,14 +2801,6 @@ void vfree_atomic(const void *addr)
> __vfree_deferred(addr);
> }
>
> -static void __vfree(const void *addr)
> -{
> - if (unlikely(in_interrupt()))
> - __vfree_deferred(addr);
> - else
> - __vunmap(addr, 1);
> -}
> -
> /**
> * vfree - Release memory allocated by vmalloc()
> * @addr: Memory base address
> @@ -2836,8 +2828,10 @@ void vfree(const void *addr)
>
> if (!addr)
> return;
> -
> - __vfree(addr);
> + if (unlikely(in_interrupt()))
> + __vfree_deferred(addr);
> + else
> + __vunmap(addr, 1);
> }
> EXPORT_SYMBOL(vfree);
>
> @@ -3104,7 +3098,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>
> /*
> * If not enough pages were obtained to accomplish an
> - * allocation request, free them via __vfree() if any.
> + * allocation request, free them via vfree() if any.
> */
> if (area->nr_pages != nr_small_pages) {
> warn_alloc(gfp_mask, NULL,
> @@ -3144,7 +3138,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> return area->addr;
>
> fail:
> - __vfree(area->addr);
> + vfree(area->addr);
> return NULL;
> }
>
> --
> 2.39.0
>
Makes sense to me.
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH 03/10] mm: remove __vfree_deferred
2023-01-19 10:02 ` [PATCH 03/10] mm: remove __vfree_deferred Christoph Hellwig
@ 2023-01-19 18:47 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:47 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:19AM +0100, Christoph Hellwig wrote:
> Fold __vfree_deferred into vfree_atomic, and call vfree_atomic early on
> from vfree if called from interrupt context so that the extra low-level
> helper can be avoided.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 43 +++++++++++++++++--------------------------
> 1 file changed, 17 insertions(+), 26 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b989828b45109a..fafb6227f4428f 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2769,20 +2769,6 @@ static void __vunmap(const void *addr, int deallocate_pages)
> kfree(area);
> }
>
> -static inline void __vfree_deferred(const void *addr)
> -{
> - /*
> - * Use raw_cpu_ptr() because this can be called from preemptible
> - * context. Preemption is absolutely fine here, because the llist_add()
> - * implementation is lockless, so it works even if we are adding to
> - * another cpu's list. schedule_work() should be fine with this too.
> - */
> - struct vfree_deferred *p = raw_cpu_ptr(&vfree_deferred);
> -
> - if (llist_add((struct llist_node *)addr, &p->list))
> - schedule_work(&p->wq);
> -}
> -
> /**
> * vfree_atomic - release memory allocated by vmalloc()
> * @addr: memory base address
> @@ -2792,13 +2778,19 @@ static inline void __vfree_deferred(const void *addr)
> */
> void vfree_atomic(const void *addr)
> {
> - BUG_ON(in_nmi());
> + struct vfree_deferred *p = raw_cpu_ptr(&vfree_deferred);
>
> + BUG_ON(in_nmi());
> kmemleak_free(addr);
>
> - if (!addr)
> - return;
> - __vfree_deferred(addr);
> + /*
> + * Use raw_cpu_ptr() because this can be called from preemptible
> + * context. Preemption is absolutely fine here, because the llist_add()
> + * implementation is lockless, so it works even if we are adding to
> + * another cpu's list. schedule_work() should be fine with this too.
> + */
> + if (addr && llist_add((struct llist_node *)addr, &p->list))
> + schedule_work(&p->wq);
> }
>
> /**
> @@ -2820,17 +2812,16 @@ void vfree_atomic(const void *addr)
> */
> void vfree(const void *addr)
> {
> - BUG_ON(in_nmi());
> + if (unlikely(in_interrupt())) {
> + vfree_atomic(addr);
> + return;
> + }
>
> + BUG_ON(in_nmi());
> kmemleak_free(addr);
> + might_sleep();
>
> - might_sleep_if(!in_interrupt());
> -
> - if (!addr)
> - return;
> - if (unlikely(in_interrupt()))
> - __vfree_deferred(addr);
> - else
> + if (addr)
> __vunmap(addr, 1);
> }
> EXPORT_SYMBOL(vfree);
> --
> 2.39.0
>
Such folding makes sense to me.
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH 04/10] mm: move vmalloc_init and free_work down in vmalloc.c
2023-01-19 10:02 ` [PATCH 04/10] mm: move vmalloc_init and free_work down in vmalloc.c Christoph Hellwig
@ 2023-01-19 18:48 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:48 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:20AM +0100, Christoph Hellwig wrote:
> Move these two functions around a bit to avoid forward declarations.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 105 +++++++++++++++++++++++++--------------------------
> 1 file changed, 52 insertions(+), 53 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index fafb6227f4428f..daeb28b54663d5 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -89,17 +89,6 @@ struct vfree_deferred {
> };
> static DEFINE_PER_CPU(struct vfree_deferred, vfree_deferred);
>
> -static void __vunmap(const void *, int);
> -
> -static void free_work(struct work_struct *w)
> -{
> - struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
> - struct llist_node *t, *llnode;
> -
> - llist_for_each_safe(llnode, t, llist_del_all(&p->list))
> - __vunmap((void *)llnode, 1);
> -}
> -
> /*** Page table manipulation functions ***/
> static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> phys_addr_t phys_addr, pgprot_t prot,
> @@ -2449,48 +2438,6 @@ static void vmap_init_free_space(void)
> }
> }
>
> -void __init vmalloc_init(void)
> -{
> - struct vmap_area *va;
> - struct vm_struct *tmp;
> - int i;
> -
> - /*
> - * Create the cache for vmap_area objects.
> - */
> - vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
> -
> - for_each_possible_cpu(i) {
> - struct vmap_block_queue *vbq;
> - struct vfree_deferred *p;
> -
> - vbq = &per_cpu(vmap_block_queue, i);
> - spin_lock_init(&vbq->lock);
> - INIT_LIST_HEAD(&vbq->free);
> - p = &per_cpu(vfree_deferred, i);
> - init_llist_head(&p->list);
> - INIT_WORK(&p->wq, free_work);
> - }
> -
> - /* Import existing vmlist entries. */
> - for (tmp = vmlist; tmp; tmp = tmp->next) {
> - va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
> - if (WARN_ON_ONCE(!va))
> - continue;
> -
> - va->va_start = (unsigned long)tmp->addr;
> - va->va_end = va->va_start + tmp->size;
> - va->vm = tmp;
> - insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
> - }
> -
> - /*
> - * Now we can initialize a free vmap space.
> - */
> - vmap_init_free_space();
> - vmap_initialized = true;
> -}
> -
> static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
> struct vmap_area *va, unsigned long flags, const void *caller)
> {
> @@ -2769,6 +2716,15 @@ static void __vunmap(const void *addr, int deallocate_pages)
> kfree(area);
> }
>
> +static void delayed_vfree_work(struct work_struct *w)
> +{
> + struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
> + struct llist_node *t, *llnode;
> +
> + llist_for_each_safe(llnode, t, llist_del_all(&p->list))
> + __vunmap((void *)llnode, 1);
> +}
> +
> /**
> * vfree_atomic - release memory allocated by vmalloc()
> * @addr: memory base address
> @@ -4315,3 +4271,46 @@ static int __init proc_vmalloc_init(void)
> module_init(proc_vmalloc_init);
>
> #endif
> +
> +void __init vmalloc_init(void)
> +{
> + struct vmap_area *va;
> + struct vm_struct *tmp;
> + int i;
> +
> + /*
> + * Create the cache for vmap_area objects.
> + */
> + vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
> +
> + for_each_possible_cpu(i) {
> + struct vmap_block_queue *vbq;
> + struct vfree_deferred *p;
> +
> + vbq = &per_cpu(vmap_block_queue, i);
> + spin_lock_init(&vbq->lock);
> + INIT_LIST_HEAD(&vbq->free);
> + p = &per_cpu(vfree_deferred, i);
> + init_llist_head(&p->list);
> + INIT_WORK(&p->wq, delayed_vfree_work);
> + }
> +
> + /* Import existing vmlist entries. */
> + for (tmp = vmlist; tmp; tmp = tmp->next) {
> + va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
> + if (WARN_ON_ONCE(!va))
> + continue;
> +
> + va->va_start = (unsigned long)tmp->addr;
> + va->va_end = va->va_start + tmp->size;
> + va->vm = tmp;
> + insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
> + }
> +
> + /*
> + * Now we can initialize a free vmap space.
> + */
> + vmap_init_free_space();
> + vmap_initialized = true;
> +}
> +
> --
> 2.39.0
>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH 05/10] mm: call vfree instead of __vunmap from delayed_vfree_work
2023-01-19 10:02 ` [PATCH 05/10] mm: call vfree instead of __vunmap from delayed_vfree_work Christoph Hellwig
@ 2023-01-19 18:48 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:48 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:21AM +0100, Christoph Hellwig wrote:
> This adds an extra, never taken, in_interrupt() branch, but will allow
> us to cut down the maze of vfree helpers.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index daeb28b54663d5..3c07520b8b821b 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2722,7 +2722,7 @@ static void delayed_vfree_work(struct work_struct *w)
> struct llist_node *t, *llnode;
>
> llist_for_each_safe(llnode, t, llist_del_all(&p->list))
> - __vunmap((void *)llnode, 1);
> + vfree(llnode);
> }
>
> /**
> --
> 2.39.0
>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH 06/10] mm: move __remove_vm_area out of va_remove_mappings
2023-01-19 10:02 ` [PATCH 06/10] mm: move __remove_vm_area out of va_remove_mappings Christoph Hellwig
@ 2023-01-19 18:48 ` Uladzislau Rezki
2023-01-20 7:41 ` Christoph Hellwig
0 siblings, 1 reply; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:48 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:22AM +0100, Christoph Hellwig wrote:
> __remove_vm_area is the only part of va_remove_mappings that requires
> a vmap_area. Move the call out to the caller and only pass the vm_struct
> to va_remove_mappings.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 3c07520b8b821b..09c6fcfdaeb7c9 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2614,18 +2614,15 @@ static inline void set_area_direct_map(const struct vm_struct *area,
> set_direct_map(area->pages[i]);
> }
>
> -/* Handle removing and resetting vm mappings related to the VA's vm_struct. */
> -static void va_remove_mappings(struct vmap_area *va, int deallocate_pages)
> +/* Handle removing and resetting vm mappings related to the vm_struct. */
> +static void va_remove_mappings(struct vm_struct *area, int deallocate_pages)
> {
> - struct vm_struct *area = va->vm;
> unsigned long start = ULONG_MAX, end = 0;
> unsigned int page_order = vm_area_page_order(area);
> int flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
> int flush_dmap = 0;
> int i;
>
> - __remove_vm_area(va);
> -
> /* If this is not VM_FLUSH_RESET_PERMS memory, no need for the below. */
> if (!flush_reset)
> return;
> @@ -2691,7 +2688,8 @@ static void __vunmap(const void *addr, int deallocate_pages)
>
> kasan_poison_vmalloc(area->addr, get_vm_area_size(area));
>
> - va_remove_mappings(va, deallocate_pages);
> + __remove_vm_area(va);
> + va_remove_mappings(area, deallocate_pages);
>
> if (deallocate_pages) {
> int i;
> --
> 2.39.0
>
A small nit here. IMHO, va_remove_mappings() should be renamed
back to vm_remove_mappings(), since after this patch it starts
to deal with "struct vm_struct".
OK. After checking all patches this function will be renamed
anyway to vm_reset_perms().
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH 07/10] mm: use remove_vm_area in __vunmap
2023-01-19 10:02 ` [PATCH 07/10] mm: use remove_vm_area in __vunmap Christoph Hellwig
@ 2023-01-19 18:49 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:49 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:23AM +0100, Christoph Hellwig wrote:
> Use the common helper to find and remove a vmap_area instead of open
> coding it.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 33 ++++++++++++---------------------
> 1 file changed, 12 insertions(+), 21 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 09c6fcfdaeb7c9..096633ba89965a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2571,20 +2571,6 @@ struct vm_struct *find_vm_area(const void *addr)
> return va->vm;
> }
>
> -static struct vm_struct *__remove_vm_area(struct vmap_area *va)
> -{
> - struct vm_struct *vm;
> -
> - if (!va || !va->vm)
> - return NULL;
> -
> - vm = va->vm;
> - kasan_free_module_shadow(vm);
> - free_unmap_vmap_area(va);
> -
> - return vm;
> -}
> -
> /**
> * remove_vm_area - find and remove a continuous kernel virtual area
> * @addr: base address
> @@ -2597,10 +2583,18 @@ static struct vm_struct *__remove_vm_area(struct vmap_area *va)
> */
> struct vm_struct *remove_vm_area(const void *addr)
> {
> + struct vmap_area *va;
> + struct vm_struct *vm;
> +
> might_sleep();
>
> - return __remove_vm_area(
> - find_unlink_vmap_area((unsigned long) addr));
> + va = find_unlink_vmap_area((unsigned long)addr);
> + if (!va || !va->vm)
> + return NULL;
> + vm = va->vm;
> + kasan_free_module_shadow(vm);
> + free_unmap_vmap_area(va);
> + return vm;
> }
>
> static inline void set_area_direct_map(const struct vm_struct *area,
> @@ -2666,7 +2660,6 @@ static void va_remove_mappings(struct vm_struct *area, int deallocate_pages)
> static void __vunmap(const void *addr, int deallocate_pages)
> {
> struct vm_struct *area;
> - struct vmap_area *va;
>
> if (!addr)
> return;
> @@ -2675,20 +2668,18 @@ static void __vunmap(const void *addr, int deallocate_pages)
> addr))
> return;
>
> - va = find_unlink_vmap_area((unsigned long)addr);
> - if (unlikely(!va)) {
> + area = remove_vm_area(addr);
> + if (unlikely(!area)) {
> WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
> addr);
> return;
> }
>
> - area = va->vm;
> debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
> debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
>
> kasan_poison_vmalloc(area->addr, get_vm_area_size(area));
>
> - __remove_vm_area(va);
> va_remove_mappings(area, deallocate_pages);
>
> if (deallocate_pages) {
> --
> 2.39.0
>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH 08/10] mm: move debug checks from __vunmap to remove_vm_area
2023-01-19 10:02 ` [PATCH 08/10] mm: move debug checks from __vunmap to remove_vm_area Christoph Hellwig
@ 2023-01-19 18:49 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:49 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:24AM +0100, Christoph Hellwig wrote:
> All these checks apply to the free_vm_area interface as well, so move
> them to the common routine.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 18 +++++++++---------
> 1 file changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 096633ba89965a..4cb189bdd51499 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2588,11 +2588,20 @@ struct vm_struct *remove_vm_area(const void *addr)
>
> might_sleep();
>
> + if (WARN(!PAGE_ALIGNED(addr), "Trying to vfree() bad address (%p)\n",
> + addr))
> + return NULL;
> +
> va = find_unlink_vmap_area((unsigned long)addr);
> if (!va || !va->vm)
> return NULL;
> vm = va->vm;
> +
> + debug_check_no_locks_freed(vm->addr, get_vm_area_size(vm));
> + debug_check_no_obj_freed(vm->addr, get_vm_area_size(vm));
> kasan_free_module_shadow(vm);
> + kasan_poison_vmalloc(vm->addr, get_vm_area_size(vm));
> +
> free_unmap_vmap_area(va);
> return vm;
> }
> @@ -2664,10 +2673,6 @@ static void __vunmap(const void *addr, int deallocate_pages)
> if (!addr)
> return;
>
> - if (WARN(!PAGE_ALIGNED(addr), "Trying to vfree() bad address (%p)\n",
> - addr))
> - return;
> -
> area = remove_vm_area(addr);
> if (unlikely(!area)) {
> WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
> @@ -2675,11 +2680,6 @@ static void __vunmap(const void *addr, int deallocate_pages)
> return;
> }
>
> - debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
> - debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
> -
> - kasan_poison_vmalloc(area->addr, get_vm_area_size(area));
> -
> va_remove_mappings(area, deallocate_pages);
>
> if (deallocate_pages) {
> --
> 2.39.0
>
Looks good.
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH 09/10] mm: split __vunmap
2023-01-19 10:02 ` [PATCH 09/10] mm: split __vunmap Christoph Hellwig
@ 2023-01-19 18:50 ` Uladzislau Rezki
2023-01-20 7:42 ` Christoph Hellwig
0 siblings, 1 reply; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:50 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:25AM +0100, Christoph Hellwig wrote:
> vunmap only needs to find and free the vmap_area and vm_strut, so open
> code that there and merge the rest of the code into vfree.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 85 ++++++++++++++++++++++++++--------------------------
> 1 file changed, 42 insertions(+), 43 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 4cb189bdd51499..791d906d7e407c 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2666,45 +2666,6 @@ static void va_remove_mappings(struct vm_struct *area, int deallocate_pages)
> set_area_direct_map(area, set_direct_map_default_noflush);
> }
>
> -static void __vunmap(const void *addr, int deallocate_pages)
> -{
> - struct vm_struct *area;
> -
> - if (!addr)
> - return;
> -
> - area = remove_vm_area(addr);
> - if (unlikely(!area)) {
> - WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
> - addr);
> - return;
> - }
> -
> - va_remove_mappings(area, deallocate_pages);
> -
> - if (deallocate_pages) {
> - int i;
> -
> - for (i = 0; i < area->nr_pages; i++) {
> - struct page *page = area->pages[i];
> -
> - BUG_ON(!page);
> - mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
> - /*
> - * High-order allocs for huge vmallocs are split, so
> - * can be freed as an array of order-0 allocations
> - */
> - __free_pages(page, 0);
> - cond_resched();
> - }
> - atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);
> -
> - kvfree(area->pages);
> - }
> -
> - kfree(area);
> -}
> -
> static void delayed_vfree_work(struct work_struct *w)
> {
> struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
> @@ -2757,6 +2718,9 @@ void vfree_atomic(const void *addr)
> */
> void vfree(const void *addr)
> {
> + struct vm_struct *vm;
> + int i;
> +
> if (unlikely(in_interrupt())) {
> vfree_atomic(addr);
> return;
> @@ -2766,8 +2730,32 @@ void vfree(const void *addr)
> kmemleak_free(addr);
> might_sleep();
>
> - if (addr)
> - __vunmap(addr, 1);
> + if (!addr)
> + return;
> +
> + vm = remove_vm_area(addr);
> + if (unlikely(!vm)) {
> + WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
> + addr);
> + return;
> + }
> +
> + va_remove_mappings(vm, true);
> + for (i = 0; i < vm->nr_pages; i++) {
> + struct page *page = vm->pages[i];
> +
> + BUG_ON(!page);
> + mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
> + /*
> + * High-order allocs for huge vmallocs are split, so
> + * can be freed as an array of order-0 allocations
> + */
> + __free_pages(page, 0);
> + cond_resched();
> + }
> + atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
> + kvfree(vm->pages);
> + kfree(vm);
> }
> EXPORT_SYMBOL(vfree);
>
> @@ -2782,10 +2770,21 @@ EXPORT_SYMBOL(vfree);
> */
> void vunmap(const void *addr)
> {
> + struct vm_struct *vm;
> +
> BUG_ON(in_interrupt());
> might_sleep();
> - if (addr)
> - __vunmap(addr, 0);
> +
> + if (!addr)
> + return;
> + vm = remove_vm_area(addr);
> + if (unlikely(!vm)) {
> + WARN(1, KERN_ERR "Trying to vunmap() nonexistent vm area (%p)\n",
> + addr);
> + return;
> + }
> + WARN_ON_ONCE(vm->flags & VM_FLUSH_RESET_PERMS);
> + kfree(vm);
> }
> EXPORT_SYMBOL(vunmap);
>
> --
> 2.39.0
>
After this patch, the same check at the end of vunmap() becomes odd,
because we already fail a vmap() call on entry if the VM_FLUSH_RESET_PERMS
flag is set. See patch [1] in this series.
Is there any reason for such duplication?
--
Uladzislau Rezki
* Re: [PATCH 10/10] mm: refactor va_remove_mappings
2023-01-19 10:02 ` [PATCH 10/10] mm: refactor va_remove_mappings Christoph Hellwig
@ 2023-01-19 18:50 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-19 18:50 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Andrew Morton, Uladzislau Rezki, linux-mm
On Thu, Jan 19, 2023 at 11:02:26AM +0100, Christoph Hellwig wrote:
> Move the VM_FLUSH_RESET_PERMS check to the caller and rename the function
> to better describe what it is doing.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> mm/vmalloc.c | 27 ++++++++-------------------
> 1 file changed, 8 insertions(+), 19 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 791d906d7e407c..f41be986b01e4e 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2617,35 +2617,23 @@ static inline void set_area_direct_map(const struct vm_struct *area,
> set_direct_map(area->pages[i]);
> }
>
> -/* Handle removing and resetting vm mappings related to the vm_struct. */
> -static void va_remove_mappings(struct vm_struct *area, int deallocate_pages)
> +/*
> + * Flush the vm mapping and reset the direct map.
> + */
> +static void vm_reset_perms(struct vm_struct *area)
> {
> unsigned long start = ULONG_MAX, end = 0;
> unsigned int page_order = vm_area_page_order(area);
> - int flush_reset = area->flags & VM_FLUSH_RESET_PERMS;
> int flush_dmap = 0;
> int i;
>
> - /* If this is not VM_FLUSH_RESET_PERMS memory, no need for the below. */
> - if (!flush_reset)
> - return;
> -
> - /*
> - * If not deallocating pages, just do the flush of the VM area and
> - * return.
> - */
> - if (!deallocate_pages) {
> - vm_unmap_aliases();
> - return;
> - }
> -
> /*
> - * If execution gets here, flush the vm mapping and reset the direct
> - * map. Find the start and end range of the direct mappings to make sure
> + * Find the start and end range of the direct mappings to make sure that
> * the vm_unmap_aliases() flush includes the direct map.
> */
> for (i = 0; i < area->nr_pages; i += 1U << page_order) {
> unsigned long addr = (unsigned long)page_address(area->pages[i]);
> +
> if (addr) {
> unsigned long page_size;
>
> @@ -2740,7 +2728,8 @@ void vfree(const void *addr)
> return;
> }
>
> - va_remove_mappings(vm, true);
> + if (unlikely(vm->flags & VM_FLUSH_RESET_PERMS))
> + vm_reset_perms(vm);
> for (i = 0; i < vm->nr_pages; i++) {
> struct page *page = vm->pages[i];
>
> --
> 2.39.0
>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 06/10] mm: move __remove_vm_area out of va_remove_mappings
2023-01-19 18:48 ` Uladzislau Rezki
@ 2023-01-20 7:41 ` Christoph Hellwig
2023-01-20 11:32 ` Uladzislau Rezki
0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-20 7:41 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Christoph Hellwig, Andrew Morton, linux-mm
On Thu, Jan 19, 2023 at 07:48:48PM +0100, Uladzislau Rezki wrote:
> A small nit here. IMHO, va_remove_mappings() should be renamed
> back to vm_remove_mappings(), since after this patch it starts
> to deal with "struct vm_struct".
>
> OK. After checking all patches this function will be renamed
> anyway to vm_reset_perms().
I could rename it. It's not that much more churn given that
the prototype is touched anyway.
* Re: [PATCH 09/10] mm: split __vunmap
2023-01-19 18:50 ` Uladzislau Rezki
@ 2023-01-20 7:42 ` Christoph Hellwig
2023-01-20 11:32 ` Uladzislau Rezki
0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-20 7:42 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Christoph Hellwig, Andrew Morton, linux-mm
On Thu, Jan 19, 2023 at 07:50:09PM +0100, Uladzislau Rezki wrote:
> After this patch, the same check at the end of vunmap() becomes odd,
> because we already fail a vmap() call on entry if the VM_FLUSH_RESET_PERMS
> flag is set. See patch [1] in this series.
>
> Is there any reason for such duplication?
Mostly just documentation for me to explain why no flushing is needed.
But I can drop it.
* Re: [PATCH 06/10] mm: move __remove_vm_area out of va_remove_mappings
2023-01-20 7:41 ` Christoph Hellwig
@ 2023-01-20 11:32 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-20 11:32 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Uladzislau Rezki, Andrew Morton, linux-mm
On Fri, Jan 20, 2023 at 08:41:37AM +0100, Christoph Hellwig wrote:
> On Thu, Jan 19, 2023 at 07:48:48PM +0100, Uladzislau Rezki wrote:
> > A small nit here. IMHO, va_remove_mappings() should be renamed
> > back to vm_remove_mappings(), since after this patch it starts
> > to deal with "struct vm_struct".
> >
> > OK. After checking all patches this function will be renamed
> > anyway to vm_reset_perms().
>
> I could rename it. It's not that much more churn given that
> the prototype is touched anyway.
>
The new vm_reset_perms() name matches the VM_FLUSH_RESET_PERMS flag,
which, I think, is totally fine.
--
Uladzislau Rezki
* Re: [PATCH 09/10] mm: split __vunmap
2023-01-20 7:42 ` Christoph Hellwig
@ 2023-01-20 11:32 ` Uladzislau Rezki
0 siblings, 0 replies; 28+ messages in thread
From: Uladzislau Rezki @ 2023-01-20 11:32 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Uladzislau Rezki, Andrew Morton, linux-mm
On Fri, Jan 20, 2023 at 08:42:17AM +0100, Christoph Hellwig wrote:
> On Thu, Jan 19, 2023 at 07:50:09PM +0100, Uladzislau Rezki wrote:
> > After this patch, the same check at the end of vunmap() becomes odd,
> > because we already fail a vmap() call on entry if the VM_FLUSH_RESET_PERMS
> > flag is set. See patch [1] in this series.
> >
> > Is there any reason for such duplication?
>
> Mostly just documentation for me to explain why no flushing is needed.
> But I can drop it.
>
Thanks.
--
Uladzislau Rezki
* [PATCH 09/10] mm: split __vunmap
2023-01-21 7:10 Christoph Hellwig
@ 2023-01-21 7:10 ` Christoph Hellwig
2023-01-23 10:47 ` David Hildenbrand
0 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2023-01-21 7:10 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki
Cc: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, kasan-dev, linux-mm
vunmap only needs to find and free the vmap_area and vm_strut, so open
code that there and merge the rest of the code into vfree.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
mm/vmalloc.c | 84 +++++++++++++++++++++++++---------------------------
1 file changed, 41 insertions(+), 43 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5b432508319a4f..6bd811e4b7561d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2666,45 +2666,6 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
set_area_direct_map(area, set_direct_map_default_noflush);
}
-static void __vunmap(const void *addr, int deallocate_pages)
-{
- struct vm_struct *area;
-
- if (!addr)
- return;
-
- area = remove_vm_area(addr);
- if (unlikely(!area)) {
- WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
- addr);
- return;
- }
-
- vm_remove_mappings(area, deallocate_pages);
-
- if (deallocate_pages) {
- int i;
-
- for (i = 0; i < area->nr_pages; i++) {
- struct page *page = area->pages[i];
-
- BUG_ON(!page);
- mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
- /*
- * High-order allocs for huge vmallocs are split, so
- * can be freed as an array of order-0 allocations
- */
- __free_pages(page, 0);
- cond_resched();
- }
- atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);
-
- kvfree(area->pages);
- }
-
- kfree(area);
-}
-
static void delayed_vfree_work(struct work_struct *w)
{
struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
@@ -2757,6 +2718,9 @@ void vfree_atomic(const void *addr)
*/
void vfree(const void *addr)
{
+ struct vm_struct *vm;
+ int i;
+
if (unlikely(in_interrupt())) {
vfree_atomic(addr);
return;
@@ -2766,8 +2730,32 @@ void vfree(const void *addr)
kmemleak_free(addr);
might_sleep();
- if (addr)
- __vunmap(addr, 1);
+ if (!addr)
+ return;
+
+ vm = remove_vm_area(addr);
+ if (unlikely(!vm)) {
+ WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
+ addr);
+ return;
+ }
+
+ vm_remove_mappings(vm, true);
+ for (i = 0; i < vm->nr_pages; i++) {
+ struct page *page = vm->pages[i];
+
+ BUG_ON(!page);
+ mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
+ /*
+ * High-order allocs for huge vmallocs are split, so
+ * can be freed as an array of order-0 allocations
+ */
+ __free_pages(page, 0);
+ cond_resched();
+ }
+ atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
+ kvfree(vm->pages);
+ kfree(vm);
}
EXPORT_SYMBOL(vfree);
@@ -2782,10 +2770,20 @@ EXPORT_SYMBOL(vfree);
*/
void vunmap(const void *addr)
{
+ struct vm_struct *vm;
+
BUG_ON(in_interrupt());
might_sleep();
- if (addr)
- __vunmap(addr, 0);
+
+ if (!addr)
+ return;
+ vm = remove_vm_area(addr);
+ if (unlikely(!vm)) {
+ WARN(1, KERN_ERR "Trying to vunmap() nonexistent vm area (%p)\n",
+ addr);
+ return;
+ }
+ kfree(vm);
}
EXPORT_SYMBOL(vunmap);
--
2.39.0
* Re: [PATCH 09/10] mm: split __vunmap
2023-01-21 7:10 ` [PATCH 09/10] mm: split __vunmap Christoph Hellwig
@ 2023-01-23 10:47 ` David Hildenbrand
0 siblings, 0 replies; 28+ messages in thread
From: David Hildenbrand @ 2023-01-23 10:47 UTC (permalink / raw)
To: Christoph Hellwig, Andrew Morton, Uladzislau Rezki
Cc: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, kasan-dev, linux-mm
On 21.01.23 08:10, Christoph Hellwig wrote:
> vunmap only needs to find and free the vmap_area and vm_strut, so open
s/vm_strut/vm_struct/
> code that there and merge the rest of the code into vfree.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb