linux-mm.kvack.org archive mirror
* [PATCH] mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation
  2022-10-14 20:58 [PATCH v4 10/17] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator Guenter Roeck
@ 2022-10-15  4:34 ` Hyeonggon Yoo
  0 siblings, 0 replies; 2+ messages in thread
From: Hyeonggon Yoo @ 2022-10-15  4:34 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, linux-mm,
	linux-kernel

After commit d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than
order-1 page to page allocator"), SLAB passes large (> PAGE_SIZE * 2)
requests to the buddy allocator, as SLUB does.

SLAB has been using kmalloc caches to allocate the freelist_idx_t array
for off-slab caches, but after that commit freelist_size can be bigger
than KMALLOC_MAX_CACHE_SIZE, in which case no kmalloc cache can serve
the allocation.

Instead of keeping a pointer to a kmalloc cache, use kmalloc_node() and
only check whether the backing kmalloc cache would be off-slab during
calculate_slab_order(). When freelist_size > KMALLOC_MAX_CACHE_SIZE,
the looping condition cannot occur because the freelist_idx_t array is
then allocated directly from the buddy allocator.
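
For illustration only (not part of the patch), the new cost estimate in
calculate_slab_order() boils down to the sketch below; the helper name
is invented for this example, and the !freelist_cache and
OFF_SLAB(freelist_cache) checks from the real code are omitted:

  /*
   * Sketch only: estimate how much memory an off-slab freelist costs.
   * Above KMALLOC_MAX_CACHE_SIZE, kmalloc_node() goes straight to the
   * buddy allocator, so the cost is a power-of-two block of pages.
   */
  static size_t example_freelist_cost(unsigned int num)
  {
      size_t freelist_size = num * sizeof(freelist_idx_t);

      if (freelist_size > KMALLOC_MAX_CACHE_SIZE)
          return PAGE_SIZE << get_order(freelist_size);

      /* Otherwise a kmalloc cache backs the allocation. */
      return kmalloc_slab(freelist_size, 0u)->size;
  }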

Reported-by: Guenter Roeck <linux@roeck-us.net>
Fixes: d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator")
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---

@Guenter:
	This fixes the issue on my emulation.
	Can you please test this on your environment?

 include/linux/slab_def.h |  1 -
 mm/slab.c                | 37 +++++++++++++++++++------------------
 2 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index e24c9aff6fed..f0ffad6a3365 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -33,7 +33,6 @@ struct kmem_cache {
 
 	size_t colour;			/* cache colouring range */
 	unsigned int colour_off;	/* colour offset */
-	struct kmem_cache *freelist_cache;
 	unsigned int freelist_size;
 
 	/* constructor func */
diff --git a/mm/slab.c b/mm/slab.c
index a5486ff8362a..d1f6e2c64c2e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1619,7 +1619,7 @@ static void slab_destroy(struct kmem_cache *cachep, struct slab *slab)
 	 * although actual page can be freed in rcu context
 	 */
 	if (OFF_SLAB(cachep))
-		kmem_cache_free(cachep->freelist_cache, freelist);
+		kfree(freelist);
 }
 
 /*
@@ -1671,21 +1671,27 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 		if (flags & CFLGS_OFF_SLAB) {
 			struct kmem_cache *freelist_cache;
 			size_t freelist_size;
+			size_t freelist_cache_size;
 
 			freelist_size = num * sizeof(freelist_idx_t);
-			freelist_cache = kmalloc_slab(freelist_size, 0u);
-			if (!freelist_cache)
-				continue;
-
-			/*
-			 * Needed to avoid possible looping condition
-			 * in cache_grow_begin()
-			 */
-			if (OFF_SLAB(freelist_cache))
-				continue;
+			if (freelist_size > KMALLOC_MAX_CACHE_SIZE) {
+				freelist_cache_size = PAGE_SIZE << get_order(freelist_size);
+			} else {
+				freelist_cache = kmalloc_slab(freelist_size, 0u);
+				if (!freelist_cache)
+					continue;
+				freelist_cache_size = freelist_cache->size;
+
+				/*
+				 * Needed to avoid possible looping condition
+				 * in cache_grow_begin()
+				 */
+				if (OFF_SLAB(freelist_cache))
+					continue;
+			}
 
 			/* check if off slab has enough benefit */
-			if (freelist_cache->size > cachep->size / 2)
+			if (freelist_cache_size > cachep->size / 2)
 				continue;
 		}
 
@@ -2061,11 +2067,6 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 		cachep->flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
 #endif
 
-	if (OFF_SLAB(cachep)) {
-		cachep->freelist_cache =
-			kmalloc_slab(cachep->freelist_size, 0u);
-	}
-
 	err = setup_cpu_cache(cachep, gfp);
 	if (err) {
 		__kmem_cache_release(cachep);
@@ -2292,7 +2293,7 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
 		freelist = NULL;
 	else if (OFF_SLAB(cachep)) {
 		/* Slab management obj is off-slab. */
-		freelist = kmem_cache_alloc_node(cachep->freelist_cache,
+		freelist = kmalloc_node(cachep->freelist_size,
 					      local_flags, nodeid);
 	} else {
 		/* We will use last bytes at the slab for freelist */
-- 
2.32.0



* Re: [PATCH] mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation
@ 2022-10-15 11:47 Guenter Roeck
  0 siblings, 0 replies; 2+ messages in thread
From: Guenter Roeck @ 2022-10-15 11:47 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, linux-mm,
	linux-kernel

On Sat, Oct 15, 2022 at 01:34:29PM +0900, Hyeonggon Yoo wrote:
> @Guenter:
> 	This fixes the issue on my emulation.
> 	Can you please test this on your environment?

Yes, that fixes the problem for me.

Tested-by: Guenter Roeck <linux@roeck-us.net>

Thanks,
Guenter

