linux-mm.kvack.org archive mirror
* [PATCH] slub: create new ___slab_alloc function that can be called with irqs disabled
@ 2015-08-28 19:43 Christoph Lameter
  2015-08-29  9:59 ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 2+ messages in thread
From: Christoph Lameter @ 2015-08-28 19:43 UTC (permalink / raw)
  To: akpm; +Cc: Jesper Dangaard Brouer, linux-mm


Bulk alloc needs a function like that because it enables interrupts before
calling __slab_alloc which promptly disables them again using the expensive
local_irq_save().

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2015-08-28 14:33:01.520254081 -0500
+++ linux/mm/slub.c	2015-08-28 14:34:21.050215578 -0500
@@ -2300,23 +2300,15 @@ static inline void *get_freelist(struct
  * And if we were unable to get a new slab from the partial slab lists then
  * we need to allocate a new slab. This is the slowest path since it involves
  * a call to the page allocator and the setup of a new slab.
+ *
+ * Version of __slab_alloc to use when we know that interrupts are
+ * already disabled (which is the case for bulk allocation).
  */
-static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
+static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			  unsigned long addr, struct kmem_cache_cpu *c)
 {
 	void *freelist;
 	struct page *page;
-	unsigned long flags;
-
-	local_irq_save(flags);
-#ifdef CONFIG_PREEMPT
-	/*
-	 * We may have been preempted and rescheduled on a different
-	 * cpu before disabling interrupts. Need to reload cpu area
-	 * pointer.
-	 */
-	c = this_cpu_ptr(s->cpu_slab);
-#endif

 	page = c->page;
 	if (!page)
@@ -2374,7 +2366,6 @@ load_freelist:
 	VM_BUG_ON(!c->page->frozen);
 	c->freelist = get_freepointer(s, freelist);
 	c->tid = next_tid(c->tid);
-	local_irq_restore(flags);
 	return freelist;

 new_slab:
@@ -2391,7 +2382,6 @@ new_slab:

 	if (unlikely(!freelist)) {
 		slab_out_of_memory(s, gfpflags, node);
-		local_irq_restore(flags);
 		return NULL;
 	}

@@ -2407,11 +2397,35 @@ new_slab:
 	deactivate_slab(s, page, get_freepointer(s, freelist));
 	c->page = NULL;
 	c->freelist = NULL;
-	local_irq_restore(flags);
 	return freelist;
 }

 /*
+ * Another variant that disables interrupts and compensates for possible
+ * cpu changes by refetching the per cpu area pointer.
+ */
+static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
+			  unsigned long addr, struct kmem_cache_cpu *c)
+{
+	void *p;
+	unsigned long flags;
+
+	local_irq_save(flags);
+#ifdef CONFIG_PREEMPT
+	/*
+	 * We may have been preempted and rescheduled on a different
+	 * cpu before disabling interrupts. Need to reload cpu area
+	 * pointer.
+	 */
+	c = this_cpu_ptr(s->cpu_slab);
+#endif
+
+	p = ___slab_alloc(s, gfpflags, node, addr, c);
+	local_irq_restore(flags);
+	return p;
+}
+
+/*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
  * overhead for requests that can be satisfied on the fastpath.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


* Re: [PATCH] slub: create new ___slab_alloc function that can be called with irqs disabled
  2015-08-28 19:43 [PATCH] slub: create new ___slab_alloc function that can be called with irqs disabled Christoph Lameter
@ 2015-08-29  9:59 ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 2+ messages in thread
From: Jesper Dangaard Brouer @ 2015-08-29  9:59 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: akpm, linux-mm, brouer


I like it, as I have a similar patch.  The difference is just that I
don't introduce an extra function call, but instead enable/disable IRQs
at the call site of __slab_alloc().

But hopefully the compiler will inline __slab_alloc() to avoid the
extra level of function-call indirection (extra cost: ~7 cycles, see [1]).
Coding-wise, your solution is cleaner.

On Fri, 28 Aug 2015 14:43:26 -0500 (CDT) Christoph Lameter <cl@linux.com> wrote:

> Bulk alloc needs a function like that because it enables interrupts before
> calling __slab_alloc which promptly disables them again using the expensive
> local_irq_save().
> 
> Signed-off-by: Christoph Lameter <cl@linux.com>

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>

Hint: measure the overhead on your own CPU:
 [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/time_bench_sample.c

 
> [...]
>  /*
> + * Another variant that disables interrupts and compensates for possible
> + * cpu changes by refetching the per cpu area pointer.
> + */
> +static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> +			  unsigned long addr, struct kmem_cache_cpu *c)

The compiler will hopefully inline this call.

> +{
> +	void *p;
> +	unsigned long flags;
> +
> +	local_irq_save(flags);
> +#ifdef CONFIG_PREEMPT
> +	/*
> +	 * We may have been preempted and rescheduled on a different
> +	 * cpu before disabling interrupts. Need to reload cpu area
> +	 * pointer.
> +	 */
> +	c = this_cpu_ptr(s->cpu_slab);
> +#endif
> +
> +	p = ___slab_alloc(s, gfpflags, node, addr, c);
> +	local_irq_restore(flags);
> +	return p;
> +}
[...]


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer

