linux-mm.kvack.org archive mirror
From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: linux-mm@kvack.org, Christoph Lameter <cl@linux.com>,
	Vladimir Davydov <vdavydov@virtuozzo.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	brouer@redhat.com
Subject: Re: [PATCH 01/10] slub: cleanup code for kmem cgroup support to kmem_cache_free_bulk
Date: Fri, 8 Jan 2016 12:05:14 +0100
Message-ID: <20160108120514.22798bf5@redhat.com>
In-Reply-To: <20160108025839.GB14457@js1304-P5Q-DELUXE>

On Fri, 8 Jan 2016 11:58:39 +0900
Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:

> On Thu, Jan 07, 2016 at 03:03:38PM +0100, Jesper Dangaard Brouer wrote:
> > This change is primarily an attempt to make it easier to realize the
> > optimizations the compiler performs in-case CONFIG_MEMCG_KMEM is not
> > enabled.
> > 
> > Performance wise, even when CONFIG_MEMCG_KMEM is compiled in, the
> > overhead is zero. This is because, as long as no process has
> > enabled kmem cgroup accounting, the assignment is replaced by
> > asm-NOP operations.  This is possible because memcg_kmem_enabled()
> > uses a static_key_false() construct.
> > 
> > It also helps readability, as it avoids accessing the p[] array like
> > p[size - 1], which "exposes" that the array is processed backwards
> > inside the helper function build_detached_freelist().
> 
> That part is cleaned up, but the overall code doesn't look readable to me.

True, I also don't like my "*s" indirection, even though it gets
removed in the compiled code.

> How about the change below?

It looks more readable as C, but I have to verify that the compiler can
still realize the optimization of just using "s" directly when
CONFIG_MEMCG_KMEM is not enabled.
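
For reference, the pattern that makes the elimination possible is roughly
the following (a simplified sketch, not the exact mm/slab.h code): when the
static key behind memcg_kmem_enabled() is disabled, the jump is patched to
a NOP, the fast path just returns "s", and the extra df->s assignment costs
nothing.

	static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s,
							void *x)
	{
		/* static_key_false() based check; patched down to a NOP
		 * jump while no kmem cgroup accounting is active */
		if (!memcg_kmem_enabled())
			return s;

		/* slow path: look up the (per-memcg) cache the object
		 * actually came from */
		return virt_to_head_page(x)->slab_cache;
	}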

> ---------------------->8------------------
>  struct detached_freelist {
> +       struct kmem_cache *s;
>         struct page *page;
>         void *tail;
>         void *freelist;
>         int cnt;
>  }

I'll likely place "s" at another point, as the 16-byte alignment of this
struct can influence performance on Intel CPUs (e.g. freelist+cnt get
updated at almost the same time, and were 16-byte aligned before).
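
Just to illustrate the layout concern (one possible ordering, not
necessarily what the next version will use): placing "s" last keeps
freelist at offset 16 and cnt right next to it on 64-bit, so the two
fields that get updated together stay where they were before.

	struct detached_freelist {
		struct page *page;	/* offset  0 */
		void *tail;		/* offset  8 */
		void *freelist;		/* offset 16 */
		int cnt;		/* offset 24, adjacent to freelist */
		struct kmem_cache *s;	/* new member, placed last */
	};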


> @@ -2852,8 +2853,11 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
>         if (!object)
>                 return 0;
>  
> +       /* Support for memcg */
> +       df->s = cache_from_obj(s, object);
> +
>         /* Start new detached freelist */
> -       set_freepointer(s, object, NULL);
> +       set_freepointer(df.s, object, NULL);

Not compile tested ;-) ... df->s

>         df->page = virt_to_head_page(object);
>         df->tail = object;
>         df->freelist = object;
> @@ -2868,7 +2872,7 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
>                 /* df->page is always set at this point */
>                 if (df->page == virt_to_head_page(object)) {
>                         /* Opportunity build freelist */
> -                       set_freepointer(s, object, df->freelist);
> +                       set_freepointer(df.s, object, df->freelist);
>                         df->freelist = object;
>                         df->cnt++;
>                         p[size] = NULL; /* mark object processed */
> @@ -2889,23 +2893,19 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
>  
>  
>  /* Note that interrupts must be enabled when calling this function. */
> -void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
> +void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
>  {
>         if (WARN_ON(!size))
>                 return;
>  
>         do {
>                 struct detached_freelist df;
> -               struct kmem_cache *s;
> -
> -               /* Support for memcg */
> -               s = cache_from_obj(orig_s, p[size - 1]);
>  
>                 size = build_detached_freelist(s, size, p, &df);
>                 if (unlikely(!df.page))
>                         continue;
>  
> -               slab_free(s, df.page, df.freelist, df.tail, df.cnt, _RET_IP_);
> +               slab_free(df.s, df.page, df.freelist, df.tail, df.cnt, _RET_IP_);

Argh... line will be 81 chars wide...
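
One way to keep it within 80 columns is simply to wrap the argument
list, e.g.:

		slab_free(df.s, df.page, df.freelist, df.tail,
			  df.cnt, _RET_IP_);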

>         } while (likely(size));
>  }
>  EXPORT_SYMBOL(kmem_cache_free_bulk);

I'll try it out. Thanks for your suggestion.
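
For context, a minimal (hypothetical) user of the bulk API touched by this
series could look like the sketch below; the cache, object count and error
handling are made up purely for illustration.

	void example_bulk_user(struct kmem_cache *cache)
	{
		void *objs[16];

		/* kmem_cache_alloc_bulk() returns 0 on failure */
		if (!kmem_cache_alloc_bulk(cache, GFP_KERNEL,
					   ARRAY_SIZE(objs), objs))
			return;

		/* ... use the 16 objects ... */

		/* hand them all back to the allocator in one call */
		kmem_cache_free_bulk(cache, ARRAY_SIZE(objs), objs);
	}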

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview: 31+ messages
2016-01-07 14:03 [PATCH 00/10] MM: More bulk API work Jesper Dangaard Brouer
2016-01-07 14:03 ` [PATCH 01/10] slub: cleanup code for kmem cgroup support to kmem_cache_free_bulk Jesper Dangaard Brouer
2016-01-07 15:54   ` Christoph Lameter
2016-01-07 17:41     ` Jesper Dangaard Brouer
2016-01-08  2:58   ` Joonsoo Kim
2016-01-08 11:05     ` Jesper Dangaard Brouer [this message]
2016-01-07 14:03 ` [PATCH 02/10] mm/slab: move SLUB alloc hooks to common mm/slab.h Jesper Dangaard Brouer
2016-01-07 14:03 ` [PATCH 03/10] mm: fault-inject take over bootstrap kmem_cache check Jesper Dangaard Brouer
2016-01-07 14:03 ` [PATCH 04/10] slab: use slab_pre_alloc_hook in SLAB allocator shared with SLUB Jesper Dangaard Brouer
2016-01-08  3:05   ` Joonsoo Kim
2016-01-07 14:03 ` [PATCH 05/10] mm: kmemcheck skip object if slab allocation failed Jesper Dangaard Brouer
2016-01-07 14:04 ` [PATCH 06/10] slab: use slab_post_alloc_hook in SLAB allocator shared with SLUB Jesper Dangaard Brouer
2016-01-07 14:04 ` [PATCH 07/10] slab: implement bulk alloc in SLAB allocator Jesper Dangaard Brouer
2016-01-07 14:04 ` [PATCH 08/10] slab: avoid running debug SLAB code with IRQs disabled for alloc_bulk Jesper Dangaard Brouer
2016-01-07 14:04 ` [PATCH 09/10] slab: implement bulk free in SLAB allocator Jesper Dangaard Brouer
2016-01-07 14:04 ` [PATCH 10/10] mm: new API kfree_bulk() for SLAB+SLUB allocators Jesper Dangaard Brouer
2016-01-08  3:03   ` Joonsoo Kim
2016-01-08 11:20     ` Jesper Dangaard Brouer
2016-01-07 18:54 ` [PATCH 00/10] MM: More bulk API work Linus Torvalds
2016-01-12 15:13 ` [PATCH V2 00/11] " Jesper Dangaard Brouer
2016-01-12 15:13   ` [PATCH V2 01/11] slub: cleanup code for kmem cgroup support to kmem_cache_free_bulk Jesper Dangaard Brouer
2016-01-12 15:13   ` [PATCH V2 02/11] mm/slab: move SLUB alloc hooks to common mm/slab.h Jesper Dangaard Brouer
2016-01-12 15:14   ` [PATCH V2 03/11] mm: fault-inject take over bootstrap kmem_cache check Jesper Dangaard Brouer
2016-01-12 15:14   ` [PATCH V2 04/11] slab: use slab_pre_alloc_hook in SLAB allocator shared with SLUB Jesper Dangaard Brouer
2016-01-12 15:14   ` [PATCH V2 05/11] mm: kmemcheck skip object if slab allocation failed Jesper Dangaard Brouer
2016-01-12 15:14   ` [PATCH V2 06/11] slab: use slab_post_alloc_hook in SLAB allocator shared with SLUB Jesper Dangaard Brouer
2016-01-12 15:15   ` [PATCH V2 07/11] slab: implement bulk alloc in SLAB allocator Jesper Dangaard Brouer
2016-01-12 15:15   ` [PATCH V2 08/11] slab: avoid running debug SLAB code with IRQs disabled for alloc_bulk Jesper Dangaard Brouer
2016-01-12 15:15   ` [PATCH V2 09/11] slab: implement bulk free in SLAB allocator Jesper Dangaard Brouer
2016-01-12 15:16   ` [PATCH V2 10/11] mm: new API kfree_bulk() for SLAB+SLUB allocators Jesper Dangaard Brouer
2016-01-12 15:16   ` [PATCH V2 11/11] mm: fix some spelling Jesper Dangaard Brouer
