public inbox for b.a.t.m.a.n@lists.open-mesh.org
From: "Linus Lüssing" <linus.luessing@c0d3.blue>
To: The list for a Better Approach To Mobile Ad-hoc Networking
	<b.a.t.m.a.n@lists.open-mesh.org>
Subject: Re: [B.A.T.M.A.N.] kmalloc() vs. kmem_cache_alloc() for global TT?
Date: Sun, 15 May 2016 14:06:26 +0200	[thread overview]
Message-ID: <20160515120626.GG4375@otheros> (raw)
In-Reply-To: <2911091.OSPWlnrdxm@sven-edge>

On Sun, May 15, 2016 at 01:27:39PM +0200, Sven Eckelmann wrote:
> On Saturday 14 May 2016 16:51:29 Linus Lüssing wrote:
> > Hi,
> > 
> > Is anyone familiar with the implications of using kmalloc() vs.
> > kmem_cache_alloc()? Not just allocation speed, but also RAM
> > fragmentation is something I'm currently wondering about.
> 
> Yes, it should reduce the effects of allocating differently sized objects 
> (which can create smaller regions of memory which cannot be used anymore). But 
> my guess is that SLAB isn't that bad because it already has some caches for 
> differently sized memory regions.

Yes, I tested the following patchset [0] yesterday, created a few
hundred clients in x86 VMs and observed the output of /proc/slabinfo.

* tt_global_entry uses the kmalloc-node cache (objsize 192), same size as a
  batadv_tt_global_cache
* tt_orig_list_entry uses kmalloc-64, same size as a
  batadv_tt_orig_cache

(sizeof(tt-global) -> 144, sizeof(orig-entry) -> 56,
sizeof(tt-common) -> 64, sizeof(tt_local) -> 80)

So indeed it looks like there might be no difference, fragmentation-wise,
between using one of the predefined caches and a custom one. And
the wasted space seems to be the same (if I'm not misinterpreting the
slabinfo output).

On the other hand, it seems common to use custom caches for
larger numbers of frequently changing objects of the same type.
Filesystems seem to use them regularly.

> 
> I think we should check if this helps by first testing it with the main TT 
> objects. I've sent an RFC patch [1]. Unfortunately, I am not aware of any nice 
> tools to really check the size of the available, contiguous memory chunks. The 
> only thing partially interesting I know about is /proc/slabinfo, which shows 
> you the state of the available caches (including the slab page caches).

Ok, yes, that's what I had looked at yesterday, too. I'll check
whether I can get some people from Freifunk Rhein-Neckar or Freifunk
Hamburg to test these patches and see whether they make a difference
for them.

> 
> Kind regards,
> 	Sven 
> 
> [1] https://patchwork.open-mesh.org/patch/16200/

[0] https://git.open-mesh.org/batman-adv.git/shortlog/refs/heads/linus/kmem-cache


Thread overview: 13+ messages
2016-05-14 14:51 [B.A.T.M.A.N.] kmalloc() vs. kmem_cache_alloc() for global TT? Linus Lüssing
2016-05-14 14:54 ` Linus Lüssing
2016-05-15 11:27 ` Sven Eckelmann
2016-05-15 12:06   ` Linus Lüssing [this message]
2016-05-15 12:15     ` Sven Eckelmann
2016-05-15 12:17       ` Sven Eckelmann
2016-05-15 12:37       ` Linus Lüssing
2016-05-15 12:53         ` Sven Eckelmann
2016-05-15 12:41     ` Linus Lüssing
2016-05-15 20:50       ` Sven Eckelmann
2016-05-15 21:26         ` Linus Lüssing
2016-05-15 22:06           ` Sven Eckelmann
2016-05-24  0:14 ` Linus Lüssing
