public inbox for linux-kernel@vger.kernel.org
From: Dipankar Sarma <dipankar@in.ibm.com>
To: Mala Anand <manand@us.ibm.com>
Cc: BALBIR SINGH <balbir.singh@wipro.com>,
	linux-kernel@vger.kernel.org, lse-tech@lists.sourceforge.net,
	lse-tech-admin@lists.sourceforge.net,
	Paul McKenney <Paul.McKenney@us.ibm.com>,
	Rusty Russell <rusty@rustcorp.com.au>
Subject: Re: [Lse-tech] Re: [RFC] Dynamic percpu data allocator
Date: Thu, 30 May 2002 23:25:13 +0530	[thread overview]
Message-ID: <20020530232513.C3575@in.ibm.com> (raw)
In-Reply-To: <OF6BEB750B.90A03073-ON85256BC9.0041372C@raleigh.ibm.com>

On Thu, May 30, 2002 at 08:56:36AM -0500, Mala Anand wrote:
> dipankar@beaverton.ibm.com had written to BALBIR SINGH <balbir.singh@wipro.com>:
> 
> >The per-cpu data allocator allocates one copy for *each* CPU.
> >It uses the slab allocator underneath. Eventually, when/if we have
> >per-cpu/numa-node slab allocation, the per-cpu data allocator
> >can allocate every CPU's copy from memory closest to it.
> 
> Does this mean that memory allocation will happen in "each" CPU?
> Do slab allocator allocate the memory in each cpu? Your per-cpu
> data allocator sounds like the hot list skbs that are in the tcpip stack
> in the sense it is one level above the slab allocator and the list is
> kept per cpu.  If slab allocator is fixed for per cpu, do you still
> need this per-cpu data allocator?

Actually I don't know for sure what plans are afoot to fix the slab allocator
for per-cpu. One plan I heard about was allocating from per-cpu pools
rather than per-cpu copies. My requirements are similar to
the hot list skbs. I want to do this -

	int *ctrp1, *ctrp2;
	
	ctrp1 = kmalloc_percpu(sizeof(*ctrp1), GFP_ATOMIC);
	if (ctrp1 == NULL) {
		/* recover */
	}
	ctrp2 = kmalloc_percpu(sizeof(*ctrp2), GFP_ATOMIC);
	if (ctrp2 == NULL) {
		/* recover */
	}

	(*per_cpu_ptr(ctrp1, smp_processor_id()))++;
	(*this_cpu_ptr(ctrp2))++;

Now I could implement this by making ctrp1/ctrp2 point to an array
of NR_CPUS pointers and kmalloc()ing memory for each CPU's copy of
the int. This is simple and will work:

	void **ptrs = kmalloc(sizeof(*ptrs) * NR_CPUS, flags);
	int i;

	if (!ptrs)
		return NULL;
	for (i = 0; i < NR_CPUS; i++) {
		ptrs[i] = kmalloc(size, flags);
		if (!ptrs[i])
			goto unwind_oom;
	}
	return ptrs;
unwind_oom:
	while (--i >= 0)
		kfree(ptrs[i]);
	kfree(ptrs);
	return NULL;
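
For reference, the whole fallback can be sketched in user space (a
sketch only - malloc/calloc/free stand in for kmalloc/kfree, NR_CPUS is
a made-up constant, and the per-cpu lookup becomes a plain array index):

```c
#include <stdlib.h>

#define NR_CPUS 4	/* illustrative value */

/* Allocate one zeroed copy of 'size' bytes per CPU; returns an array
 * of NR_CPUS pointers, or NULL on failure. */
static void **alloc_percpu_array(size_t size)
{
	void **ptrs = malloc(sizeof(*ptrs) * NR_CPUS);
	int i;

	if (!ptrs)
		return NULL;
	for (i = 0; i < NR_CPUS; i++) {
		ptrs[i] = calloc(1, size);
		if (!ptrs[i])
			goto unwind_oom;
	}
	return ptrs;
unwind_oom:
	/* Free the copies allocated so far, then the pointer array. */
	while (--i >= 0)
		free(ptrs[i]);
	free(ptrs);
	return NULL;
}

/* CPU cpu's copy is just the cpu'th element of the pointer array. */
#define percpu_array_ptr(ptrs, cpu)	((ptrs)[(cpu)])
```

The drawback is exactly the one discussed next: every copy burns a full
allocator object (at least a cache line) even for a 4-byte counter.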


However, I would like to use kmalloc_percpu() for allocating very
small objects - typically integer counters or small structures
to be used as per-cpu counters for things like dst entries and dentries.
kmalloc() will waste the rest of the cache line for such small objects.
The alternative is a layer of code that interleaves small objects
and saves space.


   CPU #0          CPU#1

 ---------       ---------         Start of cache line
   *ctrp1         *ctrp1
   *ctrp2         *ctrp2

   .               .
   .               .
   .               .
   .               .
   .               .

 ---------       ----------        End of cache line

I have an allocator that interleaves objects like this, provided they
can be fitted into a size that is a factor of SMP_CACHE_BYTES.
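
That interleaving can be sketched in user space like this (illustrative
only - NR_CPUS and SMP_CACHE_BYTES are made-up constants, a single
static block stands in for the real allocator's block list, and the
names are hypothetical):

```c
#include <stdlib.h>

#define NR_CPUS		4	/* illustrative values */
#define SMP_CACHE_BYTES	64

/* One block: NR_CPUS consecutive cache lines, one line per CPU.
 * Small objects are packed side by side within each CPU's line. */
static unsigned char *block;
static size_t next_off;		/* next free offset within each line */

static void *small_alloc_percpu(size_t size)
{
	void *p;

	if (size == 0 || SMP_CACHE_BYTES % size)
		return NULL;	/* size must be a factor of the line size */
	if (!block) {
		block = calloc(NR_CPUS, SMP_CACHE_BYTES);
		if (!block)
			return NULL;
		next_off = 0;
	}
	if (next_off + size > SMP_CACHE_BYTES)
		return NULL;	/* line full; a real allocator would
				 * start a new block here */
	p = block + next_off;	/* CPU 0's copy */
	next_off += size;
	return p;
}

/* CPU cpu's copy sits exactly cpu cache lines past CPU 0's copy. */
static void *small_percpu_ptr(void *ptr, int cpu)
{
	return (unsigned char *)ptr + cpu * SMP_CACHE_BYTES;
}
```

Two counters allocated back to back land sizeof(int) apart in the same
cache line for a given CPU, instead of each occupying a line of its own.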

I hope someone can tell me that I don't even have to do this. Otherwise
I will go ahead and do my thing.

Thanks
-- 
Dipankar Sarma  <dipankar@in.ibm.com> http://lse.sourceforge.net
Linux Technology Center, IBM Software Lab, Bangalore, India.

Thread overview: 12+ messages
2002-05-30 13:56 [Lse-tech] Re: [RFC] Dynamic percpu data allocator Mala Anand
2002-05-30 17:55 ` Dipankar Sarma [this message]
2002-05-31  7:57   ` BALBIR SINGH
2002-05-31  8:40     ` Dipankar Sarma
  -- strict thread matches above, loose matches on Subject: below --
2002-06-04 21:11 Paul McKenney
2002-06-04 12:05 Mala Anand
2002-06-03 19:12 Mala Anand
2002-06-03 19:48 ` Dipankar Sarma
2002-05-24  6:13 Dipankar Sarma
2002-05-24  8:38 ` [Lse-tech] " BALBIR SINGH
2002-05-24  9:13   ` Dipankar Sarma
2002-05-24 11:59     ` BALBIR SINGH
2002-05-24 14:38   ` Martin J. Bligh
