public inbox for linux-kernel@vger.kernel.org
From: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: "Yu, Fenghua" <fenghua.yu@intel.com>,
	akpm@linux-foundation.org, "Siddha,
	Suresh B" <suresh.b.siddha@intel.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/2] Add percpu smp cacheline align section
Date: Mon, 7 May 2007 10:11:29 -0700	[thread overview]
Message-ID: <20070507171129.GA21638@linux-os.sc.intel.com> (raw)
In-Reply-To: <Pine.LNX.4.64.0705050951040.27326@schroedinger.engr.sgi.com>

On Sat, May 05, 2007 at 09:52:27AM -0700, Christoph Lameter wrote:
> On Fri, 4 May 2007, Fenghua Yu wrote:
>
> > This is follow-up for Suresh's runqueue align in smp patch at:
> > http://www.uwsg.iu.edu/hypermail/linux/kernel/0704.1/0340.html
> >
> > The patches place all of smp cacheline aligned percpu data into
> > .data.percpu.cacheline_aligned_in_smp. Other percpu data is still in
> > data.percpu section. The patches can reduce cache line access in SMP and
> > reduce alignment gap waste. The patches also define PERCPU macro for
> > vmlinux.lds.S for code clean up.
>
> Ummm... The per cpu area is for exclusive use of a particular processor.
> If there is contention in the per cpu area then a data object needs to be
> removed from the per cpu area because the object is *not* accessed only
> from a certain cpu.

Christoph, this data (the data that is accessed by other cpus) also needs
to be defined for each cpu, and as such it gets appended to the data that
is accessed only by the local cpu, while being clearly separated into a
different section.

Not sure what your concern is.

thanks,
suresh


Thread overview: 20+ messages
     [not found] <33E1C72C74DBE747B7B59C1740F7443701A2F0AB@orsmsx417.amr.corp.intel.com>
2007-05-05  0:12 ` [PATCH 1/2] Define percpu smp cacheline align interface Fenghua Yu
2007-05-07 22:58   ` Fenghua Yu
2007-05-15 23:42     ` Andrew Morton
2007-05-05  0:12 ` [PATCH 0/2] Add percpu smp cacheline align section Fenghua Yu
2007-05-05 16:52   ` Christoph Lameter
2007-05-07 17:11     ` Siddha, Suresh B [this message]
2007-05-07 17:30       ` Christoph Lameter
2007-05-07 17:46         ` Siddha, Suresh B
2007-05-07 18:13           ` Yu, Fenghua
2007-05-15  0:12           ` [PATCH 2/2] Use the new percpu interface for shared data -- version 2 Fenghua Yu
     [not found]           ` <20070515001255.GA27978@linux-os.sc.intel.com>
2007-05-15  0:22             ` [PATCH 1/2] Define " Fenghua Yu
2007-05-05  0:12 ` [PATCH 2/2] Call percpu smp cacheline algin interface Fenghua Yu
2007-05-07 22:59   ` Fenghua Yu
2007-05-09 20:33     ` Andrew Morton
2007-05-09 22:16       ` Yu, Fenghua
2007-05-09 22:53         ` Andrew Morton
2007-05-09 22:56           ` Christoph Lameter
2007-05-09 23:06             ` Siddha, Suresh B
2007-05-09 23:10             ` Yu, Fenghua
2007-05-09 23:36               ` Andrew Morton
