public inbox for linux-kernel@vger.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Alex Shi <alex.shi@intel.com>
Cc: tglx@linutronix.de, hpa@zytor.com, mingo@redhat.com,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	x86@kernel.org, asit.k.mallick@intel.com
Subject: Re: change last level cache alignment on x86?
Date: Fri, 2 Mar 2012 09:12:09 +0100	[thread overview]
Message-ID: <20120302081208.GA24504@elte.hu> (raw)
In-Reply-To: <1330673425.21053.1503.camel@debian>


* Alex Shi <alex.shi@intel.com> wrote:

> On Thu, 2012-03-01 at 16:33 +0800, Alex,Shi wrote:
> > Currently the last-level cache alignment defined in the kernel is
> > still 128 bytes, but I checked Intel's Core2, NHM, SNB and Atom
> > across several platforms, and all of them use 64-byte cache lines.
> > I did not get detailed info on AMD platforms; maybe someone can
> > supply it here. So, is it possible to make a change like the
> > following to use 64-byte cache alignment in the kernel?
> > 
> > ===
> > diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
> > index 3c57033..f342a5a 100644
> > --- a/arch/x86/Kconfig.cpu
> > +++ b/arch/x86/Kconfig.cpu
> > @@ -303,7 +303,7 @@ config X86_GENERIC
> >  config X86_INTERNODE_CACHE_SHIFT
> >  	int
> >  	default "12" if X86_VSMP
> > -	default "7" if NUMA
> > +	default "7" if NUMA && (MPENTIUM4)
> >  	default X86_L1_CACHE_SHIFT
> >  
> >  config X86_CMPXCHG
> 
> In arch/x86/include/asm/cache.h, the INTERNODE_CACHE_SHIFT macro
> ultimately feeds into '__cacheline_aligned_in_smp':
> 
> #ifdef CONFIG_X86_VSMP
> #ifdef CONFIG_SMP
> #define __cacheline_aligned_in_smp                                      \
>         __attribute__((__aligned__(INTERNODE_CACHE_BYTES)))             \
>         __page_aligned_data
> #endif
> #endif

Note the #ifdef CONFIG_X86_VSMP - so the 128-byte value does not 
actually end up in __cacheline_aligned_in_smp on non-VSMP kernels.

> Looking at the following contents of Kconfig.cpu, I wonder whether 
> it is possible to remove the 'default "7" if NUMA' line. A tighter 
> cache alignment that fits the actual line size could then help 
> performance. Would anyone like to comment?

>  config X86_INTERNODE_CACHE_SHIFT
>         int
>         default "12" if X86_VSMP
> -       default "7" if NUMA
>         default X86_L1_CACHE_SHIFT

Yes, I think removing that line would be fine - it was copied 
from the old L1 alignment of 128 bytes (a P4 artifact from the 
days when that CPU was the dominant platform - which has not 
been the case for a long time already).

Could you please also do a before/after build of an x86 
defconfig with NUMA enabled and see what the alignments in the 
before/after System.map are?
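One rough way to do such a before/after comparison is to classify data-symbol addresses in System.map by their low bits. The sample map below is made up for illustration; a real check would run the awk script on the actual before/after System.map files.

```shell
#!/bin/sh
# Illustrative System.map-style sample; replace with the real files.
cat > /tmp/sample.map <<'EOF'
ffffffff81a00000 D per_cpu_start
ffffffff81a00080 D some_counter
ffffffff81a000c0 d other_data
EOF

# Classify data symbols ('d'/'D') by address alignment: last two hex
# digits 00/80 mean 128-byte aligned, 40/c0 mean only 64-byte aligned.
result=$(awk '$2 ~ /^[dD]$/ {
	tail = substr($1, length($1) - 1)
	if (tail == "00" || tail == "80") a128++
	else if (tail == "40" || tail == "c0") a64++
}
END { printf "128-byte aligned: %d, 64-byte-only: %d\n", a128 + 0, a64 + 0 }' /tmp/sample.map)
echo "$result"
```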

Thanks,

	Ingo

Thread overview: 6+ messages
2012-03-01  8:33 change last level cache alignment on x86? Alex,Shi
2012-03-02  7:30 ` Alex Shi
2012-03-02  8:12   ` Ingo Molnar [this message]
2012-03-02 14:42     ` Alex Shi
2012-03-02 15:25       ` Ingo Molnar
2012-03-03 11:30         ` Alex Shi
