public inbox for linux-kernel@vger.kernel.org
From: keith <kmannth@us.ibm.com>
To: devnull@us.ibm.com, djwong <djwong@us.ibm.com>,
	Chris McDermott <lcm@us.ibm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: SELinux performance issue with large systems (32 cpus)
Date: Tue, 23 Nov 2004 11:22:05 -0800	[thread overview]
Message-ID: <1101237725.27848.301.camel@knk> (raw)

I work with i386 16-way systems.  When hyperthreading is enabled they
have 32 "cpus".  I recently did a quick performance check with 2.6
(as it turns out, with SELinux enabled) doing kernel builds.

My basic test was timing kernel makes using -j16 and -j32.  I saw a
tremendous difference between the two times.

 for -j16  
real	0m52.450s
user	6m24.572s
sys	2m25.331s

 for -j32
real	2m56.743s
user	9m28.781s
sys	73m50.536s 
   
This performance problem was not seen without hyperthreading; I have
only seen it on a 16-way with HT.  2.6.10-rc1 was used to evaluate the
problem.
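As a quick sanity check on the sys times reported above (2m25s for -j16
versus 73m50s for -j32), the increase works out to roughly 30x; the
figures below are copied from the measurements, only the arithmetic is
added:

```shell
# Convert the reported sys times to seconds and compare them.
j16_sys=$((2 * 60 + 25))     # 2m25.331s  -> ~145 seconds
j32_sys=$((73 * 60 + 50))    # 73m50.536s -> ~4430 seconds
echo "sys time increase: roughly $((j32_sys / j16_sys))x"
```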

Notice that the system time goes through the roof: from about 2.5
minutes to almost 74 minutes!  We looked at schedstat data and tried
various scheduler / NUMA options without finding much to point at.  We
then did some oprofiling and saw

31999102 83.2007  _spin_lock_irqsave

for make -j32.  

After some lock profiling (keeping track of which locks were last used
and how many cycles were spent waiting) it became quite clear that the
avc_lock was to blame.  The avc_lock is an SELinux lock.

The theory was confirmed by booting with selinux=0.  The performance of
make -j32 is now at an acceptable level (within 20% of a make -j16).
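For anyone reproducing the comparison: selinux=0 is a kernel
command-line parameter, so it goes in the boot loader entry.  A grub
stanza might look like the following (the kernel path and root device
are illustrative assumptions, not taken from this report):

```
# /boot/grub/menu.lst -- illustrative entry; adjust paths for your setup
title Linux 2.6.10-rc1, SELinux disabled
    kernel /boot/vmlinuz-2.6.10-rc1 root=/dev/sda1 ro selinux=0
```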

It appears that SELinux only scales to somewhere between 16 and 32
cpus.  I know very little about SELinux and its workings, but I wanted
to report what I have seen on my system.  I can't say I am really happy
with this performance.

I would like to thank Darrick Wong and Rick Lindsley for helping to
identify this issue.

Keith Mannthey 
LTC xSeries 

Please cc me as I am not a regular subscriber to this list. 
 


Thread overview: 5+ messages
2004-11-23 19:22 keith [this message]
2004-11-23 20:52 ` SELinux performance issue with large systems (32 cpus) Chris Wright
2004-11-23 21:59   ` keith
2004-11-23 20:54 ` Stephen Smalley
2004-11-23 20:56 ` Valdis.Kletnieks
