From: Jack Steiner <steiner@sgi.com>
To: Manfred Spraul <manfred@dbl.q-ag.de>
Cc: linux-kernel@vger.kernel.org, lse-tech@lists.sourceforge.net
Subject: Re: [Lse-tech] [RFC, PATCH] 1/5 rcu lock update: Add per-cpu batch counter
Date: Tue, 25 May 2004 15:32:15 -0500
Message-ID: <20040525203215.GB5127@sgi.com>
In-Reply-To: <200405250535.i4P5ZKAQ017591@dbl.q-ag.de>
On Tue, May 25, 2004 at 07:35:20AM +0200, Manfred Spraul wrote:
> Hi,
>
> Step one for reducing cacheline thrashing within rcupdate.c:
>
> The current code uses the rcu_cpu_mask bitmap both for keeping track
> of the cpus that haven't gone through a quiescent state and for
> checking if a cpu should look for quiescent states. The bitmap
> is frequently changed and the check is done by polling -
> together this causes cache line thrashing.
>
.....
It looks like the patch fixes the problem that I saw.
I ran a 2.6.6+rcupatch kernel on a 512p system. Previously, this system
showed ALL cpus becoming ~50% busy when an "ls" loop ran on a single
cpu. That behavior is now fixed.
The following shows the system overhead on cpus 0-9 (higher cpus are
similar):
idle system
CPU 0 1 2 3 4 5 6 7 8 9
0.70 0.21 0.19 0.17 0.21 0.20 0.20 0.19 0.18 0.19
0.72 0.21 0.18 0.17 0.21 0.21 0.21 0.20 0.18 0.19
0.70 0.20 0.18 0.17 0.22 0.20 0.19 0.20 0.19 0.20
0.71 0.19 0.18 0.17 0.21 0.20 0.19 0.20 0.18 0.19
0.71 0.23 0.19 0.17 0.21 0.20 0.19 0.20 0.19 0.20
running "ls" loop on cpu 4 (the number for cpu 4 includes the "ls" command itself - NOT just overhead)
CPU 0 1 2 3 4 5 6 7 8 9
0.83 2.08 0.31 0.29 97.87 1.25 0.32 0.32 0.32 0.33
0.88 0.52 0.30 0.28 96.46 1.32 0.32 0.23 0.31 0.30
0.84 1.27 0.31 0.29 97.15 1.38 0.33 0.31 0.31 0.31
0.83 2.81 0.32 0.30 98.61 1.14 0.31 0.33 0.32 0.37
0.84 2.32 0.32 0.40 97.91 1.43 0.33 0.43 0.32 0.36
There is a small increase in system overhead on all cpus but not the 50% seen
earlier.
I don't understand, however, why the overhead increased on cpus 1 & 5. This
may be a test anomaly, but I suspect something else. I'll look into that later.
I also noticed one other anomaly in /proc/interrupts, which shows the
per-cpu interrupt counts. Under most circumstances, each cpu takes 1024
timer interrupts/sec. When the "ls" script is running, the number of
timer interrupts/sec on the cpu running "ls" drops to ~650. I'm not
sure why (perhaps something runs too long with interrupts disabled).
I'll look further....
--
Thanks
Jack Steiner (steiner@sgi.com) 651-683-5302
Principal Engineer SGI - Silicon Graphics, Inc.
Thread overview: 3 messages
2004-05-25 5:35 [RFC, PATCH] 1/5 rcu lock update: Add per-cpu batch counter Manfred Spraul
2004-05-25 20:32 ` Jack Steiner [this message]
2004-05-26 18:31 ` [Lse-tech] " Paul E. McKenney