From: Rusty Russell <rusty@rustcorp.com.au>
To: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Pavel Machek <pavel@ucw.cz>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@osdl.org>
Subject: Re: [PATCH 0/6] Per-processor private data areas for i386
Date: Fri, 29 Sep 2006 10:22:22 +1000
Message-ID: <1159489343.6241.18.camel@localhost.localdomain>
In-Reply-To: <451ADEE4.4010508@goop.org>
On Wed, 2006-09-27 at 13:28 -0700, Jeremy Fitzhardinge wrote:
> Pavel Machek wrote:
> > So we have 4% slowdown...
> >
>
> Yes, that would be the worst-case slowdown in the hot-cache case.
> Rearranging the layout of the GDT would remove any theoretical
> cold-cache slowdown (I haven't measured if there's any impact in practice).
>
> Rusty has also done more comprehensive benchmarks with his variant of
> this patch series, and found no statistically interesting performance
> difference. Which is pretty much what I would expect, since it doesn't
> increase cache-misses at all.
OK, here are my null-syscall results. This is on an Intel(R) Pentium(R)
4 CPU 3.00GHz (stepping 9), single processor (SMP kernel).
I did three sets of tests: before the patches, with saving/restoring
%gs, and with %gs used for per-cpu vars, current and smp_processor_id().
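For reference, the kind of %gs-relative access being tested looks
roughly like the sketch below. The struct layout, field names and
macro are illustrative inventions of mine, not necessarily what's in
Jeremy's patches; the point is just that each read is a single
%gs-relative load.

    #include <stddef.h>             /* offsetof */

    struct task_struct;             /* opaque here */

    /* Illustrative per-cpu data area; %gs points at the copy
     * belonging to the current CPU.  Field names are my own. */
    struct i386_pda {
            struct task_struct *pcurrent;   /* "current" task */
            int cpu_number;                 /* smp_processor_id() */
    };

    /* Read a PDA field with one %gs-relative load. */
    #define pda_read(field)                                         \
    ({                                                              \
            typeof(((struct i386_pda *)0)->field) ret__;            \
            asm("mov %%gs:%c1, %0"                                  \
                : "=r" (ret__)                                      \
                : "i" (offsetof(struct i386_pda, field)));          \
            ret__;                                                  \
    })

    #define smp_processor_id()      pda_read(cpu_number)
    #define current                 pda_read(pcurrent)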
Before:
swarm5.0:Simple syscall: 0.3734 microseconds
swarm5.1:Simple syscall: 0.3734 microseconds
swarm5.2:Simple syscall: 0.3734 microseconds
swarm5.3:Simple syscall: 0.3734 microseconds
With saving/restoring %gs (the per-cpu variant gave the same syscall numbers):
swarm5.4:Simple syscall: 0.3801 microseconds
swarm5.5:Simple syscall: 0.3801 microseconds
swarm5.6:Simple syscall: 0.3804 microseconds
swarm5.7:Simple syscall: 0.3801 microseconds
That's a ~6.7ns cost for saving and restoring %gs, and other lmbench
syscall benchmarks reflected similar differences where the noise didn't
overwhelm them.
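If anyone wants to reproduce the null-syscall number without lmbench,
something like the below gets you in the same ballpark (lmbench's
"Simple syscall" is a getppid()-style loop, IIRC; this is a crude
stand-in, not the real benchmark):

    /* Crude null-syscall timer.  Build: gcc -O2 -o nullsys nullsys.c */
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
            enum { LOOPS = 10 * 1000 * 1000 };
            struct timeval start, end;
            int i;

            gettimeofday(&start, NULL);
            for (i = 0; i < LOOPS; i++)
                    getppid();      /* cheap syscall, result unused */
            gettimeofday(&end, NULL);

            printf("Simple syscall: %.4f microseconds\n",
                   ((end.tv_sec - start.tv_sec) * 1e6
                    + (end.tv_usec - start.tv_usec)) / (double)LOOPS);
            return 0;
    }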
On kernbench, the differences were in the noise.
Strangely, I see a ~4% slowdown on fork+exec when I used %gs for
per-cpu vars, which I am now investigating (71.0831 usec before,
71.1725 usec with saving, 73.7458 usec with per-cpu!).
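(Those fork+exec numbers are lmbench's; for a quick-and-dirty
cross-check you can time fork+exec yourself with something like the
sketch below.  It execs /bin/true rather than lmbench's own tiny child
binary, so the absolute numbers won't match, but a relative slowdown
should still show up.)

    /* Crude fork+exec timer.  Build: gcc -O2 -o forkexec forkexec.c */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            enum { LOOPS = 1000 };
            struct timeval start, end;
            int i;

            gettimeofday(&start, NULL);
            for (i = 0; i < LOOPS; i++) {
                    pid_t pid = fork();
                    if (pid == 0) {
                            execl("/bin/true", "true", (char *)NULL);
                            _exit(1);       /* exec failed */
                    }
                    waitpid(pid, NULL, 0);
            }
            gettimeofday(&end, NULL);

            printf("fork+exec: %.4f microseconds\n",
                   ((end.tv_sec - start.tv_sec) * 1e6
                    + (end.tv_usec - start.tv_usec)) / (double)LOOPS);
            return 0;
    }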
Cheers,
Rusty.
--
Help! Save Australia from the worst of the DMCA: http://linux.org.au/law
Thread overview (15+ messages):
2006-09-25 18:45 [PATCH 0/6] Per-processor private data areas for i386 jeremy
2006-09-25 18:45 ` [PATCH 1/6] Initialize the per-CPU data area jeremy
2006-09-25 20:49 ` Andi Kleen
2006-09-25 20:59 ` Jeremy Fitzhardinge
2006-09-25 21:05 ` Andi Kleen
2006-09-25 21:33 ` Jeremy Fitzhardinge
2006-09-25 18:45 ` [PATCH 2/6] Use %gs as the PDA base-segment in the kernel jeremy
2006-09-25 18:45 ` [PATCH 3/6] Fix places where using %gs changes the usermode ABI jeremy
2006-09-25 18:45 ` [PATCH 4/6] Update sys_vm86 to cope with changed pt_regs and %gs usage jeremy
2006-09-25 18:45 ` [PATCH 5/6] Implement smp_processor_id() with the PDA jeremy
2006-09-25 18:45 ` [PATCH 6/6] Implement "current" with the PDA jeremy
2006-09-27 19:46 ` [PATCH 0/6] Per-processor private data areas for i386 Pavel Machek
2006-09-27 20:28   ` Jeremy Fitzhardinge
2006-09-29 0:22 ` Rusty Russell [this message]