From: Dave Hansen <haveblue@us.ibm.com>
To: "Bond, Andrew" <Andrew.Bond@hp.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: TPC-C benchmark used standard RH kernel
Date: Thu, 19 Sep 2002 13:40:52 -0700	[thread overview]
Message-ID: <3D8A3654.50201@us.ibm.com> (raw)
In-Reply-To: <45B36A38D959B44CB032DA427A6E106402D09E43@cceexc18.americas.cpqcorp.net>

Bond, Andrew wrote:
 > This isn't as recent as I would like, but it will give you an idea.
 > Top 75 from readprofile.  This run was not using bigpages though.
 >
 > 00000000 total                                      7872   0.0066
 > c0105400 default_idle                               1367  21.3594
 > c012ea20 find_vma_prev                               462   2.2212
 > c0142840 create_bounce                               378   1.1250
 > c0142540 bounce_end_io_read                          332   0.9881
 > c0197740 __make_request                              256   0.1290
 > c012af20 zap_page_range                              231   0.1739
 > c012e9a0 find_vma                                    214   1.6719
 > c012e780 avl_rebalance                               160   0.4762
 > c0118d80 schedule                                    157   0.1609
 > c010ba50 do_gettimeofday                             145   1.0069
 > c0130c30 __find_lock_page                            144   0.4500
 > c0119150 __wake_up                                   142   0.9861
 > c01497c0 end_buffer_io_kiobuf_async                  140   0.6250
 > c0113020 flush_tlb_mm                                128   1.0000
 > c0168000 proc_pid_stat                               125   0.2003
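
For reference, a dump in that shape matches readprofile's verbose
output: kernel text address, symbol, tick count, and ticks normalized
by function size.  A minimal capture sketch, assuming the kernel was
booted with profile=2 and that /boot/System.map matches the running
kernel:

  readprofile -v -m /boot/System.map | sort -nr -k3 | head -75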

Forgive my complete ignorance about TPC-C...  Why do you have so much 
idle time?  Are you I/O bound? (with that many disks, I sure hope not 
:) )  Or is it as simple as leaving profiling running for a bit before 
or after the benchmark was run?
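
If that's what happened, bracketing the measurement window would keep
the idle ticks out of the counts; a rough sketch, where run_tpcc is a
hypothetical stand-in for whatever actually drives the run:

  readprofile -r                    # zero the profiling counters (needs root)
  ./run_tpcc                        # hypothetical benchmark driver
  readprofile -v -m /boot/System.map > profile.out   # snapshot right after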

Earlier, I got a little over-excited because I was thinking that the 
machines under test were 8-ways, but it looks like the DL580 is a 
4xPIII-Xeon, and you have 8 of them.  I know you haven't published it, 
but do you do any testing on 8-ways?

For most of our work (Specweb, dbench, plain kernel compiles), the 
kernel tends to blow up a lot worse at 8 CPUs than at 4.  It really dies 
on the 32-way NUMA-Qs, but that's a whole other story...

-- 
Dave Hansen
haveblue@us.ibm.com


