From: jamal <hadi@cyberus.ca>
To: David Miller <davem@davemloft.net>
Cc: netdev@vger.kernel.org, Robert.Olsson@data.slu.se,
shemminger@linux-foundation.org, kaber@trash.net
Subject: Re: fscked clock sources revisited
Date: Tue, 07 Aug 2007 09:19:52 -0400 [thread overview]
Message-ID: <1186492792.5163.94.camel@localhost> (raw)
In-Reply-To: <1185848076.5162.39.camel@localhost>
On Mon, 2007-07-30 at 22:14 -0400, jamal wrote:
> I am going to test with hpet when i get the chance
Couldn't figure out how to turn hpet on/off, so I didn't test it.
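For the record, on kernels of this vintage the clocksource can be inspected and switched through sysfs; a sketch (paths are the standard single-node ones, adjust clocksource0 if your tree differs):

```shell
# Standard clocksource sysfs node on 2.6 kernels.
CS_DIR=/sys/devices/system/clocksource/clocksource0

# What the kernel detected, e.g. "tsc hpet acpi_pm jiffies".
# hpet only appears here if CONFIG_HPET_TIMER is set and the
# BIOS advertises an HPET; otherwise it cannot be selected.
cat "$CS_DIR/available_clocksource" 2>/dev/null || echo "(no clocksource sysfs)"

# The source currently in use.
cat "$CS_DIR/current_clocksource" 2>/dev/null || true

# Switching at runtime needs root:
#   echo hpet > "$CS_DIR/current_clocksource"
```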
> and perhaps turn off all the other sources if nothing good comes out; i
> need my numbers ;->
Here are some numbers that make the mystery even more interesting. This
is with kernel 2.6.22-rc4; repeating with 2.6.23-rc1 didn't show
anything different. I went back to 2.6.22-rc4 because it is the base
for my batching patches - since those drove me to this test, I wanted
to reduce the variables when comparing against batching.
I picked UDP for this test because it lets me select different packet
sizes, and I used iperf. The sender is a dual Opteron with a tg3 NIC;
the receiver is a dual Xeon.
The default HZ is 250. Each packet size was run 3 times with each
clock source. I made sure the receiver wasn't a bottleneck (increased
socket buffer sizes, etc).
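The sweep amounted to something like the following (a sketch, not my
exact invocation; the receiver address and rate cap are placeholders,
and iperf's -l flag sets the UDP payload size):

```shell
# Dry-run sketch of the measurement loop: iperf (v2) UDP client,
# three runs per packet size. On the receiver side: iperf -s -u
RECEIVER=10.0.0.2   # placeholder for the dual-Xeon sink

for len in 64 128 512 1280; do
    for run in 1 2 3; do
        # -u UDP, -l payload bytes, -b offered rate (set above line
        # rate so the sender is the bottleneck, not the cap).
        # echo'd here as a dry run; drop the echo to actually send.
        echo iperf -c "$RECEIVER" -u -l "$len" -b 1000M -t 30
    done
done
```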
Packet | jiffies (1/250) | tsc           | acpi_pm
-------|-----------------|---------------|---------------
64     | 141, 145, 142   | 131, 136, 130 | 103, 104, 110
128    | 256, 256, 256   | 274, 260, 269 | 216, 206, 220
512    | 513, 513, 513   | 886, 886, 886 | 828, 814, 806
1280   | 684, 684, 684   | 951, 951, 951 | 951, 951, 951
So I was wrong to declare jiffies good. The last batch of experiments
used only 64-byte UDP; clearly, as packet size goes up, the jiffies
results get worse.
At that point I recompiled the kernel with HZ=1000, and the jiffies
results improved.
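(The HZ change is just a config rebuild; on 2.6.22, assuming the usual
kernel/Kconfig.hz option names, the relevant fragment looks like:)

```
# .config fragment for the HZ=1000 rebuild (verify against your tree)
# CONFIG_HZ_250 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
```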
Packet | jiffies (1/1000)| tsc           | acpi_pm
-------|-----------------|---------------|---------------
64     | 145, 135, 135   | 131, 137, 139 | 110, 110, 108
128    | 257, 257, 257   | 270, 264, 250 | 218, 216, 217
512    | 819, 776, 819   | 886, 886, 886 | 841, 824, 846
1280   | 855, 855, 855   | 951, 950, 951 | 951, 951, 951
Still not as good as the other two at the larger packet sizes.
For this machine the ideal clock source would be jiffies with HZ=1000
up to about 100 bytes, then tsc. Of course I could just pick tsc, but
people have dissed it so far - I probably didn't hit the condition
where it goes into deep slumber.
Any insights? This makes it hard to quantify the batching experimental
improvements, as I suspect they could be architecture- or, worse,
machine-dependent.
cheers,
jamal
Thread overview: 9+ messages
2007-07-31 1:10 fscked clock sources revisited jamal
2007-07-31 1:10 ` Arjan van de Ven
2007-07-31 1:17 ` jamal
2007-07-31 1:23 ` jamal
2007-07-31 2:07 ` Arjan van de Ven
2007-07-31 1:37 ` David Miller
2007-07-31 2:03 ` Arjan van de Ven
2007-07-31 2:14 ` jamal
2007-08-07 13:19 ` jamal [this message]