From: Tim Moore <linux-raid@nsr500.net>
To: linux-raid@vger.kernel.org
Cc: Mark Hahn <hahn@physics.mcmaster.ca>
Subject: Re: raid5 performance - 2.4.28
Date: Mon, 10 Jan 2005 08:26:34 -0800	[thread overview]
Message-ID: <41E2ACBA.6030503@nsr500.net> (raw)
In-Reply-To: <Pine.LNX.4.44.0501091639260.13461-100000@coffee.psychology.mcmaster.ca>



Mark Hahn wrote:
>>Here's a data point in favor of raw horsepower when considering
>>software raid performance.
> 
> 
> mostly sw r5 write performance, right?

Correct.  Writes increased by 3X, Rewrites by 50%, Reads about the same.

>>Athlon K7 (18u) @ 840MHz, 1GB PC133, Abit KA7
>>Athlon XP 2800 @ 2075MHz, 1GB PC2700, Asus A7V400-MX
> 
> 
> so your dram bandwidth (measured by stream, say) went from maybe 
> .8 GB/s to around 2 GB/s.  do you still have boot logs from the 
> older configuration around?  it would be interesting to know 
> the in-cache checksumming speed gain, ie:
> 
> raid5: using function: pIII_sse (3128.000 MB/sec)

The Abit KA7 was the first consumer mobo to use leading+trailing mem clock 
and bank interleaving, so memory speed has only slightly more than doubled:

Athlon slot-A @ 850MHz + PC133 SDRAM
------------------------------------
kernel: raid5: measuring checksumming speed
kernel:    8regs     :  1285.600 MB/sec
kernel:    32regs    :   780.800 MB/sec
kernel:    pII_mmx   :  1972.400 MB/sec
kernel:    p5_mmx    :  2523.600 MB/sec
kernel: raid5: using function: p5_mmx (2523.600 MB/sec)
kernel: md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27

Athlon XP @ 2075MHz + PC2700 DDR
--------------------------------
kernel: raid5: measuring checksumming speed
kernel:    8regs     :  3172.800 MB/sec
kernel:    32regs    :  1932.400 MB/sec
kernel:    pIII_sse  :  3490.800 MB/sec
kernel:    pII_mmx   :  4868.400 MB/sec
kernel:    p5_mmx    :  6229.200 MB/sec
kernel: raid5: using function: pIII_sse (3490.800 MB/sec)
kernel: md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27

I'm also experimenting with the patch below to see whether hardwiring the xor 
routine to pIII_sse is still valid on modern Intel/AMD parts.  With the old 
processor p5_mmx was always picked and the numbers were always within a few 
MB/s from boot to boot.  The new XP's numbers are all over the map.

pre-patch: always pIII_sse (35xx)
post-patch: always p5_mmx (62xx)

--- ./include/asm-i386/xor.h.orig       Fri Aug  2 17:39:45 2002
+++ ./include/asm-i386/xor.h    Sun Jan  9 22:32:37 2005
@@ -876,3 +876,9 @@
     deals with a load to a line that is being prefetched.  */
 #define XOR_SELECT_TEMPLATE(FASTEST) \
        (cpu_has_xmm ? &xor_block_pIII_sse : FASTEST)
+
+/* This may have been true in 1998, but let's try what appears to be
+   nearly 4x faster */
+#undef XOR_SELECT_TEMPLATE
+#define XOR_SELECT_TEMPLATE(FASTEST) \
+       (cpu_has_xmm ? &xor_block_p5_mmx : FASTEST)
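After rebuilding and rebooting with the patch, the selection can be confirmed from the boot log; the grep below is run here against a sample line in the 2.4-era format shown above (substitute `dmesg` for the echo on a live system):

```shell
echo 'kernel: raid5: using function: p5_mmx (6229.200 MB/sec)' \
  | grep -o 'using function: [A-Za-z0-9_]*'
```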

