From: Chris <cmtimegn@gmail.com>
To: linux-raid@vger.kernel.org
Subject: Raid 10 LVM JFS Seeking performance help
Date: Thu, 17 Dec 2009 15:49:18 -0800
Message-ID: <7eea85cf0912171549u58a4c64foc4cb96a388ddbd06@mail.gmail.com>

I have a pair of servers serving 10MB-100MB files.  Each server has
12x 7200 RPM SAS 750GB drives.  When I look at iostat, the avgrq-sz
on md2/dm-7 is always 8.0.  I think this has to do with my LVM PE
size being 4096 with JFS on top of that.  As best I can tell, having
so many rrqm/s is not great, and the reason I have that many is that
my avgrq-sz is 8.0.  I have been trying to work out how to choose the
best chunk size and PE size for more performance.
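
(If I'm reading the iostat docs right, avgrq-sz is reported in
512-byte sectors, so the 8.0 on md2/dm-7 means 4 KiB average
requests, while the member disks merge those back into much larger
ones:)

    $ echo $((8 * 512))      # avgrq-sz 8.0 on md2/dm-7, in bytes
    4096
    $ echo $((133 * 512))    # avgrq-sz ~133 on sdf, in bytes
    68096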

Should I switch from n2 to f2 raid10?
How do I calculate where to go from here with chunk size and PE size?
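
For the layout question, this sketch is the kind of change I have in
mind (illustrative only -- it would recreate the array and destroy
its contents; the 256K chunk is just an example value; with n2 over
12 disks the effective data stripe is 6 chunks, i.e. 6144K at the
current 1024K chunk):

    # illustrative sketch: far-copies layout with a smaller chunk
    mdadm --create /dev/md2 --level=10 --layout=f2 --chunk=256 \
          --raid-devices=12 /dev/sd[f-q]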


Device:         rrqm/s   wrqm/s      r/s   w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdf            1934.00     0.00   123.00  0.00     8.00     0.00   133.27     3.66   29.76   7.05  86.70
sdg            1765.00     0.00   117.00  0.00     7.45     0.00   130.32     3.09   29.61   7.46  87.30
sdh            1744.00     0.00    83.00  0.00     6.50     0.00   160.48     3.31   38.47  10.89  90.40
sdi            2369.00     0.00   109.00  0.00     9.50     0.00   178.42     5.30   47.83   8.65  94.30
sdj            1867.00     0.00    90.00  0.00     6.89     0.00   156.89     1.89   21.70   8.83  79.50
sdk            1574.00     0.00    74.00  0.00     6.49     0.00   179.57     2.45   34.11  11.74  86.90
sdl            2437.00     0.00   105.00  0.00     9.10     0.00   177.52     4.66   41.79   8.35  87.70
sdm            1259.00     0.00   102.00  0.00     5.22     0.00   104.86     2.19   21.34   7.99  81.50
sdn            2096.00     0.00   114.00  0.00     8.88     0.00   159.51     4.95   45.06   8.33  95.00
sdo            1835.00     0.00   106.00  0.00     7.09     0.00   137.06     2.71   24.00   8.19  86.80
sdp            1431.00     0.00   113.00  0.00     5.92     0.00   107.33     4.32   38.82   8.24  93.10
sdq            2068.00     0.00   138.00  0.00     8.39     0.00   124.46    10.18   71.28   7.04  97.10
md2               0.00     0.00 23671.00  0.00    92.46     0.00     8.00     0.00    0.00   0.00   0.00
dm-7              0.00     0.00 23671.00  0.00    92.46     0.00     8.00  1006.46   42.07   0.04 100.10

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sdf             112.00     10784.00         0.00      10784          0
sdg             111.00     10464.00         0.00      10464          0
sdh             104.00     11520.00         0.00      11520          0
sdi              98.00     14280.00         0.00      14280          0
sdj              89.00     14200.00         0.00      14200          0
sdk              79.00      6328.00         0.00       6328          0
sdl             113.00     11296.00         0.00      11296          0
sdm              74.00      7504.00         0.00       7504          0
sdn             109.00     11840.00         0.00      11840          0
sdo             113.00     15488.00         0.00      15488          0
sdp             107.00      9928.00         0.00       9928          0
sdq             109.00     10656.00         0.00      10656          0
md2           16937.00    135496.00         0.00     135496          0
dm-7          16937.00    135496.00         0.00     135496          0
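
(For reference, these samples were captured with something like the
following sysstat commands:)

    iostat -xm 1    # extended per-device stats in MB/s (first table)
    iostat 1        # basic per-device stats (second table)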


Personalities : [raid10]
md2 : active raid10 sdf[0] sdq[11] sdp[10] sdo[9] sdn[8] sdm[7] sdl[6] sdk[5] sdj[4] sdi[3] sdh[2] sdg[1]
      4395442176 blocks super 1.2 1024K chunks 2 near-copies [12/12] [UUUUUUUUUUUU]
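
(The same geometry can be confirmed with mdadm itself:)

    mdadm --detail /dev/md2    # should report Layout : near=2 and Chunk Size : 1024K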


  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               storage
  PV Size               4.09 TB / not usable 4.00 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              1073105
  Free PE               0
  Allocated PE          1073105
  PV UUID               SwNPeb-QHqH-evb3-sdDM-el7V-NfQl-HJijzQ
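
If I were to rebuild the LVM layer, I'm guessing alignment would look
something like this (assuming --dataalignment is available in my LVM2
version; 6144K = 1024K chunk x 6 data disks, and the 64M PE size is
an arbitrary example, since PE size sets allocation granularity
rather than request size):

    # sketch only: align PV data start to the full raid10 stripe
    pvcreate --dataalignment 6144k /dev/md2
    vgcreate -s 64M storage /dev/md2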


12x   Vendor: SEAGATE  Model: ST3750630SS
