linux-raid.vger.kernel.org archive mirror
* Raid1: sdb has a lot more work than sda
@ 2012-04-24 14:46 Daniel Spannbauer
       [not found] ` <CABYL=To=tK8a0DVjvYeyuA2K4xChE3hv7Ppx+VvBTYfeZGgsjQ@mail.gmail.com>
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Daniel Spannbauer @ 2012-04-24 14:46 UTC (permalink / raw)
  To: linux-raid

Hello,

at the moment I'm trying to find a bottleneck on my LAMP server with
openSUSE 11.4 (kernel 2.6.37.6).

Sometimes the system performs very poorly because of high I/O wait.
If I watch the system's disk access with "atop -dD", I can see that sda
is at a load of about 10% most of the time, while sdb is sometimes at
100% or higher at the same time.
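A quick way to cross-check atop's per-disk numbers is to compare the
cumulative sector counters from /proc/diskstats; a minimal sketch (field
positions per the kernel's iostats documentation, sample data invented):

```python
# Sketch: compare per-disk I/O totals from /proc/diskstats to see whether
# sda and sdb in a RAID1 pair really diverge. Per the kernel iostats docs,
# the device name is field 3, sectors read field 6, sectors written
# field 10 (1-indexed).

def disk_io(diskstats_text, devices):
    """Return {device: (sectors_read, sectors_written)}."""
    stats = {}
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[2] in devices:
            stats[fields[2]] = (int(fields[5]), int(fields[9]))
    return stats

# Live use would be: disk_io(open("/proc/diskstats").read(), {"sda", "sdb"})
# The sample below is made-up data for illustration.
sample = """\
   8       0 sda 1200 30 96000 4000 800 20 64000 2500 0 5000 6500
   8      16 sdb 5200 30 416000 40000 800 20 64000 2500 0 9000 42000
"""
print(disk_io(sample, {"sda", "sdb"}))
```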

In my opinion, in a RAID1 system both disks should have nearly the same load.

Or am I wrong about this?

Both hard disks were replaced three weeks ago; /proc/mdstat shows that
the rebuild was successful and the array is functional. Today I changed
the SATA cable and the port on the mainboard for sdb, but the behaviour
is still the same. Both disks passed the extended SMART self-test.

Any ideas about that?

Regards

Daniel



-- 
Daniel Spannbauer                         Software Entwicklung
marco Systemanalyse und Entwicklung GmbH  Tel   +49 8333 9233-27 Fax -11
Rechbergstr. 4 - 6, D 87727 Babenhausen   Mobil +49 171 4033220
http://www.marco.de/                      Email ds@marco.de
Geschäftsführer Martin Reuter             HRB 171775 Amtsgericht München
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Raid1: sdb has a lot more work than sda
       [not found] ` <CABYL=To=tK8a0DVjvYeyuA2K4xChE3hv7Ppx+VvBTYfeZGgsjQ@mail.gmail.com>
@ 2012-04-24 15:02   ` Daniel Spannbauer
  2012-04-24 15:04     ` Roberto Spadim
  0 siblings, 1 reply; 5+ messages in thread
From: Daniel Spannbauer @ 2012-04-24 15:02 UTC (permalink / raw)
  To: linux-raid

Hello,

here is a screen dump of "atop -dD" in such a situation.

Regards

Daniel




On 04/24/2012 05:00 PM, Roberto Spadim wrote:
> Is it a read, a write, or both?
> Judging by the load, I think it's a very big read on sdb with very
> little write activity on sda.
> Read balancing on RAID1 picks the disk with the shortest distance
> between its current head position and the next read position, so on a
> sequential read only one disk is used.
> 
> I have seen that RAID1 'has this problem', but it's a hardware limit;
> I haven't found a solution, maybe others can.
> 
> 
> -- 
> Roberto Spadim
> Spadim Technology / SPAEmpresarial




* Re: Raid1: sdb has a lot more work than sda
  2012-04-24 14:46 Raid1: sdb has a lot more work than sda Daniel Spannbauer
       [not found] ` <CABYL=To=tK8a0DVjvYeyuA2K4xChE3hv7Ppx+VvBTYfeZGgsjQ@mail.gmail.com>
@ 2012-04-24 15:03 ` Roberto Spadim
  2012-04-25  7:47 ` Kay Diederichs
  2 siblings, 0 replies; 5+ messages in thread
From: Roberto Spadim @ 2012-04-24 15:03 UTC (permalink / raw)
  To: Daniel Spannbauer; +Cc: linux-raid

Is it a read, a write, or both?
Judging by the load, I think it's a very big read on sdb with very
little write activity on sda.
Read balancing on RAID1 picks the disk with the shortest distance
between its current head position and the next read position, so on a
sequential read only one disk is used.
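Roughly, the nearest-position idea can be sketched like this (an
illustration only, not the actual md raid1 read_balance code; all names
and numbers here are made up):

```python
# Sketch of distance-based RAID1 read balancing: among the mirrors, pick
# the disk whose last known head position is closest to the requested
# sector. On a long sequential read, one disk keeps winning.

def read_balance(head_positions, sector):
    """head_positions: {disk: last_sector}; returns the chosen disk."""
    return min(head_positions, key=lambda d: abs(head_positions[d] - sector))

# A sequential run of reads starting near sda's head position:
heads = {"sda": 1000, "sdb": 500000}
for sector in range(1000, 1040, 8):
    chosen = read_balance(heads, sector)
    heads[chosen] = sector  # the serviced disk's head moves to that sector

# sda services the entire sequential run while sdb stays idle.
print(heads)
```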

I have seen that RAID1 'has this problem', but it's a hardware limit;
I haven't found a solution yet, maybe others can.
I implemented a read-balancing method some time ago, and the result was
a constant 1% performance improvement in a mixed read/write scenario.
I didn't consider that a good improvement, since it needs many
parameters to configure (disk seek times), and the load isn't
proportional; sometimes one disk is used much more heavily than the
other.

Think of RAID1 as parallel/secure operation, not as a performance tweak.

--
Roberto Spadim
Spadim Technology / SPAEmpresarial


* Re: Raid1: sdb has a lot more work than sda
  2012-04-24 15:02   ` Daniel Spannbauer
@ 2012-04-24 15:04     ` Roberto Spadim
  0 siblings, 0 replies; 5+ messages in thread
From: Roberto Spadim @ 2012-04-24 15:04 UTC (permalink / raw)
  To: Daniel Spannbauer; +Cc: linux-raid

Oops, sorry, my email client sent two messages.



* Re: Raid1: sdb has a lot more work than sda
  2012-04-24 14:46 Raid1: sdb has a lot more work than sda Daniel Spannbauer
       [not found] ` <CABYL=To=tK8a0DVjvYeyuA2K4xChE3hv7Ppx+VvBTYfeZGgsjQ@mail.gmail.com>
  2012-04-24 15:03 ` Roberto Spadim
@ 2012-04-25  7:47 ` Kay Diederichs
  2 siblings, 0 replies; 5+ messages in thread
From: Kay Diederichs @ 2012-04-25  7:47 UTC (permalink / raw)
  Cc: linux-raid

On 04/24/2012 04:46 PM, Daniel Spannbauer wrote:
> Hello,
>
> at the moment I'm trying to find a bottleneck on my LAMP server with
> openSUSE 11.4 (kernel 2.6.37.6).
>
> Sometimes the system performs very poorly because of high I/O wait.
> If I watch the system's disk access with "atop -dD", I can see that sda
> is at a load of about 10% most of the time, while sdb is sometimes at
> 100% or higher at the same time.
>
> In my opinion, in a RAID1 system both disks should have nearly the same load.
>
> Or am I wrong about this?
>
> Both hard disks were replaced three weeks ago; /proc/mdstat shows that
> the rebuild was successful and the array is functional. Today I changed
> the SATA cable and the port on the mainboard for sdb, but the behaviour
> is still the same. Both disks passed the extended SMART self-test.
>
> Any ideas about that?
>
> Regards
>
> Daniel

Are the disks more or less the same in all SMART attributes (from 
smartctl -a)?

I have seen such behaviour when one disk was marginal and/or had a
couple of bad spots that needed several retries to be read. This always
also resulted in longer "smartctl -t long" times than usual, but did
not always show up clearly in the SMART output (or I looked at the
wrong attributes).
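A small sketch of such a side-by-side comparison, diffing a few
attribute values as one would read them from "smartctl -A" (the
attribute values below are invented sample data):

```python
# Hypothetical helper: diff the raw values of a few telling SMART
# attributes between two disks. In practice the dicts would be filled by
# parsing `smartctl -A /dev/sdX`; here they are hand-written samples.

WATCH = ("Raw_Read_Error_Rate", "Reallocated_Sector_Ct",
         "Current_Pending_Sector", "UDMA_CRC_Error_Count")

def smart_diff(a, b, attrs=WATCH):
    """Return {attribute: (value_a, value_b)} where the two disks differ."""
    return {k: (a.get(k), b.get(k)) for k in attrs if a.get(k) != b.get(k)}

sda = {"Raw_Read_Error_Rate": 0, "Reallocated_Sector_Ct": 0,
       "Current_Pending_Sector": 0, "UDMA_CRC_Error_Count": 0}
sdb = {"Raw_Read_Error_Rate": 0, "Reallocated_Sector_Ct": 12,
       "Current_Pending_Sector": 3, "UDMA_CRC_Error_Count": 0}

# Any non-empty diff (especially pending/reallocated sectors) points at
# the disk worth replacing.
print(smart_diff(sda, sdb))
```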

I'd try replacing the disk.

HTH,

Kay






end of thread, other threads:[~2012-04-25  7:47 UTC | newest]

Thread overview: 5+ messages
2012-04-24 14:46 Raid1: sdb has a lot more work than sda Daniel Spannbauer
     [not found] ` <CABYL=To=tK8a0DVjvYeyuA2K4xChE3hv7Ppx+VvBTYfeZGgsjQ@mail.gmail.com>
2012-04-24 15:02   ` Daniel Spannbauer
2012-04-24 15:04     ` Roberto Spadim
2012-04-24 15:03 ` Roberto Spadim
2012-04-25  7:47 ` Kay Diederichs
