linux-raid.vger.kernel.org archive mirror
* raid1 sector size
@ 2011-02-06 16:19 Roberto Spadim
  2011-02-06 16:49 ` Roberto Spadim
  0 siblings, 1 reply; 17+ messages in thread
From: Roberto Spadim @ 2011-02-06 16:19 UTC (permalink / raw)
  To: Linux-RAID

hi guys, is there a way to increase the sector size?
i.e. use 4096 bytes instead of 512 bytes?

check:
md0: 512bytes
sda: 4096bytes
sdb: 4096bytes

i want 4096 bytes at md0, not 512
why? i'm trying to reduce IOPS (more bytes per read = fewer I/Os per second)
and maybe increase the read rate
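
for reference, this is how i'm reading the sector sizes (i think blockdev and
the sysfs queue files report the same numbers, but correct me if i'm wrong):

# blockdev --getss --getpbsz /dev/sda          <- logical / physical sector size
# cat /sys/block/md0/queue/logical_block_size
# cat /sys/block/md0/queue/physical_block_size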

-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: raid1 sector size
  2011-02-06 16:19 raid1 sector size Roberto Spadim
@ 2011-02-06 16:49 ` Roberto Spadim
  2011-02-06 18:03   ` Jérôme Poulin
                     ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Roberto Spadim @ 2011-02-06 16:49 UTC (permalink / raw)
  To: Linux-RAID

the problem (see the benchmark below): i don't know if it's cpu-intensive
work that's blocking some i/o (hence trying to reduce IOPS on md raid1), or
a problem in the read logic (the check whether the disk is ok)

[root@myhost block]# dd if=/dev/sda of=/dev/zero bs=8196
^C80106+0 records in
80105+0 records out
656540580 bytes (657 MB) copied, 5.38142 s, 122 MB/s

[root@myhost block]# dd if=/dev/md0 of=/dev/zero bs=8196
^C52851+0 records in
52850+0 records out
433158600 bytes (433 MB) copied, 10.4623 s, 41.4 MB/s


with iostat -d 1 -k
i see that only sda is busy (i'm using the normal unpatched kernel
2.6.37). the CPU figures below are from htop (i couldn't select and copy;
the sum may be >100% but it's a rough average)

using md0:
CPU: 1.3%   sy: 67.3%   ni: 0   si: 36%    wa: 0%

using sda:
CPU: 1.3%   sy: 33.6%   ni: 0   si: 8.4%   wa: 50%

maybe the cpu doesn't have time left to wait on i/o (sda showed wa: 50%)
could anyone help me check whether i'm understanding this right?
the best block size for my hard disk seems to be 8196

md0 is in sync

2011/2/6 Roberto Spadim <roberto@spadim.com.br>:
> hi guys, is there a way to increase the sector size?
> i.e. use 4096 bytes instead of 512 bytes?
>
> check:
> md0: 512bytes
> sda: 4096bytes
> sdb: 4096bytes
>
> i want 4096 bytes at md0, not 512
> why? i'm trying to reduce IOPS (more bytes per read = fewer I/Os per second)
> and maybe increase the read rate
>
> --
> Roberto Spadim
> Spadim Technology / SPAEmpresarial
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: raid1 sector size
  2011-02-06 16:49 ` Roberto Spadim
@ 2011-02-06 18:03   ` Jérôme Poulin
  2011-02-06 22:30   ` Stan Hoeppner
  2011-02-07 23:44   ` Periodic RebuildStarted event Martin Cracauer
  2 siblings, 0 replies; 17+ messages in thread
From: Jérôme Poulin @ 2011-02-06 18:03 UTC (permalink / raw)
  To: Roberto Spadim; +Cc: Linux-RAID

On Sun, Feb 6, 2011 at 11:49 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
> the problem (see the benchmark below): i don't know if it's cpu-intensive
> work that's blocking some i/o (hence trying to reduce IOPS on md raid1), or
> a problem in the read logic (the check whether the disk is ok)
>
> [root@myhost block]# dd if=/dev/sda of=/dev/zero bs=8196
> ^C80106+0 records in
> 80105+0 records out
> 656540580 bytes (657 MB) copied, 5.38142 s, 122 MB/s
>
> [root@myhost block]# dd if=/dev/md0 of=/dev/zero bs=8196
> ^C52851+0 records in
> 52850+0 records out
> 433158600 bytes (433 MB) copied, 10.4623 s, 41.4 MB/s
>

You could try with a more computer-friendly bs= value, like 8192.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: raid1 sector size
  2011-02-06 16:49 ` Roberto Spadim
  2011-02-06 18:03   ` Jérôme Poulin
@ 2011-02-06 22:30   ` Stan Hoeppner
  2011-02-07  1:21     ` Roberto Spadim
  2011-02-07 23:44   ` Periodic RebuildStarted event Martin Cracauer
  2 siblings, 1 reply; 17+ messages in thread
From: Stan Hoeppner @ 2011-02-06 22:30 UTC (permalink / raw)
  To: Roberto Spadim; +Cc: Linux-RAID

Roberto Spadim put forth on 2/6/2011 10:49 AM:

> [root@myhost block]# dd if=/dev/sda of=/dev/zero bs=8196

> [root@myhost block]# dd if=/dev/md0 of=/dev/zero bs=8196

/dev/zero is a read source, not a write target.  I'm surprised this doesn't bomb
out.  Use /dev/null for output.
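
Something like this should behave (using the rounder block size Jérôme
already suggested):

dd if=/dev/md0 of=/dev/null bs=8192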

-- 
Stan

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: raid1 sector size
  2011-02-06 22:30   ` Stan Hoeppner
@ 2011-02-07  1:21     ` Roberto Spadim
  0 siblings, 0 replies; 17+ messages in thread
From: Roberto Spadim @ 2011-02-07  1:21 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: Linux-RAID

=] it works heeh =]
but the problem is the same: the speed is lower on /dev/md0 than on /dev/sda

2011/2/6 Stan Hoeppner <stan@hardwarefreak.com>:
> Roberto Spadim put forth on 2/6/2011 10:49 AM:
>
>> [root@myhost block]# dd if=/dev/sda of=/dev/zero bs=8196
>
>> [root@myhost block]# dd if=/dev/md0 of=/dev/zero bs=8196
>
> /dev/zero is a read source, not a write target.  I'm surprised this doesn't bomb
> out.  Use /dev/null for output.
>
> --
> Stan
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Periodic RebuildStarted event
  2011-02-06 16:49 ` Roberto Spadim
  2011-02-06 18:03   ` Jérôme Poulin
  2011-02-06 22:30   ` Stan Hoeppner
@ 2011-02-07 23:44   ` Martin Cracauer
  2011-02-07 23:52     ` Roberto Spadim
  2011-02-08  0:25     ` NeilBrown
  2 siblings, 2 replies; 17+ messages in thread
From: Martin Cracauer @ 2011-02-07 23:44 UTC (permalink / raw)
  To: linux-raid

I just got through the RebuildStarted event which seems to be
monthly.  This is being triggered by my Debian config, but before I
nuke it I'd like to know a little more.

If a real disk error happens during this rebuild on a raid5, would the
disk go into regular degraded mode or would it count as a double
fault?

I also noticed that recently all the checks for all the arrays happen
simultaneously.  That's bad because most of them share the same
physical disks.  Am I imagining this or was the system smart enough to
do them one after another until recently?

Do you do periodic checks? I get lots of device mismatches reported but
apparently that's normal if there's write activity.  The whole thing
sounds counter-productive to me and might panic new users.

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <cracauer@cons.org>   http://www.cons.org/cracauer/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-02-07 23:44   ` Periodic RebuildStarted event Martin Cracauer
@ 2011-02-07 23:52     ` Roberto Spadim
  2011-02-07 23:59       ` Martin Cracauer
  2011-02-08  0:25     ` NeilBrown
  1 sibling, 1 reply; 17+ messages in thread
From: Roberto Spadim @ 2011-02-07 23:52 UTC (permalink / raw)
  To: Martin Cracauer; +Cc: linux-raid

i don't know if it exists, but maybe an internal mdadm command to check the
array (if it has mirrors or ecc/checksum) with a rate-limited (MB/s)
read operation could help with the periodic check (instead of a resync)
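
something like this is what i have in mind, if the sysfs knobs do what i
think they do (i haven't tested it):

echo 10000 > /sys/block/md0/md/sync_speed_max   # limit the check to ~10 MB/s
echo check > /sys/block/md0/md/sync_action      # compare mirrors, no rewrite (except on read errors, afaik)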

2011/2/7 Martin Cracauer <cracauer@cons.org>:
> I just got through the RebuildStarted event which seems to be
> monthly.  This is being triggered by my Debian config, but before I
> nuke it I'd like to know a little more.
>
> If a real disk error happens during this rebuild on a raid5, would the
> disk go into regular degraded mode or would it count as a double
> fault?
>
> I also noticed that recently all the checks for all the arrays happen
> simultaneously.  That's bad because most of them share the same
> physical disks.  Am I imagining this or was the system smart enough to
> do them one after another until recently?
>
> Do you do periodic checks? I get lots of device mismatches reported but
> apparently that's normal if there's write activity.  The whole thing
> sounds counter-productive to me and might panic new users.
>
> Martin
> --
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> Martin Cracauer <cracauer@cons.org>   http://www.cons.org/cracauer/
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-02-07 23:52     ` Roberto Spadim
@ 2011-02-07 23:59       ` Martin Cracauer
  2011-02-08  0:28         ` Roberto Spadim
  0 siblings, 1 reply; 17+ messages in thread
From: Martin Cracauer @ 2011-02-07 23:59 UTC (permalink / raw)
  To: Roberto Spadim; +Cc: Martin Cracauer, linux-raid

Roberto Spadim wrote on Mon, Feb 07, 2011 at 09:52:34PM -0200: 
> i don't know if it exists, but maybe an internal mdadm command to check the
> array (if it has mirrors or ecc/checksum) with a rate-limited (MB/s)
> read operation could help with the periodic check (instead of a resync)

I don't experience performance problems from this.  Linux md seems to
be very good at putting the sync on the back burner when there is real
activity.  Just today I benchmarked an array that was in the middle of
building a raid5 (initial sync).  It left most of the cycles to the
benchmark and took forever.  That should be configurable, too (for
those who want the sync done with priority but can't kill the activity),
but anyway my concern isn't performance.

Martin

> 2011/2/7 Martin Cracauer <cracauer@cons.org>:
> > I just got through the RebuildStarted event which seems to be
> > monthly.  This is being triggered by my Debian config, but before I
> > nuke it I'd like to know a little more.
> >
> > If a real disk error happens during this rebuild on a raid5, would the
> > disk go into regular degraded mode or would it count as a double
> > fault?
> >
> > I also noticed that recently all the checks for all the arrays happen
> > simultaneously.  That's bad because most of them share the same
> > physical disks.  Am I imagining this or was the system smart enough to
> > do them one after another until recently?
> >
> > Do you do periodic checks? I get lots of device mismatches reported but
> > apparently that's normal if there's write activity.  The whole thing
> > sounds counter-productive to me and might panic new users.
> >
> > Martin
> > --
> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> > Martin Cracauer <cracauer@cons.org>   http://www.cons.org/cracauer/
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >
> 
> 
> 
> -- 
> Roberto Spadim
> Spadim Technology / SPAEmpresarial

-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <cracauer@cons.org>   http://www.cons.org/cracauer/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-02-07 23:44   ` Periodic RebuildStarted event Martin Cracauer
  2011-02-07 23:52     ` Roberto Spadim
@ 2011-02-08  0:25     ` NeilBrown
  2011-03-15  3:16       ` CoolCold
  1 sibling, 1 reply; 17+ messages in thread
From: NeilBrown @ 2011-02-08  0:25 UTC (permalink / raw)
  To: Martin Cracauer; +Cc: linux-raid

On Mon, 7 Feb 2011 18:44:11 -0500 Martin Cracauer <cracauer@cons.org> wrote:

> I just got through the RebuildStarted event which seems to be
> monthly.  This is being triggered by my Debian config, but before I
> nuke it I'd like to know a little more.
> 
> If a real disk error happens during this rebuild on a raid5, would the
> disk go into regular degraded mode or would it count as a double
> fault?

The monthly thing is a 'check', not a 'rebuild'  (yes, the 'monitor' email is
a little misleading).
So a real disk error will be handled correctly.  In fact the main point of a
monthly check is to find and correct these latent read errors.


> 
> I also noticed that recently all the checks for all the arrays happen
> simultaneously.  That's bad because most of them share the same
> physical disks.  Am I imagining this or was the system smart enough to
> do them one after another until recently?

Arrays that share a partition certainly should not be
synced/recovered/checked at the same time (unless you set
sync_force_parallel in sysfs).

If you have evidence that they do I would like to see that evidence.

> 
> Do you do periodic checks? I get lots of device mismatches reported but
> apparently that's normal if there's write activity.  The whole thing
> sounds counter-productive to me and might panic new users.

Periodic checks are a good thing.
Yes, it can cause confusion.  That is not good, but a better approach has not
yet been found.  Patches welcome.

What would be really good is to just do an hour of check every night.  It is
quite possible to get the kernel to do this, but it requires some non-trivial
scripting that no-one has written yet.  You need to record where you are up
to on which array, and when you last did each array.  Then start either the
'next' array at the beginning, or the 'current' array at the current point
(write to sync_min).
Then wait for however long you want, abort the check (write 'idle' to
'sync_action') and find out where it got up to (read sync_min) and record
that for next time.
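
Something like this, as a completely untested sketch (no error handling, and
the "which array is next" bookkeeping is left out; the state file is just my
own invention):

#!/bin/sh
MD=md0                                 # the array chosen for tonight
SYS=/sys/block/$MD/md
STATE=/var/lib/md-check-pos.$MD        # holds one number: sectors done so far

start=$(cat $STATE 2>/dev/null || echo 0)
echo $start > $SYS/sync_min            # resume from last time (0 = beginning)
echo check  > $SYS/sync_action         # kick off the scrub

sleep 3600                             # tonight's time budget

pos=$(awk '{print $1+0}' $SYS/sync_completed)   # sectors reached so far
echo idle > $SYS/sync_action           # abort the check
echo $pos > $STATE                     # remember it for tomorrow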

Great project for someone.....

NeilBrown



> 
> Martin


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-02-07 23:59       ` Martin Cracauer
@ 2011-02-08  0:28         ` Roberto Spadim
  0 siblings, 0 replies; 17+ messages in thread
From: Roberto Spadim @ 2011-02-08  0:28 UTC (permalink / raw)
  To: Martin Cracauer; +Cc: linux-raid

i don't know how the sync is executed, but each write you send to an ssd
shortens the ssd's write lifetime
if it's just a READ command there's no problem, but a WRITE command (sync) is
a problem for an ssd
a check (a sync without WRITE) would be nice... it just needs to tell us
how many mirrors are out of sync and how many bytes (sectors) in each mirror
(fail the disk if it's not in sync, start a resync; these features as
command line options...)

2011/2/7 Martin Cracauer <cracauer@cons.org>:
> Roberto Spadim wrote on Mon, Feb 07, 2011 at 09:52:34PM -0200:
>> i don't know if it exists, but maybe an internal mdadm command to check the
>> array (if it has mirrors or ecc/checksum) with a rate-limited (MB/s)
>> read operation could help with the periodic check (instead of a resync)
>
> I don't experience performance problems from this.  Linux md seems to
> be very good at putting the sync on the back burner when there is real
> activity.  Just today I benchmarked an array that was in the middle of
> building a raid5 (initial sync).  It left most of the cycles to the
> benchmark and took forever.  That should be configurable, too (for
> those who want the sync done with priority but can't kill the activity),
> but anyway my concern isn't performance.
>
> Martin
>
>> 2011/2/7 Martin Cracauer <cracauer@cons.org>:
>> > I just got through the RebuildStarted event which seems to be
>> > monthly.  This is being triggered by my Debian config, but before I
>> > nuke it I'd like to know a little more.
>> >
>> > If a real disk error happens during this rebuild on a raid5, would the
>> > disk go into regular degraded mode or would it count as a double
>> > fault?
>> >
>> > I also noticed that recently all the checks for all the arrays happen
>> > simultaneously.  That's bad because most of them share the same
>> > physical disks.  Am I imagining this or was the system smart enough to
>> > do them one after another until recently?
>> >
>> > Do you do periodic checks? I get lots of device mismatches reported but
>> > apparently that's normal if there's write activity.  The whole thing
>> > sounds counter-productive to me and might panic new users.
>> >
>> > Martin
>> > --
>> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> > Martin Cracauer <cracauer@cons.org>   http://www.cons.org/cracauer/
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> >
>>
>>
>>
>> --
>> Roberto Spadim
>> Spadim Technology / SPAEmpresarial
>
> --
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> Martin Cracauer <cracauer@cons.org>   http://www.cons.org/cracauer/
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-02-08  0:25     ` NeilBrown
@ 2011-03-15  3:16       ` CoolCold
  2011-03-15  3:28         ` Roberto Spadim
  2011-03-15  3:41         ` NeilBrown
  0 siblings, 2 replies; 17+ messages in thread
From: CoolCold @ 2011-03-15  3:16 UTC (permalink / raw)
  To: NeilBrown; +Cc: Martin Cracauer, linux-raid

Hello!

On Tue, Feb 8, 2011 at 3:25 AM, NeilBrown <neilb@suse.de> wrote:
[snip]
> Then start either the
> 'next' array at the beginning, or the 'current' array at the current point
> (write to sync_min).
I couldn't find documentation for sync_min/sync_max sysfs params at
least for repo cloned from
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.37.y.git
coolcold@coolcold:~/gits/linux-2.6.37.y$ grep -qi sync_min
Documentation/md.txt || echo failed find docs
failed find docs

As I could understand from sources - resync_min & resync_max are
expressed in sectors (512bytes?)  and are set to 0 & total sectors on
device accordingly. resync_max value should be divisible by array
chunk size (in sectors). After setting these values, one can trigger
"check" / "repair" by writing to sync_action.

My basic idea is to use this method to clear pending sectors from
SMART checks, and it looks like this is going to work, am I right?

> Then wait for however long you want, abort the check (write 'idle' to
> 'sync_action') and find out where it got up to (read sync_min) and record
> that for next time.
>
> NeilBrown
>


-- 
Best regards,
[COOLCOLD-RIPN]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-03-15  3:16       ` CoolCold
@ 2011-03-15  3:28         ` Roberto Spadim
  2011-03-15  3:51           ` CoolCold
  2011-03-15  3:41         ` NeilBrown
  1 sibling, 1 reply; 17+ messages in thread
From: Roberto Spadim @ 2011-03-15  3:28 UTC (permalink / raw)
  To: CoolCold; +Cc: NeilBrown, Martin Cracauer, linux-raid

does mdadm have a check command? could it start a check from a given position?
could mdadm report the last checked position?
this could help partial checks..

2011/3/15 CoolCold <coolthecold@gmail.com>:
> Hello!
>
> On Tue, Feb 8, 2011 at 3:25 AM, NeilBrown <neilb@suse.de> wrote:
> [snip]
>> Then start either the
>> 'next' array at the beginning, or the 'current' array at the current point
>> (write to sync_min).
> I couldn't find documentation for sync_min/sync_max sysfs params at
> least for repo cloned from
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.37.y.git
> coolcold@coolcold:~/gits/linux-2.6.37.y$ grep -qi sync_min
> Documentation/md.txt || echo failed find docs
> failed find docs
>
> As I could understand from sources - resync_min & resync_max are
> expressed in sectors (512bytes?)  and are set to 0 & total sectors on
> device accordingly. resync_max value should be divisible by array
> chunk size (in sectors). After setting these values, one can trigger
> "check" / "repair" by writing to sync_action.
>
> My basic idea is to use this method to clear pending sectors from
> SMART checks, and it looks like this is going to work, am I right?
>
>> Then wait for however long you want, abort the check (write 'idle' to
>> 'sync_action') and find out where it got up to (read sync_min) and record
>> that for next time.
>>
>> NeilBrown
>>
>
>
> --
> Best regards,
> [COOLCOLD-RIPN]
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-03-15  3:16       ` CoolCold
  2011-03-15  3:28         ` Roberto Spadim
@ 2011-03-15  3:41         ` NeilBrown
  2011-03-15  4:21           ` CoolCold
  1 sibling, 1 reply; 17+ messages in thread
From: NeilBrown @ 2011-03-15  3:41 UTC (permalink / raw)
  To: CoolCold; +Cc: Martin Cracauer, linux-raid

On Tue, 15 Mar 2011 06:16:22 +0300 CoolCold <coolthecold@gmail.com> wrote:

> Hello!
> 
> On Tue, Feb 8, 2011 at 3:25 AM, NeilBrown <neilb@suse.de> wrote:
> [snip]
> > Then start either the
> > 'next' array at the beginning, or the 'current' array at the current point
> > (write to sync_min).
> I couldn't find documentation for sync_min/sync_max sysfs params at
> least for repo cloned from
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.37.y.git
> coolcold@coolcold:~/gits/linux-2.6.37.y$ grep -qi sync_min
> Documentation/md.txt || echo failed find docs
> failed find docs

Yes, sorry about that.

> 
> As I could understand from sources - resync_min & resync_max are
> expressed in sectors (512bytes?)  and are set to 0 & total sectors on
> device accordingly. resync_max value should be divisible by array
> chunk size (in sectors). After setting these values, one can trigger
> "check" / "repair" by writing to sync_action.

Yes - sectors (multiples of 512 bytes)
Yes - 0 and a big number.  sync_max is actually set to MAX_LONG rather than
      the actual total number of sectors.

Yes - one can trigger 'check' or 'repair' and it will obey these limits.
When it reaches 'sync_max' it will pause rather than complete.  You can
use 'select' or 'poll' on "sync_completed" to wait for that number to
reach sync_max.  Then you can either increase sync_max, or can write
"idle" to "sync_action".

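For example (untested, and just polling in a loop rather than a proper
select/poll on the sysfs file):

echo 0        > /sys/block/md0/md/sync_min
echo 20971520 > /sys/block/md0/md/sync_max     # stop after ~10GB worth of sectors
echo check    > /sys/block/md0/md/sync_action
until [ $(awk '{print $1+0}' /sys/block/md0/md/sync_completed) -ge 20971520 ]
do
    sleep 10
done
echo idle > /sys/block/md0/md/sync_action      # or raise sync_max to keep going
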
> 
> My basic idea is to use this method to clear pending sectors from
> SMART checks, and it looks like this is going to work, am I right?
> 

I don't know exactly what "pending sectors" are, but if they are sectors
which return an error to READ and can be fixed by writing data to them, then
you are right, this should 'clear' any pending sectors.

Of course you will need to be careful about mapping the sector number
from smart to the second number given to 'sync_min'.  Not only must you
adjust for any partition table, but also the 'data offset' of the
md array must be allowed for.
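
Roughly, with made-up numbers: if fdisk says the partition starts at sector
2048 and "mdadm -E" reports "Data Offset : 2048 sectors", a pending sector at
LBA 1000000 would be at about

  1000000 - 2048 - 2048 = 995904

within the array, and that (rounded down to a convenient boundary) is what
you would feed to sync_min.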

NeilBrown



> > Then wait for however long you want, abort the check (write 'idle' to
> > 'sync_action') and find out where it got up to (read sync_min) and record
> > that for next time.
> >
> > NeilBrown
> >
> 
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-03-15  3:28         ` Roberto Spadim
@ 2011-03-15  3:51           ` CoolCold
  2011-03-15  4:16             ` Roberto Spadim
  0 siblings, 1 reply; 17+ messages in thread
From: CoolCold @ 2011-03-15  3:51 UTC (permalink / raw)
  To: Roberto Spadim; +Cc: NeilBrown, Martin Cracauer, linux-raid

On Tue, Mar 15, 2011 at 6:28 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
> does mdadm have check? could it have check with a startup position?
> could the mdadm report last checked position?
> this could help partial checks..
Testing with loop devices shows it works.
limits are set:
root@nekotaz2:/storage/ovzs/keke# cat /sys/block/md5/md/sync_{min,max}
500000
700000

calling check & viewing status:
root@nekotaz2:/storage/ovzs/keke# echo check > /sys/block/md5/md/sync_action
root@nekotaz2:/storage/ovzs/keke# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md5 : active raid1 loop1[1] loop0[0]
      1048512 blocks [2/2] [UU]
      [====>................]  check = 24.2% (254608/1048512)
finish=5.7min speed=2304K/sec

Here 254608 is 500000/2 = 250000, plus some progress made in the delay between
the "echo check" and the "cat /proc/mdstat" (mdstat counts 1K blocks while
sync_min/sync_max are in 512-byte sectors, hence the division by 2).
In the end it stops at 350000, which is exactly 700000/2.

root@nekotaz2:/storage/ovzs/keke# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md5 : active raid1 loop1[1] loop0[0]
      1048512 blocks [2/2] [UU]
      [======>..............]  check = 33.3% (350000/1048512)
finish=76.4min speed=151K/sec

Speed & finish time are still being calculated though.

Also, see Neil's message.

>
> 2011/3/15 CoolCold <coolthecold@gmail.com>:
>> Hello!
>>
>> On Tue, Feb 8, 2011 at 3:25 AM, NeilBrown <neilb@suse.de> wrote:
>> [snip]
>>> Then start either the
>>> 'next' array at the beginning, or the 'current' array at the current point
>>> (write to sync_min).
>> I couldn't find documentation for sync_min/sync_max sysfs params at
>> least for repo cloned from
>> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.37.y.git
>> coolcold@coolcold:~/gits/linux-2.6.37.y$ grep -qi sync_min
>> Documentation/md.txt || echo failed find docs
>> failed find docs
>>
>> As I could understand from sources - resync_min & resync_max are
>> expressed in sectors (512bytes?)  and are set to 0 & total sectors on
>> device accordingly. resync_max value should be divisible by array
>> chunk size (in sectors). After setting these values, one can trigger
>> "check" / "repair" by writing to sync_action.
>>
>> My basic idea is to use this method to clear pending sectors from
>> SMART checks, and it looks like this is going to work, am I right?
>>
>>> Then wait for however long you want, abort the check (write 'idle' to
>>> 'sync_action') and find out where it got up to (read sync_min) and record
>>> that for next time.
>>>
>>> NeilBrown
>>>
>>
>>
>> --
>> Best regards,
>> [COOLCOLD-RIPN]
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>
>
>
> --
> Roberto Spadim
> Spadim Technology / SPAEmpresarial
>



-- 
Best regards,
[COOLCOLD-RIPN]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-03-15  3:51           ` CoolCold
@ 2011-03-15  4:16             ` Roberto Spadim
  0 siblings, 0 replies; 17+ messages in thread
From: Roberto Spadim @ 2011-03-15  4:16 UTC (permalink / raw)
  Cc: NeilBrown, Martin Cracauer, linux-raid

hum...
could we allow starting from an initial position, and stopping at any time?
maybe a pause too... (set the speed to 0 bytes/sec)
with a pause we could get the current position, stop, and start
from that position later

2011/3/15 CoolCold <coolthecold@gmail.com>:
> On Tue, Mar 15, 2011 at 6:28 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>> does mdadm have check? could it have check with a startup position?
>> could the mdadm report last checked position?
>> this could help partial checks..
> Testing with loop devices shows it works.
> limits are set:
> root@nekotaz2:/storage/ovzs/keke# cat /sys/block/md5/md/sync_{min,max}
> 500000
> 700000
>
> calling check & viewing status:
> root@nekotaz2:/storage/ovzs/keke# echo check > /sys/block/md5/md/sync_action
> root@nekotaz2:/storage/ovzs/keke# cat /proc/mdstat
> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md5 : active raid1 loop1[1] loop0[0]
>      1048512 blocks [2/2] [UU]
>      [====>................]  check = 24.2% (254608/1048512)
> finish=5.7min speed=2304K/sec
>
> Here 254608 is 500000/2 = 250000 + some progress for delay in between
> of "echo check" & "cat /proc/mdstat"
> In result, it stops on 350000 which is exactly 700000/2
>
> root@nekotaz2:/storage/ovzs/keke# cat /proc/mdstat
> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md5 : active raid1 loop1[1] loop0[0]
>      1048512 blocks [2/2] [UU]
>      [======>..............]  check = 33.3% (350000/1048512)
> finish=76.4min speed=151K/sec
>
> Speed & finish time are still being calculated though.
>
> Also, see Neil's message.
>
>>
>> 2011/3/15 CoolCold <coolthecold@gmail.com>:
>>> Hello!
>>>
>>> On Tue, Feb 8, 2011 at 3:25 AM, NeilBrown <neilb@suse.de> wrote:
>>> [snip]
>>>> Then start either the
>>>> 'next' array at the beginning, or the 'current' array at the current point
>>>> (write to sync_min).
>>> I couldn't find documentation for sync_min/sync_max sysfs params at
>>> least for repo cloned from
>>> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.37.y.git
>>> coolcold@coolcold:~/gits/linux-2.6.37.y$ grep -qi sync_min
>>> Documentation/md.txt || echo failed find docs
>>> failed find docs
>>>
>>> As I could understand from sources - resync_min & resync_max are
>>> expressed in sectors (512bytes?)  and are set to 0 & total sectors on
>>> device accordingly. resync_max value should be divisible by array
>>> chunk size (in sectors). After setting these values, one can trigger
>>> "check" / "repair" by writing to sync_action.
>>>
>>> My basic idea is to use this method to clear pending sectors from
>>> SMART checks, and it looks like this is going to work, am I right?
>>>
>>>> Then wait for however long you want, abort the check (write 'idle' to
>>>> 'sync_action') and find out where it got up to (read sync_min) and record
>>>> that for next time.
>>>>
>>>> NeilBrown
>>>>
>>>
>>>
>>> --
>>> Best regards,
>>> [COOLCOLD-RIPN]
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>
>>
>>
>>
>> --
>> Roberto Spadim
>> Spadim Technology / SPAEmpresarial
>>
>
>
>
> --
> Best regards,
> [COOLCOLD-RIPN]
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-03-15  3:41         ` NeilBrown
@ 2011-03-15  4:21           ` CoolCold
  2011-03-15  6:17             ` NeilBrown
  0 siblings, 1 reply; 17+ messages in thread
From: CoolCold @ 2011-03-15  4:21 UTC (permalink / raw)
  To: NeilBrown; +Cc: Martin Cracauer, linux-raid

On Tue, Mar 15, 2011 at 6:41 AM, NeilBrown <neilb@suse.de> wrote:
> On Tue, 15 Mar 2011 06:16:22 +0300 CoolCold <coolthecold@gmail.com> wrote:
>
>> Hello!
>>
>> On Tue, Feb 8, 2011 at 3:25 AM, NeilBrown <neilb@suse.de> wrote:
>> [snip]
>> > Then start either the
>> > 'next' array at the beginning, or the 'current' array at the current point
>> > (write to sync_min).
>> I couldn't find documentation for sync_min/sync_max sysfs params at
>> least for repo cloned from
>> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.37.y.git
>> coolcold@coolcold:~/gits/linux-2.6.37.y$ grep -qi sync_min
>> Documentation/md.txt || echo failed find docs
>> failed find docs
>
> Yes, sorry about that.
Maybe I can help and create a patch for md.txt after this thread? If
yes, it would be nice to get a link to instructions for submitting a
proper patch; I've never done kernel patches before ;)

>
>>
>> As I could understand from sources - resync_min & resync_max are
>> expressed in sectors (512bytes?)  and are set to 0 & total sectors on
>> device accordingly. resync_max value should be divisible by array
>> chunk size (in sectors). After setting these values, one can trigger
>> "check" / "repair" by writing to sync_action.
>
> Yes - sectors (multiples of 512 bytes)
> Yes - 0 and a big number.  sync_max is actually set to MAX_LONG rather than
>      the actual total number of sectors.
>
> Yes - one can trigger 'check' or 'repair' and it will obey these limits.
> When it reaches 'sync_max' it will pause rather than complete.  You can
> use 'select' or 'poll' on "sync_completed" to wait for that number to
> reach sync_max.  Then you can either increase sync_max, or can write
> "idle" to "sync_action".
>
>>
>> My basic idea is to use this method to clear pending sectors from
>> SMART checks, and it looks like this is going to work, am I right?
>>
>
> I don't know exactly what "pending sectors" are, but if they are sectors
> which return an error to READ and can be fixed by writing data to them, then
> you are right, this should 'clear' any pending sectors.
Yes, i meant that kind.
>
> Of course you will need to be careful about mapping the sector number
> from smart to the second number given to 'sync_min'.
I guess you meant "sector" not "second" here?

> Not only must you
> adjust for any partition table, but also the 'data offset' of the
> md array must be allowed for.
So, for the 0.9 metadata format the offset is always going to be 0, right?
And if the bad thing happens - a bad block with a read error is found in
the metadata section - will mdadm with --update <something> be enough
to force a write?

>
> NeilBrown
>
>
>
>> > Then wait for however long you want, abort the check (write 'idle' to
>> > 'sync_action') and find out where it got up to (read sync_min) and record
>> > that for next time.
>> >
>> > NeilBrown
>> >
>>
>>
>
>



-- 
Best regards,
[COOLCOLD-RIPN]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Periodic RebuildStarted event
  2011-03-15  4:21           ` CoolCold
@ 2011-03-15  6:17             ` NeilBrown
  0 siblings, 0 replies; 17+ messages in thread
From: NeilBrown @ 2011-03-15  6:17 UTC (permalink / raw)
  To: CoolCold; +Cc: Martin Cracauer, linux-raid

On Tue, 15 Mar 2011 07:21:09 +0300 CoolCold <coolthecold@gmail.com> wrote:

> On Tue, Mar 15, 2011 at 6:41 AM, NeilBrown <neilb@suse.de> wrote:
> > On Tue, 15 Mar 2011 06:16:22 +0300 CoolCold <coolthecold@gmail.com> wrote:
> >
> >> Hello!
> >>
> >> On Tue, Feb 8, 2011 at 3:25 AM, NeilBrown <neilb@suse.de> wrote:
> >> [snip]
> >> > Then start either the
> >> > 'next' array at the beginning, or the 'current' array at the current point
> >> > (write to sync_min).
> >> I couldn't find documentation for sync_min/sync_max sysfs params at
> >> least for repo cloned from
> >> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.37.y.git
> >> coolcold@coolcold:~/gits/linux-2.6.37.y$ grep -qi sync_min
> >> Documentation/md.txt || echo failed find docs
> >> failed find docs
> >
> > Yes, sorry about that.
> Maybe I can help and create a patch for md.txt after this thread? If
> yes, it would be nice to get a link to instructions for submitting a
> proper patch; I've never done kernel patches before ;)

Patches always welcome!!  So yes please.

Have a read through SubmittingPatches in linux/Documentation
(same directory that 'md.txt' is in).
That should get you close enough.

Thanks,
NeilBrown



> 
> >
> >>
> >> As I could understand from sources - resync_min & resync_max are
> >> expressed in sectors (512bytes?)  and are set to 0 & total sectors on
> >> device accordingly. resync_max value should be divisible by array
> >> chunk size (in sectors). After setting these values, one can trigger
> >> "check" / "repair" by writing to sync_action.
> >
> > Yes - sectors (multiples of 512 bytes)
> > Yes - 0 and a big number.  sync_max is actually set to MAX_LONG rather than
> >      the actual total number of sectors.
> >
> > Yes - one can trigger 'check' or 'repair' and it will obey these limits.
> > When it reaches 'sync_max' it will pause rather than complete.  You can
> > use 'select' or 'poll' on "sync_completed" to wait for that number to
> > reach sync_max.  Then you can either increase sync_max, or can write
> > "idle" to "sync_action".
> >
> >>
> >> My basic idea is to use this method to clear pending sectors from
> >> SMART checks, and it looks like this is going to work, am I right?
> >>
> >
> > I don't know exactly what "pending sectors" are, but if they are sectors
> > which return an error to READ and can be fixed by writing data to them, then
> > you are right, this should 'clear' any pending sectors.
> Yes, i meant that kind.
> >
> > Of course you will need to be careful about mapping the sector number
> > from smart to the second number given to 'sync_min'.
> I guess you meant "sector" not "second" here?
> 
> > Not only must you
> > adjust for any partition table, but also the 'data offset' of the
> > md array must be allowed for.
> So, for the 0.9 metadata format the offset is always going to be 0, right?
> And if the bad thing happens - a bad block with a read error is found in
> the metadata section - will mdadm with --update <something> be enough
> to force a write?
> 
> >
> > NeilBrown
> >
> >
> >
> >> > Then wait for however long you want, abort the check (write 'idle' to
> >> > 'sync_action') and find out where it got up to (read sync_min) and record
> >> > that for next time.
> >> >
> >> > NeilBrown
> >> >
> >>
> >>
> >
> >
> 
> 
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2011-03-15  6:17 UTC | newest]

Thread overview: 17+ messages
2011-02-06 16:19 raid1 sector size Roberto Spadim
2011-02-06 16:49 ` Roberto Spadim
2011-02-06 18:03   ` Jérôme Poulin
2011-02-06 22:30   ` Stan Hoeppner
2011-02-07  1:21     ` Roberto Spadim
2011-02-07 23:44   ` Periodic RebuildStarted event Martin Cracauer
2011-02-07 23:52     ` Roberto Spadim
2011-02-07 23:59       ` Martin Cracauer
2011-02-08  0:28         ` Roberto Spadim
2011-02-08  0:25     ` NeilBrown
2011-03-15  3:16       ` CoolCold
2011-03-15  3:28         ` Roberto Spadim
2011-03-15  3:51           ` CoolCold
2011-03-15  4:16             ` Roberto Spadim
2011-03-15  3:41         ` NeilBrown
2011-03-15  4:21           ` CoolCold
2011-03-15  6:17             ` NeilBrown
