* Resync Every Sunday
@ 2012-07-01 11:20 Jonathan Tripathy
2012-07-01 12:04 ` Jonathan Tripathy
2012-07-01 20:41 ` Keith Keller
0 siblings, 2 replies; 13+ messages in thread
From: Jonathan Tripathy @ 2012-07-01 11:20 UTC (permalink / raw)
To: linux-raid
Hi Everyone,
We have a few servers that use md raid with mdadm. Each server has 4
arrays (md0,md1,md2,md3). md0,1,2 are small and md3 is very large. Every
Sunday at 4:22am, the servers will start to resync. Here is some text
from /var/log/messages for one of the servers:
Jul 1 04:22:01 server1 kernel: md: syncing RAID array md0
Jul 1 04:22:01 server1 kernel: md: minimum _guaranteed_ reconstruction
speed: 1000 KB/sec/disc.
Jul 1 04:22:01 server1 kernel: md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for reconstruction.
Jul 1 04:22:01 server1 kernel: md: using 128k window, over a total of
104320 blocks.
Jul 1 04:22:01 server1 kernel: md: delaying resync of md2 until md0 has
finished resync (they share one or more physical units)
Jul 1 04:22:01 server1 kernel: md: delaying resync of md3 until md0 has
finished resync (they share one or more physical units)
Jul 1 04:22:05 server1 kernel: md: md0: sync done.
Jul 1 04:22:05 server1 kernel: md: delaying resync of md3 until md2 has
finished resync (they share one or more physical units)
Jul 1 04:22:05 server1 kernel: md: delaying resync of md2 until md3 has
finished resync (they share one or more physical units)
Jul 1 04:22:05 server1 kernel: md: syncing RAID array md3
Jul 1 04:22:05 server1 kernel: md: minimum _guaranteed_ reconstruction
speed: 1000 KB/sec/disc.
Jul 1 04:22:05 server1 kernel: md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for reconstruction.
Jul 1 04:22:05 server1 kernel: md: using 128k window, over a total of
1888295936 blocks.
/proc/mdstat shows a progress bar for the array that is currently
"re-syncing" (in the above case, md3). However, the disks in the servers
seem fine, and it always seems to happen in the early hours of Sunday
morning at 4:22am.
The issue is further complicated because not all arrays are re-synced, and I
can't seem to find a pattern as to which are selected. All I know is that at
4:22, mdadm will "come alive" and attempt to do re-syncing of some (or
all) of the arrays. On each of the servers, 3 of the arrays are small
and one is large; this leads to the phenomenon that when we wake up on
Sunday morning, a "random" selection of the servers will still be
syncing (as mdadm has decided to "pick" the large md3 array to resync).
Here is output from /var/log/messages on a server that has only decided
to re-sync 2 small arrays (md0 and md2):
Jul 1 04:22:01 server3 kernel: md: syncing RAID array md0
Jul 1 04:22:01 server3 kernel: md: minimum _guaranteed_ reconstruction
speed: 1000 KB/sec/disc.
Jul 1 04:22:01 server3 kernel: md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for reconstruction.
Jul 1 04:22:01 server3 kernel: md: using 128k window, over a total of
104320 blocks.
Jul 1 04:22:01 server3 kernel: md: delaying resync of md2 until md0 has
finished resync (they share one or more physical units)
Jul 1 04:22:02 server3 kernel: md: md0: sync done.
Jul 1 04:22:02 server3 kernel: md: syncing RAID array md2
Jul 1 04:22:02 server3 kernel: md: minimum _guaranteed_ reconstruction
speed: 1000 KB/sec/disc.
Jul 1 04:22:02 server3 kernel: md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for reconstruction.
Jul 1 04:22:02 server3 kernel: md: using 128k window, over a total of
1052160 blocks.
Jul 1 04:22:15 server3 kernel: md: md2: sync done
What's going on? Am I missing something here? Is data on the arrays at
risk? We're using CentOS 5 with mdadm v2.6.9; the kernel version is
2.6.18-274.18.1.el5.
Any help is appreciated.
Thanks
* Re: Resync Every Sunday
2012-07-01 11:20 Resync Every Sunday Jonathan Tripathy
@ 2012-07-01 12:04 ` Jonathan Tripathy
2012-07-01 12:44 ` Mikael Abrahamsson
2012-07-01 20:41 ` Keith Keller
1 sibling, 1 reply; 13+ messages in thread
From: Jonathan Tripathy @ 2012-07-01 12:04 UTC (permalink / raw)
To: linux-raid
On 01/07/2012 12:20, Jonathan Tripathy wrote:
> Hi Everyone,
>
> We have a few servers that use md raid with mdadm. Each server has 4
> arrays (md0,md1,md2,md3). md0,1,2 are small and md3 is very large.
> Every Sunday at 4:22am, the servers will start to resync. Here is some
> text from /var/log/messages for one of the servers:
>
> Jul 1 04:22:01 server1 kernel: md: syncing RAID array md0
> Jul 1 04:22:01 server1 kernel: md: minimum _guaranteed_
> reconstruction speed: 1000 KB/sec/disc.
> Jul 1 04:22:01 server1 kernel: md: using maximum available idle IO
> bandwidth (but not more than 200000 KB/sec) for reconstruction.
> Jul 1 04:22:01 server1 kernel: md: using 128k window, over a total of
> 104320 blocks.
> Jul 1 04:22:01 server1 kernel: md: delaying resync of md2 until md0
> has finished resync (they share one or more physical units)
> Jul 1 04:22:01 server1 kernel: md: delaying resync of md3 until md0
> has finished resync (they share one or more physical units)
> Jul 1 04:22:05 server1 kernel: md: md0: sync done.
> Jul 1 04:22:05 server1 kernel: md: delaying resync of md3 until md2
> has finished resync (they share one or more physical units)
> Jul 1 04:22:05 server1 kernel: md: delaying resync of md2 until md3
> has finished resync (they share one or more physical units)
> Jul 1 04:22:05 server1 kernel: md: syncing RAID array md3
> Jul 1 04:22:05 server1 kernel: md: minimum _guaranteed_
> reconstruction speed: 1000 KB/sec/disc.
> Jul 1 04:22:05 server1 kernel: md: using maximum available idle IO
> bandwidth (but not more than 200000 KB/sec) for reconstruction.
> Jul 1 04:22:05 server1 kernel: md: using 128k window, over a total of
> 1888295936 blocks.
>
> /proc/mdstat shows a progress bar for the array that is currently
> "re-syncing" (in the above case, md3). However, the disks in the
> servers seem fine, and it always seems to happen in the early hours of
> Sunday morning at 4:22am.
>
> The issue is further complicated because not all arrays are re-synced and
> I can't seem to find a pattern as to which are selected. All I know is that
> at 4:22, mdadm will "come alive" and attempt to do re-syncing of some
> (or all) of the arrays. On each of the servers, 3 of the arrays are
> small and one is large; this leads to the phenomenon that when we wake
> up on Sunday morning, a "random" selection of the servers will still
> be syncing (as mdadm has decided to "pick" the large md3 array to
> resync).
>
> Here is output from /var/log/messages on a server that has only
> decided to re-sync 2 small arrays (md0 and md2):
>
> Jul 1 04:22:01 server3 kernel: md: syncing RAID array md0
> Jul 1 04:22:01 server3 kernel: md: minimum _guaranteed_
> reconstruction speed: 1000 KB/sec/disc.
> Jul 1 04:22:01 server3 kernel: md: using maximum available idle IO
> bandwidth (but not more than 200000 KB/sec) for reconstruction.
> Jul 1 04:22:01 server3 kernel: md: using 128k window, over a total of
> 104320 blocks.
> Jul 1 04:22:01 server3 kernel: md: delaying resync of md2 until md0
> has finished resync (they share one or more physical units)
> Jul 1 04:22:02 server3 kernel: md: md0: sync done.
> Jul 1 04:22:02 server3 kernel: md: syncing RAID array md2
> Jul 1 04:22:02 server3 kernel: md: minimum _guaranteed_
> reconstruction speed: 1000 KB/sec/disc.
> Jul 1 04:22:02 server3 kernel: md: using maximum available idle IO
> bandwidth (but not more than 200000 KB/sec) for reconstruction.
> Jul 1 04:22:02 server3 kernel: md: using 128k window, over a total of
> 1052160 blocks.
> Jul 1 04:22:15 server3 kernel: md: md2: sync done
>
> What's going on? Am I missing something here? Is data on the arrays at
> risk? We're using CentOS 5 with mdadm v2.6.9. Kernel version is
> 2.6.18-274.18.1.el5
>
> Any help is appreciated.
>
>
Upon further reading, I've discovered that these "resyncs" are due to
the weekly raid-check cron job. However, most of my questions still stand:
- Why aren't all arrays checked?
- Why are the checked arrays different each week? (Although md0 and md2
seem to be favorites!)
- Is data at risk during these check times? If not, why does mdstat
report them as "resyncing" and not simply "checking"?
- Is it safe to disable these checks? Would monitoring the SMART status
of the disks serve as a good substitute?
Any help in answering these questions is appreciated.
Thanks
* Re: Resync Every Sunday
2012-07-01 12:04 ` Jonathan Tripathy
@ 2012-07-01 12:44 ` Mikael Abrahamsson
2012-07-01 12:53 ` Jonathan Tripathy
0 siblings, 1 reply; 13+ messages in thread
From: Mikael Abrahamsson @ 2012-07-01 12:44 UTC (permalink / raw)
To: Jonathan Tripathy; +Cc: linux-raid
On Sun, 1 Jul 2012, Jonathan Tripathy wrote:
> - Is it safe to disable these checks? Would monitoring the SMART status of
> the disks serve as a good substitute?
Well, that's a decision you will have to make for yourself. The rationale
behind the checks is to find latent read errors and correct them while you
still have parity. Another term for this is "data scrubbing"; you'll find
quite a lot of discussion on that topic.
Personally, I make sure all my data are read at least once a month; I have
historically experienced data loss because of a lack of scrubbing.
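For reference, a scrub of this kind can also be kicked off by hand through the
same sysfs interface the cron script uses. A minimal sketch (the device name
is an example, and you'd need root on a host with a real mdX):

```shell
#!/bin/sh
# Sketch only: start a manual md scrub by writing "check" into the
# array's sync_action sysfs file. Progress then shows in /proc/mdstat.
start_scrub() {
    action="/sys/block/$1/md/sync_action"
    if [ -w "$action" ]; then
        echo check > "$action"
        echo "scrub started on $1"
    else
        echo "cannot start scrub on $1 ($action not writable)"
    fi
}
```

For example, `start_scrub md3` would scrub the large array on its own,
outside the weekly schedule.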
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: Resync Every Sunday
2012-07-01 12:44 ` Mikael Abrahamsson
@ 2012-07-01 12:53 ` Jonathan Tripathy
0 siblings, 0 replies; 13+ messages in thread
From: Jonathan Tripathy @ 2012-07-01 12:53 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: linux-raid
On 01/07/2012 13:44, Mikael Abrahamsson wrote:
> On Sun, 1 Jul 2012, Jonathan Tripathy wrote:
>
>> - Is it safe to disable these checks? Would monitoring the SMART
>> status of the disks serve as a good substitute?
>
> Well, that's a decision you will have to make for yourself. The
> rationale behind it is to find latent read errors and correct them
> while you have parity already. Another term for this is "data
> scrubbing", you'll find quite a lot of discussion on that topic.
>
> Personally, my view is that I make sure all my data are read at least
> once a month, I have experienced data loss historically because of
> lack of scrubbing.
>
Thanks, I think I'll change it to monthly as well. That seems like a
good compromise.
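On CentOS 5 that should just be a matter of moving the cron job; the path
below is an assumption about where the mdadm package installs it, so verify
locally first:

```shell
# Move the raid check from the weekly to the monthly cron run.
# Path assumed for CentOS 5's mdadm package; confirm with:
#   rpm -ql mdadm | grep raid-check
mv /etc/cron.weekly/99-raid-check /etc/cron.monthly/99-raid-check
```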
I'm still very interested in the other questions though, especially the
one about why not all arrays are checked.
Thanks
* Re: Resync Every Sunday
2012-07-01 11:20 Resync Every Sunday Jonathan Tripathy
2012-07-01 12:04 ` Jonathan Tripathy
@ 2012-07-01 20:41 ` Keith Keller
2012-07-01 20:44 ` Jonathan Tripathy
1 sibling, 1 reply; 13+ messages in thread
From: Keith Keller @ 2012-07-01 20:41 UTC (permalink / raw)
To: linux-raid
On 2012-07-01, Jonathan Tripathy <jonnyt@abpni.co.uk> wrote:
>
> What's going on? Am I missing something here? Is data on the arrays at
> risk? We're using CentOS 5 with mdadm v2.6.9. Kernel version is
> 2.6.18-274.18.1.el5
As you are running CentOS, check /etc/sysconfig/raid-check. Someone may
have configured certain arrays not to be checked.
--keith
--
kkeller@wombat.san-francisco.ca.us
* Re: Resync Every Sunday
2012-07-01 20:41 ` Keith Keller
@ 2012-07-01 20:44 ` Jonathan Tripathy
2012-07-01 21:24 ` Larkin Lowrey
0 siblings, 1 reply; 13+ messages in thread
From: Jonathan Tripathy @ 2012-07-01 20:44 UTC (permalink / raw)
To: Keith Keller; +Cc: linux-raid
On 01/07/2012 21:41, Keith Keller wrote:
> On 2012-07-01, Jonathan Tripathy<jonnyt@abpni.co.uk> wrote:
>> What's going on? Am I missing something here? Is data on the arrays at
>> risk? We're using CentOS 5 with mdadm v2.6.9. Kernel version is
>> 2.6.18-274.18.1.el5
> As you are running CentOS, check /etc/sysconfig/raid-check. Someone may
> have configured certain arrays not to be checked.
>
There is nothing in that file that suggests that some arrays should be
skipped.
ENABLED=yes
CHECK=check
# To check devs /dev/md0 and /dev/md3, use "md0 md3"
CHECK_DEVS=""
REPAIR_DEVS=""
SKIP_DEVS=""
Thanks
* Re: Resync Every Sunday
2012-07-01 20:44 ` Jonathan Tripathy
@ 2012-07-01 21:24 ` Larkin Lowrey
2012-07-01 21:57 ` Jonathan Tripathy
0 siblings, 1 reply; 13+ messages in thread
From: Larkin Lowrey @ 2012-07-01 21:24 UTC (permalink / raw)
To: Jonathan Tripathy; +Cc: Keith Keller, linux-raid
There was a Fedora bug where the raid-check script would only queue an
array for a check if its array_state was 'clean'. Unfortunately, when the
array is busy performing normal I/O, its array_state is 'active'. So any
arrays that were servicing I/O at the time raid-check ran would not
be checked.
It is quite possible that your CentOS version does not include the fix.
https://bugzilla.redhat.com/show_bug.cgi?id=679843
If it's fixed you should see something like:
# Only perform the checks on idle, healthy arrays, but delay
# actually writing the check field until the next loop so we
# don't switch currently idle arrays to active, which happens
# when two or more arrays are on the same physical disk
array_state=`cat /sys/block/$dev/md/array_state`
if [ "$array_state" != "clean" -a "$array_state" != "active" ]; then
    continue
fi
The fix, IIRC, was simply the inclusion of '-a "$array_state" !=
"active"' in the 'if' statement above.
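Pulled out as a standalone predicate (my paraphrase of the fixed test, not
the script's literal code), the behaviour difference is easy to see:

```shell
#!/bin/sh
# Hypothetical helper mirroring the *fixed* condition: an array is
# eligible for a check when array_state is either 'clean' or 'active'.
# The buggy version skipped everything that wasn't exactly 'clean',
# so arrays busy with normal I/O ('active') were silently left out.
should_check() {
    state=$1
    if [ "$state" != "clean" ] && [ "$state" != "active" ]; then
        echo skip
    else
        echo check
    fi
}
```

With the fix, `should_check active` says `check`; before it, only `clean`
arrays were ever queued, which matches the "random" subset of arrays being
checked each week.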
--Larkin
On 7/1/2012 3:44 PM, Jonathan Tripathy wrote:
>
> On 01/07/2012 21:41, Keith Keller wrote:
>> On 2012-07-01, Jonathan Tripathy<jonnyt@abpni.co.uk> wrote:
>>> What's going on? Am I missing something here? Is data on the arrays at
>>> risk? We're using CentOS 5 with mdadm v2.6.9. Kernel version is
>>> 2.6.18-274.18.1.el5
>> As you are running CentOS, check /etc/sysconfig/raid-check. Someone may
>> have configured certain arrays not to be checked.
>>
> There is nothing in that file that suggests that some arrays should be
> skipped.
>
> ENABLED=yes
> CHECK=check
> # To check devs /dev/md0 and /dev/md3, use "md0 md3"
> CHECK_DEVS=""
> REPAIR_DEVS=""
> SKIP_DEVS=""
>
> Thanks
* Re: Resync Every Sunday
2012-07-01 21:24 ` Larkin Lowrey
@ 2012-07-01 21:57 ` Jonathan Tripathy
2012-07-01 22:01 ` Jonathan Tripathy
0 siblings, 1 reply; 13+ messages in thread
From: Jonathan Tripathy @ 2012-07-01 21:57 UTC (permalink / raw)
To: Larkin Lowrey; +Cc: Keith Keller, linux-raid
On 01/07/2012 22:24, Larkin Lowrey wrote:
> There was a Fedora bug where the raid-check script would only queue an
> array for a check if its array_state was 'clean'. Unfortunately, when the
> array is busy performing normal I/O, its array_state is 'active'. So any
> arrays that were servicing I/O at the time raid-check ran would not
> be checked.
>
> It is quite possible that your CentOS version does not include the fix.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=679843
>
> If it's fixed you should see something like:
>
> # Only perform the checks on idle, healthy arrays, but delay
> # actually writing the check field until the next loop so we
> # don't switch currently idle arrays to active, which happens
> # when two or more arrays are on the same physical disk
> array_state=`cat /sys/block/$dev/md/array_state`
> if [ "$array_state" != "clean" -a "$array_state" != "active" ]; then
> continue
> fi
>
> The fix, iirc, was simply the inclusion of '-a "$array_state" !=
> "active"' in the 'if' statement above.
>
> --Larkin
Hi Larkin,
This sounds like exactly what I'm experiencing.
Is this 'if' statement supposed to be in the raid-check script? I don't
have any such 'if' statement in my raid-check script.
Thanks
* Re: Resync Every Sunday
2012-07-01 21:57 ` Jonathan Tripathy
@ 2012-07-01 22:01 ` Jonathan Tripathy
2012-07-02 17:06 ` Larkin Lowrey
0 siblings, 1 reply; 13+ messages in thread
From: Jonathan Tripathy @ 2012-07-01 22:01 UTC (permalink / raw)
To: Larkin Lowrey; +Cc: Keith Keller, linux-raid
On 01/07/2012 22:57, Jonathan Tripathy wrote:
>
> On 01/07/2012 22:24, Larkin Lowrey wrote:
>> There was a Fedora bug where the raid-check script would only queue an
>> array for a check if its array_state was 'clean'. Unfortunately, when the
>> array is busy performing normal I/O, its array_state is 'active'. So any
>> arrays that were servicing I/O at the time raid-check ran would not
>> be checked.
>>
>> It is quite possible that your CentOS version does not include the fix.
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=679843
>>
>> If it's fixed you should see something like:
>>
>> # Only perform the checks on idle, healthy arrays, but delay
>> # actually writing the check field until the next loop so we
>> # don't switch currently idle arrays to active, which happens
>> # when two or more arrays are on the same physical disk
>> array_state=`cat /sys/block/$dev/md/array_state`
>> if [ "$array_state" != "clean" -a "$array_state" != "active" ]; then
>> continue
>> fi
>>
>> The fix, iirc, was simply the inclusion of '-a "$array_state" !=
>> "active"' in the 'if' statement above.
>>
>> --Larkin
> Hi Larkin,
>
> This sounds like exactly what I'm experiencing.
>
> Is this 'if' statement supposed to be in the raid-check script? I
> don't have any if statement in my raid-check script
>
> Thanks
>
Here is a small part of my 99-raid-check script:
for dev in $active_list; do
    echo $SKIP_DEVS | grep -w $dev >/dev/null 2>&1 && continue
    if [ -f /sys/block/$dev/md/sync_action ]; then
        # Only perform the checks on idle, healthy arrays, but delay
        # actually writing the check field until the next loop so we
        # don't switch currently idle arrays to active, which happens
        # when two or more arrays are on the same physical disk
        array_state=`cat /sys/block/$dev/md/array_state`
        sync_action=`cat /sys/block/$dev/md/sync_action`
        if [ "$array_state" = clean -a "$sync_action" = idle ]; then
            ck=""
            echo $REPAIR_DEVS | grep -w $dev >/dev/null 2>&1 && ck="repair"
            echo $CHECK_DEVS | grep -w $dev >/dev/null 2>&1 && ck="check"
            [ -z "$ck" ] && ck=$CHECK
            dev_list="$dev_list $dev"
            check[$devnum]=$ck
            let devnum++
            [ "$ck" = "check" ] && check_list="$check_list $dev"
        fi
    fi
done
So the bug hasn't been fixed in my version then?
Thanks
* Re: Resync Every Sunday
2012-07-01 22:01 ` Jonathan Tripathy
@ 2012-07-02 17:06 ` Larkin Lowrey
2012-07-02 21:30 ` Keith Keller
0 siblings, 1 reply; 13+ messages in thread
From: Larkin Lowrey @ 2012-07-02 17:06 UTC (permalink / raw)
To: Jonathan Tripathy; +Cc: linux-raid
On 7/1/2012 5:01 PM, Jonathan Tripathy wrote:
>
> On 01/07/2012 22:57, Jonathan Tripathy wrote:
>>
>> On 01/07/2012 22:24, Larkin Lowrey wrote:
>>> There was a Fedora bug where the raid-check script would only queue an
>>> array for a check if its array_state was 'clean'. Unfortunately, when the
>>> array is busy performing normal I/O, its array_state is 'active'. So any
>>> arrays that were servicing I/O at the time raid-check ran would not
>>> be checked.
>>>
>>> It is quite possible that your CentOS version does not include the fix.
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=679843
>>>
>>> If it's fixed you should see something like:
>>>
>>> # Only perform the checks on idle, healthy arrays, but delay
>>> # actually writing the check field until the next loop so we
>>> # don't switch currently idle arrays to active, which happens
>>> # when two or more arrays are on the same physical disk
>>> array_state=`cat /sys/block/$dev/md/array_state`
>>> if [ "$array_state" != "clean" -a "$array_state" != "active" ]; then
>>> continue
>>> fi
>>>
>>> The fix, iirc, was simply the inclusion of '-a "$array_state" !=
>>> "active"' in the 'if' statement above.
>>>
>>> --Larkin
>> Hi Larkin,
>>
>> This sounds like exactly what I'm experiencing.
>>
>> Is this 'if' statement supposed to be in the raid-check script? I
>> don't have any if statement in my raid-check script
>>
>> Thanks
>>
> Here is a small part of my 99-raid-check script:
>
> for dev in $active_list; do
>     echo $SKIP_DEVS | grep -w $dev >/dev/null 2>&1 && continue
>     if [ -f /sys/block/$dev/md/sync_action ]; then
>         # Only perform the checks on idle, healthy arrays, but delay
>         # actually writing the check field until the next loop so we
>         # don't switch currently idle arrays to active, which happens
>         # when two or more arrays are on the same physical disk
>         array_state=`cat /sys/block/$dev/md/array_state`
>         sync_action=`cat /sys/block/$dev/md/sync_action`
>         if [ "$array_state" = clean -a "$sync_action" = idle ]; then
>             ck=""
>             echo $REPAIR_DEVS | grep -w $dev >/dev/null 2>&1 && ck="repair"
>             echo $CHECK_DEVS | grep -w $dev >/dev/null 2>&1 && ck="check"
>             [ -z "$ck" ] && ck=$CHECK
>             dev_list="$dev_list $dev"
>             check[$devnum]=$ck
>             let devnum++
>             [ "$ck" = "check" ] && check_list="$check_list $dev"
>         fi
>     fi
> done
>
> So the bug hasn't been fixed in my version then?
>
> Thanks
That is not the correct logic, so your script is out of date. I would
recommend updating your mdadm package via yum. My CentOS 6.2 install has
the correct logic in /usr/sbin/raid-check, which is the new location for
the script. The RPM I have installed is mdadm-3.2.2-9.el6.x86_64.
--Larkin
* Re: Resync Every Sunday
2012-07-02 17:06 ` Larkin Lowrey
@ 2012-07-02 21:30 ` Keith Keller
2012-07-02 22:55 ` Jonathan Tripathy
0 siblings, 1 reply; 13+ messages in thread
From: Keith Keller @ 2012-07-02 21:30 UTC (permalink / raw)
To: linux-raid
On 2012-07-02, Larkin Lowrey <llowrey@nuclearwinter.com> wrote:
> That is not the correct logic so your script is out of date. I would
> recommend updating your mdadm package via yum. My CentOS 6.2 install has
> the correct logic in /usr/sbin/raid-check, which is the new location for
> the script. The RPM I have installed is mdadm-3.2.2-9.el6.x86_64.
The OP is on CentOS 5, where the latest package is 2.6.9-3. I am not
sure whether that version has the bug fix.
--keith
--
kkeller@wombat.san-francisco.ca.us
* Re: Resync Every Sunday
2012-07-02 21:30 ` Keith Keller
@ 2012-07-02 22:55 ` Jonathan Tripathy
2012-07-03 3:33 ` Keith Keller
0 siblings, 1 reply; 13+ messages in thread
From: Jonathan Tripathy @ 2012-07-02 22:55 UTC (permalink / raw)
To: Keith Keller; +Cc: linux-raid
On 02/07/2012 22:30, Keith Keller wrote:
> On 2012-07-02, Larkin Lowrey<llowrey@nuclearwinter.com> wrote:
>> That is not the correct logic so your script is out of date. I would
>> recommend updating your mdadm package via yum. My CentOS 6.2 install has
>> the correct logic in /usr/sbin/raid-check, which is the new location for
>> the script. The RPM I have installed is mdadm-3.2.2-9.el6.x86_64.
> The OP is on CentOS 5, where the latest package is 2.6.9-3. I am not
> sure whether that version has the bug fix.
>
> --keith
>
Unfortunately, it is not possible for me to upgrade. Would simply
replacing the raid-check script with a newer version work? Or is that
dangerous?
Thanks
* Re: Resync Every Sunday
2012-07-02 22:55 ` Jonathan Tripathy
@ 2012-07-03 3:33 ` Keith Keller
0 siblings, 0 replies; 13+ messages in thread
From: Keith Keller @ 2012-07-03 3:33 UTC (permalink / raw)
To: linux-raid
On 2012-07-02, Jonathan Tripathy <jonnyt@abpni.co.uk> wrote:
>
> Unfortunately it is not possible for me to upgrade. Would just simply
> replacing the raid-check script work? Or is that dangerous?
It's ''probably'' safe. You could download the script and diff it
against your existing one to see what the differences actually are; that
might help you decide how safe it is.
In the short run, all that script really does is issue "check" requests
to each array. The rest is mostly window dressing and silly admin
tricks (e.g., checking mismatch_cnt and emitting on stdout if it's
nonzero; the default is not to print output, so there's no email unless
an array has mismatches).
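The essentials reduce to something like this (device names and the warning
format are illustrative, not the script's exact code):

```shell
#!/bin/sh
# Sketch of what raid-check boils down to: ask the kernel to scrub each
# array, then later complain if the scrub found inconsistencies.
request_checks() {
    for dev in "$@"; do
        sync="/sys/block/$dev/md/sync_action"
        # Only writable as root on a real md device; silently skip otherwise.
        [ -w "$sync" ] && echo check > "$sync"
    done
    return 0
}

report_mismatches() {   # args: device mismatch_count
    # A nonzero mismatch_cnt after a check is the only thing worth emitting;
    # cron mails the output, so silence means "all clean".
    if [ "$2" -ne 0 ]; then
        echo "WARNING: $1 mismatch_cnt=$2"
    fi
    return 0
}
```

In practice you'd feed `report_mismatches` the value read from
/sys/block/mdX/md/mismatch_cnt once the check finishes.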
--keith
--
kkeller@wombat.san-francisco.ca.us