* mdadm 2.6.4 : How can I check the current status of reshaping?
From: Andreas-Sokov @ 2008-02-04 4:08 UTC
To: linux-raid; +Cc: Neil Brown
Hi linux-raid.
On Debian:
root@raid01:/# mdadm -D /dev/md1
/dev/md1:
Version : 00.91.03
Creation Time : Tue Nov 13 18:42:36 2007
Raid Level : raid5
Array Size : 1465159488 (1397.29 GiB 1500.32 GB)
Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Mon Feb 4 06:51:47 2008
State : clean, degraded
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Delta Devices : 1, (4->5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UUID : 4fbdc8df:07b952cf:7cc6faa0:04676ba5
Events : 0.683598
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
1 8 48 1 active sync /dev/sdd
2 8 64 2 active sync /dev/sde
3 8 80 3 active sync /dev/sdf
4 0 0 4 removed
5 8 16 - spare /dev/sdb
root@raid01:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
unused devices: <none>
##############################################################################
But how can I see the status of the reshape?
Is it really reshaping? Or has it hung? Or is mdadm simply not doing
anything at all?
How long should I wait for the reshape to finish?
##############################################################################
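Is /proc/mdstat the only place to look? If the reshape status is also exposed
in sysfs, I am guessing the files would be something like this (I am not sure
these are the right ones):

cat /sys/block/md1/md/sync_action
cat /sys/block/md1/md/sync_completed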
--
Best regards,
Andreas-Sokov
* Re: mdadm 2.6.4 : How can I check the current status of reshaping?
From: Neil Brown @ 2008-02-04 22:48 UTC
To: Andreas-Sokov; +Cc: linux-raid
On Monday February 4, andre.s@j8.com.ru wrote:
>
> root@raid01:/# cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
> md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
> 1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
>
> unused devices: <none>
>
> ##############################################################################
> But how can I see the status of the reshape?
> Is it really reshaping? Or has it hung? Or is mdadm simply not doing
> anything at all?
> How long should I wait for the reshape to finish?
> ##############################################################################
>
The reshape hasn't restarted.
Did you do that "mdadm -w /dev/md1" like I suggested? If so, what
happened?
Possibly you tried mounting the filesystem before trying the "mdadm
-w". There seems to be a bug such that doing this would cause the
reshape not to restart, and "mdadm -w" would not help any more.
I suggest you:
echo 0 > /sys/module/md_mod/parameters/start_ro
stop the array
mdadm -S /dev/md1
(after unmounting if necessary).
Then assemble the array again.
Then
mdadm -w /dev/md1
just to be sure.
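All together, the sequence is something like this (device names assumed to
match your -D output; unmount first if necessary):

echo 0 > /sys/module/md_mod/parameters/start_ro
mdadm -S /dev/md1
mdadm -A /dev/md1 /dev/sd[bcdef]
mdadm -w /dev/md1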
If this doesn't work, please report exactly what you did, exactly what
message you got, and exactly where the message appeared in the kernel log.
NeilBrown
* Re[2]: mdadm 2.6.4 : How can I check the current status of reshaping?
From: Andreas-Sokov @ 2008-02-05 9:13 UTC
To: Neil Brown; +Cc: linux-raid
Hello, Neil.
YOU WROTE: 5 February 2008, 01:48:33:
> On Monday February 4, andre.s@j8.com.ru wrote:
>>
>> root@raid01:/# cat /proc/mdstat
>> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
>> md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
>> 1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
>>
>> unused devices: <none>
>>
>> ##############################################################################
>> But how can I see the status of the reshape?
>> Is it really reshaping? Or has it hung? Or is mdadm simply not doing
>> anything at all?
>> How long should I wait for the reshape to finish?
>> ##############################################################################
>>
> The reshape hasn't restarted.
> Did you do that "mdadm -w /dev/md1" like I suggested? If so, what
> happened?
> Possibly you tried mounting the filesystem before trying the "mdadm
> -w". There seems to be a bug such that doing this would cause the
> reshape not to restart, and "mdadm -w" would not help any more.
> I suggest you:
> echo 0 > /sys/module/md_mod/parameters/start_ro
> stop the array
> mdadm -S /dev/md1
> (after unmounting if necessary).
> Then assemble the array again.
> Then
> mdadm -w /dev/md1
> just to be sure.
> If this doesn't work, please report exactly what you did, exactly what
> message you got, and exactly where the message appeared in the kernel log.
> NeilBrown
I read your letter again.
The first time, I did not do
echo 0 > /sys/module/md_mod/parameters/start_ro
Now I have done this, and then:
mdadm -S /dev/md1
mdadm /dev/md1 -A /dev/sd[bcdef]
mdadm -w /dev/md1
After about 2 minutes the kernel showed something (log below),
but the reshape is still in progress:
root@raid01:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
[==>..................] reshape = 10.1% (49591552/488386496) finish=12127.2min speed=602K/sec
unused devices: <none>
root@raid01:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
[==>..................] reshape = 10.1% (49591552/488386496) finish=12259.0min speed=596K/sec
unused devices: <none>
root@raid01:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
[==>..................] reshape = 10.1% (49591552/488386496) finish=12311.7min speed=593K/sec
unused devices: <none>
root@raid01:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid5 sdc[0] sdb[5](S) sdf[3] sde[2] sdd[1]
1465159488 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
[==>..................] reshape = 10.1% (49591552/488386496) finish=12338.1min speed=592K/sec
unused devices: <none>
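At ~600 K/sec this will take forever: (488386496 - 49591552) remaining
blocks / 602 K/sec is roughly 729,000 seconds, i.e. about 12,100 minutes -
more than eight days.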
Feb 5 11:54:21 raid01 kernel: raid5: reshape will continue
Feb 5 11:54:21 raid01 kernel: raid5: device sdc operational as raid disk 0
Feb 5 11:54:21 raid01 kernel: raid5: device sdf operational as raid disk 3
Feb 5 11:54:21 raid01 kernel: raid5: device sde operational as raid disk 2
Feb 5 11:54:21 raid01 kernel: raid5: device sdd operational as raid disk 1
Feb 5 11:54:21 raid01 kernel: raid5: allocated 5245kB for md1
Feb 5 11:54:21 raid01 kernel: raid5: raid level 5 set md1 active with 4 out of 5 devices, algorithm 2
Feb 5 11:54:21 raid01 kernel: RAID5 conf printout:
Feb 5 11:54:21 raid01 kernel: --- rd:5 wd:4
Feb 5 11:54:21 raid01 kernel: disk 0, o:1, dev:sdc
Feb 5 11:54:21 raid01 kernel: disk 1, o:1, dev:sdd
Feb 5 11:54:21 raid01 kernel: disk 2, o:1, dev:sde
Feb 5 11:54:21 raid01 kernel: disk 3, o:1, dev:sdf
Feb 5 11:54:21 raid01 kernel: ...ok start reshape thread
Feb 5 11:54:21 raid01 mdadm: RebuildStarted event detected on md device /dev/md1
Feb 5 11:54:21 raid01 kernel: md: reshape of RAID array md1
Feb 5 11:54:21 raid01 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Feb 5 11:54:21 raid01 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
Feb 5 11:54:21 raid01 kernel: md: using 128k window, over a total of 488386496 blocks.
Feb 5 11:56:12 raid01 kernel: BUG: unable to handle kernel paging request at virtual address 001cd901
Feb 5 11:56:12 raid01 kernel: printing eip:
Feb 5 11:56:12 raid01 kernel: c041c374
Feb 5 11:56:12 raid01 kernel: *pde = 00000000
Feb 5 11:56:12 raid01 kernel: Oops: 0002 [#1]
Feb 5 11:56:12 raid01 kernel: SMP
Feb 5 11:56:12 raid01 kernel: Modules linked in: nfsd exportfs lockd nfs_acl sunrpc ipt_LOG xt_tcpudp nf_conntrack_ipv4 xt_state nf_conntrack nfnetlink iptable_filter ip_tables x_tables button ac battery loop tsdev psmouse iTCO_wdt sk98lin serio_raw intel_agp agpgart evdev shpchp pci_hotplug pcspkr rtc ide_cd cdrom ide_disk ata_piix piix e1000 generic ide_core sata_mv uhci_hcd ehci_hcd usbcore thermal processor fan
Feb 5 11:56:12 raid01 kernel: CPU: 1
Feb 5 11:56:12 raid01 kernel: EIP: 0060:[<c041c374>] Not tainted VLI
Feb 5 11:56:12 raid01 kernel: EFLAGS: 00010202 (2.6.22.16-6 #7)
Feb 5 11:56:12 raid01 kernel: EIP is at md_do_sync+0x629/0xa32
Feb 5 11:56:12 raid01 kernel: eax: 001cd901 ebx: c0410d1b ecx: 00000080 edx: 00000000
Feb 5 11:56:12 raid01 kernel: esi: 05e96a00 edi: 00000000 ebp: dff3e400 esp: f796beb4
Feb 5 11:56:12 raid01 kernel: ds: 007b es: 007b fs: 00d8 gs: 0000 ss: 0068
Feb 5 11:56:12 raid01 kernel: Process md1_reshape (pid: 3759, ti=f796a000 task=f7e8a550 task.ti=f796a000)
Feb 5 11:56:12 raid01 kernel: Stack: f796bf9c 00000000 1d1c2fc0 00000000 00000500 00000000 f796bf88 dff3e410
Feb 5 11:56:12 raid01 kernel: 9ac41500 06000000 6a922c00 1d1c2fc0 00000000 dff3e400 000020d2 3a385f80
Feb 5 11:56:12 raid01 kernel: 00000000 001cd800 00000000 00000006 001cd700 00000000 c056fb6b 00177900
Feb 5 11:56:12 raid01 kernel: Call Trace:
Feb 5 11:56:12 raid01 kernel: [<c041e8ee>] md_thread+0xcc/0xe3
Feb 5 11:56:12 raid01 kernel: [<c011b368>] complete+0x39/0x48
Feb 5 11:56:12 raid01 kernel: [<c041e822>] md_thread+0x0/0xe3
Feb 5 11:56:12 raid01 kernel: [<c0131b89>] kthread+0x38/0x5f
Feb 5 11:56:12 raid01 kernel: [<c0131b51>] kthread+0x0/0x5f
Feb 5 11:56:12 raid01 kernel: [<c0104947>] kernel_thread_helper+0x7/0x10
Feb 5 11:56:12 raid01 kernel: =======================
Feb 5 11:56:12 raid01 kernel: Code: 54 24 48 0f 87 a4 01 00 00 72 0a 3b 44 24 44 0f 87 98 01 00 00 3b 7c 24 40 75 0a 3b 74 24 3c 0f 84 88 01 00 00 0b 85 30 01 00 00 <88> 08 0f 85 90 01 00 00 8b 85 30 01 00 00 a8 04 0f 85 82 01 00
Feb 5 11:56:12 raid01 kernel: EIP: [<c041c374>] md_do_sync+0x629/0xa32 SS:ESP 0068:f796beb4
--
Best regards,
Andreas-Sokov
* Re: Re[2]: mdadm 2.6.4 : How can I check the current status of reshaping?
From: Neil Brown @ 2008-02-05 10:10 UTC
To: Andreas-Sokov; +Cc: linux-raid
On Tuesday February 5, andre.s@j8.com.ru wrote:
> Feb 5 11:56:12 raid01 kernel: BUG: unable to handle kernel paging request at virtual address 001cd901
This looks like some sort of memory corruption.
> Feb 5 11:56:12 raid01 kernel: EIP is at md_do_sync+0x629/0xa32
This tells us what code is executing.
> Feb 5 11:56:12 raid01 kernel: Code: 54 24 48 0f 87 a4 01 00 00 72 0a 3b 44 24 44 0f 87 98 01 00 00 3b 7c 24 40 75 0a 3b 74 24 3c 0f 84 88 01 00 00 0b 85 30 01 00 00 <88> 08 0f 85 90 01 00 00 8b 85 30 01 00 00 a8 04 0f 85 82 01 00
This tells us what the actual bytes of code were.
If I feed this line (from "Code:" onwards) into "ksymoops" I get
0: 54 push %esp
1: 24 48 and $0x48,%al
3: 0f 87 a4 01 00 00 ja 1ad <_EIP+0x1ad>
9: 72 0a jb 15 <_EIP+0x15>
b: 3b 44 24 44 cmp 0x44(%esp),%eax
f: 0f 87 98 01 00 00 ja 1ad <_EIP+0x1ad>
15: 3b 7c 24 40 cmp 0x40(%esp),%edi
19: 75 0a jne 25 <_EIP+0x25>
1b: 3b 74 24 3c cmp 0x3c(%esp),%esi
1f: 0f 84 88 01 00 00 je 1ad <_EIP+0x1ad>
25: 0b 85 30 01 00 00 or 0x130(%ebp),%eax
Code; 00000000 Before first symbol
2b: 88 08 mov %cl,(%eax)
2d: 0f 85 90 01 00 00 jne 1c3 <_EIP+0x1c3>
33: 8b 85 30 01 00 00 mov 0x130(%ebp),%eax
39: a8 04 test $0x4,%al
3b: 0f .byte 0xf
3c: 85 .byte 0x85
3d: 82 (bad)
3e: 01 00 add %eax,(%eax)
I removed the "Code;..." lines as they are just noise, except for the
one that points to the current instruction in the middle.
Note that it is dereferencing %eax, after just 'or'ing some value into
it, which is rather unusual.
Now get the "md-mod.ko" for the kernel you are running.
run
gdb md-mod.ko
and give the command
disassemble md_do_sync
and look for code at offset 0x629, which is 1577 in decimal.
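For example (the module path here is a guess - adjust it to wherever your
distribution installs the md module for the running kernel):

gdb /lib/modules/$(uname -r)/kernel/drivers/md/md-mod.ko
(gdb) disassemble md_do_sync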
I found a similar kernel to what you are running, and the matching code
is
0x000055c0 <md_do_sync+1485>: cmp 0x30(%esp),%eax
0x000055c4 <md_do_sync+1489>: ja 0x5749 <md_do_sync+1878>
0x000055ca <md_do_sync+1495>: cmp 0x2c(%esp),%edi
0x000055ce <md_do_sync+1499>: jne 0x55da <md_do_sync+1511>
0x000055d0 <md_do_sync+1501>: cmp 0x28(%esp),%esi
0x000055d4 <md_do_sync+1505>: je 0x5749 <md_do_sync+1878>
0x000055da <md_do_sync+1511>: mov 0x130(%ebp),%eax
0x000055e0 <md_do_sync+1517>: test $0x8,%al
0x000055e2 <md_do_sync+1519>: jne 0x575f <md_do_sync+1900>
0x000055e8 <md_do_sync+1525>: mov 0x130(%ebp),%eax
0x000055ee <md_do_sync+1531>: test $0x4,%al
0x000055f0 <md_do_sync+1533>: jne 0x575f <md_do_sync+1900>
0x000055f6 <md_do_sync+1539>: mov 0x38(%esp),%ecx
0x000055fa <md_do_sync+1543>: mov 0x0,%eax
Note the sequence "cmp, ja, cmp, jne, cmp, je"
where the "cmp" arguments are consecutive 4byte values on the stack
(%esp).
In the code from your oops, the offsets are 0x44 0x40 0x3c.
In the kernel I found they are 0x30 0x2c 0x28. The difference is some
subtle difference in the kernel, possibly a different compiler or
something.
Anyway, your code crashed at
25: 0b 85 30 01 00 00 or 0x130(%ebp),%eax
Code; 00000000 Before first symbol
2b: 88 08 mov %cl,(%eax)
The matching code in the kernel I found is
0x000055da <md_do_sync+1511>: mov 0x130(%ebp),%eax
0x000055e0 <md_do_sync+1517>: test $0x8,%al
Note that you have an 'or', the kernel I found has 'mov'.
If we look at the actual bytes of code for those two instructions,
the code that crashed shows the bytes above:
0b 85 30 01 00 00
88 08
if I get the same bytes with gdb:
(gdb) x/8b 0x000055da
0x55da <md_do_sync+1511>: 0x8b 0x85 0x30 0x01 0x00 0x00 0xa8 0x08
(gdb)
So what should be "8b" has become "0b", and what should be "a8" has
become "08".
If you look for the same data in your md-mod.ko, you might find
slightly different details but it is clear to me that the code in
memory is bad.
Possibly you have bad memory, or a bad CPU, or you are overclocking
the CPU, or it is getting hot, or something.
But you clearly have a hardware error.
NeilBrown
* Re[4]: mdadm 2.6.4 : How can I check the current status of reshaping?
From: Andreas-Sokov @ 2008-02-06 19:15 UTC
To: Neil Brown; +Cc: linux-raid
Hello, Neil.
.....
> Possibly you have bad memory, or a bad CPU, or you are overclocking
> the CPU, or it is getting hot, or something.
As it seems to me, all my problems started after I updated mdadm.
This server worked normally (though not as soft-raid) for more than 2-3 years.
For the last 6 months it has worked as soft-raid. All was normal; I even successfully
added a 4th hdd into the raid5 (when it started there were 3 hdds), and that reshape passed fine.
Yesterday I ran memtest86 on this server, and 10 passes completed WITHOUT any errors.
The temperature of the server is about 25 degrees Celsius.
No overclocking; everything is set to defaults.
I really do not know what to do, because we need to grow our storage and we cannot.
Unfortunately, at this moment mdadm does not help us with this, though we very much
want it to.
> But you clearly have a hardware error.
> NeilBrown
--
Best regards,
Andreas-Sokov
* Re: mdadm 2.6.4 : How can I check the current status of reshaping?
From: Janek Kozicki @ 2008-02-06 22:26 UTC
Cc: linux-raid
Andreas-Sokov said: (by the date of Wed, 6 Feb 2008 22:15:05 +0300)
> Hello, Neil.
>
> .....
> > Possibly you have bad memory, or a bad CPU, or you are overclocking
> > the CPU, or it is getting hot, or something.
>
> As it seems to me, all my problems started after I updated mdadm.
what is the update?
- you installed a new version of mdadm?
- you installed new kernel?
- something else?
- what was the version before, and what version is now?
- can you downgrade to previous version?
best regards
--
Janek Kozicki |
* Re: mdadm 2.6.4 : How can I check the current status of reshaping?
From: Bill Davidsen @ 2008-02-07 21:15 UTC
To: Andreas-Sokov; +Cc: Neil Brown, linux-raid
Andreas-Sokov wrote:
> Hello, Neil.
>
> .....
>
>> Possibly you have bad memory, or a bad CPU, or you are overclocking
>> the CPU, or it is getting hot, or something.
>>
>
> As it seems to me, all my problems started after I updated mdadm.
> This server worked normally (though not as soft-raid) for more than 2-3 years.
> For the last 6 months it has worked as soft-raid. All was normal; I even successfully
> added a 4th hdd into the raid5 (when it started there were 3 hdds), and that reshape passed fine.
>
> Yesterday I ran memtest86 on this server, and 10 passes completed WITHOUT any errors.
> The temperature of the server is about 25 degrees Celsius.
> No overclocking; everything is set to defaults.
>
>
What did you find when you loaded the module with gdb as Neil suggested?
If the code in the module doesn't match the code in memory, you have a
hardware error. memtest86 is a useful tool, but it is not a definitive
test, because it doesn't use all CPUs or do I/O at the same time to load
the memory bus.
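If you want a test that loads memory while the system is busy, one option -
assuming the memtester package is available for your distribution - is a
userspace run alongside normal array I/O:

memtester 512M 10

That keeps the memory bus active in a way memtest86 alone does not.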
> I really do not know what to do, because we need to grow our storage and we cannot.
> Unfortunately, at this moment mdadm does not help us with this, though we very much
> want it to.
>
I would pull out half my memory and retest. If it still fails I would
swap to the other half of memory. If that didn't show a change I would
check that the code in the module is what Neil showed in his last
message (I assume you already have), and then reseat all of the cables, etc.
I agree with Neil:
>> But you clearly have a hardware error.
>> NeilBrown
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismark
* Re[4]: mdadm 2.6.4 : How can I check the current status of reshaping?
From: Andreas-Sokov @ 2008-02-09 4:40 UTC
To: Neil Brown; +Cc: linux-raid
Hello, Neil.
YOU WROTE: 5 February 2008, 13:10:00:
> On Tuesday February 5, andre.s@j8.com.ru wrote:
>> Feb 5 11:56:12 raid01 kernel: BUG: unable to handle kernel paging request at virtual address 001cd901
> This looks like some sort of memory corruption.
....
> Possibly you have bad memory, or a bad CPU, or you are overclocking
> the CPU, or it is getting hot, or something.
> But you clearly have a hardware error.
> NeilBrown
I have now checked my server. As you wrote earlier, somewhere there is a hidden
problem or problems. We tried to find what it is and did not find it. We tried
other memory modules - the result was the same (kernel panic, one way or another).
SO, we then moved the RAID HDDs into another computer, and the reshape passed fine!
It is now continuing the reshape from 5 to 7 drives normally.
Thank you very much!
--
Best regards,
Andreas-Sokov