* I was dumb, I need help.
@ 2016-05-01 9:39 Patrice
0 siblings, 0 replies; 6+ messages in thread
From: Patrice @ 2016-05-01 9:39 UTC (permalink / raw)
To: linux-raid
Hi folks,
my name is Patrice and I was dumb. I am new to this list and to this topic.
Yesterday I pulled all 4 hard disks out of my Netgear NAS. *Info*: the
NAS was running at the time. Now my RAID is broken and the web GUI shows 0
data. I've read the two How-Tos on this topic, but I am confused and
not sure what to do. That's why I am kindly asking you guys if you
could help me, please.
Best regards
Patrice
* I was dumb, I need help.
@ 2016-05-01 9:54 Patrice
0 siblings, 0 replies; 6+ messages in thread
From: Patrice @ 2016-05-01 9:54 UTC (permalink / raw)
To: linux-raid
Hi,
I figured out that the Event counter is 343 on all 4 disks.
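(A read-only way to double-check this, assuming the member partitions are
/dev/sd[abcd]1, is something like:

  mdadm -E /dev/sd[abcd]1 | grep -E '/dev/|Events'

This only reads the superblocks and changes nothing on the disks.)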
But can I just do
mdadm --assemble --force /dev/mdX <list of devices>
and everything is OK? I am worried that it's not, and that if I do that the
RAID will be destroyed. :-S
Best regards
Patrice
* I was dumb, I need help.
@ 2016-05-01 10:38 Patrice
2016-05-01 11:30 ` Robin Hill
0 siblings, 1 reply; 6+ messages in thread
From: Patrice @ 2016-05-01 10:38 UTC (permalink / raw)
To: linux-raid
It has Debian 7 installed and a RAID 5.
Here is some more information.
Thank you and best regards.
Patrice
root@ReadyNAS:/proc# cat mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid6 sda2[0] sdc2[3] sdb2[2] sdd2[1]
1046528 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4]
[UUUU]
md0 : active raid1 sda1[4] sdd1[3] sdc1[2] sdb1[5]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
-----------------------------------------------------------------------
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 2495045b:2aa5bd3c:8cff9bb5:de4e769c
Name : 119c1bce:0 (local to host 119c1bce)
Creation Time : Sun Apr 3 06:27:48 2016
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 8380416 (4.00 GiB 4.29 GB)
Array Size : 4190208 (4.00 GiB 4.29 GB)
Data Offset : 8192 sectors
Super Offset : 8 sectors
Unused Space : before=8104 sectors, after=0 sectors
State : clean
Device UUID : 5a053636:aeb9cb79:8f7f8807:8263480b
Update Time : Sun May 1 11:29:51 2016
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : cd6387b5 - correct
Events : 343
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
-----------------------------------------------------------------------
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 2495045b:2aa5bd3c:8cff9bb5:de4e769c
Name : 119c1bce:0 (local to host 119c1bce)
Creation Time : Sun Apr 3 06:27:48 2016
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 8380416 (4.00 GiB 4.29 GB)
Array Size : 4190208 (4.00 GiB 4.29 GB)
Data Offset : 8192 sectors
Super Offset : 8 sectors
Unused Space : before=8104 sectors, after=0 sectors
State : clean
Device UUID : 58b53ace:3e80e778:41c7e259:a631d7f1
Update Time : Sun May 1 11:30:02 2016
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 9d6d1427 - correct
Events : 343
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
---------------------------------------------------------------------
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 2495045b:2aa5bd3c:8cff9bb5:de4e769c
Name : 119c1bce:0 (local to host 119c1bce)
Creation Time : Sun Apr 3 06:27:48 2016
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 8380416 (4.00 GiB 4.29 GB)
Array Size : 4190208 (4.00 GiB 4.29 GB)
Data Offset : 8192 sectors
Super Offset : 8 sectors
Unused Space : before=8104 sectors, after=0 sectors
State : clean
Device UUID : c92580fb:f2a7cd6a:9090dcfd:20d59709
Update Time : Sun May 1 11:30:20 2016
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 78531924 - correct
Events : 343
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
-----------------------------------------------------------------
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 2495045b:2aa5bd3c:8cff9bb5:de4e769c
Name : 119c1bce:0 (local to host 119c1bce)
Creation Time : Sun Apr 3 06:27:48 2016
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 8380416 (4.00 GiB 4.29 GB)
Array Size : 4190208 (4.00 GiB 4.29 GB)
Data Offset : 8192 sectors
Super Offset : 8 sectors
Unused Space : before=8104 sectors, after=0 sectors
State : active
Device UUID : 38deb03b:1d552d92:6532842d:34f5480d
Update Time : Sun May 1 11:30:26 2016
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 133c40ae - correct
Events : 344
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
* Re: I was dumb, I need help.
2016-05-01 10:38 I was dumb, I need help Patrice
@ 2016-05-01 11:30 ` Robin Hill
[not found] ` <5726128E.9090309@pboenig.de>
0 siblings, 1 reply; 6+ messages in thread
From: Robin Hill @ 2016-05-01 11:30 UTC (permalink / raw)
To: Patrice; +Cc: linux-raid
On Sun May 01, 2016 at 12:38:33pm +0200, Patrice wrote:
> It has a Debian 7 installed and a RAID 5.
> Here some more Informations.
>
> Thank you and best regards.
> Patrice
>
>
>
>
> root@ReadyNAS:/proc# cat mdstat
> Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
> md1 : active raid6 sda2[0] sdc2[3] sdb2[2] sdd2[1]
> 1046528 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4]
> [UUUU]
>
> md0 : active raid1 sda1[4] sdd1[3] sdc1[2] sdb1[5]
> 4190208 blocks super 1.2 [4/4] [UUUU]
>
> unused devices: <none>
>
This shows two RAID devices, both up and running without any issues.
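(Their detailed status can be confirmed, if desired, with e.g.:

  mdadm -D /dev/md0 /dev/md1

which prints the per-member state of each running array.)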
> /dev/sda1:
> /dev/sdb1:
> /dev/sdc1:
> /dev/sdd1:
The info provided here is just for the first partition on each disk.
There are definitely at least two partitions on each disk (as the second one
is used for md1 above) - are there any others which should be being
assembled into another array? "fdisk -l" will show the partitions for
all disks - if there are more than two on sda/b/c/d then we'll need to see
the "mdadm -E" report for each of the others.
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: I was dumb, I need help.
[not found] ` <5726128E.9090309@pboenig.de>
@ 2016-05-02 12:41 ` Robin Hill
2016-05-05 10:00 ` Patrice
0 siblings, 1 reply; 6+ messages in thread
From: Robin Hill @ 2016-05-02 12:41 UTC (permalink / raw)
To: Patrice; +Cc: linux-raid
[-- Attachment #1: Type: text/plain, Size: 8377 bytes --]
On Sun May 01, 2016 at 04:28:30PM +0200, Patrice wrote:
> Hi Robin,
>
> thank you for your reply.
> OK, I'll try not to panic, but to me that sounds bad. It seems
> like a mess.
> Why is there a RAID 1 and a RAID 6? I need a RAID 5.
>
It looks like you have a RAID1, a RAID6 and a RAID5. I'd guess that the
RAID1 and RAID6 store the OS for the NAS system, and the RAID5 is the
data.
> > are there any others which should be being
> > assembled into another array?
>
> There are no others. At least there should be only one partition on each
> HDD. I didn't do any partitioning.
>
>
> fdisk -l output:
> ------------------
>
> Disk /dev/sda: 4000.8 GB, 4000787030016 bytes
> 256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sda1 1 4294967295 2147483647+ ee GPT
> Partition 1 does not start on physical sector boundary.
>
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util
> fdisk doesn't support GPT. Use GNU Parted.
>
>
> Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
> 256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sdb1 1 4294967295 2147483647+ ee GPT
> Partition 1 does not start on physical sector boundary.
>
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util
> fdisk doesn't support GPT. Use GNU Parted.
>
>
> Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes
> 256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sdc1 1 4294967295 2147483647+ ee GPT
> Partition 1 does not start on physical sector boundary.
>
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util
> fdisk doesn't support GPT. Use GNU Parted.
>
>
> Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
> 256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sdd1 1 4294967295 2147483647+ ee GPT
> Partition 1 does not start on physical sector boundary.
>
> -------------------------------------------------------------------------
>
Okay, so there are four 4TB drives - they're using GPT partitions, so fdisk
doesn't report anything useful here.
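(A GPT-aware listing can be obtained per disk with something like:

  parted -s /dev/sda print

parted being the tool the fdisk warning itself suggests.)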
>
> mdadm -E /dev/sd* output:
> --------------------------
>
> /dev/sda3:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 632ff5fd:65342524:9c9798d7:80e47e94
> Name : 119c1bce:data-0 (local to host 119c1bce)
> Creation Time : Sun Apr 3 06:27:49 2016
> Raid Level : raid5
> Raid Devices : 4
>
> Avail Dev Size : 7804333680 (3721.40 GiB 3995.82 GB)
> Array Size : 11706500352 (11164.19 GiB 11987.46 GB)
> Used Dev Size : 7804333568 (3721.40 GiB 3995.82 GB)
> Data Offset : 262144 sectors
> Super Offset : 8 sectors
> Unused Space : before=262056 sectors, after=112 sectors
> State : clean
> Device UUID : 38adc372:3e0eba36:0f819758:950a0411
>
> Update Time : Sat Apr 30 23:03:32 2016
> Bad Block Log : 512 entries available at offset 72 sectors
> Checksum : d7f5b303 - correct
> Events : 266
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Device Role : Active device 0
> Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
>
>
> /dev/sdb3:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 632ff5fd:65342524:9c9798d7:80e47e94
> Name : 119c1bce:data-0 (local to host 119c1bce)
> Creation Time : Sun Apr 3 06:27:49 2016
> Raid Level : raid5
> Raid Devices : 4
>
> Avail Dev Size : 7804333680 (3721.40 GiB 3995.82 GB)
> Array Size : 11706500352 (11164.19 GiB 11987.46 GB)
> Used Dev Size : 7804333568 (3721.40 GiB 3995.82 GB)
> Data Offset : 262144 sectors
> Super Offset : 8 sectors
> Unused Space : before=262056 sectors, after=112 sectors
> State : clean
> Device UUID : 655ee144:43c43771:0d8a6157:9b556584
>
> Update Time : Sat Apr 30 23:03:32 2016
> Bad Block Log : 512 entries available at offset 72 sectors
> Checksum : 56bc6e3b - correct
> Events : 266
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Device Role : Active device 1
> Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
>
>
> /dev/sdc3:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 632ff5fd:65342524:9c9798d7:80e47e94
> Name : 119c1bce:data-0 (local to host 119c1bce)
> Creation Time : Sun Apr 3 06:27:49 2016
> Raid Level : raid5
> Raid Devices : 4
>
> Avail Dev Size : 7804333680 (3721.40 GiB 3995.82 GB)
> Array Size : 11706500352 (11164.19 GiB 11987.46 GB)
> Used Dev Size : 7804333568 (3721.40 GiB 3995.82 GB)
> Data Offset : 262144 sectors
> Super Offset : 8 sectors
> Unused Space : before=262056 sectors, after=112 sectors
> State : clean
> Device UUID : d066d1aa:ffd1e432:e9ecdd9d:08540efa
>
> Update Time : Sat Apr 30 23:17:27 2016
> Bad Block Log : 512 entries available at offset 72 sectors
> Checksum : 3a7ce8f6 - correct
> Events : 273
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Device Role : Active device 2
> Array State : ..AA ('A' == active, '.' == missing, 'R' == replacing)
>
>
> /dev/sdd3:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 632ff5fd:65342524:9c9798d7:80e47e94
> Name : 119c1bce:data-0 (local to host 119c1bce)
> Creation Time : Sun Apr 3 06:27:49 2016
> Raid Level : raid5
> Raid Devices : 4
>
> Avail Dev Size : 7804333680 (3721.40 GiB 3995.82 GB)
> Array Size : 11706500352 (11164.19 GiB 11987.46 GB)
> Used Dev Size : 7804333568 (3721.40 GiB 3995.82 GB)
> Data Offset : 262144 sectors
> Super Offset : 8 sectors
> Unused Space : before=262056 sectors, after=112 sectors
> State : clean
> Device UUID : b8a43a56:2f833e72:7dd9f166:6f80b5a2
>
> Update Time : Sat Apr 30 23:17:27 2016
> Bad Block Log : 512 entries available at offset 72 sectors
> Checksum : 96faf109 - correct
> Events : 273
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Device Role : Active device 3
> Array State : ..AA ('A' == active, '.' == missing, 'R' == replacing)
>
I've removed the info for the first two partitions on each disk as those
arrays are assembling fine. The third partitions look to contain your
data array - the events for sda3 and sdb3 match at 266, and sdc3 and
sdd3 are on 273 (and show sda3 & sdb3 missing). A forced assembly should
work without any issues here - the array name looks to be
/dev/md/data-0, so:
mdadm -Af /dev/md/data-0 /dev/sd[abcd]3
That should assemble the array from 3 of the disks (probably sda3, sdc3
and sdd3) - you'll then need to add the other one back in and allow the
rebuild to complete. You should also do a check on the filesystem to
ensure there's no corruption.
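(A hedged sketch of those follow-up steps - which member is left out of the
forced assembly, and the ext4 filesystem type, are assumptions here:

  mdadm /dev/md/data-0 --add /dev/sdb3   # re-add the member that was left out
  cat /proc/mdstat                       # watch the rebuild progress
  fsck.ext4 -n /dev/md/data-0            # read-only filesystem check, run before mounting
)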
Cheers,
Robin
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]
* Re: I was dumb, I need help.
2016-05-02 12:41 ` Robin Hill
@ 2016-05-05 10:00 ` Patrice
0 siblings, 0 replies; 6+ messages in thread
From: Patrice @ 2016-05-05 10:00 UTC (permalink / raw)
To: Robin Hill; +Cc: linux-raid
Hi Robin,
luckily someone managed to repair the RAID.
Now everything works fine! :-)
I think he did it the way you guessed.
Here is how he did it:
Because the disks of the NAS data volume were not in sync, the array
could not be assembled, so I used a command to force it to be started.
$ start_raids
mdadm: /dev/md/0 has been started with 4 drives.
mdadm: /dev/md/1 has been started with 4 drives.
mdadm: NOT forcing event count in /dev/sda3(0) from 266 up to 273
mdadm: You can use --really-force to do that (DANGEROUS)
mdadm: failed to RUN_ARRAY /dev/md/data-0: Input/output error
mdadm: Not enough devices to start the array.
$ cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid6 sda2[0] sdc2[3] sdb2[2] sdd2[1]
1046528 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid1 sda1[4] sdd1[3] sdc1[2] sdb1[5]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
$ mdadm -S /dev/md127
mdadm: error opening /dev/md127: No such file or directory
$ mdadm -A /dev/md127 /dev/sd[a-d]3 --really-force
mdadm: forcing event count in /dev/sda3(0) from 266 upto 273
mdadm: forcing event count in /dev/sdb3(1) from 266 upto 273
mdadm: /dev/md127 has been started with 4 drives.
Then I could mount the data volume and access the shares.
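(presumably with something like the following; the mount point matches the df
output below:

  mount /dev/md127 /data
)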
root@4FH15855000E3:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 3.7G 1.1G 2.5G 30% /
tmpfs 10M 0 10M 0% /dev
/dev/md127 11T 882G 11T 8% /data
Thank you for your time and your help!
Best regards,
Patrice
On 02.05.2016 14:41, Robin Hill wrote:
> [full quote of Robin's message of 2016-05-02 12:41 trimmed - see above]
end of thread, other threads:[~2016-05-05 10:00 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2016-05-01 10:38 I was dumb, I need help Patrice
2016-05-01 11:30 ` Robin Hill
[not found] ` <5726128E.9090309@pboenig.de>
2016-05-02 12:41 ` Robin Hill
2016-05-05 10:00 ` Patrice
-- strict thread matches above, loose matches on Subject: below --
2016-05-01 9:54 Patrice
2016-05-01 9:39 Patrice