From: "Miroslav Šulc" <miroslav.sulc@fordfrog.com>
To: linux-raid@vger.kernel.org
Subject: Re: problem with synology external raid
Date: Fri, 23 Aug 2024 18:27:39 +0200
Message-ID: <d6e87810cbfe40f3be74dfa6b0acb48e@fordfrog.com>
In-Reply-To: <d963fc43f8a7afee7f77d8ece450105f@fordfrog.com>

currently, i can see these two options for recovering the data:

1) send disk 3 (the broken one) for repair, copy the data off it, and 
restore the array using disks 2-5
    this brings more expenses for the client, but the chances of 
recovering the data are probably pretty high if the data from disk 3 
is recovered. i might need to use --force on the assembly, or even 
re-create the array with --assume-clean, because the event count on 
disk 3 is now a little bit higher: i re-assembled the array from disks 
2-5 after i made the backups of the other disks (see the sketch after 
this list)
2) investigate deeper whether i can or cannot re-create the array using 
disks 1, 2, 4 and 5
    i am confused about this option because of the (imo) interrupted 
rebuild on disk 1 (Recovery Offset : 4910216 sectors). on the other 
hand, i was able to assemble the array and read the lvm logical volume; 
"just" the btrfs filesystem was reported to be so seriously corrupted 
that even btrfs recovery recovered nothing. i haven't tried to repair 
the filesystem yet, though.
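
for reference, here is roughly what i have in mind for both options (a 
minimal sketch only, run against the copied images attached as loop 
devices; the loop device names and the md number are placeholders, and 
the raid parameters are taken from the --examine output below):

    # option 1: forced assembly from the images of disks 2-5 once disk 3
    # is recovered; --force tells mdadm to accept the small event-count
    # mismatch and --run starts the array even though it is degraded
    mdadm --assemble --force --run /dev/md4 \
        /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5

    # option 2 (last resort): re-create the array over the images of
    # disks 1, 2, 4 and 5 with disk 3 marked missing; --assume-clean
    # skips the initial resync that would otherwise overwrite data.
    # the device order must follow the device roles from --examine
    # (note sdce is role 3 and sdcd is role 4, not the other way
    # around), and the data offset of 2048 sectors is 1M
    mdadm --create /dev/md4 --level=5 --raid-devices=5 \
        --metadata=1.2 --chunk=64K --layout=left-symmetric \
        --data-offset=1M --assume-clean \
        /dev/loop1 /dev/loop2 missing /dev/loop5 /dev/loop4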

any hints?

On 2024-08-22 17:02, Miroslav Šulc wrote:
> hello,
> 
> a broken synology raid5 ended up on my desk. the raid had been broken 
> for quite some time, but i got it from the client just a few days ago.
> 
> the raid consisted of 5 disks (no spares, all used):
> disk 1 (sdca): according to my understanding, it was removed from the 
> raid, then re-added, but the synchronization was interrupted, so it 
> cannot be used to restore the raid
> disk 2 (sdcb): is ok and up to date
> disk 3 (sdcc): seems to be up to date and is still spinning, but it 
> has many bad sectors (a quick smart check is sketched below)
> disk 4 (sdcd): is ok and up to date
> disk 5 (sdce): is ok and up to date
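>
> btw, bad sectors like those on disk 3 typically show up in the smart
> attributes; a minimal check (assuming smartmontools is installed and
> the disk is still visible as /dev/sdcc) would be:
>
>     smartctl -a /dev/sdcc | grep -i -E 'reallocated|pending|uncorrect'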
> 
> the raid ran in degraded mode for over two months, with the client 
> trying to make a copy of the data from it.
> 
> i made copies of the disk images from disks 1, 2, 4, and 5, at the 
> state shown below. i haven't attempted to make a copy of disk 3 yet. 
> since then i re-assembled the raid from disks 2-5, so the number of 
> events on disk 3 is now a bit higher than on the copies of the disk 
> images.
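>
> when i get to imaging disk 3, the plan is the usual two-pass gnu
> ddrescue run (a sketch only; the device name and target paths are
> placeholders, and the map file makes the passes resumable):
>
>     # pass 1: copy everything that reads easily, skip scraping the
>     # bad areas for now (-n)
>     ddrescue -n /dev/sdcc /mnt/target/disk3.img /mnt/target/disk3.map
>     # pass 2: retry the bad areas with direct i/o (-d) and up to
>     # three retries (-r3), resuming from the same map file
>     ddrescue -d -r3 /dev/sdcc /mnt/target/disk3.img /mnt/target/disk3.map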
> 
> according to my understanding, as disk 1 never finished the sync, i 
> cannot use it to recover the data, so the only way to recover the data 
> is to assemble the raid in degraded mode using disks 2-5, if i ever 
> succeed in making a copy of disk 3. i'd just like to verify that my 
> understanding is correct and that there is no other way to attempt the 
> recovery. of course any hints are welcome.
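>
> (the event counts and device roles below can be compared at a glance
> with something like
>
>     mdadm --examine /dev/sdc[a-e]5 | grep -E 'Events|Device Role|Update Time'
>
> assuming the member partitions are still visible under those names.)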
> 
> here is the mdadm --examine info from the raid partitions:
> 
> /dev/sdca5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x2
>      Array UUID : f697911c:bc85b162:13eaba4e:d1152a4f
>            Name : DS_KLIENT:4
>   Creation Time : Tue Mar 31 11:40:19 2020
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 15618390912 (7447.43 GiB 7996.62 GB)
>      Array Size : 31236781824 (29789.72 GiB 31986.46 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
> Recovery Offset : 4910216 sectors
>    Unused Space : before=1968 sectors, after=0 sectors
>           State : clean
>     Device UUID : 681c6c33:49df0163:bb4271d4:26c0c76d
> 
>     Update Time : Tue Jun  4 18:35:54 2024
>        Checksum : cf45a6c1 - correct
>          Events : 60223
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 0
>    Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdcb5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : f697911c:bc85b162:13eaba4e:d1152a4f
>            Name : DS_KLIENT:4
>   Creation Time : Tue Mar 31 11:40:19 2020
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 15618390912 (7447.43 GiB 7996.62 GB)
>      Array Size : 31236781824 (29789.72 GiB 31986.46 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=1968 sectors, after=0 sectors
>           State : clean
>     Device UUID : 0f23d7cd:b93301a9:5289553e:286ab6f0
> 
>     Update Time : Wed Aug 14 15:09:24 2024
>        Checksum : 9c93703e - correct
>          Events : 60286
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 1
>    Array State : .AAAA ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdcc5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : f697911c:bc85b162:13eaba4e:d1152a4f
>            Name : DS_KLIENT:4
>   Creation Time : Tue Mar 31 11:40:19 2020
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 15618390912 (7447.43 GiB 7996.62 GB)
>      Array Size : 31236781824 (29789.72 GiB 31986.46 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=1968 sectors, after=0 sectors
>           State : clean
>     Device UUID : 1d1c04b4:24dabd8d:235afb7d:1494b8eb
> 
>     Update Time : Wed Aug 14 12:42:26 2024
>        Checksum : a224ec08 - correct
>          Events : 60283
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 2
>    Array State : .AAAA ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdcd5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : f697911c:bc85b162:13eaba4e:d1152a4f
>            Name : DS_KLIENT:4
>   Creation Time : Tue Mar 31 11:40:19 2020
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 15618390912 (7447.43 GiB 7996.62 GB)
>      Array Size : 31236781824 (29789.72 GiB 31986.46 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=1968 sectors, after=0 sectors
>           State : clean
>     Device UUID : 76698d3f:e9c5a397:05ef7553:9fd0af16
> 
>     Update Time : Wed Aug 14 15:09:24 2024
>        Checksum : 38061500 - correct
>          Events : 60286
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 4
>    Array State : .AAAA ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdce5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : f697911c:bc85b162:13eaba4e:d1152a4f
>            Name : DS_KLIENT:4
>   Creation Time : Tue Mar 31 11:40:19 2020
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 15618390912 (7447.43 GiB 7996.62 GB)
>      Array Size : 31236781824 (29789.72 GiB 31986.46 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=1968 sectors, after=0 sectors
>           State : clean
>     Device UUID : 9c7077f8:3120195a:1af11955:6bcebd99
> 
>     Update Time : Wed Aug 14 15:09:24 2024
>        Checksum : 38177651 - correct
>          Events : 60286
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 3
>    Array State : .AAAA ('A' == active, '.' == missing, 'R' == replacing)
> 
> 
> thank you for your help.
> 
> miroslav
