* Resync of the degraded RAID10 array
From: Tomasz Majchrzak @ 2017-05-10 14:00 UTC
  To: linux-raid

Hi all,

I wonder what the expected resync behaviour is for a degraded RAID10 array.

cat /proc/mdstat
Personalities : [raid10] 
md127 : active raid10 nvme3n1[3] nvme2n1[2] nvme1n1[1] nvme0n1[0]
      2097152 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
	  [==>..................]  resync = 11.0% (232704/2097152) finish=0.1min speed=232704K/sec

mdadm -If nvme3n1
mdadm: set nvme3n1 faulty in md127
mdadm: hot removed nvme3n1 from md127

cat /proc/mdstat
Personalities : [raid10] 
md127 : active (auto-read-only) raid10 nvme2n1[2] nvme1n1[1] nvme0n1[0]
      2097152 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
	  resync=PENDING

cat /sys/block/md127/md/resync_start
465408
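For completeness, the neighbouring sysfs files describe the paused state as
well (a minimal sketch, assuming the same md127 array as above):

cat /sys/block/md127/md/sync_action    # requested/ongoing sync activity
cat /sys/block/md127/md/array_state    # 'read-auto' corresponds to 'active (auto-read-only)' in mdstat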

At the moment the resync simply stops. When a new disk is added to the array, the
recovery starts and completes; however, no resync of the first two disks takes
place, and the array is reported as clean when it is really out of sync.
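One way to demonstrate the inconsistency, assuming the array has been made
writable again, is md's built-in scrubbing (a sketch, same array name):

echo check > /sys/block/md127/md/sync_action   # read-only scrub that counts inconsistent stripes
cat /proc/mdstat                               # wait for the check to finish
cat /sys/block/md127/md/mismatch_cnt           # non-zero means the copies really differ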

My kernel version is 4.11.

What is the expected behaviour? Should the resync continue on the 3-disk RAID10, or
should it be restarted once the recovery completes?
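Until that is settled, a manual workaround might be to force a repair pass once
the recovery completes (a sketch, assuming the same array name as above):

echo repair > /sys/block/md127/md/sync_action   # rewrite inconsistent copies across the mirrors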

Regards,

Tomek


* Re: Resync of the degraded RAID10 array
From: Shaohua Li @ 2017-05-10 22:27 UTC
  To: Tomasz Majchrzak; +Cc: linux-raid

On Wed, May 10, 2017 at 04:00:44PM +0200, Tomasz Majchrzak wrote:
> Hi all,
> 
> I wonder what the expected resync behaviour is for a degraded RAID10 array.
> 
> cat /proc/mdstat
> Personalities : [raid10] 
> md127 : active raid10 nvme3n1[3] nvme2n1[2] nvme1n1[1] nvme0n1[0]
>       2097152 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
> 	  [==>..................]  resync = 11.0% (232704/2097152) finish=0.1min speed=232704K/sec
> 
> mdadm -If nvme3n1
> mdadm: set nvme3n1 faulty in md127
> mdadm: hot removed nvme3n1 from md127
> 
> cat /proc/mdstat
> Personalities : [raid10] 
> md127 : active (auto-read-only) raid10 nvme2n1[2] nvme1n1[1] nvme0n1[0]
>       2097152 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
> 	  resync=PENDING
> 
> cat /sys/block/md127/md/resync_start
> 465408
> 
> At the moment the resync simply stops. When a new disk is added to the array, the

Probably worth checking why the resync is stopped. The resync can still continue
in degraded mode; after the resync completes, the recovery will start.
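For instance, 'active (auto-read-only)' with resync=PENDING in the mdstat above
suggests md is deferring the resync until the array goes read-write, so
something like this might be enough to resume it (a sketch, same array):

mdadm --readwrite /dev/md127   # clear auto-read-only; the pending resync should resume
cat /proc/mdstat               # resync should continue from the recorded checkpoint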

Thanks,
Shaohua

> recovery starts and completes; however, no resync of the first two disks takes
> place, and the array is reported as clean when it is really out of sync.
> 
> My kernel version is 4.11.
> 
> What is the expected behaviour? Should the resync continue on the 3-disk RAID10, or
> should it be restarted once the recovery completes?
> 
> Regards,
> 
> Tomek

