* Re-shape raid0 acts up
From: Ole Tange @ 2013-07-10 11:56 UTC
To: linux-raid
I tested reshaping a RAID0 of 2 devices onto 4 devices.
It seems the reshape first converts the array to RAID4 and then, once
the data has been rearranged, quickly converts it back to RAID0.
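For reference, the grow itself is issued roughly like this (device and
file names here are placeholders, not the ones I used):

  # add the two new members as spares, then grow the member count
  mdadm /dev/md9 --add /dev/loop2 /dev/loop3
  mdadm --grow /dev/md9 --raid-devices=4 --backup-file=/tmp/md9.bak
  # /proc/mdstat reports level 4 while reshaping, raid0 again at the end
  cat /proc/mdstat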
I have now done the same on a bigger array. The only differences I am aware of are:
* The 2+2 devices are much larger (25 TB each compared to 1 GB each)
* The system crashed during the reshape
So right now the system looks like this:
Personalities : [raid6] [raid5] [raid4]
md3 : active raid4 md1[0] md5[3] md4[4] md2[1]
      109396484096 blocks super 1.2 level 4, 512k chunk, algorithm 0 [5/4] [UUUU_]
which looks like the RAID4 just before the final step.
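One way to confirm that no reshape is actually running at this point
(sysfs paths as in recent kernels; output is illustrative):

  cat /sys/block/md3/md/sync_action        # prints "idle" when nothing is in progress
  mdadm --detail /dev/md3 | grep -iE 'level|state'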
I then tried:
# mdadm --grow /dev/md3 -n 4 -l 0 --backup-file reshape.bak
But that seems to cause the reshape to go through the full 100 TB again:
root@lemaitre:/lemaitre-internal# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md3 : active raid4 dm-0[0] dm-3[3] dm-2[4] dm-1[1]
      109396484096 blocks super 1.2 level 4, 512k chunk, algorithm 0 [5/4] [UUUU_]
      [>....................]  reshape =  0.0% (28100/27349121024) finish=32428.7min speed=14050K/sec
So I cancelled that (finish=32428.7min is roughly 22.5 days) and rolled
back to the situation before; this was possible because I ran this on
overlay files:
Personalities : [raid6] [raid5] [raid4]
md3 : active raid4 md1[0] md5[3] md4[4] md2[1]
      109396484096 blocks super 1.2 level 4, 512k chunk, algorithm 0 [5/4] [UUUU_]
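For anyone unfamiliar with overlay files: this is essentially the
dm-snapshot recipe from the linux-raid wiki (names and the size below
are examples; the dm-0..dm-3 members visible during the retried reshape
above are such overlay devices):

  # all writes land in a sparse copy-on-write file; /dev/md1 itself is never modified
  truncate -s 25T md1.ovl
  loop=$(losetup -f --show md1.ovl)
  dmsetup create md1-ov --table "0 $(blockdev --getsz /dev/md1) snapshot /dev/md1 $loop P 8"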
Can I convert that to RAID0? Can I do that without having to wait the
2-3 weeks a full reshape takes?
/Ole
* Re: Re-shape raid0 acts up
From: Sam Bingner @ 2013-07-10 12:32 UTC
To: Ole Tange, linux-raid@vger.kernel.org
On 7/10/13 1:56 AM, "Ole Tange" <tange@binf.ku.dk> wrote:
>[...]
>
>Can I convert that to RAID0? Can I do that without having to wait the
>2-3 weeks a full reshape takes?
>
All you need to do to directly convert to RAID0 is:
echo 0 > /sys/block/md3/md/level
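The level change takes effect immediately if the kernel accepts it; you
can confirm with something like (illustrative):

  cat /sys/block/md3/md/level                  # should now print: raid0
  mdadm --detail /dev/md3 | grep 'Raid Level'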
Sam