linux-raid.vger.kernel.org archive mirror
From: Neil Brown <neilb@suse.de>
To: Zoltan Szecsei <zoltans@geograph.co.za>
Cc: linux-raid@vger.kernel.org
Subject: Re: Confusion with setting up new RAID6 with mdadm
Date: Tue, 16 Nov 2010 06:53:25 +1100	[thread overview]
Message-ID: <20101116065325.6dd5e8cf@notabene.brown> (raw)
In-Reply-To: <4CE1758C.5080008@geograph.co.za>

On Mon, 15 Nov 2010 20:01:48 +0200
Zoltan Szecsei <zoltans@geograph.co.za> wrote:

> Hi,
> One last quick question:
> 
> Neil Brown <neilb@suse.de> wrote:
> > Depending on which version of mdadm you are using, the default chunk size
> > will be 64K or 512K.  I would recommend using 512K even if you have an older
> > mdadm.  64K appears to be too small for modern hardware, particularly if you
> > are storing large files.
> >
> > For raid6 with the current implementation it is safe to use "--assume-clean"
> > to avoid the long recovery time.  It is certainly safe to use that if you
> > want to build a test array, do some performance measurement, and then scrap
> > it and try again.  If some time later you want to be sure that the array is
> > entirely in sync you can
> >    echo repair>  /sys/block/md0/md/sync_action
> > and wait a while.
> >    
> ****************************************************
> I have compiled the following mdadm on my Ubuntu 64 bit 10.04 Desktop 
> system:
> root@gs0:/home/geograph# uname -a
> Linux gs0 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 
> x86_64 GNU/Linux
> root@gs0:/home/geograph# mdadm -V
> mdadm - v3.1.4 - 31st August 2010
> root@gs0:/home/geograph#
> 
> ****************************************************
> I have deleted the partitions on all 8 drives, and done a mdadm -Ss
> 
> root@gs0:/home/geograph# fdisk -lu
> 
> Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
> 
> Disk /dev/sda doesn't contain a valid partition table
> 
> Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
> 
> ******************************************************
> Based on the above "assume-clean" comment, plus all the help you guys 
> have offered, I have just run:
> mdadm --create /dev/md0 --metadata=1.2 --auto=md --assume-clean 
> --bitmap=internal --bitmap-chunk=131072 --chunk=512 --level=6 
> --raid-devices=8 /dev/sd[abcdefgh]
> 
> It took a nanosecond to complete!
> 
> The man-pages for assume-clean say that "the array pre-existed". Surely 
> as I have erased the HDs, and now have no partitions on them, this is 
> not true?
> Do I need to re-run the above mdadm command, or is it safe to proceed 
> with LVM then mkfs ext4?

It is safe to proceed.

The situation is that the two parity blocks are probably not correct on most
(possibly even all) stripes.  But there is no live data on them to protect
yet, so it doesn't really matter.

With the current implementation of RAID6, every time you write, the correct
parity blocks are computed and written.  So any live data that is written
will be accompanied by correct parity blocks to protect it.

This does *not* apply to RAID5, as it sometimes uses the old parity block to
compute the new parity block.  If the old parity was wrong, the new will be
wrong too.

It is conceivable that one day we might change the raid6 code to perform
similar updates if it ever turns out to be faster to do it that way, but it
seems unlikely at the moment.
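The distinction above can be sketched as a toy model (plain XOR standing in
for the P parity; this is an illustration of the update logic, not mdadm
code):

```python
# Toy model of why an initially wrong parity block is harmless under
# RAID6-style full-stripe writes but persists under RAID5-style
# read-modify-write updates.  A single int stands in for each block.

def full_stripe_parity(data_blocks):
    """RAID6 path discussed here: recompute parity from all the data."""
    p = 0
    for d in data_blocks:
        p ^= d
    return p

def rmw_parity(old_parity, old_data, new_data):
    """RAID5 read-modify-write: new P = old P xor old D xor new D."""
    return old_parity ^ old_data ^ new_data

data = [0x11, 0x22, 0x33]
stale_parity = 0xFF          # bogus parity left by --assume-clean

# Full-stripe write: parity is recomputed from the data, so the
# stale value is simply overwritten and never consulted.
data[0] = 0x44
assert full_stripe_parity(data) == 0x44 ^ 0x22 ^ 0x33

# Read-modify-write: the stale parity is folded into the new one,
# so the error propagates instead of being corrected.
updated = rmw_parity(stale_parity, 0x11, 0x44)
assert updated != full_stripe_parity(data)
```

(The real P parity is a byte-wise XOR across the stripe and Q is a
Reed-Solomon syndrome over GF(2^8), but the update-path logic is the same.)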

NeilBrown


> 
> Thanks for all,
> Zoltan
> 
> ******************************************************
> root@gs0:/home/geograph# mdadm -E /dev/md0
> mdadm: No md superblock detected on /dev/md0.
> 
> 
> 
> root@gs0:/home/geograph# ls -la /dev/md*
> brw-rw---- 1 root disk 9, 0 2010-11-15 19:53 /dev/md0
> /dev/md:
> total 0
> drwxr-xr-x  2 root root   60 2010-11-15 19:53 .
> drwxr-xr-x 19 root root 4260 2010-11-15 19:53 ..
> lrwxrwxrwx  1 root root    6 2010-11-15 19:53 0 -> ../md0
> 
> 
> root@gs0:/home/geograph# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
> [raid4] [raid10]
> md0 : active raid6 sdc[2] sdf[5] sdh[7] sdd[3] sdb[1] sdg[6] sda[0] sde[4]
>        11721077760 blocks super 1.2 level 6, 512k chunk, algorithm 2 
> [8/8] [UUUUUUUU]
>        bitmap: 0/8 pages [0KB], 131072KB chunk
> 
> unused devices: <none>
> 
> 
> *******************************************************


Thread overview: 19+ messages
2010-11-14 15:36 Confusion with setting up new RAID6 with mdadm Zoltan Szecsei
2010-11-14 16:48 ` Mikael Abrahamsson
2010-11-15 12:27   ` Zoltan Szecsei
2010-11-15 12:47     ` Michal Soltys
2010-11-15 13:23       ` Zoltan Szecsei
2010-11-14 19:50 ` Luca Berra
2010-11-15  6:52   ` Zoltan Szecsei
2010-11-15  7:41     ` Luca Berra
2010-11-15 11:06       ` Zoltan Szecsei
2011-07-22  1:08   ` Tanguy Herrmann
2011-07-22  5:17     ` Mikael Abrahamsson
2010-11-14 22:13 ` Neil Brown
2010-11-15  5:30   ` Roman Mamedov
2010-11-15  6:58   ` Zoltan Szecsei
2010-11-15  7:43     ` Mikael Abrahamsson
2010-11-15  9:18       ` Neil Brown
2010-11-15 18:01   ` Zoltan Szecsei
2010-11-15 19:53     ` Neil Brown [this message]
2010-11-16  6:48       ` Zoltan Szecsei
