linux-raid.vger.kernel.org archive mirror
From: Benjamin ESTRABAUD <be@mpstor.com>
To: linux-raid@vger.kernel.org
Subject: Re: mdadm issue adding components to an array (avail_size / array_size issue).
Date: Thu, 07 May 2009 16:19:16 +0100	[thread overview]
Message-ID: <4A02FBF4.3070902@mpstor.com> (raw)
In-Reply-To: <4A02F95C.9000106@mpstor.com>

Quick added note:

Using "array.size * 2" rather than array_size computed using 
get_component_size works perfectly fine.

array.size reflects the correct component size of the array being 
operated on, whereas array_size always reflects md/d0's component_size.

This workaround works in my case for the moment, but it might pose a 
problem once large devices come into play.
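
For illustration, the substitution I mean looks roughly like this in
Manage_subdevs (a sketch only, not the exact diff; "array" is the
mdu_array_info_t filled in by the GET_ARRAY_INFO ioctl, and its size
field is a 32-bit count in KB, hence the "* 2" to get 512-byte sectors):

    unsigned long long array_size;

    /* Workaround sketch: take the per-component size from the
     * GET_ARRAY_INFO ioctl result instead of the sysfs lookup.
     * array.size is in KB, so doubling it gives 512-byte sectors. */
    array_size = (unsigned long long)array.size * 2;
    /* instead of:  array_size = get_component_size(fd); */

The caveat is that the 32-bit KB field truncates for components
approaching 2 TiB, which is why the sysfs lookup still needs fixing.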

Ben.

Benjamin ESTRABAUD wrote:
> Hi,
>
> I am experiencing what seems to be a bug in mdadm which prevents me 
> from --add'ing a disk under some specific conditions.
>
> The current setup is as follows:
>
> - 1 RAID 5 on 3 * 26 GB block devices -> /dev/md/d0
> - 1 RAID 5 on 3 * 36 GB block devices -> /dev/md/d1
> - 1 RAID 5 on 3 * 9 GB block devices  -> /dev/md/d2
>
> No config file is being used. RAIDs are created as follows:
>
> mdadm - v2.6.9 - 10th March 2009
>
> ./mdadm --create -vvv --force --run --metadata=1.2 /dev/md/dX 
> --level=5 --size=<sizeofraid> --chunk=64 --name=<name, like: 1356341> 
> -n3 --bitmap=internal --bitmap-chunk=4096 --layout=ls /dev/<blockdev1> 
> /dev/<blockdev2> /dev/<blockdev3>
>
> - Several block devices of different sizes are available for adding to 
> the arrays (1 * 14 GB, 1 * 26 GB, 2 * 32 GB, etc.)
>
> If trying to --add a block device to the /dev/md/d0 RAID array after 
> degrading it, everything works fine as long as the device being added 
> is at least as big as the "component_size" value found in sysfs at 
> /sys/block/md_d0/md/component_size. Therefore, a 32 GB drive can be 
> added to the first array.
>
> However, trying the same procedure on the third RAID, using either a 
> 9 GB or a 14 GB block device, fails, complaining that the device being 
> hot-added is not large enough to join the array. This is strange, 
> since after checking /sys/block/md_d2/md/component_size, that value is 
> much lower than the size obtained for the block device being added.
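>
> If I read Manage.c correctly, the rejection comes from a comparison 
> along these lines (a sketch from my reading of the code, not verbatim):
>
>    /* Sketch of the size check in Manage_subdevs: ldsize is the new
>     * device's size in bytes, so ldsize/512 is in sectors, and it is
>     * compared against array_size, which should be the per-component
>     * size of the array being operated on. */
>    if (tst->ss->avail_size(tst, ldsize/512) < array_size) {
>        fprintf(stderr, Name ": %s not large enough to join array\n",
>            dv->devname);
>        return 1;
>    }
>
> So if array_size is taken from the wrong array (see below), a device 
> that is in fact large enough can still be rejected.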
>
> On the other hand, degrading md/d1 and trying to add a 32 GB block 
> device to this array, composed of 3 * 36 GB block devices, does not 
> produce a complaint that the block device is not large enough to join 
> the array; the device is added to /dev/md/d1, however, as a Failed (F) 
> drive.
>
> In the second example, the hot-add does not work on /dev/md/d1, whose 
> smallest component size is set to 9 GB, as long as the drive being 
> added is not >= /dev/md/d0's component size.
>
> After further checking of the mdadm source, I noticed that 
> "array_size" in Manage_subdevs (Manage.c) is always the same, 
> regardless of which RAID we are trying to operate on.
>
> By examining the "get_component_size" function, I noticed the following:
>
>   if (major(stb.st_rdev) != get_mdp_major())
>        sprintf(fname, "/sys/block/md%d/md/component_size",
>            (int)minor(stb.st_rdev));
>    else
>        sprintf(fname, "/sys/block/md_d%d/md/component_size",
>            (int)minor(stb.st_rdev)>>MdpMinorShift);
>
> Here, "(int)minor(stb.st_rdev) >> MdpMinorShift" is always "0", so 
> the component size file is always the following:
>
> /sys/block/md_d0/md/component_size
>
> This is the case whichever md device is currently being used: md/d1, md/d2, etc.
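>
> For what it's worth, the shift arithmetic can be checked outside of 
> mdadm with a small standalone program along these lines (illustrative 
> only, not mdadm code; MdpMinorShift is 6 in mdadm, i.e. each 
> partitionable array reserves a block of 64 minors):
>
>    #include <stdio.h>
>    #include <sys/types.h>
>    #include <sys/stat.h>
>    #include <sys/sysmacros.h>
>
>    #define MdpMinorShift 6
>
>    int main(int argc, char **argv)
>    {
>        struct stat stb;
>
>        if (argc < 2 || stat(argv[1], &stb) != 0)
>            return 1;
>        /* st_rdev holds the device number of a block special file */
>        printf("%s: minor=%d -> mdp index %d\n", argv[1],
>               (int)minor(stb.st_rdev),
>               (int)minor(stb.st_rdev) >> MdpMinorShift);
>        return 0;
>    }
>
> Run against /dev/md/d1 and /dev/md/d2 I would expect indices 1 and 2, 
> so getting 0 for every array is what looks wrong.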
>
> The "get_component_size" seems to be using an integer, "fd" to find 
> out the size and return it.
> However, fd is always the same value, "3", whatever RAID is being 
> worked on.
>
> This value seems to be generated in mdadm.c, in the main function:
>
> line 944: mdfd = open_mddev(devlist->devname, autof);
>
> This always returns "3" in my case.
>
> I was wondering what exactly this "mdfd" corresponds to, and whether 
> the fact that it never changes is normal. I am also wondering whether 
> the issue lies with this variable or in the 
> get_component_size function.
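>
> As an aside, the value 3 on its own may not be suspicious, since 
> open() returns the lowest unused descriptor and 0-2 are taken by 
> stdin/stdout/stderr. A trivial illustration (not mdadm code):
>
>    #include <fcntl.h>
>    #include <stdio.h>
>
>    int main(void)
>    {
>        /* The first open() in a process normally returns 3 because
>         * descriptors 0, 1 and 2 are already stdin, stdout and stderr. */
>        int fd = open("/dev/null", O_RDONLY);
>        printf("first open() returned %d\n", fd);  /* usually 3 */
>        return 0;
>    }
>
> So the constant "3" may be a red herring; the constant sysfs path 
> above looks more suspicious to me.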
>
> Has anyone experienced a similar issue?
>
> Thank you very much in advance for your comments/advice.
>
> Ben.



Thread overview: 4+ messages
2009-05-07 15:08 mdadm issue adding components to an array (avail_size / array_size issue) Benjamin ESTRABAUD
2009-05-07 15:19 ` Benjamin ESTRABAUD [this message]
2009-05-08  0:49 ` NeilBrown
2009-05-11 11:05   ` Benjamin ESTRABAUD
