* ANNOUNCE: mdadm 3.1.4 - A tool for managing Soft RAID under Linux
From: Neil Brown @ 2010-08-31 5:46 UTC
To: linux-raid

I am pleased to announce the availability of
   mdadm version 3.1.4

It is available at the usual places:
   countrycode=xx.
   http://www.${countrycode}kernel.org/pub/linux/utils/raid/mdadm/
and via git at
   git://neil.brown.name/mdadm
   http://neil.brown.name/git?p=mdadm

This is a bugfix/stability release over 3.1.3.
3.1.3 had a couple of embarrassing regressions, and a couple of other
issues surfaced which had easy fixes, so I decided to make a 3.1.4
release after all.

Two fixes related to configs that aren't using udev:
  - Don't remove md devices with 'standard' names on --stop
  - Allow dev_open to work on a read-only /dev
And fixed regressions:
  - Allow --incremental to add spares to an array (sketched briefly
    after this message)
  - Accept --no-degraded as a deprecated option rather than
    throwing an error
  - Return correct success status when --incremental assembling
    a container which does not yet have enough devices
  - Don't link mdadm with pthreads; only mdmon needs it
  - Fix a compiler warning due to bad use of snprintf

This release is believed to be stable and you should feel free to
upgrade to 3.1.4.

It is expected that the next release will be 3.2, with a number of
new features.

NeilBrown
31st August 2010
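A short sketch of the --incremental invocations touched by the two fixes noted above; /dev/sdX1 is a placeholder device name, not one taken from the announcement:

    # adding a device that is a spare for an existing array works again in 3.1.4
    mdadm --incremental /dev/sdX1

    # --no-degraded is accepted as a deprecated option rather than rejected with an error
    mdadm --incremental --no-degraded /dev/sdX1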
* Re: ANNOUNCE: mdadm 3.1.4 - A tool for managing Soft RAID under Linux
From: fibreraid @ 2010-09-17 3:33 UTC
To: Neil Brown; +Cc: linux-raid

Hi Neil,

Thank you for the 3.1.4 mdadm release, but I am seeing three immediate
issues with it. I upgraded to 3.1.4 from 3.1.3 plus the patch to deal
with incremental assembly of spares at system reboot. Now, new md's
are coming back with strange device numbers and coming back online as
"auto-read-only". I have duplicated these issues on multiple systems
as well as on a virtual machine.

My machine configuration is as follows:

  Ubuntu 10.04 Lucid 64-bit, up to date
  8GB RAM
  Dual quad-core CPUs
  12 x Seagate Cheetah 15K.7 hard drives
  Drives connected with an LSI 3Gbps SAS HBA

To reproduce the issues (a sketch of the create command appears after
this message):

1. Create /dev/md0 with RAID level 6, 11 active drives, 1 hot-spare,
   64K chunk size, v1.2 superblock, and run it immediately.
2. After a few minutes of syncing, reboot the system.
3. When Ubuntu comes back up, /proc/mdstat will report:

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid6 sdd1[2] sdl1[10] sdk1[9] sdb1[0] sdj1[8] sdf1[4] sde1[3] sdg1[5] sdc1[1] sdh1[6] sdi1[7] sdm1[11](S)
      943695936 blocks super 1.2 level 6, 64k chunk, algorithm 2 [11/11] [UUUUUUUUUUU]
        resync=PENDING

Please note the three issues:
A) The md is marked as "auto-read-only". This never happened with 3.1.3.
B) The md comes back up as md127 even though it is the only md in the
   system. I've never seen this before with mdadm.
C) The md comes back with "resync=PENDING" instead of automatically
   resyncing. mdadm 3.1.3 would auto-resync.

Please note that issues A and B also occur even if the system is
rebooted AFTER RAID synchronization has fully completed. Running
mdadm -R /dev/md0 produces an error about the device being busy but
does appear to clear the auto-read-only designation. I also tested
this with three md's. Surprisingly, md0 came back as md127, md1 as
md126, and md2 as md125, and of course all auto-read-only.

I am happy to run any tests you like, as these issues are very quick
and easy to reproduce. They seem like serious regressions, or perhaps
some incompatibility of mdadm 3.1.4 with Ubuntu. I await your
guidance. Thank you Neil!

Best,
-Tommy
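A creation command matching the description in step 1 would look roughly like the following; the member device names are inferred from the /proc/mdstat output above and the exact ordering is illustrative:

    # RAID6, 11 active members plus 1 hot-spare, 64K chunk, v1.2 metadata,
    # started immediately with --run (device names follow the mdstat listing)
    mdadm --create /dev/md0 --level=6 --raid-devices=11 --spare-devices=1 \
          --chunk=64 --metadata=1.2 --run /dev/sd[b-l]1 /dev/sdm1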
* Re: ANNOUNCE: mdadm 3.1.4 - A tool for managing Soft RAID under Linux
From: Neil Brown @ 2010-09-21 23:50 UTC
To: fibreraid@gmail.com; +Cc: linux-raid

Hi Tommy,
the issues you are seeing here are almost certainly not related to any
change between 3.1.3 and 3.1.4.

The symptoms you describe suggest that mdadm doesn't recognise these
arrays as belonging to 'this' host. Each array has the hostname of the
owning host encoded in the metadata. If that doesn't match the current
hostname, the array is assumed to be foreign, and mdadm is more
cautious about assembling it or giving it a name that some other local
array might want.

I suspect something is happening in the Ubuntu initramfs to confuse
things.

What does
   mdadm -E /dev/sd?1 | grep Name
show? And what is your hostname?

Maybe just running
   mkinitramfs
will fix the problem.

NeilBrown
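Concretely, the check Neil suggests amounts to comparing the name recorded in the superblocks with the running hostname; if they differ, the arrays are treated as foreign, which is what produces names like md127 and the auto-read-only state described above:

    # hostname (and array name) recorded in each member's v1.2 superblock
    mdadm -E /dev/sd?1 | grep Name
    # hostname of the running system, for comparison
    hostname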
* Re: ANNOUNCE: mdadm 3.1.4 - A tool for managing Soft RAID under Linux
From: fibreraid @ 2010-09-24 14:19 UTC
To: Neil Brown; +Cc: linux-raid

Thank you Neil! That absolutely solved the issue. Please take note,
Ubuntu users, of this excellent recommendation.

Sincerely,
Tommy
* Re: ANNOUNCE: mdadm 3.1.4 - A tool for managing Soft RAID under Linux
From: Jools Wills @ 2010-09-24 14:26 UTC
To: fibreraid@gmail.com; +Cc: Neil Brown, linux-raid

On 24/09/10 15:19, fibreraid@gmail.com wrote:
> Thank you Neil! That absolutely solved the issue. Please take note,
> Ubuntu users, of this excellent recommendation.

Are you running your own Ubuntu packages or someone else's, by the way?
Ubuntu doesn't even have 3.1.3, as far as I know.

I had the same behaviour as you on a new array until I ran an initramfs
update ("update-initramfs -k all -u"), and then all was well, as it
adds the mdadm.conf to the initramfs (as has already been said). I
wasn't 100% sure of the details why, so thanks to Neil for clarifying
this.

Best Regards

Jools Wills

--
IT Consultant
Oxford Inspire - http://www.oxfordinspire.co.uk - be inspired
t: 01235 519446 m: 07966 577498 f: 08715046117
jools@oxfordinspire.co.uk
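On Ubuntu/Debian systems, the fix described above usually comes down to recording the arrays in mdadm.conf and rebuilding the initramfs. The config path below is the Debian-style default, so treat this as a sketch of the idea rather than the exact steps used here:

    # append ARRAY lines (including the name/hostname) for the running arrays,
    # then rebuild every installed initramfs so boot-time assembly picks them up
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -k all -u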
Thread overview: 5+ messages (newest: 2010-09-24 14:26 UTC)
  2010-08-31  5:46 ANNOUNCE: mdadm 3.1.4 - A tool for managing Soft RAID under Linux  Neil Brown
  2010-09-17  3:33 ` fibreraid
  2010-09-21 23:50   ` Neil Brown
  2010-09-24 14:19     ` fibreraid
  2010-09-24 14:26       ` Jools Wills