* Mixing mdadm versions

From: hansbkk @ 2011-02-17 10:21 UTC (permalink / raw)
To: Linux-RAID

I've created and manage sets of arrays with mdadm v3.1.4 - I've been using
System Rescue CD and Grml for my sysadmin tasks, as they are based on fairly
up-to-date Gentoo and Debian and have a lot of convenient tools not available
on the production OS, a "stable" (read: old packages) flavor of RHEL, which,
it turns out, is running mdadm v2.6.4.

I spec'd v1.2 metadata for the big raid6 storage arrays, but kept to 0.90 for
the smaller raid1s, as some of those are my boot devices.

As per a previous thread, I've noticed that on the production OS the output
of mdadm -E on a member returns a long string of "failed, failed", while the
more modern mdadm reports everything's OK.

- Also mixed in are some "fled"s - what's up with that?

Unfortunately the server is designed to run as a packaged appliance and uses
the rPath/Conary package manager, so I'm hesitant to fiddle around upgrading
some bits for fear that other bits will break - the sysadmin tools are run
from a web interface to a bunch of PHP scripts.

So, here are my questions:

As long as the more recent versions of mdadm report that everything's OK, can
I ignore the mishmash output of the older mdadm -E report?

And am I correct in thinking that from now on I should create everything with
the older native packages that are actually going to serve the arrays in
production?
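A minimal sketch for seeing the two views side by side - run it once from the
rescue environment and once on the production box; the member device names
below are placeholders for your actual partitions:

    # Which mdadm binary is answering, and how does it read each
    # member's superblock?
    mdadm --version
    for d in /dev/sd[b-e]1; do
        echo "== $d =="
        mdadm --examine "$d" | grep -E 'Version|State|Role'
    done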
* Re: Mixing mdadm versions

From: Phil Turmel @ 2011-02-17 13:25 UTC (permalink / raw)
To: hansbkk; +Cc: Linux-RAID

On 02/17/2011 05:21 AM, hansbkk@gmail.com wrote:
> I've created and manage sets of arrays with mdadm v3.1.4 - I've been
> using System Rescue CD and Grml for my sysadmin tasks, as they are
> based on fairly up-to-date Gentoo and Debian and have a lot of
> convenient tools not available on the production OS, a "stable" (read:
> old packages) flavor of RHEL, which, it turns out, is running mdadm
> v2.6.4. I spec'd v1.2 metadata for the big raid6 storage arrays, but
> kept to 0.90 for the smaller raid1s, as some of those are my boot
> devices.

The default data offset for v1.1 and v1.2 metadata changed in mdadm v3.1.2.
If you ever need to use the running system to "mdadm --create --assume-clean"
in a recovery effort, the data segments will *NOT* line up if the original
array was created with a current version of mdadm.

(git commit a380e2751efea7df "super1: encourage data alignment on 1Meg
boundary")

> As per a previous thread, I've noticed that on the production OS the
> output of mdadm -E on a member returns a long string of "failed,
> failed", while the more modern mdadm reports everything's OK.
>
> - Also mixed in are some "fled"s - what's up with that?
>
> Unfortunately the server is designed to run as a packaged appliance
> and uses the rPath/Conary package manager, so I'm hesitant to fiddle
> around upgrading some bits for fear that other bits will break - the
> sysadmin tools are run from a web interface to a bunch of PHP scripts.
>
> So, here are my questions:
>
> As long as the more recent versions of mdadm report that everything's
> OK, can I ignore the mishmash output of the older mdadm -E report?

Don't know.

> And am I correct in thinking that from now on I should create
> everything with the older native packages that are actually going to
> serve the arrays in production?

If there's a more modern Red Hat mdadm package that you can include in your
appliance, that would be my first choice - after testing with the web tools,
though.

Otherwise, I would say "Yes", for the above reason. However, the reverse
problem can also occur: you won't be able to use a modern mdadm to do a
"--create --assume-clean" on an offline system. That's what happened to Simon
in another thread. Avoiding that might be worth the effort of qualifying a
newer version of mdadm.

Phil
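To guard against the offset mismatch described above, it can help to record
what the healthy superblocks say before any recovery is ever needed - a
minimal sketch, with placeholder device and array names:

    # Capture the full --examine output of every member while the array
    # is healthy; the "Data Offset" line (v1.x metadata) is what any
    # future "--create --assume-clean" would have to reproduce exactly.
    for d in /dev/sd[b-e]1; do
        echo "== $d =="
        mdadm --examine "$d"
    done > /root/md-examine-$(date +%Y%m%d).txt

    # Keep the assembled-array view alongside it for reference.
    mdadm --detail /dev/md0 >> /root/md-examine-$(date +%Y%m%d).txt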
* Re: Mixing mdadm versions

From: hansbkk @ 2011-02-17 14:16 UTC (permalink / raw)
To: Phil Turmel; +Cc: Linux-RAID

OK, thanks for those details.

I plan to use the running system for day-to-day serving of the data, and to
use the more modern versions (which originally created the arrays) for any
recovery/maintenance.

I believe the running system will be getting upgraded (RHEL5?) in the next
few months, so unless I have reason to think there's actually something
wrong, I think I'll leave it alone. I really don't feel like learning another
package management system at the moment - the Linux learning curve has been
making my brain ache lately 8-)

On Thu, Feb 17, 2011 at 8:25 PM, Phil Turmel <philip@turmel.org> wrote:
> The default data offset for v1.1 and v1.2 metadata changed in mdadm
> v3.1.2. If you ever need to use the running system to "mdadm --create
> --assume-clean" in a recovery effort, the data segments will *NOT*
> line up if the original array was created with a current version of
> mdadm.
> [...]
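If the rescue environment is going to do the maintenance, it also helps to
carry the array definitions with it - a minimal sketch (the config path is an
assumption; some distros use /etc/mdadm/mdadm.conf, and the saved-copy path
below is a placeholder):

    # Record the arrays by UUID so they can be assembled without
    # guessing device names; copy this file onto the rescue media too.
    mdadm --detail --scan >> /etc/mdadm.conf

    # From the rescue environment, assemble using the saved copy.
    mdadm --assemble --scan --config=/path/to/saved/mdadm.conf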