* SLES 9 SP3 and mdadm 2.6.1 (via rpm)
@ 2007-06-12 7:24 Thorsten Wolf
2007-06-12 7:34 ` Neil Brown
0 siblings, 1 reply; 11+ messages in thread
From: Thorsten Wolf @ 2007-06-12 7:24 UTC (permalink / raw)
To: linux-raid
Hello everyone.
I've got a SLES9 SP3 running and I've been quite happy with it so far.
Recently, I created a RAID-5 spanning 4 disks on our company server. It runs quite nicely and we're happy with that too. I created that RAID using the SLES mdadm package (1.4, I believe).
After discovering that there is a much newer mdadm out there (2.6.1), I decided to upgrade. That went just fine; the RAID is still running at 120 MB/sec.
Adding a disk to the RAID went fine as well... BUT:
The added disk /dev/sda1 shows up in /proc/mdstat, but does not have the "spare (s)" flag.
Plus, the --grow doesn't work:
I get the "mdadm: /dev/md0: Cannot get array details from sysfs" error, which has been discussed here before. Could this be caused by the 2.6.5-7.2xx kernel? Any ideas?
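For reference, the steps were roughly the following (the --grow invocation is the standard form from the mdadm documentation, with 5 assumed as the new total device count):
  mdadm /dev/md0 --add /dev/sda1
  mdadm --grow /dev/md0 --raid-devices=5
The --add seemed to succeed; it is the --grow that produces the sysfs error above.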
regards,
Thorsten
--
Contact me on ICQ: 7656468
skype://sysfried
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-12 7:24 SLES 9 SP3 and mdadm 2.6.1 (via rpm) Thorsten Wolf
@ 2007-06-12 7:34 ` Neil Brown
2007-06-12 16:56 ` Bill Davidsen
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Neil Brown @ 2007-06-12 7:34 UTC (permalink / raw)
To: Thorsten Wolf; +Cc: linux-raid
On Tuesday June 12, T_Wolf@gmx.net wrote:
> Hello everyone.
>
> I've got a SLES9 SP3 running and I've been quite happy with it so far.
>
> Recently, I've created a 4 disk spanning RAID-5 on our company
> server. Runs quite nice and we're happy with that too. I created
> that RAID using the SLES mdadm (1.4 I believe) package.
>
> After discovering that there is a much newer mdadm out here (2.6.1),
> I decided to upgrade. It went just fine. Raid still running at 120
> MB/sec.
>
> After adding a disk to the raid, which went fine as well...... BUT:
>
> The added disk /dev/sda1 shows up in /proc/mdstat, but does not have
> the "spare (s)" flag.
>
> Plus... the --grow doesn't work...
>
> I get the: mdadm: /dev/md0: Cannot get array details from sysfs
> error which has been discussed before. Can it be that this is caused
> by the 2.6.5-7.2xx Kernel? Any ideas?
Yes. All of your issues are caused by using a 2.6.5 based kernel.
However even upgrading to SLES10 would not get you raid5-grow. That
came a little later. You would need to compile a mainline kernel or
wait for SLES11.
NeilBrown
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-12 7:34 ` Neil Brown
@ 2007-06-12 16:56 ` Bill Davidsen
2007-06-15 8:16 ` need help in recovering a RAID Thorsten Wolf
2007-06-13 11:45 ` SLES 9 SP3 and mdadm 2.6.1 (via rpm) Thorsten Wolf
2007-06-25 19:17 ` Thorsten Wolf
2 siblings, 1 reply; 11+ messages in thread
From: Bill Davidsen @ 2007-06-12 16:56 UTC (permalink / raw)
To: Neil Brown; +Cc: Thorsten Wolf, linux-raid
Neil Brown wrote:
> Yes. All of your issues are caused by using a 2.6.5 based kernel.
> However even upgrading to SLES10 would not get you raid5-grow. That
> came a little later. You would need to compile a mainline kernel or
> wait for SLES11.
I have to think that if features require a later kernel version, a warning message would be appropriate. I'm always leery about trying a new mdadm version with a vendor kernel unless it's a minor bugfix release.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-12 7:34 ` Neil Brown
2007-06-12 16:56 ` Bill Davidsen
@ 2007-06-13 11:45 ` Thorsten Wolf
2007-06-13 11:50 ` Justin Piszcz
2007-06-25 19:17 ` Thorsten Wolf
2 siblings, 1 reply; 11+ messages in thread
From: Thorsten Wolf @ 2007-06-13 11:45 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
Dear Neil,
hi everyone else.
I've been reading a lot about the mdadm tools lately, and I believe that what I want to do is possible, but I haven't found the right documentation yet.
Is it possible to re-assemble and mount the RAID I have from another Linux installation? Do I need more than the output of 'mdadm --detail --scan > /etc/mdadm.conf' saved to another location (so I can re-import it)?
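What I have so far is just something like:
  mdadm --detail --scan > /etc/mdadm.conf
copied off to a safe place. I assume re-importing on the other installation would then be roughly:
  mdadm --assemble --scan --config=/path/to/saved/mdadm.conf
(the path is just a placeholder), but I'm not sure whether that is enough, or whether the superblocks on the disks already carry everything that's needed.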
Regards,
Thorsten
> Yes. All of your issues are caused by using a 2.6.5 based kernel.
> However even upgrading to SLES10 would not get you raid5-grow. That
> came a little later. You would need to compile a mainline kernel or
> wait for SLES11.
>
> NeilBrown
--
Contact me on ICQ: 7656468
skype://sysfried
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-13 11:45 ` SLES 9 SP3 and mdadm 2.6.1 (via rpm) Thorsten Wolf
@ 2007-06-13 11:50 ` Justin Piszcz
2007-06-14 7:53 ` Luca Berra
0 siblings, 1 reply; 11+ messages in thread
From: Justin Piszcz @ 2007-06-13 11:50 UTC (permalink / raw)
To: Thorsten Wolf; +Cc: Neil Brown, linux-raid
You don't even need that, just do this:
1. echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
2. mdadm --examine --scan --config=mdadm.conf
This will search all partitions and give the relevant SW raid information:
ARRAY /dev/md/4 level=raid5 metadata=1 num-devices=5 UUID=7f453e1889:3e4dd96e:8103724c:724f49 name=4
ARRAY /dev/md/3 level=raid5 metadata=1 num-devices=4 UUID=b3d7134904:52828f3d:0f0245a2:e8226d name=3
Then edit these lines so they look like this:
ARRAY /dev/md3 level=raid5 metadata=1 num-devices=4 UUID=b3d7134904:52828f3d:0f0245a2:e8226d name=3
ARRAY /dev/md4 level=raid5 metadata=1 num-devices=5 UUID=7f453e1889:3e4dd96e:8103724c:724f49 name=4
Then run:
1. mdadm -As /dev/md3
2. mdadm -As /dev/md4
p34:/etc/mdadm# mdadm -As /dev/md3
mdadm: /dev/md3 has been started with 4 drives.
p34:/etc/mdadm# mdadm -As /dev/md4
mdadm: /dev/md4 has been started with 5 drives.
Done.
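(Alternatively, once the ARRAY lines are in place, a single
  mdadm --assemble --scan --config=mdadm.conf
should start every array listed in that file in one go.)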
On Wed, 13 Jun 2007, Thorsten Wolf wrote:
> Dear Neil,
> hi everyone else.
>
> I've been reading a lot about the mdadm tools lately, and I believe that it is possible, but I haven't found the right documentation yet.
>
> Is it possible to re-mount the RAID I have from another Linux installation? Do I need more than the 'mdadm --detail --scan > /etc/mdadm.conf' info saved to another location (so I can re-import it)?
>
> Regards,
>
> Thorsten
>
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-13 11:50 ` Justin Piszcz
@ 2007-06-14 7:53 ` Luca Berra
2007-06-14 8:22 ` Justin Piszcz
0 siblings, 1 reply; 11+ messages in thread
From: Luca Berra @ 2007-06-14 7:53 UTC (permalink / raw)
To: linux-raid
On Wed, Jun 13, 2007 at 07:50:06AM -0400, Justin Piszcz wrote:
>You don't even need that, just do this:
>
>1. echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
Do _NOT_ do the above, ever.
Recent mdadm does not need the "DEVICE" line at all;
for older ones, use "DEVICE partitions".
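In other words, a minimal mdadm.conf along these lines (the ARRAY lines come from the --examine --scan output; the UUIDs here are just placeholders):
  DEVICE partitions
  ARRAY /dev/md3 level=raid5 num-devices=4 UUID=<uuid-of-md3>
  ARRAY /dev/md4 level=raid5 num-devices=5 UUID=<uuid-of-md4>
With "DEVICE partitions", mdadm considers every device listed in /proc/partitions, instead of a glob that may or may not match the disks on a given system.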
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-14 7:53 ` Luca Berra
@ 2007-06-14 8:22 ` Justin Piszcz
2007-06-16 16:44 ` Luca Berra
0 siblings, 1 reply; 11+ messages in thread
From: Justin Piszcz @ 2007-06-14 8:22 UTC (permalink / raw)
To: Luca Berra; +Cc: linux-raid
On Thu, 14 Jun 2007, Luca Berra wrote:
> On Wed, Jun 13, 2007 at 07:50:06AM -0400, Justin Piszcz wrote:
>> You don't even need that, just do this:
>>
>> 1. echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
> do _NOT_ do the above, _never_.
> recent mdadm do not need the "DEVICE" line
> for older one use "DEVICE partitions".
>
Why never do that? It has always worked for me.
Justin.
* need help in recovering a RAID
2007-06-12 16:56 ` Bill Davidsen
@ 2007-06-15 8:16 ` Thorsten Wolf
0 siblings, 0 replies; 11+ messages in thread
From: Thorsten Wolf @ 2007-06-15 8:16 UTC (permalink / raw)
To: linux-raid
Hello again, everyone.
I've been really busy playing and working with the mdadm tools lately, and I've come across a problem which I haven't been able to solve myself so far.
My system looks like this:
IDE:
/dev/hda1 --> / 100 GB
/dev/hda2 --> SWAP 1 GB
/dev/hda3 --> /home 395 GB
S-ATA onboard:
/dev/hde1 --> Disk 0 on /dev/md0
/dev/hdg1 --> Disk 1 on /dev/md0
/dev/hdi1 --> Disk 2 on /dev/md0
/dev/hdk1 --> Disk 3 on /dev/md0
/dev/md0 --> mounted as /raid via an LVM volume group
A loose cable caused hdi and hdk to go offline. The server itself was shut down a minute later.
Now I can't access the RAID anymore, because md has marked those two disks as failed. There was no write access to the array when the power went down on them.
How can I mark them as online again (I'm pretty sure the data is still OK)?
I really want to get the RAID working again, as this is a scenario I'd like to be able to handle in the future.
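From what I've read so far (nobody has confirmed this for my case, so treat it as my own assumption), a forced assembly is supposed to handle exactly this situation, i.e. something like:
  mdadm --stop /dev/md0
  mdadm --assemble --force /dev/md0 /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1
but I haven't dared to run that yet. Is that the right approach here?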
Regards,
Thorsten
--
Contact me on ICQ: 7656468
skype://sysfried
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-14 8:22 ` Justin Piszcz
@ 2007-06-16 16:44 ` Luca Berra
0 siblings, 0 replies; 11+ messages in thread
From: Luca Berra @ 2007-06-16 16:44 UTC (permalink / raw)
To: linux-raid
On Thu, Jun 14, 2007 at 04:22:03AM -0400, Justin Piszcz wrote:
>Why never do that? It has always worked for me.
>
the keyword here is "for me"
but /dev/hd*[0-9] /dev/sd*[0-9] does not mean "all disks" for everybody.
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-12 7:34 ` Neil Brown
2007-06-12 16:56 ` Bill Davidsen
2007-06-13 11:45 ` SLES 9 SP3 and mdadm 2.6.1 (via rpm) Thorsten Wolf
@ 2007-06-25 19:17 ` Thorsten Wolf
2007-06-25 19:28 ` Justin Piszcz
2 siblings, 1 reply; 11+ messages in thread
From: Thorsten Wolf @ 2007-06-25 19:17 UTC (permalink / raw)
To: linux-raid; +Cc: Neil Brown
Hello again.
I've upgraded my SLES 9 SP3 system to SLES 10 (no SP1). The RAID I had running on these devices:
/dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hde1
doesn't work anymore, because SLES 10 detects them as:
/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
I guess it's going to be simple, but can anyone give me the right syntax to fix this?
Regards,
Thorsten Wolf
--
Contact me on ICQ: 7656468
skype://sysfried
* Re: SLES 9 SP3 and mdadm 2.6.1 (via rpm)
2007-06-25 19:17 ` Thorsten Wolf
@ 2007-06-25 19:28 ` Justin Piszcz
0 siblings, 0 replies; 11+ messages in thread
From: Justin Piszcz @ 2007-06-25 19:28 UTC (permalink / raw)
To: Thorsten Wolf; +Cc: linux-raid, Neil Brown
On Mon, 25 Jun 2007, Thorsten Wolf wrote:
> Hello again.
>
> I've upgraded my SLES 9 SP3 system to SLES 10 (no SP1). The raid I had running on my devices:
> /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hde1
> doesn't work because SLES 10 does detect them as:
> /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
>
>
> I guess it's going to be simple, but can anyone give me the right syntax to fix this?
>
> Regards,
>
> Thorsten Wolf
>
>
mdadm --examine --scan, and use that output in your mdadm.conf?
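That is, roughly:
  mdadm --examine --scan >> /etc/mdadm.conf   (then sanity-check the ARRAY lines)
  mdadm --assemble --scan
The md superblocks identify the members by UUID, not by device name, so the arrays should assemble regardless of whether the kernel now calls the disks hd* or sd*.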
Thread overview: 11 messages
2007-06-12 7:24 SLES 9 SP3 and mdadm 2.6.1 (via rpm) Thorsten Wolf
2007-06-12 7:34 ` Neil Brown
2007-06-12 16:56 ` Bill Davidsen
2007-06-15 8:16 ` need help in recovering a RAID Thorsten Wolf
2007-06-13 11:45 ` SLES 9 SP3 and mdadm 2.6.1 (via rpm) Thorsten Wolf
2007-06-13 11:50 ` Justin Piszcz
2007-06-14 7:53 ` Luca Berra
2007-06-14 8:22 ` Justin Piszcz
2007-06-16 16:44 ` Luca Berra
2007-06-25 19:17 ` Thorsten Wolf
2007-06-25 19:28 ` Justin Piszcz