* help
@ 2004-02-01 13:13 Rami Addady
0 siblings, 0 replies; 23+ messages in thread
From: Rami Addady @ 2004-02-01 13:13 UTC (permalink / raw)
To: linux-raid
* Help.
@ 2004-04-01 16:56 Jason C. Leach
2004-04-01 17:00 ` Help Måns Rullgård
0 siblings, 1 reply; 23+ messages in thread
From: Jason C. Leach @ 2004-04-01 16:56 UTC (permalink / raw)
To: linux-raid
Help.
* Re: Help.
2004-04-01 16:56 Help Jason C. Leach
@ 2004-04-01 17:00 ` Måns Rullgård
0 siblings, 0 replies; 23+ messages in thread
From: Måns Rullgård @ 2004-04-01 17:00 UTC (permalink / raw)
To: linux-raid
"Jason C. Leach" <jleach@ocis.net> writes:
> Help.
Thanks for the offer. I'd like to know how to set up four ATA disks
for maximum performance, while retaining at least half the storage
space, and giving some fault tolerance.
--
Måns Rullgård
mru@kth.se
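For four disks, that wish list maps onto RAID10: striping for speed, mirroring for redundancy, and half the raw capacity usable. A minimal sketch, assuming one partition per disk (sda1 through sdd1) and an md driver with the raid10 personality:
# mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
On kernels without the raid10 personality, the same effect comes from striping (RAID0) across two RAID1 pairs.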
* help
@ 2004-10-20 5:05 Srinivasa S
2004-10-20 5:50 ` help Guy
2004-10-21 1:47 ` help Jon Lewis
0 siblings, 2 replies; 23+ messages in thread
From: Srinivasa S @ 2004-10-20 5:05 UTC (permalink / raw)
To: linux-raid
I have a RAID 5 setup with 3 disks of 1 GB each, which I'm using for
experimentation. A resync is in progress and is not ending at all:
it's been almost 15 hours since the resync started, and "cat
/proc/mdstat" always shows something like this.
Personalities : [raid5]
md0 : active raid5 sdj1[2] sdi1[1] sdh1[0]
2002688 blocks level 5, 8k chunk, algorithm 2 [3/3] [UUU]
[>....................] resync = 0.0% (0/1001344)
finish=442287.5min speed=0K/sec
Is there anything wrong? Can I possibly stop the resync? The problem
I'm having is that any process that tries to do I/O on the array hangs.
Please help. Thanks.
srinivasa s
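A few knobs worth checking when a resync appears stuck (a sketch, assuming /dev/md0). First, the throttle settings:
# cat /proc/sys/dev/raid/speed_limit_min
# cat /proc/sys/dev/raid/speed_limit_max
On kernels that expose md attributes in sysfs, the resync can be aborted without taking the array down:
# echo idle > /sys/block/md0/md/sync_action
... or the array can be stopped entirely (all I/O to it must cease first):
# mdadm --stop /dev/md0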
* RE: help
2004-10-20 5:05 help Srinivasa S
@ 2004-10-20 5:50 ` Guy
2004-10-21 1:47 ` help Jon Lewis
1 sibling, 0 replies; 23+ messages in thread
From: Guy @ 2004-10-20 5:50 UTC (permalink / raw)
To: 'Srinivasa S', linux-raid
Your ETA is 307 days! Your computer will be obsolete by then! :)
Something is wrong.
The re-sync should take less than 10 minutes.
But I don't know what is wrong.
Guy
* Re: help
2004-10-20 5:05 help Srinivasa S
2004-10-20 5:50 ` help Guy
@ 2004-10-21 1:47 ` Jon Lewis
1 sibling, 0 replies; 23+ messages in thread
From: Jon Lewis @ 2004-10-21 1:47 UTC (permalink / raw)
To: Srinivasa S; +Cc: linux-raid
On Wed, 20 Oct 2004, Srinivasa S wrote:
> I have a RAID 5 setup with 3 disks of 1 GB each, which I'm using for
> experimentation. A resync is in progress and is not ending at all:
> it's been almost 15 hours since the resync started, and "cat
> /proc/mdstat" always shows something like this.
>
> Personalities : [raid5]
> md0 : active raid5 sdj1[2] sdi1[1] sdh1[0]
> 2002688 blocks level 5, 8k chunk, algorithm 2 [3/3] [UUU]
> [>....................] resync = 0.0% (0/1001344)
> finish=442287.5min speed=0K/sec
What kind of disks? My bad Maxtor SATA drive caused similar issues.
----------------------------------------------------------------------
Jon Lewis | I route
Senior Network Engineer | therefore you are
Atlantic Net |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
* Help
@ 2006-02-04 2:21 Oren Ben-Menachem
0 siblings, 0 replies; 23+ messages in thread
From: Oren Ben-Menachem @ 2006-02-04 2:21 UTC (permalink / raw)
To: linux-raid
* help
@ 2006-08-23 19:21 Archie Cotton
0 siblings, 0 replies; 23+ messages in thread
From: Archie Cotton @ 2006-08-23 19:21 UTC (permalink / raw)
To: linux-net
* Re: Help
2009-08-21 18:02 ` Info
@ 2009-08-21 19:20 ` Info
2009-08-21 19:38 ` Help John Robinson
2009-08-22 6:14 ` Help Info
0 siblings, 2 replies; 23+ messages in thread
From: Info @ 2009-08-21 19:20 UTC (permalink / raw)
To: linux-raid
My God, the command is not working. I need to remove sdb1 from md0 so I can change it from a RAID10 to RAID1, and it simply ignores my command:
# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid10 sdb3[1]
1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
bitmap: 94/446 pages [376KB], 2048KB chunk
md1 : active raid10 sdb2[1]
6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
bitmap: 0/25 pages [0KB], 128KB chunk
md0 : active raid10 sdb1[1]
78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
bitmap: 76/151 pages [304KB], 256KB chunk
unused devices: <none>
#
My system is half-converted and is now unbootable. What am I going to do?
* Re: Help
2009-08-21 19:20 ` Help Info
@ 2009-08-21 19:38 ` John Robinson
2009-08-21 20:51 ` Help Info
2009-08-22 6:14 ` Help Info
1 sibling, 1 reply; 23+ messages in thread
From: John Robinson @ 2009-08-21 19:38 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On 21/08/2009 20:20, Info@quantum-sci.net wrote:
> My God, the command is not working. I need to remove sdb1 from md0 so I can change it from a RAID10 to RAID1, and it simply ignores my command:
> # mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md2 : active raid10 sdb3[1]
> 1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 94/446 pages [376KB], 2048KB chunk
>
> md1 : active raid10 sdb2[1]
> 6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 0/25 pages [0KB], 128KB chunk
>
> md0 : active raid10 sdb1[1]
> 78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
> bitmap: 76/151 pages [304KB], 256KB chunk
Well, it won't let you remove the only thing keeping the array active.
Stop the array first with `mdadm --stop /dev/md0`. After that I think
you can just create your new RAID-1 array without doing anything else.
Cheers,
John.
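A minimal sketch of that sequence, with device names taken from the mdstat output above (and assuming the data on the remaining member can be re-created):
# mdadm --stop /dev/md0
# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
The second member can then be added later with mdadm --add.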
* Re: Help
2009-08-21 19:38 ` Help John Robinson
@ 2009-08-21 20:51 ` Info
0 siblings, 0 replies; 23+ messages in thread
From: Info @ 2009-08-21 20:51 UTC (permalink / raw)
To: linux-raid
On Friday 21 August 2009 12:38:00 John Robinson wrote:
> Well, it won't let you remove the only thing keeping the array active.
> Stop the array first with `mdadm --stop /dev/md0`. After that I think
> you can just create your new RAID-1 array without doing anything else.
WHEW, thank you.
* Re: Help
2009-08-21 19:20 ` Help Info
2009-08-21 19:38 ` Help John Robinson
@ 2009-08-22 6:14 ` Info
2009-08-22 9:34 ` Help NeilBrown
1 sibling, 1 reply; 23+ messages in thread
From: Info @ 2009-08-22 6:14 UTC (permalink / raw)
To: linux-raid
I'm not able to boot from my RAID devices. md0 is / (ext3, RAID1), but md1 and md2 are swap and JFS respectively, RAID10 arrays created like this:
mdadm --create /dev/md1 --level=raid10 --layout=o2 --metadata=1.2 --chunk=256 --raid-disks=2 missing /dev/sdb2
It gives the initial kernel boot message but then says
invalid raid superblock magic on sdb2
invalid raid superblock magic on sdb3
... and halts progress. I have to hard-reset to continue. Why isn't the error more specific?
I've tried setting the metadata to 1.1, and tried adjusting mdadm.conf from /dev/md/1 to /dev/md1, but neither helped. The partitions are set to raid autodetect and the kernel parameter is set to md_autodetect. What could be wrong?
* Re: Help
2009-08-22 6:14 ` Help Info
@ 2009-08-22 9:34 ` NeilBrown
2009-08-22 12:56 ` Help Info
0 siblings, 1 reply; 23+ messages in thread
From: NeilBrown @ 2009-08-22 9:34 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On Sat, August 22, 2009 4:14 pm, Info@quantum-sci.net wrote:
>
> I'm not able to boot from my RAID devices. md0 is / (ext3, RAID1), but md1
> and md2 are swap and JFS respectively, RAID10 arrays created like this:
> mdadm --create /dev/md1 --level=raid10 --layout=o2 --metadata=1.2
> --chunk=256 --raid-disks=2 missing /dev/sdb2
>
> It gives the initial kernel boot message but then says
> invalid raid superblock magic on sdb2
> invalid raid superblock magic on sdb3
>
> ... and halts progress. I have to hard-reset to continue. Why isn't the
> error more specific?
You say md0 is raid1 but mdstat shows it to be raid10, so that won't boot.
'raid autodetect' only works for 0.90 metadata, and you are using 1.x.
You should not use 'raid autodetect' partitions. Rather the initrd
should use mdadm to assemble the arrays. Most distros seem to get this
right these days. Maybe you just need to rebuild your
initrd...
NeilBrown
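In practice that usually amounts to recording the arrays in mdadm.conf and regenerating the initramfs; a sketch, assuming Debian-style tooling:
# mdadm --examine --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u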
* Re: Help
2009-08-22 9:34 ` Help NeilBrown
@ 2009-08-22 12:56 ` Info
2009-08-22 16:47 ` Help John Robinson
0 siblings, 1 reply; 23+ messages in thread
From: Info @ 2009-08-22 12:56 UTC (permalink / raw)
To: linux-raid
On Saturday 22 August 2009 02:34:12 NeilBrown wrote:
> You say md0 is raid1 but mdstat shows it to be raid10, so that won't boot.
Thanks Neil. However, that was an early attempt, before I knew RAID10 won't boot.
> 'raid autodetect' only works for 0.90 metadata, and you are using 1.x.
> You should not use 'raid autodetect' partitions. Rather the initrd
> should use mdadm to assemble the arrays. Most distros seem to get this
> right these days. Maybe you just need to rebuild your
> initrd...
I am not using an initrd; I have all the RAID and disk drivers built into the (custom-compiled) kernel. The initrd uses mdadm to assemble the arrays? Maybe this is the problem.
I am using this procedure to build a RAID array from a live system:
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
It is very lucid and clear, however I am slightly modifying it to use RAID10 on my second and third partitions. When I come to
update-initramfs -u
... the only initrd it updates is for an old stock kernel. It doesn't build one for any of my compiled kernels.
What partition type should I use rather than raid autodetect? Or should I revert to 0.90 metadata?
Looking at dmesg, it does say that md1 and md2 do not have a valid v0.90 superblock. There is no other Linux RAID partition type, so I guess it's got to be v0.90. Why do they make 1.1 and 1.2 then, if they do not work?
* Re: Help
2009-08-22 12:56 ` Help Info
@ 2009-08-22 16:47 ` John Robinson
2009-08-22 18:12 ` Help Info
0 siblings, 1 reply; 23+ messages in thread
From: John Robinson @ 2009-08-22 16:47 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On 22/08/2009 13:56, Info@quantum-sci.net wrote:
[...]
> It is very lucid and clear, however I am slightly modifying it to use RAID10 on my second and third partitions. When I come to
> update-initramfs -u
> ... the only initrd it updates is for an old stock kernel. It doesn't build one for any of my compiled kernels.
You should have mkinitrd (that's what it is on Fedora/RHEL/CentOS) or
something similar with which you can build initramfs images for any kernel.
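For instance, on Fedora/RHEL/CentOS the equivalent is roughly (the kernel version string here is assumed for illustration):
# mkinitrd /boot/initrd-2.6.30.img 2.6.30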
> What partition type should I use rather than raid autodetect? Or should I revert to 0.90 metadata?
Probably type DA, Non-FS data, though type FD will be fine even if
they're not auto-detected.
> Looking at dmesg, it does say that md1 and md2 do not have a valid v0.90 superblock. There is no other Linux RAID partition type, so I guess it's got to be v0.90. Why do they make 1.1 and 1.2 then, if they do not work?
The newer metadata types have their benefits. Auto-detection is being
deprecated; I think it's because things that are only needed at boot time
are being pushed out of the permanently-loaded kernel into the initramfs, so
they don't hang around wasting space on a running system. For example,
CentOS 5 uses autodetection, while Fedora 10 automatically puts mdadm in the
initramfs and runs it at the right time.
Cheers,
John.
* Re: Help
2009-08-22 16:47 ` Help John Robinson
@ 2009-08-22 18:12 ` Info
2009-08-22 20:45 ` Help Info
2009-08-23 20:28 ` Help John Robinson
0 siblings, 2 replies; 23+ messages in thread
From: Info @ 2009-08-22 18:12 UTC (permalink / raw)
To: linux-raid
On Saturday 22 August 2009 09:47:48 John Robinson wrote:
> You should have mkinitrd (that's what it is on Fedora/RHEL/CentOS) or
> something similar with which you can build initramfs images for any kernel.
OK, once I changed the version to 0.90, it stopped right at the kernel banner on boot and hung. I was about to give up on RAID when your message came through, and I created the initrd.img file. I always compile my own kernels and don't depend on an initrd, but one now seems to be necessary. So, in Debian:
# mkinitramfs -o /boot/initrd.img-2.6.30-5 2.6.30-5
... reboot, and voilà, it did what it was supposed to, for a change. I'm now resyncing my 2 TB drives, which will take a good while.
> > What partition type should I use rather than raid autodetect? Or should I revert to 0.90 metadata?
>
> Probably type DA, Non-FS data, though type FD will be fine even if
> they're not auto-detected.
It simply found 'bad magick' with FD, so that doesn't work with the newer versions. I tried both of the newer versions, but it's not possible. You don't sound quite sure about the partition type, so I'll stick with FD and 0.90. Thanks though, John.
Goswin says, "For scanning your videos raid10 with far layout is probably best with
a large read ahead." I have the RAID10 blocksize set to 1024 for the video partition, but any idea how to set readahead?
* Re: Help
2009-08-22 18:12 ` Help Info
@ 2009-08-22 20:45 ` Info
2009-08-22 20:59 ` Help Guy Watkins
2009-08-23 20:28 ` Help John Robinson
1 sibling, 1 reply; 23+ messages in thread
From: Info @ 2009-08-22 20:45 UTC (permalink / raw)
To: linux-raid
On Saturday 22 August 2009 11:12:35 Info@quantum-sci.net wrote:
> Goswin says, "For scanning your videos raid10 with far layout is probably best with
> a large read ahead." I have the RAID10 blocksize set to 1024 for the video partition, but any idea how to set readahead?
My gosh, it turns out this setting is astounding. You test your drive's
read speed with some large file, like so:
... and check your drive's default readahead setting:
# blockdev --getra /dev/sda
256
... then test with various settings like 1024, 1536, 2048, 4096, 8192, and maybe 16384:
# blockdev --setra 4096 /dev/sda
Here are the results for my laptop. I can't test the HTPC with the array yet, as it's still syncing.
256 40.4 MB/s
1024 123 MB/s
1536 2.7 GB/s
2048 2.4 GB/s
4096 2.4 GB/s
8192 2.4 GB/s
16384 2.5 GB/s
I suspect it's best to use the smallest readahead that still gives the best speed (in my case 1536), for two reasons:
- To save memory;
- So there isn't such a performance impact when the blocks are not sequential.
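One detail to keep in mind when comparing these numbers: blockdev --setra counts 512-byte sectors, while the corresponding sysfs knob is in kilobytes, so the two views of the same setting differ by a factor of two. For example:
# blockdev --setra 1536 /dev/sda
# cat /sys/block/sda/queue/read_ahead_kb
768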
* RE: Help
2009-08-22 20:45 ` Help Info
@ 2009-08-22 20:59 ` Guy Watkins
[not found] ` <200908230631.46865.Info@quantum-sci.net>
0 siblings, 1 reply; 23+ messages in thread
From: Guy Watkins @ 2009-08-22 20:59 UTC (permalink / raw)
To: Info, linux-raid
} My gosh, it turns out this setting is astounding. You test your drive's
} read speed with some large file, like so:
} # time dd if={somelarge}.iso of=/dev/null bs=256k
}
} ... and check your drive's default readahead setting:
} # blockdev --getra /dev/sda
} 256
}
} ... then test with various settings like 1024, 1536, 2048, 4096, 8192, and
} maybe 16384:
} # blockdev --setra 4096 /dev/sda
}
} Here are the results for my laptop. I can't test the HTPC with the array
} yet, as it's still syncing.
} 256 40.4 MB/s
} 1024 123 MB/s
} 1536 2.7 GB/s
} 2048 2.4 GB/s
} 4096 2.4 GB/s
} 8192 2.4 GB/s
} 16384 2.5 GB/s
}
} I suspect it's best to use the smallest readahead that still gives the
} best speed (in my case 1536), for two reasons:
} - To save memory;
} - So there isn't such a performance impact when the blocks are not
} sequential.
The page cache is being used. You should reboot between each test, use a
file much bigger than the amount of RAM you have, or use a different file
each time.
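A lighter-weight alternative to rebooting, assuming a kernel new enough (2.6.16+) to support drop_caches, is to flush dirty pages and drop the page cache between runs:
# sync
# echo 3 > /proc/sys/vm/drop_caches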
* Re: Help
2009-08-22 18:12 ` Help Info
2009-08-22 20:45 ` Help Info
@ 2009-08-23 20:28 ` John Robinson
1 sibling, 0 replies; 23+ messages in thread
From: John Robinson @ 2009-08-23 20:28 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On 22/08/2009 19:12, Info@quantum-sci.net wrote:
> On Saturday 22 August 2009 09:47:48 John Robinson wrote:
[...]
>>> What partition type should I use rather than raid autodetect? Or should I revert to 0.90 metadata?
>> Probably type DA, Non-FS data, though type FD will be fine even if
>> they're not auto-detected.
>
> It simply found 'bad magick' with FD, so that doesn't work with the newer versions. I tried both of the newer versions, but it's not possible. You don't sound quite sure about the partition type, so I'll stick with FD and 0.90. Thanks though, John.
I said "probably" DA because that's what's been suggested by others
previously on this list. Others have simply used 83, but that's not
ideal because if the partitions appear to have filesystems on (e.g. the
metadata's not at the beginning), they might get auto-mounted without md
RAID. I'm sure FD will work fine with later metadata versions as long as
you have mdadm in your initramfs, and while as you've noted there'll be
a whinge in the boot log about it not being version 0.90, it's not going
to cause the kernel to lock up or anything like that.
Cheers,
John.
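For completeness, a sketch of changing a partition's type code with interactive fdisk (prompts abbreviated; the codes are hexadecimal, so DA is entered as "da"):
# fdisk /dev/sdb
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): da
Command (m for help): w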
* Re: Help
[not found] ` <200908230631.46865.Info@quantum-sci.net>
@ 2009-08-24 23:08 ` Info
2009-08-24 23:38 ` Help NeilBrown
0 siblings, 1 reply; 23+ messages in thread
From: Info @ 2009-08-24 23:08 UTC (permalink / raw)
To: linux-raid
The sync has finally finished, but something's wrong with the first partition set: only sda1 is a member of md0. In dmesg I find:
[ 4.756365] md: kicking non-fresh sdb1 from array!
Huh? If it's not fresh, why doesn't it sync it? What should I do about this? How did it happen on a new array?
* Re: Help
2009-08-24 23:08 ` Help Info
@ 2009-08-24 23:38 ` NeilBrown
2009-08-25 13:18 ` Help Info
0 siblings, 1 reply; 23+ messages in thread
From: NeilBrown @ 2009-08-24 23:38 UTC (permalink / raw)
To: Info; +Cc: linux-raid
On Tue, August 25, 2009 9:08 am, Info@quantum-sci.net wrote:
>
> The sync has finally finished, but something's wrong with the first
> partition set: only sda1 is a member of md0. In dmesg I find:
> [ 4.756365] md: kicking non-fresh sdb1 from array!
>
> Huh? If it's not fresh, why doesn't it sync it? What should I do about
> this? How did it happen on a new array?
You'll need to provide a lot more information, starting with the
kernel log at all relevant times (and don't use 'grep'; just cut out
a contiguous section of the log, including a few lines before and
after anything that might be relevant).
And "mdadm -E" of any relevant device.
"Kicking non-free sdb1 from arrays" is a message that you get when
assembling an array if the metadata on sdb1 is older than the others.
This can happen if it was evicted from the array due to failure or
if the array was assembled without sdb1 for some reason. There
are probably other scenarios.
That is why I need to see recent history, including anything
from the last time the array was active.
NeilBrown
* Re: Help
2009-08-24 23:38 ` Help NeilBrown
@ 2009-08-25 13:18 ` Info
2009-08-27 12:47 ` Help Info
0 siblings, 1 reply; 23+ messages in thread
From: Info @ 2009-08-25 13:18 UTC (permalink / raw)
To: linux-raid
On Monday 24 August 2009 16:38:41 you wrote:
> You'll need to provide a lot more information, starting with the
> kernel log at all relevant times (and don't use 'grep'; just cut out
> a contiguous section of the log, including a few lines before and
> after anything that might be relevant).
> And "mdadm -E" of any relevant device.
# mdadm -E /dev/sdb
mdadm: No md superblock detected on /dev/sdb.
#
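Note that the command above examines the whole disk, while the superblocks in this setup live on the partitions, so the member itself is presumably what to inspect:
# mdadm -E /dev/sdb1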
Aug 22 21:17:17 localhost kernel: [ 3.048020] ata4: SATA link down (SStatus 0 SControl 300)
Aug 22 21:17:17 localhost kernel: [ 3.048037] ata6: SATA link down (SStatus 0 SControl 300)
Aug 22 21:17:17 localhost kernel: [ 3.048045] ata5: SATA link down (SStatus 0 SControl 300)
Aug 22 21:17:17 localhost kernel: [ 3.048054] ata3: SATA link down (SStatus 0 SControl 300)
Aug 22 21:17:17 localhost kernel: [ 3.201035] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 22 21:17:17 localhost kernel: [ 3.201044] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 22 21:17:17 localhost kernel: [ 3.204330] ata2.00: ATA-8: WDC WD20EADS-00S2B0, 04.05G04, max UDMA/133
Aug 22 21:17:17 localhost kernel: [ 3.204333] ata2.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32)
Aug 22 21:17:17 localhost kernel: [ 3.206097] ata1.00: ATA-8: WDC WD20EADS-00R6B0, 01.00A01, max UDMA/133
Aug 22 21:17:17 localhost kernel: [ 3.206100] ata1.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32)
Aug 22 21:17:17 localhost kernel: [ 3.207345] ata2.00: configured for UDMA/133
Aug 22 21:17:17 localhost kernel: [ 3.211122] ata1.00: configured for UDMA/133
Aug 22 21:17:17 localhost kernel: [ 3.211203] scsi 0:0:0:0: Direct-Access ATA WDC WD20EADS-00R 01.0 PQ: 0 ANSI: 5
Aug 22 21:17:17 localhost kernel: [ 3.211406] sd 0:0:0:0: [sda] 3907029168 512-byte hardware sectors: (2.00 TB/1.81 TiB)
Aug 22 21:17:17 localhost kernel: [ 3.211417] sd 0:0:0:0: [sda] Write Protect is off
Aug 22 21:17:17 localhost kernel: [ 3.211435] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 22 21:17:17 localhost kernel: [ 3.211495] sda:<5>sd 0:0:0:0: Attached scsi generic sg0 type 0
Aug 22 21:17:17 localhost kernel: [ 3.211568] scsi 1:0:0:0: Direct-Access ATA WDC WD20EADS-00S 04.0 PQ: 0 ANSI: 5
Aug 22 21:17:17 localhost kernel: [ 3.211719] sd 1:0:0:0: [sdb] 3907029168 512-byte hardware sectors: (2.00 TB/1.81 TiB)
Aug 22 21:17:17 localhost kernel: [ 3.211728] sd 1:0:0:0: [sdb] Write Protect is off
Aug 22 21:17:17 localhost kernel: [ 3.211746] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 22 21:17:17 localhost kernel: [ 3.211798] sdb:<5>sd 1:0:0:0: Attached scsi generic sg1 type 0
Aug 22 21:17:17 localhost kernel: [ 3.222483] sdb1 sdb2 sdb3
Aug 22 21:17:17 localhost kernel: [ 3.222744] sd 1:0:0:0: [sdb] Attached SCSI disk
Aug 22 21:17:17 localhost kernel: [ 3.223107] sda1 sda2 sda3
Aug 22 21:17:17 localhost kernel: [ 3.223321] sd 0:0:0:0: [sda] Attached SCSI disk
...
Aug 22 21:17:17 localhost kernel: [ 4.719904] md: md0 stopped.
Aug 22 21:17:17 localhost kernel: [ 4.756229] md: bind<sdb1>
Aug 22 21:17:17 localhost kernel: [ 4.756348] md: bind<sda1>
Aug 22 21:17:17 localhost kernel: [ 4.756365] md: kicking non-fresh sdb1 from array!
Aug 22 21:17:17 localhost kernel: [ 4.756370] md: unbind<sdb1>
Aug 22 21:17:17 localhost kernel: [ 4.761035] md: export_rdev(sdb1)
Aug 22 21:17:17 localhost kernel: [ 4.762357] raid1: raid set md0 active with 1 out of 2 mirrors
Aug 22 21:17:17 localhost kernel: [ 4.768650] md0: bitmap initialized from disk: read 10/10 pages, set 198 bits
Aug 22 21:17:17 localhost kernel: [ 4.768653] created bitmap (151 pages) for device md0
Aug 22 21:17:17 localhost kernel: [ 4.777530] md: md1 stopped.
Aug 22 21:17:17 localhost kernel: [ 4.777616] md0: unknown partition table
Aug 22 21:17:17 localhost kernel: [ 4.781705] md: bind<sdb2>
Aug 22 21:17:17 localhost kernel: [ 4.781820] md: bind<sda2>
Aug 22 21:17:17 localhost kernel: [ 4.783078] raid10: raid set md1 active with 2 out of 2 devices
Aug 22 21:17:17 localhost kernel: [ 4.791063] md1: bitmap initialized from disk: read 13/13 pages, set 0 bits
Aug 22 21:17:17 localhost kernel: [ 4.791066] created bitmap (193 pages) for device md1
Aug 22 21:17:17 localhost kernel: [ 4.827200] md: md2 stopped.
Aug 22 21:17:17 localhost kernel: [ 4.827294] md1: unknown partition table
Aug 22 21:17:17 localhost kernel: [ 4.835293] md: bind<sdb3>
Aug 22 21:17:17 localhost kernel: [ 4.835413] md: bind<sda3>
Aug 22 21:17:17 localhost kernel: [ 4.846525] raid10: raid set md2 active with 2 out of 2 devices
Aug 22 21:17:17 localhost kernel: [ 4.862129] md2: bitmap initialized from disk: read 14/14 pages, set 0 bits
Aug 22 21:17:17 localhost kernel: [ 4.862132] created bitmap (223 pages) for device md2
Aug 22 21:17:17 localhost kernel: [ 4.898461] md2: unknown partition table
...
Hm, how would the superblock have been destroyed? This is a little disturbing.
* Re: Help
2009-08-25 13:18 ` Help Info
@ 2009-08-27 12:47 ` Info
0 siblings, 0 replies; 23+ messages in thread
From: Info @ 2009-08-27 12:47 UTC (permalink / raw)
To: linux-raid
OK I think I've resolved this, so don't worry about me anymore.