* RAID5 producing fake partition table on single drive
@ 2006-08-19 11:40 Lem
From: Lem @ 2006-08-19 11:40 UTC (permalink / raw)
To: linux-raid
Hi all,
I'm having a problem with my RAID5 array, here's the deal:
System is an AMD Athlon 64 X2 4200+ on a Gigabyte K8NS-939-Ultra
(nForce3 Ultra). Linux 2.6.17.7, x86_64. Debian GNU/Linux Sid, GCC 4.1.1
(kernel configured and compiled by hand).
RAID5 array created using mdadm 2.5.2. All drives are 250 GB Seagate
SATA drives, spread across three controllers: nForce3 Ultra (motherboard),
Silicon Image 3124 (motherboard) and Promise SATA TX300 (PCI).
/dev/sda: ST3250624NS
/dev/sdb: ST3250624NS
/dev/sdc: ST3250823AS
/dev/sdd: ST3250624NS
/dev/sde: ST3250823AS
The array assembles and runs perfectly at boot, and has been operating
without errors for a few months now. It is using a version 0.90
superblock. None of the devices were partitioned with fdisk; they
were just passed to mdadm when the array was created.
Recently (last week or two), I have noticed the following in dmesg:
SCSI device sde: 488397168 512-byte hdwr sectors (250059 MB)
sde: Write Protect is off
sde: Mode Sense: 00 3a 00 00
SCSI device sde: drive cache: write back
SCSI device sde: 488397168 512-byte hdwr sectors (250059 MB)
sde: Write Protect is off
sde: Mode Sense: 00 3a 00 00
SCSI device sde: drive cache: write back
sde: sde1 sde3
sd 6:0:0:0: Attached scsi disk sde
Buffer I/O error on device sde3, logical block 1792
Buffer I/O error on device sde3, logical block 1793
Buffer I/O error on device sde3, logical block 1794
Buffer I/O error on device sde3, logical block 1795
Buffer I/O error on device sde3, logical block 1796
Buffer I/O error on device sde3, logical block 1797
Buffer I/O error on device sde3, logical block 1798
Buffer I/O error on device sde3, logical block 1799
Buffer I/O error on device sde3, logical block 1792
Buffer I/O error on device sde3, logical block 1793
/dev/sda appears to have a partition table as well, but no partitions
defined. /dev/sdb and /dev/sdc are "unknown partition tables".
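The kernel's view of which disks gained phantom children can be cross-checked against /proc/partitions: a whole disk the kernel mis-scanned shows numbered entries under its name. A small sketch filtering such a listing — note the sample data below is a mock-up modeled on this system (sde with phantom sde1/sde3), not actual output from it; on a live machine you would pipe in /proc/partitions itself:

```shell
# List partition entries (sdXN) from a /proc/partitions-style listing.
# The here-doc sample is hypothetical, modeled on sde's phantom
# sde1/sde3; substitute "awk ... /proc/partitions" on a real system.
parts=$(awk '$4 ~ /^sd[a-z][0-9]+$/ { print $4 }' <<'EOF'
major minor  #blocks  name

   8     0  244198584 sda
   8    64  244198584 sde
   8    65        995 sde1
   8    67        995 sde3
EOF
)
echo "$parts"
```

Whole disks with no matching sdXN lines (sda here) are the ones the kernel saw no partitions on.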
Bearing in mind I did not create a partition table on /dev/sde, yet the
kernel reports one and also individual partitions, here's the output
from fdisk:
# fdisk /dev/sde
Command (m for help): p
Disk /dev/sde: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1           1         995+  c7  Syrinx
Partition 1 does not end on cylinder boundary.
/dev/sde2               1           1           0    0  Empty
Partition 2 does not end on cylinder boundary.
/dev/sde3          133675      133675         995+  c7  Syrinx
Partition 3 does not end on cylinder boundary.
/dev/sde4               1           1           0    0  Empty
Partition 4 does not end on cylinder boundary.
I'm not a software/kernel/RAID developer by any stretch of the
imagination, but my thought is that I've just been unlucky with my
array: the data on there has somehow managed to look like a
partition table, the kernel is trying to read it, and that results
in the buffer I/O errors.
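That hunch is plausible: the kernel's MS-DOS partition parser accepts sector 0 largely on the strength of the two-byte boot signature 0x55 0xAA at offset 510, so roughly one random sector in 65536 qualifies by chance. A small sketch of what such a sector looks like, built in a scratch temp file (not on a real disk):

```shell
# Build a 512-byte scratch "sector" whose last two bytes are the MBR
# boot signature 0x55 0xAA -- any on-disk sector 0 ending this way
# can be mistaken for a partition table.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=512 count=1 2>/dev/null
printf '\125\252' | dd of="$f" bs=1 seek=510 conv=notrunc 2>/dev/null
sig=$(od -An -tx1 -j 510 -N 2 "$f")   # read back bytes 510-511
echo "signature bytes:$sig"
rm -f "$f"
```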
I believe a solution to this problem would be for me to create proper
partitions on my RAID disks (with type fd I suspect?), and possibly use
a version 1.x superblock rather than 0.90.
I would much appreciate some help with this as I want to preserve the
data on the array (naturally). Is it possible to fdisk one drive in the
array at a time, create a proper partition table and a type 'fd'
partition spanning the entire disk, then re-add it to the array?
If any more information about my system is required, let me know.
Thanks in advance,
Lem
P.S. Output from mdadm --examine /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 00.90.00
UUID : 54485c92:a2b165f8:5f97d5ed:bbc54eaf
Creation Time : Sat Jul 1 11:06:03 2006
Raid Level : raid5
Device Size : 244198400 (232.89 GiB 250.06 GB)
Array Size : 976793600 (931.54 GiB 1000.24 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 0
Update Time : Sat Aug 19 19:56:48 2006
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Checksum : 56205d7d - correct
Events : 0.504412
Layout : left-symmetric
Chunk Size : 128K
Number Major Minor RaidDevice State
this 4 8 64 4 active sync /dev/sde
0 0 8 0 0 active sync /dev/sda
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc
3 3 8 48 3 active sync /dev/sdd
4 4 8 64 4 active sync /dev/sde
^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: RAID5 producing fake partition table on single drive
From: Neil Brown @ 2006-08-21 7:35 UTC (permalink / raw)
To: Lem; +Cc: linux-raid

On Saturday August 19, l3mming@iinet.net.au wrote:
> The array assembles and runs perfectly at boot, and has been operating
> without errors for a few months now. It is using a version 0.90
> superblock. None of the devices were partitioned with fdisk; they
> were just passed to mdadm when the array was created.
>
> Recently (last week or two), I have noticed the following in dmesg:
> [...]
> sde: sde1 sde3

This itself shouldn't be a problem.  The fact that the kernel imagines
there are partitions shouldn't hurt as long as no-one tries to access
them.
> sd 6:0:0:0: Attached scsi disk sde
>
> Buffer I/O error on device sde3, logical block 1792
> Buffer I/O error on device sde3, logical block 1793
> [...]

This, on the other hand, might be a problem - though possibly only a
small one.
Who is trying to access sde3, I wonder.  I'm fairly sure the kernel
wouldn't do that directly.

Maybe some 'udev' related thing is trying to be clever?

Apart from these messages, are there any symptoms that cause a problem?
It could just be that something is reading from somewhere that doesn't
exist, and is complaining.  So let them complain.  Who cares :-)

> I'm not a software/kernel/RAID developer by any stretch of the
> imagination, but my thought is that I've just been unlucky with my
> array: the data on there has somehow managed to look like a
> partition table, the kernel is trying to read it, and that results
> in the buffer I/O errors.

But these errors aren't necessarily a problem (I admit they look scary).

> I believe a solution to this problem would be for me to create proper
> partitions on my RAID disks (with type fd I suspect?), and possibly use
> a version 1.x superblock rather than 0.90.

Creating partitions and then raiding them would remove these messages.
So would using a version 1.1 or 1.2 superblock (as they put the
superblock at the start of the device instead of the end).

But is it worth the effort?

NeilBrown
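The superblock-placement point can be made concrete: under the 0.90 format the superblock sits in the last 64 KiB-aligned 64 KiB block of the device, which is why anything that shortens the usable device (such as putting the data inside a partition) can strand the superblock past the new end. A sketch of the placement arithmetic, using sde's size from the dmesg output quoted earlier — this reproduces the standard 0.90 formula, and the offset shown is computed here rather than taken from the thread:

```shell
# v0.90 superblock offset: round the device size down to a 64 KiB
# boundary, then step back one more 64 KiB block.
sectors=488397168                    # sde, from the dmesg output above
dev_bytes=$(( sectors * 512 ))
align=$(( 64 * 1024 ))
sb_offset=$(( dev_bytes / align * align - align ))
echo "superblock at byte $sb_offset ($(( sb_offset / 1024 )) KiB)"
```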
* Re: RAID5 producing fake partition table on single drive
From: Lem @ 2006-08-21 8:03 UTC (permalink / raw)
To: linux-raid

On Mon, 2006-08-21 at 17:35 +1000, Neil Brown wrote:
> On Saturday August 19, l3mming@iinet.net.au wrote:
> > [...]
> > sde: sde1 sde3
>
> This itself shouldn't be a problem.  The fact that the kernel imagines
> there are partitions shouldn't hurt as long as no-one tries to access
> them.
This is where I'm having a problem - lilo fails due to the bogus
partition table, here's the output:

# lilo
part_nowrite: read:: Input/output error

and from dmesg/syslog due to running lilo:

printk: 537 messages suppressed.
Buffer I/O error on device sde3, logical block 0
Buffer I/O error on device sde3, logical block 1
[...]
Buffer I/O error on device sde3, logical block 9

> > sd 6:0:0:0: Attached scsi disk sde
> >
> > Buffer I/O error on device sde3, logical block 1792
> > [...]
>
> This, on the other hand, might be a problem - though possibly only a
> small one.
> Who is trying to access sde3, I wonder.  I'm fairly sure the kernel
> wouldn't do that directly.
>
> Maybe some 'udev' related thing is trying to be clever?

The above buffer I/O errors (logical block 1792+) occur as filesystems
are being automounted. /dev/sde* doesn't exist in /etc/fstab, of course.

> Apart from these messages, are there any symptoms that cause a problem?
> It could just be that something is reading from somewhere that doesn't
> exist, and is complaining.  So let them complain.  Who cares :-)

There are no problems with any software apart from lilo so far.
fdisk works (since it doesn't scan all block devices on startup).
Gparted might fail, though I haven't tried (it scans all block devices
on startup). And yep, it sounds about right that something is reading
from somewhere that doesn't exist (the bogus partition table on
/dev/sde would suggest this is the case).

> > I'm not a software/kernel/RAID developer by any stretch of the
> > imagination, but my thought is that I've just been unlucky with my
> > array: the data on there has somehow managed to look like a
> > partition table [...]
>
> But these errors aren't necessarily a problem (I admit they look scary).

> > I believe a solution to this problem would be for me to create proper
> > partitions on my RAID disks (with type fd I suspect?), and possibly use
> > a version 1.x superblock rather than 0.90.
>
> Creating partitions and then raiding them would remove these messages.
> So would using a version 1.1 or 1.2 superblock (as they put the
> superblock at the start of the device instead of the end).
>
> But is it worth the effort?

A few questions, searching for the best possible solution. I believe
this is worth the effort, else I can't run lilo without disabling the
array and removing /dev/sde from the system.

1. Is it possible to have mdadm or another tool automatically convert
the superblocks to v1.1/1.2 (and perhaps create proper partitions)?

2. If number 1 isn't possible, is it possible to convert one drive at a
time to have a proper partition table? Like this: stop the array;
fdisk /dev/sde, create a partition of type fd (entire disk), save the
partition table; start the array. (I'd then assume mdadm would notice
that /dev/sde has changed and possibly start a resync - if not, and it
just works, then great!) If that works, then do every other drive, one
at a time.

Thanks for your help Neil.

> NeilBrown
* Re: RAID5 producing fake partition table on single drive
From: Neil Brown @ 2006-08-28 3:46 UTC (permalink / raw)
To: Lem; +Cc: linux-raid

On Monday August 21, l3mming@iinet.net.au wrote:
>
> This is where I'm having a problem - lilo fails due to the bogus
> partition table, here's the output:
>
> # lilo
> part_nowrite: read:: Input/output error

I think maybe lilo needs to be fixed.  If you haven't explicitly told
it to look at sde, the worst it should do is give a warning.
Maybe you can put something in lilo.conf to tell it what sort of
partitions sde does (or doesn't) have so it won't bother looking
itself?

> The above buffer I/O errors (logical block 1792+) occur as filesystems
> are being automounted. /dev/sde* doesn't exist in /etc/fstab, of course.

Do you mount-by-label at all?  In that case mount will look at each
partition, but it doesn't fail, so not-a-problem.

> A few questions, searching for the best possible solution. I believe
> this is worth the effort, else I can't run lilo without disabling the
> array and removing /dev/sde from the system.
>
> 1. Is it possible to have mdadm or another tool automatically convert
> the superblocks to v1.1/1.2 (and perhaps create proper partitions)?

No.  It would require moving all the data on the device.

> 2. If number 1 isn't possible, is it possible to convert one drive at a
> time to have a proper partition table? [...]

When you create a partition table, sde1 will be slightly smaller than
sde.  But if it is less than 64K smaller you might be able to do
something.
mdadm won't notice by itself, but you could tell it:

  mdadm /dev/md0 --fail /dev/sde
  mdadm /dev/md0 --remove /dev/sde
  mdadm --zero-superblock /dev/sde
  ## use fdisk to create a single partition
  mdadm /dev/md0 --add /dev/sde1
  ## if that works then wait for the resync to complete
  ## and do the same to the other drives.  Then update any
  ## config files that might be relevant.

NeilBrown
* Re: RAID5 producing fake partition table on single drive
From: Lem @ 2006-08-29 7:17 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid

On Mon, 2006-08-28 at 13:46 +1000, Neil Brown wrote:
> I think maybe lilo needs to be fixed.  If you haven't explicitly told
> it to look at sde, the worst it should do is give a warning.
> Maybe you can put something in lilo.conf to tell it what sort of
> partitions sde does (or doesn't) have so it won't bother looking
> itself?

# lilo -P ignore
part_nowrite: read:: Input/output error

Telling lilo to ignore invalid partition tables doesn't appear to work;
perhaps a bug report needs to be filed here.

> Do you mount-by-label at all?  In that case mount will look at each
> partition, but it doesn't fail, so not-a-problem.

I do mount by label, yes, and it all works perfectly.

> > 1. Is it possible to have mdadm or another tool automatically convert
> > the superblocks to v1.1/1.2 (and perhaps create proper partitions)?
>
> No.  It would require moving all the data on the device.

Bugger. :-(

> > 2. If number 1 isn't possible, is it possible to convert one drive at
> > a time to have a proper partition table? Like this: stop the array;
> > fdisk /dev/sde, create a partition of type fd (entire disk), save the
> > partition table; start the array.
> > (I'd then assume mdadm would notice that /dev/sde has changed and
> > possibly start a resync - if not, and it just works, then great!)
> > If that works, then do every other drive, one at a time.
>
> When you create a partition table, sde1 will be slightly smaller than
> sde.  But if it is less than 64K smaller you might be able to do
> something.  mdadm won't notice by itself, but you could tell it:
>
>   mdadm /dev/md0 --fail /dev/sde
>   mdadm /dev/md0 --remove /dev/sde
>   mdadm --zero-superblock /dev/sde
>   ## use fdisk to create a single partition
>   mdadm /dev/md0 --add /dev/sde1

I tried doing this, but mdadm complains in the following way:

[output from fdisk after using 'o' to create a new MSDOS partition table]

Disk /dev/sde: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       30401   244196001   fd  Linux raid autodetect

# mdadm /dev/md0 --add /dev/sde1
mdadm: add new device failed for /dev/sde1 as 5: Invalid argument

from dmesg:

md: sde1 has invalid sb, not importing!
md: md_import_device returned -22

'mdadm --zero-superblock /dev/sde1' does not fix it.
'mdadm /dev/md0 --add /dev/sde' adds the drive back to the array without
issue.

I'm also having an issue with hard lockups (not even the caps-lock light
working) when the array is being accessed and the kernel goes to swap
something out (I think this is what's happening; more observation
required). The swap space is on the system drive, which shares a
controller with the RAID array (the onboard SATA controller). I'm not
sure why this happens.
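The 64K margin Neil mentioned can be checked against the numbers quoted in this thread, and the result is consistent with the failed --add: the fdisk-created partition falls well more than 64 KiB short of the 0.90 component size. A quick sketch — both figures are copied from output shown earlier, only the subtraction is new:

```shell
# Sizes in 1 KiB blocks, both quoted earlier in the thread.
dev_kb=244198400     # "Device Size" from mdadm --examine /dev/sde
part_kb=244196001    # "Blocks" for /dev/sde1 in the fdisk output
deficit=$(( dev_kb - part_kb ))
echo "sde1 is $deficit KiB smaller than the array's component size"
```

With a shortfall of several thousand KiB, /dev/sde1 is simply too small to hold the existing component data, independent of the superblock issue.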
* Re: RAID5 producing fake partition table on single drive
From: Doug Ledford @ 2006-08-21 22:47 UTC (permalink / raw)
To: Neil Brown; +Cc: Lem, linux-raid

On Mon, 2006-08-21 at 17:35 +1000, Neil Brown wrote:
> > Buffer I/O error on device sde3, logical block 1793
>
> This, on the other hand, might be a problem - though possibly only a
> small one.
> Who is trying to access sde3, I wonder.  I'm fairly sure the kernel
> wouldn't do that directly.

It's the mount program collecting possible LABEL= data on the partitions
listed in /proc/partitions, of which sde3 is outside the valid range for
the drive.

-- 
Doug Ledford <dledford@redhat.com> GPG KeyID: CFBFF194
http://people.redhat.com/dledford
Infiniband specific RPMs available at
http://people.redhat.com/dledford/Infiniband
* Re: RAID5 producing fake partition table on single drive
From: Bill Davidsen @ 2006-09-04 17:55 UTC (permalink / raw)
To: Doug Ledford; +Cc: Neil Brown, Lem, linux-raid

Doug Ledford wrote:
> It's the mount program collecting possible LABEL= data on the partitions
> listed in /proc/partitions, of which sde3 is outside the valid range for
> the drive.

May I belatedly say that this is sort-of a kernel issue, since
/proc/partitions reflects invalid data?  Perhaps a boot option like
nopart=sda,sdb or similar would be in order?

-- 
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: RAID5 producing fake partition table on single drive
From: Luca Berra @ 2006-09-05 16:49 UTC (permalink / raw)
To: linux-raid

On Mon, Sep 04, 2006 at 01:55:52PM -0400, Bill Davidsen wrote:
> Doug Ledford wrote:
> > It's the mount program collecting possible LABEL= data on the
> > partitions listed in /proc/partitions, of which sde3 is outside the
> > valid range for the drive.
>
> May I belatedly say that this is sort-of a kernel issue, since
> /proc/partitions reflects invalid data?  Perhaps a boot option like
> nopart=sda,sdb or similar would be in order?

I would move the partition detection code to user space completely, so
it could be run selectively on the drives that do happen to have a
partition table.

A compromise could be having mdadm (or the script that starts mdadm at
boot time) issue an ioctl(fd,BLKPG,...) to make the kernel forget about
any partition table it might have misdetected.

L.

-- 
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN
  X        AGAINST HTML MAIL
 / \
* Re: RAID5 producing fake partition table on single drive
From: Lem @ 2006-09-10 5:59 UTC (permalink / raw)
To: Bill Davidsen; +Cc: Doug Ledford, Neil Brown, linux-raid

On Mon, 2006-09-04 at 13:55 -0400, Bill Davidsen wrote:
> May I belatedly say that this is sort-of a kernel issue, since
> /proc/partitions reflects invalid data?  Perhaps a boot option like
> nopart=sda,sdb or similar would be in order?

Is this an argument to be passed to the kernel at boot time?  It didn't
work for me.
* Re: RAID5 producing fake partition table on single drive
From: Bill Davidsen @ 2006-09-14 22:42 UTC (permalink / raw)
To: Lem; +Cc: Doug Ledford, Neil Brown, linux-raid

Lem wrote:
> On Mon, 2006-09-04 at 13:55 -0400, Bill Davidsen wrote:
> > May I belatedly say that this is sort-of a kernel issue, since
> > /proc/partitions reflects invalid data?  Perhaps a boot option like
> > nopart=sda,sdb or similar would be in order?
>
> Is this an argument to be passed to the kernel at boot time?  It didn't
> work for me.

My suggestion was to Neil or other kernel maintainers.  If they agree
that this is worth fixing, the option could be added in the kernel.  It
isn't there now; I was soliciting responses on whether this was
desirable.

Unfortunately I see no way to prevent data sitting in the partition
table location, which looks like a partition table, from being used.
* Re: RAID5 producing fake partition table on single drive
From: Lem @ 2006-09-15 7:51 UTC (permalink / raw)
To: Bill Davidsen; +Cc: Doug Ledford, Neil Brown, linux-raid

On Thu, 2006-09-14 at 18:42 -0400, Bill Davidsen wrote:
> My suggestion was to Neil or other kernel maintainers.  If they agree
> that this is worth fixing, the option could be added in the kernel.  It
> isn't there now; I was soliciting responses on whether this was
> desirable.

My mistake, sorry. It sounds like a nice idea, and would work well in
cases where the RAID devices are always assigned the same device names
(sda, sdb, sdc etc), which I'd expect to be the case quite frequently.

> Unfortunately I see no way to prevent data sitting in the partition
> table location, which looks like a partition table, from being used.

Perhaps an alternative would be to convert an array with
non-partition-based devices to partition-based devices, though I
remember Neil saying this would involve relocating all of the data on
the entire array (perhaps it could be done through some funky resync
option?).

I'm not a developer; those are just my thoughts.  Thanks for all the
fine work, guys.

Cheers,
Lem
* Re: RAID5 producing fake partition table on single drive
From: Luca Berra @ 2006-09-15 8:29 UTC (permalink / raw)
To: linux-raid

On Fri, Sep 15, 2006 at 05:51:12PM +1000, Lem wrote:
> My mistake, sorry. It sounds like a nice idea, and would work well in
> cases where the RAID devices are always assigned the same device names
> (sda, sdb, sdc etc), which I'd expect to be the case quite frequently.

That is the issue: quite frequently != always.

> Perhaps an alternative would be to convert an array with
> non-partition-based devices to partition-based devices, though I
> remember Neil saying this would involve relocating all of the data on
> the entire array (perhaps it could be done through some funky resync
> option?).

Sorry, I do not agree.  MS-DOS partitions are a bad idea, and one I
would really love to leave behind.  What I'd do is move the partition
detection code to userspace, where it belongs, together with lvm, md,
dmraid, multipath and evms.

So what userspace would do is check whether any whole disk is one of
the above mentioned types, or whether it is partitionable.  I believe
the order would be something like:

  dmraid or multipath
  evms (*)
  md
  lvm
  partition table (partx or kpartx)
  md
  lvm

(*) evms should handle all cases by itself

After each check, the device list for the next check should be
recalculated, removing devices already handled and adding any new
devices just created.

This is too much to be done in kernel space, but it can be done easily
in an initramfs or initscript.  Just say Y to "CONFIG_PARTITION_ADVANCED"
and N to all the other "CONFIG_?????_PARTITION" options, and code
something in userspace.

L.

P.S. The OP can simply use partx to remove the partition tables from
the components of the md array just after assembling it.
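Luca's P.S. could look something like the loop below. partx's option letters have varied across util-linux versions, so this sketch only prints the commands for review instead of running them; -d is assumed here to mean "delete the kernel's partition mappings for this device" (it does not touch on-disk data):

```shell
# Emit one partx invocation per md component (print, review, then run
# by hand once the array is assembled).  The -d flag is assumed to
# drop the kernel's in-memory partition mappings, not on-disk data.
cmds=$(for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    echo "partx -d $dev"
done)
echo "$cmds"
```

Run after assembly, this would make the phantom sde1/sde3 nodes disappear from /proc/partitions until the next rescan.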