* Re: "bio too big" error
2002-12-12 9:12 ` Joe Thornber
@ 2002-12-12 17:33 ` Wil Reichert
2002-12-12 21:51 ` Kevin Corry
1 sibling, 0 replies; 7+ messages in thread
From: Wil Reichert @ 2002-12-12 17:33 UTC (permalink / raw)
To: Joe Thornber; +Cc: kernel list
> ie. previously we were accidentally comparing bytes with sectors to
> verify the device sizes. So either I'm being very stupid (likely) and
> the above patch is bogus, or you really don't have room for this lv.
> Can you send me 3 bits of information please:
Well, it works fine with the 2.4 kernel and earlier 2.5 releases, so I think
my lv is fine...
> 1) disk/partition sizes for your PVs
The VG spans four entire discs and one partition:
Disk /dev/discs/disc4/disc: 80.0 GB, 80039116800 bytes
Disk /dev/discs/disc1/disc: 123.5 GB, 123522416640 bytes
Disk /dev/ide/host2/bus1/target0/lun0/disc: 100.0 GB, 100030242816 bytes
Disk /dev/ide/host2/bus0/target1/lun0/disc: 10.1 GB, 10110320640 bytes
/dev/discs/disc0/part4 40072 119150 39855816 8e Linux LVM
Dunno if it matters, but the 80G is two striped 40s on a 3ware controller,
the 120, 100, and 10 are on a Promise U133 card, and the 40 gig
partition is on the native VIA controller. On top of all this it's an SMP
box.
> 2) an LVM2 backup of the metadata (the nice readable ascii one).
/etc/lvm/backup/cheese_vg -
# Generated by LVM2: Tue Dec 10 21:11:37 2002
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgconvert -M2 cheese_vg'"
creation_host = "darwin" # Linux darwin 2.4.19 #1 SMP Wed Nov 13 16:54:28 EST 2002 i686
creation_time = 1039572697 # Tue Dec 10 21:11:37 2002
cheese_vg {
id = "WF3vAx-k1r3-NUjU-az7z-I4SM-oorx-rvoYSt"
seqno = 1
status = ["RESIZEABLE", "READ", "WRITE"]
system_id = "darwin1025684717"
extent_size = 32768 # 16 Megabytes
max_lv = 256
max_pv = 256
physical_volumes {
pv0 {
id = "XFexK7-KqnW-dt7I-JHfB-gC8t-8Z45-RLiCEW"
device = "/dev/discs/disc4/disc" # Hint only
status = ["ALLOCATABLE"]
pe_start = 33152
pe_count = 4769 # 74.5156 Gigabytes
}
pv1 {
id = "7swUJv-wGiq-9uCz-xiKK-owvf-p77g-zGU1C5"
device = "/dev/discs/disc1/disc" # Hint only
status = ["ALLOCATABLE"]
pe_start = 33152
pe_count = 7361 # 115.016 Gigabytes
}
pv2 {
id = "z1Zxq5-X1JX-q08r-epqS-T0V7-003q-admD5T"
device = "/dev/ide/host2/bus1/target0/lun0/disc" # Hint only
status = ["ALLOCATABLE"]
pe_start = 33152
pe_count = 5961 # 93.1406 Gigabytes
}
pv3 {
id = "zfuSRQ-mYYI-pGHR-9Mu2-uFWu-JQiH-JZyO7I"
device = "/dev/discs/disc0/part4" # Hint only
status = ["ALLOCATABLE"]
pe_start = 33152
pe_count = 2431 # 37.9844 Gigabytes
}
pv4 {
id = "TNYATl-VrjS-Dt4T-e906-Ilb3-bgu7-JQS1sb"
device = "/dev/ide/host2/bus0/target1/lun0/disc" # Hint only
status = ["ALLOCATABLE"]
pe_start = 33152
pe_count = 601 # 9.39062 Gigabytes
}
}
logical_volumes {
blah {
id = "000000-0000-0000-0000-0000-0000-000000"
status = ["READ", "WRITE", "VISIBLE"]
allocation_policy = "next free"
read_ahead = 1024
segment_count = 7
segment1 {
start_extent = 0
extent_count = 4769 # 74.5156 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
segment2 {
start_extent = 4769
extent_count = 1409 # 22.0156 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 4552
]
}
segment3 {
start_extent = 6178
extent_count = 2255 # 35.2344 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv3", 0
]
}
segment4 {
start_extent = 8433
extent_count = 7361 # 115.016 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv1", 0
]
}
segment5 {
start_extent = 15794
extent_count = 4552 # 71.125 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 0
]
}
segment6 {
start_extent = 20346
extent_count = 176 # 2.75 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv3", 2255
]
}
segment7 {
start_extent = 20522
extent_count = 601 # 9.39062 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv4", 0
]
}
}
}
}
> 3) The version of LVM that *created* the lv.
1.0.3, I **think**. I know it rev'd to 1.0.4 or 1.0.5 before I added
another disc or two. I upgraded to LVM2 a couple of months back. Just
recently I did a 'vgconvert -M2 cheese_vg' to see if that helped things,
but it didn't seem to matter.
Wil
^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: "bio too big" error
2002-12-12 9:12 ` Joe Thornber
2002-12-12 17:33 ` Wil Reichert
@ 2002-12-12 21:51 ` Kevin Corry
2002-12-13 8:41 ` [lvm-devel] " Joe Thornber
1 sibling, 1 reply; 7+ messages in thread
From: Kevin Corry @ 2002-12-12 21:51 UTC (permalink / raw)
To: Joe Thornber, Wil Reichert; +Cc: Greg KH, kernel list, lvm-devel
On Thursday 12 December 2002 03:12, Joe Thornber wrote:
> On Wed, Dec 11, 2002 at 04:15:42PM -0800, Wil Reichert wrote:
> > Ok, 2.5.51 plus dm patches result in the following:
> >
> > Initializing LVM: device-mapper: device
> > /dev/ide/host2/bus1/target0/lun0/disc too small for target
> > device-mapper: internal error adding target to table
> > device-mapper: destroying table
> > device-mapper ioctl cmd 2 failed: Invalid argument
> > Couldn't load device 'cheese_vg-blah'.
> > 0 logical volume(s) in volume group "cheese_vg" now active
> > lvm2.
> >
> > Was fine (minus of course the entire bio thing) in 50, did something
> > break in 51 or is it just my box?
>
> I've had a couple of reports of this problem. The offending patch is:
>
> http://people.sistina.com/~thornber/patches/2.5-stable/2.5.51/2.5.51-dm-1/00005.patch
>
> back it out if necessary.
>
> All it does is:
>
> --- diff/drivers/md/dm-table.c 2002-12-11 11:59:51.000000000 +0000
> +++ source/drivers/md/dm-table.c 2002-12-11 12:00:00.000000000 +0000
> @@ -388,7 +388,7 @@
> static int check_device_area(struct dm_dev *dd, sector_t start, sector_t len)
> {
> sector_t dev_size;
> - dev_size = dd->bdev->bd_inode->i_size;
> + dev_size = dd->bdev->bd_inode->i_size >> SECTOR_SHIFT;
> return ((start < dev_size) && (len <= (dev_size - start)));
> }
Actually, this 00005.patch *is* necessary. dd->bdev->bd_inode->i_size *is* in
bytes, and does need to be shifted to do the above comparison.
I believe we have tracked the problem down to the call to dm_get_device() in
dm-linear.c. It is passing in an incorrect value, which winds up being the
"start" parameter to the check_device_area() function. I've included a patch
at the end of this email which I believe should fix the problem. I have also
checked dm-stripe.c, and it appears to make the call to dm_get_device()
correctly, so no worries there.
--
Kevin Corry
corryk@us.ibm.com
http://evms.sourceforge.net/
--- linux-2.5.51a/drivers/md/dm-linear.c 2002/11/20 20:09:22 1.1
+++ linux-2.5.51b/drivers/md/dm-linear.c 2002/12/12 21:38:32
@@ -43,7 +43,7 @@
goto bad;
}
- if (dm_get_device(ti, argv[0], ti->begin, ti->len,
+ if (dm_get_device(ti, argv[0], lc->start, ti->len,
dm_table_get_mode(ti->table), &lc->dev)) {
ti->error = "dm-linear: Device lookup failed";
goto bad;