* [PATCH 000 of 3] md: Introduction - 3 assorted md fixes
From: NeilBrown @ 2006-03-24 5:58 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
Three little fixes.
The last is possibly the most interesting, as it highlights how wrong I managed
to get the BIO_BARRIER stuff in raid1, which I really thought I had tested.
I'm happy for this and the previous collection of raid5-growth patches
to be merged into 2.6.17-rc1. I had hoped the raid5-growth patches could sit
in -mm a bit longer, but I didn't get them there in time, and I'd
rather not wait until after 2.6.17. They have received a reasonable
amount of testing both by me and others and appear to be safe.
Thanks,
NeilBrown
[PATCH 001 of 3] md: Remove bi_end_io call out from under a spinlock.
[PATCH 002 of 3] md: Fix md grow/size code to correctly find the maximum available space.
[PATCH 003 of 3] md: Restore 'remaining' count when retrying a write operation.
* [PATCH 001 of 3] md: Remove bi_end_io call out from under a spinlock.
From: NeilBrown @ 2006-03-24 5:59 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
raid5 overloads bi_phys_segments to count the number of blocks that
the request was broken into so that it knows when the bio is completely handled.
Accessing this must always be done under a spinlock. In one case we
also call bi_end_io under that spinlock, which probably isn't ideal as
bi_end_io could be expensive (even though it isn't allowed to sleep).
So we reduce the range of the spinlock to cover just the access to bi_phys_segments.
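For illustration only, here is a minimal userspace sketch of that pattern
(the names counter_lock, phys_segments, request_done() and
complete_one_segment() are invented for this example; they are not raid5 or
block-layer APIs): the shared counter is only touched under the lock, but the
potentially expensive completion callback runs after the lock has been dropped.

/* Illustrative userspace sketch of the locking pattern; not raid5 code. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static int phys_segments = 4;          /* sub-requests still outstanding */

/* Completion callback: possibly expensive, so never call it with
 * counter_lock held. */
static void request_done(void)
{
        printf("whole request complete\n");
}

/* Called once per finished sub-request, possibly from several threads. */
static void complete_one_segment(void)
{
        int remaining;

        pthread_mutex_lock(&counter_lock);
        remaining = --phys_segments;   /* only the counter update is locked */
        pthread_mutex_unlock(&counter_lock);

        if (remaining == 0)            /* the callback runs outside the lock */
                request_done();
}

int main(void)
{
        for (int i = 0; i < 4; i++)
                complete_one_segment();
        return 0;
}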
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/raid5.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff ./drivers/md/raid5.c~current~ ./drivers/md/raid5.c
--- ./drivers/md/raid5.c~current~ 2006-03-24 14:01:30.000000000 +1100
+++ ./drivers/md/raid5.c 2006-03-24 14:06:47.000000000 +1100
@@ -1743,6 +1743,7 @@ static int make_request(request_queue_t
sector_t logical_sector, last_sector;
struct stripe_head *sh;
const int rw = bio_data_dir(bi);
+ int remaining;
if (unlikely(bio_barrier(bi))) {
bio_endio(bi, bi->bi_size, -EOPNOTSUPP);
@@ -1852,7 +1853,9 @@ static int make_request(request_queue_t
}
spin_lock_irq(&conf->device_lock);
- if (--bi->bi_phys_segments == 0) {
+ remaining = --bi->bi_phys_segments;
+ spin_unlock_irq(&conf->device_lock);
+ if (remaining == 0) {
int bytes = bi->bi_size;
if ( bio_data_dir(bi) == WRITE )
@@ -1860,7 +1863,6 @@ static int make_request(request_queue_t
bi->bi_size = 0;
bi->bi_end_io(bi, bytes, 0);
}
- spin_unlock_irq(&conf->device_lock);
return 0;
}
* [PATCH 002 of 3] md: Fix md grow/size code to correctly find the maximum available space.
From: NeilBrown @ 2006-03-24 6:00 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
An md array can be asked to change the amount of each device that it
is using, and in particular can be asked to use the maximum available
space. This currently only works if the first device is not larger
than the rest, because 'size' gets changed as the devices are examined, and so
the 'fit' test becomes wrong for the later devices. So compute whether a 'fit'
is required once, early on, before 'size' can be changed.
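The pattern being fixed can be shown with a small standalone sketch (the
avail[] values and variable names below are made up for illustration): if the
"use the maximum" flag is recomputed from 'size' inside the loop that also
updates 'size', every device after the first sees the wrong flag, so the test
has to be decided once, before the loop.

/* Standalone illustration of the 'fit' problem; the numbers are made up. */
#include <stdio.h>

int main(void)
{
        unsigned long long avail[] = { 500, 400, 450 }; /* per-device space */
        unsigned long long size = 0;   /* 0 means "use the maximum that fits" */
        int fit = (size == 0);         /* fixed placement: decided once, up front */
        int i;

        for (i = 0; i < 3; i++) {
                /* The buggy code computed 'fit = (size == 0)' here: after the
                 * first pass size is non-zero, fit becomes 0, and a large
                 * first device is never shrunk to fit the smaller ones. */
                if (fit && (size == 0 || size > avail[i]))
                        size = avail[i];
                if (avail[i] < size) {
                        printf("device %d is too small\n", i);
                        return 1;
                }
        }
        printf("array component size = %llu\n", size);
        return 0;
}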
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/md.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff ./drivers/md/md.c~current~ ./drivers/md/md.c
--- ./drivers/md/md.c~current~ 2006-03-24 14:01:30.000000000 +1100
+++ ./drivers/md/md.c 2006-03-24 14:06:49.000000000 +1100
@@ -3575,6 +3575,7 @@ static int update_size(mddev_t *mddev, u
mdk_rdev_t * rdev;
int rv;
struct list_head *tmp;
+ int fit = (size == 0);
if (mddev->pers->resize == NULL)
return -EINVAL;
@@ -3592,7 +3593,6 @@ static int update_size(mddev_t *mddev, u
return -EBUSY;
ITERATE_RDEV(mddev,rdev,tmp) {
sector_t avail;
- int fit = (size == 0);
if (rdev->sb_offset > rdev->data_offset)
avail = (rdev->sb_offset*2) - rdev->data_offset;
else
* [PATCH 003 of 3] md: Restore 'remaining' count when retrying a write operation.
From: NeilBrown @ 2006-03-24 6:00 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
When retrying a write due to a barrier failure, we don't re-increment
'remaining' for the bios being resubmitted, so it goes negative and never hits 0 again.
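A toy model of that accounting (struct toy_request, toy_end_write() and
toy_retry_write() are invented names, not the raid1 structures): the
per-request counter must be raised once for every bio about to be reissued,
otherwise the completions drive it below zero and the "last one finished"
test never fires, which is what the hunk below restores.

/* Toy model of the 'remaining' accounting; not raid1 code. */
#include <stdatomic.h>
#include <stdio.h>

#define NDISKS 3

struct toy_request {
        atomic_int remaining;
        int bios[NDISKS];       /* non-zero: this disk gets a write */
};

/* Completion path: the last writer to finish signals the whole request. */
static void toy_end_write(struct toy_request *r)
{
        if (atomic_fetch_sub(&r->remaining, 1) == 1)
                printf("request fully complete\n");
}

/* Retry path: raise 'remaining' once per bio *before* reissuing anything;
 * skipping this first loop is the kind of bug being fixed. */
static void toy_retry_write(struct toy_request *r)
{
        for (int i = 0; i < NDISKS; i++)
                if (r->bios[i])
                        atomic_fetch_add(&r->remaining, 1);

        for (int i = 0; i < NDISKS; i++)
                if (r->bios[i])
                        toy_end_write(r);  /* stands in for the real I/O */
}

int main(void)
{
        struct toy_request r = { .bios = { 1, 1, 1 } };

        atomic_init(&r.remaining, 0);
        toy_retry_write(&r);
        return 0;
}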
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/raid1.c | 3 +++
1 file changed, 3 insertions(+)
diff ./drivers/md/raid1.c~current~ ./drivers/md/raid1.c
--- ./drivers/md/raid1.c~current~ 2006-03-24 14:01:30.000000000 +1100
+++ ./drivers/md/raid1.c 2006-03-24 14:06:49.000000000 +1100
@@ -1402,6 +1402,9 @@ static void raid1d(mddev_t *mddev)
clear_bit(R1BIO_BarrierRetry, &r1_bio->state);
clear_bit(R1BIO_Barrier, &r1_bio->state);
for (i=0; i < conf->raid_disks; i++)
+ if (r1_bio->bios[i])
+ atomic_inc(&r1_bio->remaining);
+ for (i=0; i < conf->raid_disks; i++)
if (r1_bio->bios[i]) {
struct bio_vec *bvec;
int j;