public inbox for linux-kernel@vger.kernel.org
* [PATCH 000 of 5] Introductory patches for bio refactor.
@ 2007-08-16  5:13 NeilBrown
  2007-08-16  5:13 ` [PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge NeilBrown
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: NeilBrown @ 2007-08-16  5:13 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tejun Heo


Hi Jens,
 I wonder if you would accept these patches to the block layer.
 They are, as far as I can tell, quite uncontroversial and provide
 good cleanups.

 The first is a minor bug-fix.
 The next two replace helper functions that take a bio (always the
 first bio of a request) with ones that take a request instead.

 The last two combine to replace rq_for_each_bio with
 rq_for_each_segment which makes blk_recalc_rq_segments a lot
 cleaner, and improves (to a lesser degree) every other place that
 used rq_for_each_bio.

 The net result is a decrease in object code size (for my x86-64 compile)
 for block/built-in.o of 96 bytes, though there is some growth out in
 the drivers, making an over-all decrease of only 48 bytes.

Thanks,
NeilBrown



 [PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge.
 [PATCH 002 of 5] Replace bio_data with blk_rq_data
 [PATCH 003 of 5] Replace bio_cur_sectors with blk_rq_cur_sectors.
 [PATCH 004 of 5] Introduce rq_for_each_segment replacing rq_for_each_bio
 [PATCH 005 of 5] Merge blk_recount_segments into blk_recalc_rq_segments

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge.
  2007-08-16  5:13 [PATCH 000 of 5] Introductory patches for bio refactor NeilBrown
@ 2007-08-16  5:13 ` NeilBrown
  2007-08-16  7:01   ` Jens Axboe
  2007-08-16  5:13 ` [PATCH 002 of 5] Replace bio_data with blk_rq_data NeilBrown
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: NeilBrown @ 2007-08-16  5:13 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tejun Heo


ll_merge_requests_fn can update bi_hw_*_size in one case where we end
up not merging.  This is wrong: the check against q->max_hw_segments
must be made before the bios are modified.

Signed-off-by: Neil Brown <neilb@suse.de>

### Diffstat output
 ./block/ll_rw_blk.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
--- .prev/block/ll_rw_blk.c	2007-08-16 14:57:39.000000000 +1000
+++ ./block/ll_rw_blk.c	2007-08-16 15:02:29.000000000 +1000
@@ -1518,11 +1518,13 @@ static int ll_merge_requests_fn(struct r
 		/*
 		 * propagate the combined length to the end of the requests
 		 */
+		total_hw_segments--;
+		if (total_hw_segments > q->max_hw_segments)
+			return 0;
 		if (req->nr_hw_segments == 1)
 			req->bio->bi_hw_front_size = len;
 		if (next->nr_hw_segments == 1)
 			next->biotail->bi_hw_back_size = len;
-		total_hw_segments--;
 	}
 
 	if (total_hw_segments > q->max_hw_segments)


* [PATCH 002 of 5] Replace bio_data with blk_rq_data
  2007-08-16  5:13 [PATCH 000 of 5] Introductory patches for bio refactor NeilBrown
  2007-08-16  5:13 ` [PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge NeilBrown
@ 2007-08-16  5:13 ` NeilBrown
  2007-08-16  7:02   ` Jens Axboe
  2007-08-16  5:13 ` [PATCH 003 of 5] Replace bio_cur_sectors with blk_rq_cur_sectors NeilBrown
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: NeilBrown @ 2007-08-16  5:13 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tejun Heo


Almost every call to bio_data is for the first bio
in a request.  A future patch will add some accounting
information to 'struct request' which will need to be
used to find the start of the request in the bio.
So replace bio_data with blk_rq_data, which takes a 'struct request *'.

The one exception is in dm-emc, where using
   page_address(bio->bi_io_vec[0].bv_page);
is appropriate.
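The relationship between the old and new helpers can be sketched in a few lines of userspace C. The structures below are simplified stand-ins, not the real kernel layouts (in particular, bv_page_addr replaces the real page pointer plus page_address() lookup); the point is only that blk_rq_data reaches the same address as bio_data, but starting from the request:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures (illustration only,
 * not the real struct bio / struct request layouts). */
struct bio_vec { char *bv_page_addr; unsigned bv_offset; unsigned bv_len; };
struct bio { struct bio_vec vec[4]; unsigned bi_idx; };
struct request { struct bio *bio; };

/* Mirror of the old bio_data() macro: address of the current segment. */
static void *bio_data(struct bio *bio)
{
	return bio->vec[bio->bi_idx].bv_page_addr
		+ bio->vec[bio->bi_idx].bv_offset;
}

/* The patch's blk_rq_data(): the same address, but reached via the
 * request, so per-request accounting can later be folded in here. */
static void *blk_rq_data(struct request *rq)
{
	return bio_data(rq->bio);
}

/* Example data: one 1024-byte segment starting 512 bytes into a page. */
static char sample_page[4096];
static struct bio sample_bio = {
	.vec = { { sample_page, 512, 1024 } }, .bi_idx = 0 };
static struct request sample_rq = { .bio = &sample_bio };
```

Every converted call site in the diff below is this substitution: the caller hands over the request it already holds instead of dereferencing rq->bio itself.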

Signed-off-by: Neil Brown <neilb@suse.de>

### Diffstat output
 ./block/ll_rw_blk.c                  |   16 ++++++++++++----
 ./drivers/block/floppy.c             |    2 +-
 ./drivers/ide/ide-cd.c               |   11 ++++++-----
 ./drivers/ide/ide-io.c               |    2 +-
 ./drivers/md/dm-emc.c                |    2 +-
 ./drivers/message/fusion/mptsas.c    |    4 ++--
 ./drivers/scsi/libsas/sas_expander.c |    4 ++--
 ./include/linux/bio.h                |    4 +---
 ./include/linux/blkdev.h             |    1 +
 9 files changed, 27 insertions(+), 19 deletions(-)

diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
--- .prev/block/ll_rw_blk.c	2007-08-16 15:02:29.000000000 +1000
+++ ./block/ll_rw_blk.c	2007-08-16 15:02:30.000000000 +1000
@@ -2907,8 +2907,8 @@ static void init_request_from_bio(struct
 	req->current_nr_sectors = req->hard_cur_sectors = bio_cur_sectors(bio);
 	req->nr_phys_segments = bio_phys_segments(req->q, bio);
 	req->nr_hw_segments = bio_hw_segments(req->q, bio);
-	req->buffer = bio_data(bio);	/* see ->buffer comment above */
 	req->bio = req->biotail = bio;
+	req->buffer = blk_rq_data(req);	/* see ->buffer comment above */
 	req->ioprio = bio_prio(bio);
 	req->rq_disk = bio->bi_bdev->bd_disk;
 	req->start_time = jiffies;
@@ -2977,7 +2977,7 @@ static int __make_request(struct request
 			 * it didn't need a bounce buffer then it better
 			 * not touch req->buffer either...
 			 */
-			req->buffer = bio_data(bio);
+			req->buffer = blk_rq_data(req);
 			req->current_nr_sectors = bio_cur_sectors(bio);
 			req->hard_cur_sectors = req->current_nr_sectors;
 			req->sector = req->hard_sector = bio->bi_sector;
@@ -3373,7 +3373,7 @@ static void blk_recalc_rq_sectors(struct
 			rq->nr_sectors = rq->hard_nr_sectors;
 			rq->hard_cur_sectors = bio_cur_sectors(rq->bio);
 			rq->current_nr_sectors = rq->hard_cur_sectors;
-			rq->buffer = bio_data(rq->bio);
+			rq->buffer = blk_rq_data(rq);
 		}
 
 		/*
@@ -3678,14 +3678,22 @@ void blk_rq_bio_prep(struct request_queu
 	rq->current_nr_sectors = bio_cur_sectors(bio);
 	rq->hard_cur_sectors = rq->current_nr_sectors;
 	rq->hard_nr_sectors = rq->nr_sectors = bio_sectors(bio);
-	rq->buffer = bio_data(bio);
 	rq->data_len = bio->bi_size;
 
 	rq->bio = rq->biotail = bio;
+	rq->buffer = blk_rq_data(rq);
 }
 
 EXPORT_SYMBOL(blk_rq_bio_prep);
 
+void *blk_rq_data(struct request *rq)
+{
+	return page_address(bio_page(rq->bio)) +
+		bio_offset(rq->bio);
+}
+EXPORT_SYMBOL(blk_rq_data);
+
+
 int kblockd_schedule_work(struct work_struct *work)
 {
 	return queue_work(kblockd_workqueue, work);

diff .prev/drivers/block/floppy.c ./drivers/block/floppy.c
--- .prev/drivers/block/floppy.c	2007-08-16 14:57:31.000000000 +1000
+++ ./drivers/block/floppy.c	2007-08-16 15:02:30.000000000 +1000
@@ -2456,7 +2456,7 @@ static int buffer_chain_size(void)
 	int i;
 	char *base;
 
-	base = bio_data(current_req->bio);
+	base = blk_rq_data(current_req);
 	size = 0;
 
 	rq_for_each_bio(bio, current_req) {

diff .prev/drivers/ide/ide-cd.c ./drivers/ide/ide-cd.c
--- .prev/drivers/ide/ide-cd.c	2007-08-16 14:57:31.000000000 +1000
+++ ./drivers/ide/ide-cd.c	2007-08-16 15:02:30.000000000 +1000
@@ -1381,10 +1381,11 @@ static ide_startstop_t cdrom_start_seek 
    start it over entirely, or even put it back on the request queue. */
 static void restore_request (struct request *rq)
 {
-	if (rq->buffer != bio_data(rq->bio)) {
-		sector_t n = (rq->buffer - (char *) bio_data(rq->bio)) / SECTOR_SIZE;
+	if (rq->buffer != blk_rq_data(rq)) {
+		sector_t n = (rq->buffer - (char *)blk_rq_data(rq))
+			/ SECTOR_SIZE;
 
-		rq->buffer = bio_data(rq->bio);
+		rq->buffer = blk_rq_data(rq);
 		rq->nr_sectors += n;
 		rq->sector -= n;
 	}
@@ -1659,7 +1660,7 @@ static void post_transform_command(struc
 		return;
 
 	if (req->bio)
-		ibuf = bio_data(req->bio);
+		ibuf = blk_rq_data(req);
 	else
 		ibuf = req->data;
 
@@ -1768,7 +1769,7 @@ static ide_startstop_t cdrom_newpc_intr(
 		 * bio backed?
 		 */
 		if (rq->bio) {
-			ptr = bio_data(rq->bio);
+			ptr = blk_rq_data(rq);
 			blen = bio_iovec(rq->bio)->bv_len;
 		}
 

diff .prev/drivers/ide/ide-io.c ./drivers/ide/ide-io.c
--- .prev/drivers/ide/ide-io.c	2007-08-16 14:57:31.000000000 +1000
+++ ./drivers/ide/ide-io.c	2007-08-16 15:02:30.000000000 +1000
@@ -1426,7 +1426,7 @@ static ide_startstop_t ide_dma_timeout_r
 	rq->sector = rq->bio->bi_sector;
 	rq->current_nr_sectors = bio_iovec(rq->bio)->bv_len >> 9;
 	rq->hard_cur_sectors = rq->current_nr_sectors;
-	rq->buffer = bio_data(rq->bio);
+	rq->buffer = blk_rq_data(rq);
 out:
 	return ret;
 }

diff .prev/drivers/md/dm-emc.c ./drivers/md/dm-emc.c
--- .prev/drivers/md/dm-emc.c	2007-08-16 14:57:31.000000000 +1000
+++ ./drivers/md/dm-emc.c	2007-08-16 15:02:30.000000000 +1000
@@ -167,7 +167,7 @@ static struct request *emc_trespass_get(
 		return NULL;
 	}
 
-	page22 = (unsigned char *)bio_data(bio);
+	page22 = (unsigned char *)page_address(bio->bi_io_vec[0].bv_page);
 	memset(page22, 0, data_size);
 
 	memcpy(page22, h->short_trespass ?

diff .prev/drivers/message/fusion/mptsas.c ./drivers/message/fusion/mptsas.c
--- .prev/drivers/message/fusion/mptsas.c	2007-08-16 14:57:31.000000000 +1000
+++ ./drivers/message/fusion/mptsas.c	2007-08-16 15:02:30.000000000 +1000
@@ -1382,7 +1382,7 @@ static int mptsas_smp_handler(struct Scs
 		       mpt_addr_size()) << MPI_SGE_FLAGS_SHIFT;
 	flagsLength |= (req->data_len - 4);
 
-	dma_addr_out = pci_map_single(ioc->pcidev, bio_data(req->bio),
+	dma_addr_out = pci_map_single(ioc->pcidev, blk_rq_data(req),
 				      req->data_len, PCI_DMA_BIDIRECTIONAL);
 	if (!dma_addr_out)
 		goto put_mf;
@@ -1392,7 +1392,7 @@ static int mptsas_smp_handler(struct Scs
 	/* response */
 	flagsLength = MPT_SGE_FLAGS_SSIMPLE_READ;
 	flagsLength |= rsp->data_len + 4;
-	dma_addr_in =  pci_map_single(ioc->pcidev, bio_data(rsp->bio),
+	dma_addr_in =  pci_map_single(ioc->pcidev, blk_rq_data(rsp),
 				      rsp->data_len, PCI_DMA_BIDIRECTIONAL);
 	if (!dma_addr_in)
 		goto unmap;

diff .prev/drivers/scsi/libsas/sas_expander.c ./drivers/scsi/libsas/sas_expander.c
--- .prev/drivers/scsi/libsas/sas_expander.c	2007-08-16 14:57:31.000000000 +1000
+++ ./drivers/scsi/libsas/sas_expander.c	2007-08-16 15:02:30.000000000 +1000
@@ -1924,8 +1924,8 @@ int sas_smp_handler(struct Scsi_Host *sh
 		return -EINVAL;
 	}
 
-	ret = smp_execute_task(dev, bio_data(req->bio), req->data_len,
-			       bio_data(rsp->bio), rsp->data_len);
+	ret = smp_execute_task(dev, blk_rq_data(req), req->data_len,
+			       blk_rq_data(rsp), rsp->data_len);
 
 	return ret;
 }

diff .prev/include/linux/bio.h ./include/linux/bio.h
--- .prev/include/linux/bio.h	2007-08-16 14:57:31.000000000 +1000
+++ ./include/linux/bio.h	2007-08-16 15:02:30.000000000 +1000
@@ -167,8 +167,7 @@ struct bio {
 } while (0)
 
 /*
- * various member access, note that bio_data should of course not be used
- * on highmem page vectors
+ * various member access
  */
 #define bio_iovec_idx(bio, idx)	(&((bio)->bi_io_vec[(idx)]))
 #define bio_iovec(bio)		bio_iovec_idx((bio), (bio)->bi_idx)
@@ -177,7 +176,6 @@ struct bio {
 #define bio_segments(bio)	((bio)->bi_vcnt - (bio)->bi_idx)
 #define bio_sectors(bio)	((bio)->bi_size >> 9)
 #define bio_cur_sectors(bio)	(bio_iovec(bio)->bv_len >> 9)
-#define bio_data(bio)		(page_address(bio_page((bio))) + bio_offset((bio)))
 #define bio_barrier(bio)	((bio)->bi_rw & (1 << BIO_RW_BARRIER))
 #define bio_sync(bio)		((bio)->bi_rw & (1 << BIO_RW_SYNC))
 #define bio_failfast(bio)	((bio)->bi_rw & (1 << BIO_RW_FAILFAST))

diff .prev/include/linux/blkdev.h ./include/linux/blkdev.h
--- .prev/include/linux/blkdev.h	2007-08-16 14:57:31.000000000 +1000
+++ ./include/linux/blkdev.h	2007-08-16 15:02:30.000000000 +1000
@@ -700,6 +700,7 @@ extern int blk_execute_rq(struct request
 extern void blk_execute_rq_nowait(struct request_queue *, struct gendisk *,
 				  struct request *, int, rq_end_io_fn *);
 extern int blk_verify_command(unsigned char *, int);
+extern void *blk_rq_data(struct request *);
 
 static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
 {


* [PATCH 003 of 5] Replace bio_cur_sectors with blk_rq_cur_sectors.
  2007-08-16  5:13 [PATCH 000 of 5] Introductory patches for bio refactor NeilBrown
  2007-08-16  5:13 ` [PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge NeilBrown
  2007-08-16  5:13 ` [PATCH 002 of 5] Replace bio_data with blk_rq_data NeilBrown
@ 2007-08-16  5:13 ` NeilBrown
  2007-08-16  5:13 ` [PATCH 004 of 5] Introduce rq_for_each_segment replacing rq_for_each_bio NeilBrown
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: NeilBrown @ 2007-08-16  5:13 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tejun Heo


All calls to bio_cur_sectors are for the first bio in a 'struct request'.
A future patch will make the discovery of this number dependent on
information in the request.  So change the function to take a
'struct request *' instead of a 'struct bio *', and make it a real
function, as more code will need to be added.

One place wants the current bytes rather than sectors, so the
'real function' we create is blk_rq_cur_bytes, and
blk_rq_cur_sectors divides this value by 512.
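The split between the two helpers is small enough to sketch in userspace C. As before, the structures are simplified stand-ins rather than the real kernel layouts; the bytes/sectors relationship is the part taken from the patch:

```c
#include <assert.h>

/* Simplified stand-ins (illustration only, not the real kernel layouts). */
struct bio_vec { unsigned bv_len; };
struct bio { struct bio_vec vec[4]; unsigned bi_idx; };
struct request { struct bio *bio; };

/* The real function: bytes in the current segment of the first bio. */
static int blk_rq_cur_bytes(struct request *rq)
{
	return rq->bio->vec[rq->bio->bi_idx].bv_len;
}

/* The inline wrapper: 512-byte sectors, via the usual >> 9 shift. */
static int blk_rq_cur_sectors(struct request *rq)
{
	return blk_rq_cur_bytes(rq) >> 9;
}

/* Example data: a single 4096-byte segment, i.e. eight 512-byte sectors. */
static struct bio ex_bio = { .vec = { { 4096 } }, .bi_idx = 0 };
static struct request ex_rq = { .bio = &ex_bio };
```

Making blk_rq_cur_bytes a real (exported) function rather than a macro is what leaves room for the future per-request accounting to be added in one place.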

Signed-off-by: Neil Brown <neilb@suse.de>

### Diffstat output
 ./block/ll_rw_blk.c      |   18 ++++++++++++------
 ./drivers/ide/ide-cd.c   |   11 ++++++-----
 ./drivers/ide/ide-io.c   |    2 +-
 ./include/linux/bio.h    |    1 -
 ./include/linux/blkdev.h |    5 +++++
 5 files changed, 24 insertions(+), 13 deletions(-)

diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
--- .prev/block/ll_rw_blk.c	2007-08-16 15:02:30.000000000 +1000
+++ ./block/ll_rw_blk.c	2007-08-16 15:02:30.000000000 +1000
@@ -2904,10 +2904,11 @@ static void init_request_from_bio(struct
 	req->errors = 0;
 	req->hard_sector = req->sector = bio->bi_sector;
 	req->hard_nr_sectors = req->nr_sectors = bio_sectors(bio);
-	req->current_nr_sectors = req->hard_cur_sectors = bio_cur_sectors(bio);
+	req->bio = req->biotail = bio;
+	req->current_nr_sectors = req->hard_cur_sectors =
+		blk_rq_cur_sectors(req);
 	req->nr_phys_segments = bio_phys_segments(req->q, bio);
 	req->nr_hw_segments = bio_hw_segments(req->q, bio);
-	req->bio = req->biotail = bio;
 	req->buffer = blk_rq_data(req);	/* see ->buffer comment above */
 	req->ioprio = bio_prio(bio);
 	req->rq_disk = bio->bi_bdev->bd_disk;
@@ -2978,7 +2979,7 @@ static int __make_request(struct request
 			 * not touch req->buffer either...
 			 */
 			req->buffer = blk_rq_data(req);
-			req->current_nr_sectors = bio_cur_sectors(bio);
+			req->current_nr_sectors = blk_rq_cur_sectors(req);
 			req->hard_cur_sectors = req->current_nr_sectors;
 			req->sector = req->hard_sector = bio->bi_sector;
 			req->nr_sectors = req->hard_nr_sectors += nr_sectors;
@@ -3371,7 +3372,7 @@ static void blk_recalc_rq_sectors(struct
 		    (rq->sector <= rq->hard_sector)) {
 			rq->sector = rq->hard_sector;
 			rq->nr_sectors = rq->hard_nr_sectors;
-			rq->hard_cur_sectors = bio_cur_sectors(rq->bio);
+			rq->hard_cur_sectors = blk_rq_cur_sectors(rq);
 			rq->current_nr_sectors = rq->hard_cur_sectors;
 			rq->buffer = blk_rq_data(rq);
 		}
@@ -3675,13 +3676,13 @@ void blk_rq_bio_prep(struct request_queu
 
 	rq->nr_phys_segments = bio_phys_segments(q, bio);
 	rq->nr_hw_segments = bio_hw_segments(q, bio);
-	rq->current_nr_sectors = bio_cur_sectors(bio);
-	rq->hard_cur_sectors = rq->current_nr_sectors;
 	rq->hard_nr_sectors = rq->nr_sectors = bio_sectors(bio);
 	rq->data_len = bio->bi_size;
 
 	rq->bio = rq->biotail = bio;
 	rq->buffer = blk_rq_data(rq);
+	rq->current_nr_sectors = blk_rq_cur_sectors(rq);
+	rq->hard_cur_sectors = rq->current_nr_sectors;
 }
 
 EXPORT_SYMBOL(blk_rq_bio_prep);
@@ -3693,6 +3694,11 @@ void *blk_rq_data(struct request *rq)
 }
 EXPORT_SYMBOL(blk_rq_data);
 
+int blk_rq_cur_bytes(struct request *rq)
+{
+	return bio_iovec(rq->bio)->bv_len;
+}
+EXPORT_SYMBOL(blk_rq_cur_bytes);
 
 int kblockd_schedule_work(struct work_struct *work)
 {

diff .prev/drivers/ide/ide-cd.c ./drivers/ide/ide-cd.c
--- .prev/drivers/ide/ide-cd.c	2007-08-16 15:02:30.000000000 +1000
+++ ./drivers/ide/ide-cd.c	2007-08-16 15:02:30.000000000 +1000
@@ -1173,7 +1173,8 @@ static ide_startstop_t cdrom_read_intr (
 
 	/* First, figure out if we need to bit-bucket
 	   any of the leading sectors. */
-	nskip = min_t(int, rq->current_nr_sectors - bio_cur_sectors(rq->bio), sectors_to_transfer);
+	nskip = min_t(int, rq->current_nr_sectors - blk_rq_cur_sectors(rq),
+		      sectors_to_transfer);
 
 	while (nskip > 0) {
 		/* We need to throw away a sector. */
@@ -1273,7 +1274,7 @@ static int cdrom_read_from_buffer (ide_d
 	   represent the number of sectors to skip at the start of a transfer
 	   will fail.  I think that this will never happen, but let's be
 	   paranoid and check. */
-	if (rq->current_nr_sectors < bio_cur_sectors(rq->bio) &&
+	if (rq->current_nr_sectors < blk_rq_cur_sectors(rq) &&
 	    (rq->sector & (sectors_per_frame - 1))) {
 		printk(KERN_ERR "%s: cdrom_read_from_buffer: buffer botch (%ld)\n",
 			drive->name, (long)rq->sector);
@@ -1308,7 +1309,7 @@ static ide_startstop_t cdrom_start_read_
 	nskip = rq->sector & (sectors_per_frame - 1);
 	if (nskip > 0) {
 		/* Sanity check... */
-		if (rq->current_nr_sectors != bio_cur_sectors(rq->bio) &&
+		if (rq->current_nr_sectors != blk_rq_cur_sectors(rq) &&
 			(rq->sector & (sectors_per_frame - 1))) {
 			printk(KERN_ERR "%s: cdrom_start_read_continuation: buffer botch (%u)\n",
 				drive->name, rq->current_nr_sectors);
@@ -1389,7 +1390,7 @@ static void restore_request (struct requ
 		rq->nr_sectors += n;
 		rq->sector -= n;
 	}
-	rq->hard_cur_sectors = rq->current_nr_sectors = bio_cur_sectors(rq->bio);
+	rq->hard_cur_sectors = rq->current_nr_sectors = blk_rq_cur_sectors(rq);
 	rq->hard_nr_sectors = rq->nr_sectors;
 	rq->hard_sector = rq->sector;
 	rq->q->prep_rq_fn(rq->q, rq);
@@ -1770,7 +1771,7 @@ static ide_startstop_t cdrom_newpc_intr(
 		 */
 		if (rq->bio) {
 			ptr = blk_rq_data(rq);
-			blen = bio_iovec(rq->bio)->bv_len;
+			blen = blk_rq_cur_bytes(rq);
 		}
 
 		if (!ptr) {

diff .prev/drivers/ide/ide-io.c ./drivers/ide/ide-io.c
--- .prev/drivers/ide/ide-io.c	2007-08-16 15:02:30.000000000 +1000
+++ ./drivers/ide/ide-io.c	2007-08-16 15:02:30.000000000 +1000
@@ -1424,7 +1424,7 @@ static ide_startstop_t ide_dma_timeout_r
 		goto out;
 
 	rq->sector = rq->bio->bi_sector;
-	rq->current_nr_sectors = bio_iovec(rq->bio)->bv_len >> 9;
+	rq->current_nr_sectors = blk_rq_cur_sectors(rq);
 	rq->hard_cur_sectors = rq->current_nr_sectors;
 	rq->buffer = blk_rq_data(rq);
 out:

diff .prev/include/linux/bio.h ./include/linux/bio.h
--- .prev/include/linux/bio.h	2007-08-16 15:02:30.000000000 +1000
+++ ./include/linux/bio.h	2007-08-16 15:02:30.000000000 +1000
@@ -175,7 +175,6 @@ struct bio {
 #define bio_offset(bio)		bio_iovec((bio))->bv_offset
 #define bio_segments(bio)	((bio)->bi_vcnt - (bio)->bi_idx)
 #define bio_sectors(bio)	((bio)->bi_size >> 9)
-#define bio_cur_sectors(bio)	(bio_iovec(bio)->bv_len >> 9)
 #define bio_barrier(bio)	((bio)->bi_rw & (1 << BIO_RW_BARRIER))
 #define bio_sync(bio)		((bio)->bi_rw & (1 << BIO_RW_SYNC))
 #define bio_failfast(bio)	((bio)->bi_rw & (1 << BIO_RW_FAILFAST))

diff .prev/include/linux/blkdev.h ./include/linux/blkdev.h
--- .prev/include/linux/blkdev.h	2007-08-16 15:02:30.000000000 +1000
+++ ./include/linux/blkdev.h	2007-08-16 15:02:30.000000000 +1000
@@ -701,6 +701,11 @@ extern void blk_execute_rq_nowait(struct
 				  struct request *, int, rq_end_io_fn *);
 extern int blk_verify_command(unsigned char *, int);
 extern void *blk_rq_data(struct request *);
+extern int blk_rq_cur_bytes(struct request *);
+static inline int blk_rq_cur_sectors(struct request *rq)
+{
+	return blk_rq_cur_bytes(rq) >> 9;
+}
 
 static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
 {


* [PATCH 004 of 5] Introduce rq_for_each_segment replacing rq_for_each_bio
  2007-08-16  5:13 [PATCH 000 of 5] Introductory patches for bio refactor NeilBrown
                   ` (2 preceding siblings ...)
  2007-08-16  5:13 ` [PATCH 003 of 5] Replace bio_cur_sectors with blk_rq_cur_sectors NeilBrown
@ 2007-08-16  5:13 ` NeilBrown
  2007-08-16  5:13 ` [PATCH 005 of 5] Merge blk_recount_segments into blk_recalc_rq_segments NeilBrown
  2007-08-16  6:37 ` [PATCH 000 of 5] Introductory patches for bio refactor Jens Axboe
  5 siblings, 0 replies; 14+ messages in thread
From: NeilBrown @ 2007-08-16  5:13 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tejun Heo


Almost every usage of rq_for_each_bio wraps a usage of
bio_for_each_segment, so the two can be combined into a single
iterator, rq_for_each_segment.

The new iterator fills in a bio_vec structure rather than providing a
pointer into the bio's vector, as future changes to make bi_io_vec
immutable will require that.

The one place where rq_for_each_bio remains will be changed to use
rq_for_each_segment in a subsequent patch.
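The shape of the combined iterator can be sketched in userspace C. This is a rough model under simplified stand-in structures, assuming the nested-loop form the patch describes (an outer walk over the bio chain, an inner walk over each bio's segments, copying each bio_vec by value into the caller's variable):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins (illustration only, not the real kernel layouts). */
struct bio_vec { unsigned bv_len; };
struct bio {
	struct bio_vec vec[4];
	unsigned bi_idx;	/* first live segment */
	unsigned bi_vcnt;	/* one past the last segment */
	struct bio *bi_next;	/* next bio in the request */
};
struct request { struct bio *bio; };

struct req_iterator { struct bio *bio; unsigned idx; };

/* Walk every segment of every bio in the request.  'bvec' receives a
 * copy of each bio_vec, not a pointer into bi_io_vec. */
#define rq_for_each_segment(rq, iter, bvec)				\
	for ((iter).bio = (rq)->bio; (iter).bio;			\
	     (iter).bio = (iter).bio->bi_next)				\
		for ((iter).idx = (iter).bio->bi_idx;			\
		     (iter).idx < (iter).bio->bi_vcnt &&		\
			(((bvec) = (iter).bio->vec[(iter).idx]), 1);	\
		     (iter).idx++)

/* Sum the bytes in a request, as blk_rq_map_sg-style loops would. */
static unsigned total_bytes(struct request *rq)
{
	struct req_iterator it;
	struct bio_vec bv;
	unsigned sum = 0;

	rq_for_each_segment(rq, it, bv)
		sum += bv.bv_len;
	return sum;
}

/* Example data: a two-bio request with three segments in total. */
static struct bio ex_b2 = {
	.vec = { { 1024 }, { 512 } }, .bi_idx = 0, .bi_vcnt = 2 };
static struct bio ex_b1 = {
	.vec = { { 4096 } }, .bi_idx = 0, .bi_vcnt = 1, .bi_next = &ex_b2 };
static struct request ex_req = { .bio = &ex_b1 };
```

The double-loop callers in the diffs below collapse into exactly this single-loop shape, which is why blk_recalc_rq_segments and friends lose a level of indentation.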

Signed-off-by: Neil Brown <neilb@suse.de>

### Diffstat output
 ./block/ll_rw_blk.c              |   54 +++++++++++--------------
 ./drivers/block/floppy.c         |   84 ++++++++++++++++++---------------------
 ./drivers/block/lguest_blk.c     |   25 +++++------
 ./drivers/block/nbd.c            |   67 +++++++++++++++----------------
 ./drivers/block/xen-blkfront.c   |   57 ++++++++++++--------------
 ./drivers/ide/ide-floppy.c       |   62 +++++++++++++---------------
 ./drivers/s390/block/dasd_diag.c |   40 ++++++++----------
 ./drivers/s390/block/dasd_eckd.c |   47 ++++++++++-----------
 ./drivers/s390/block/dasd_fba.c  |   47 ++++++++++-----------
 ./drivers/s390/char/tape_34xx.c  |   33 ++++++---------
 ./drivers/s390/char/tape_3590.c  |   41 ++++++++-----------
 ./include/linux/blkdev.h         |   18 ++++++++
 12 files changed, 280 insertions(+), 295 deletions(-)

diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
--- .prev/block/ll_rw_blk.c	2007-08-16 15:02:30.000000000 +1000
+++ ./block/ll_rw_blk.c	2007-08-16 15:02:31.000000000 +1000
@@ -1313,9 +1313,11 @@ static int blk_hw_contig_segment(struct 
 int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 		  struct scatterlist *sg)
 {
-	struct bio_vec *bvec, *bvprv;
-	struct bio *bio;
-	int nsegs, i, cluster;
+	struct bio_vec bvec;
+	struct bio_vec bvprv = { 0 };
+	struct req_iterator i;
+	int nsegs;
+	int cluster;
 
 	nsegs = 0;
 	cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
@@ -1323,36 +1325,30 @@ int blk_rq_map_sg(struct request_queue *
 	/*
 	 * for each bio in rq
 	 */
-	bvprv = NULL;
-	rq_for_each_bio(bio, rq) {
-		/*
-		 * for each segment in bio
-		 */
-		bio_for_each_segment(bvec, bio, i) {
-			int nbytes = bvec->bv_len;
+	rq_for_each_segment(rq, i, bvec) {
+		int nbytes = bvec.bv_len;
 
-			if (bvprv && cluster) {
-				if (sg[nsegs - 1].length + nbytes > q->max_segment_size)
-					goto new_segment;
-
-				if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
-					goto new_segment;
-				if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
-					goto new_segment;
+		if (bvprv.bv_page && cluster) {
+			if (sg[nsegs - 1].length + nbytes > q->max_segment_size)
+				goto new_segment;
+
+			if (!BIOVEC_PHYS_MERGEABLE(&bvprv, &bvec))
+				goto new_segment;
+			if (!BIOVEC_SEG_BOUNDARY(q, &bvprv, &bvec))
+				goto new_segment;
 
-				sg[nsegs - 1].length += nbytes;
-			} else {
+			sg[nsegs - 1].length += nbytes;
+		} else {
 new_segment:
-				memset(&sg[nsegs],0,sizeof(struct scatterlist));
-				sg[nsegs].page = bvec->bv_page;
-				sg[nsegs].length = nbytes;
-				sg[nsegs].offset = bvec->bv_offset;
+			memset(&sg[nsegs], 0, sizeof(struct scatterlist));
+			sg[nsegs].page = bvec.bv_page;
+			sg[nsegs].length = nbytes;
+			sg[nsegs].offset = bvec.bv_offset;
 
-				nsegs++;
-			}
-			bvprv = bvec;
-		} /* segments in bio */
-	} /* bios in rq */
+			nsegs++;
+		}
+		bvprv = bvec;
+	}
 
 	return nsegs;
 }

diff .prev/drivers/block/floppy.c ./drivers/block/floppy.c
--- .prev/drivers/block/floppy.c	2007-08-16 15:02:30.000000000 +1000
+++ ./drivers/block/floppy.c	2007-08-16 15:02:31.000000000 +1000
@@ -2450,23 +2450,20 @@ static void rw_interrupt(void)
 /* Compute maximal contiguous buffer size. */
 static int buffer_chain_size(void)
 {
-	struct bio *bio;
-	struct bio_vec *bv;
+	struct bio_vec bv;
 	int size;
-	int i;
+	struct req_iterator i;
 	char *base;
 
 	base = blk_rq_data(current_req);
 	size = 0;
 
-	rq_for_each_bio(bio, current_req) {
-		bio_for_each_segment(bv, bio, i) {
-			if (page_address(bv->bv_page) + bv->bv_offset !=
-			    base + size)
-				break;
+	rq_for_each_segment(current_req, i, bv) {
+		if (page_address(bv.bv_page) + bv.bv_offset !=
+		    base + size)
+			break;
 
-			size += bv->bv_len;
-		}
+		size += bv.bv_len;
 	}
 
 	return size >> 9;
@@ -2492,12 +2489,11 @@ static int transfer_size(int ssize, int 
 static void copy_buffer(int ssize, int max_sector, int max_sector_2)
 {
 	int remaining;		/* number of transferred 512-byte sectors */
-	struct bio_vec *bv;
-	struct bio *bio;
+	struct bio_vec bv;
 	char *buffer;
 	char *dma_buffer;
 	int size;
-	int i;
+	struct req_iterator i;
 
 	max_sector = transfer_size(ssize,
 				   min(max_sector, max_sector_2),
@@ -2530,43 +2526,41 @@ static void copy_buffer(int ssize, int m
 
 	size = current_req->current_nr_sectors << 9;
 
-	rq_for_each_bio(bio, current_req) {
-		bio_for_each_segment(bv, bio, i) {
-			if (!remaining)
-				break;
+	rq_for_each_segment(current_req, i, bv) {
+		if (!remaining)
+			break;
 
-			size = bv->bv_len;
-			SUPBOUND(size, remaining);
+		size = bv.bv_len;
+		SUPBOUND(size, remaining);
 
-			buffer = page_address(bv->bv_page) + bv->bv_offset;
+		buffer = page_address(bv.bv_page) + bv.bv_offset;
 #ifdef FLOPPY_SANITY_CHECK
-			if (dma_buffer + size >
-			    floppy_track_buffer + (max_buffer_sectors << 10) ||
-			    dma_buffer < floppy_track_buffer) {
-				DPRINT("buffer overrun in copy buffer %d\n",
-				       (int)((floppy_track_buffer -
-					      dma_buffer) >> 9));
-				printk("fsector_t=%d buffer_min=%d\n",
-				       fsector_t, buffer_min);
-				printk("current_count_sectors=%ld\n",
-				       current_count_sectors);
-				if (CT(COMMAND) == FD_READ)
-					printk("read\n");
-				if (CT(COMMAND) == FD_WRITE)
-					printk("write\n");
-				break;
-			}
-			if (((unsigned long)buffer) % 512)
-				DPRINT("%p buffer not aligned\n", buffer);
-#endif
+		if (dma_buffer + size >
+		    floppy_track_buffer + (max_buffer_sectors << 10) ||
+		    dma_buffer < floppy_track_buffer) {
+			DPRINT("buffer overrun in copy buffer %d\n",
+			       (int)((floppy_track_buffer -
+				      dma_buffer) >> 9));
+			printk(KERN_DEBUG "fsector_t=%d buffer_min=%d\n",
+			       fsector_t, buffer_min);
+			printk(KERN_DEBUG "current_count_sectors=%ld\n",
+			       current_count_sectors);
 			if (CT(COMMAND) == FD_READ)
-				memcpy(buffer, dma_buffer, size);
-			else
-				memcpy(dma_buffer, buffer, size);
-
-			remaining -= size;
-			dma_buffer += size;
+				printk(KERN_DEBUG "read\n");
+			if (CT(COMMAND) == FD_WRITE)
+				printk(KERN_DEBUG "write\n");
+			break;
 		}
+		if (((unsigned long)buffer) % 512)
+			DPRINT("%p buffer not aligned\n", buffer);
+#endif
+		if (CT(COMMAND) == FD_READ)
+			memcpy(buffer, dma_buffer, size);
+		else
+			memcpy(dma_buffer, buffer, size);
+
+		remaining -= size;
+		dma_buffer += size;
 	}
 #ifdef FLOPPY_SANITY_CHECK
 	if (remaining) {

diff .prev/drivers/block/lguest_blk.c ./drivers/block/lguest_blk.c
--- .prev/drivers/block/lguest_blk.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/block/lguest_blk.c	2007-08-16 15:02:31.000000000 +1000
@@ -142,25 +142,24 @@ static irqreturn_t lgb_irq(int irq, void
  * return the total length. */
 static unsigned int req_to_dma(struct request *req, struct lguest_dma *dma)
 {
-	unsigned int i = 0, idx, len = 0;
-	struct bio *bio;
+	unsigned int i = 0;
+	unsigned int len = 0;
+	struct req_iterator idx;
+	struct bio_vec bvec;
 
-	rq_for_each_bio(bio, req) {
-		struct bio_vec *bvec;
-		bio_for_each_segment(bvec, bio, idx) {
+	rq_for_each_segment(req, idx, bvec) {
 			/* We told the block layer not to give us too many. */
-			BUG_ON(i == LGUEST_MAX_DMA_SECTIONS);
+		BUG_ON(i == LGUEST_MAX_DMA_SECTIONS);
 			/* If we had a zero-length segment, it would look like
 			 * the end of the data referred to by the "struct
 			 * lguest_dma", so make sure that doesn't happen. */
-			BUG_ON(!bvec->bv_len);
+		BUG_ON(!bvec.bv_len);
 			/* Convert page & offset to a physical address */
-			dma->addr[i] = page_to_phys(bvec->bv_page)
-				+ bvec->bv_offset;
-			dma->len[i] = bvec->bv_len;
-			len += bvec->bv_len;
-			i++;
-		}
+		dma->addr[i] = page_to_phys(bvec.bv_page)
+			+ bvec.bv_offset;
+		dma->len[i] = bvec.bv_len;
+		len += bvec.bv_len;
+		i++;
 	}
 	/* If the array isn't full, we mark the end with a 0 length */
 	if (i < LGUEST_MAX_DMA_SECTIONS)

diff .prev/drivers/block/nbd.c ./drivers/block/nbd.c
--- .prev/drivers/block/nbd.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/block/nbd.c	2007-08-16 15:02:31.000000000 +1000
@@ -180,7 +180,8 @@ static inline int sock_send_bvec(struct 
 
 static int nbd_send_req(struct nbd_device *lo, struct request *req)
 {
-	int result, i, flags;
+	int result;
+	int flags;
 	struct nbd_request request;
 	unsigned long size = req->nr_sectors << 9;
 	struct socket *sock = lo->sock;
@@ -205,27 +206,28 @@ static int nbd_send_req(struct nbd_devic
 	}
 
 	if (nbd_cmd(req) == NBD_CMD_WRITE) {
-		struct bio *bio;
 		/*
 		 * we are really probing at internals to determine
 		 * whether to set MSG_MORE or not...
 		 */
-		rq_for_each_bio(bio, req) {
-			struct bio_vec *bvec;
-			bio_for_each_segment(bvec, bio, i) {
-				flags = 0;
-				if ((i < (bio->bi_vcnt - 1)) || bio->bi_next)
-					flags = MSG_MORE;
-				dprintk(DBG_TX, "%s: request %p: sending %d bytes data\n",
-						lo->disk->disk_name, req,
-						bvec->bv_len);
-				result = sock_send_bvec(sock, bvec, flags);
-				if (result <= 0) {
-					printk(KERN_ERR "%s: Send data failed (result %d)\n",
-							lo->disk->disk_name,
-							result);
-					goto error_out;
-				}
+		struct req_iterator i;
+		struct bio_vec bvec;
+
+		rq_for_each_segment(req, i, bvec) {
+			flags = 0;
+			if (!rq_iter_last(req, i))
+				flags = MSG_MORE;
+			dprintk(DBG_TX,
+				"%s: request %p: sending %d bytes data\n",
+				lo->disk->disk_name, req,
+				bvec.bv_len);
+			result = sock_send_bvec(sock, &bvec, flags);
+			if (result <= 0) {
+				printk(KERN_ERR
+				       "%s: Send data failed (result %d)\n",
+				       lo->disk->disk_name,
+				       result);
+				goto error_out;
 			}
 		}
 	}
@@ -317,22 +319,21 @@ static struct request *nbd_read_stat(str
 	dprintk(DBG_RX, "%s: request %p: got reply\n",
 			lo->disk->disk_name, req);
 	if (nbd_cmd(req) == NBD_CMD_READ) {
-		int i;
-		struct bio *bio;
-		rq_for_each_bio(bio, req) {
-			struct bio_vec *bvec;
-			bio_for_each_segment(bvec, bio, i) {
-				result = sock_recv_bvec(sock, bvec);
-				if (result <= 0) {
-					printk(KERN_ERR "%s: Receive data failed (result %d)\n",
-							lo->disk->disk_name,
-							result);
-					req->errors++;
-					return req;
-				}
-				dprintk(DBG_RX, "%s: request %p: got %d bytes data\n",
-					lo->disk->disk_name, req, bvec->bv_len);
+		struct req_iterator i;
+		struct bio_vec bvec;
+
+		rq_for_each_segment(req, i, bvec) {
+			result = sock_recv_bvec(sock, &bvec);
+			if (result <= 0) {
+				printk(KERN_ERR
+				       "%s: Receive data failed (result %d)\n",
+				       lo->disk->disk_name,
+				       result);
+				req->errors++;
+				return req;
 			}
+			dprintk(DBG_RX, "%s: request %p: got %d bytes data\n",
+				lo->disk->disk_name, req, bvec.bv_len);
 		}
 	}
 	return req;

diff .prev/drivers/block/xen-blkfront.c ./drivers/block/xen-blkfront.c
--- .prev/drivers/block/xen-blkfront.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/block/xen-blkfront.c	2007-08-16 15:02:31.000000000 +1000
@@ -150,9 +150,8 @@ static int blkif_queue_request(struct re
 	struct blkfront_info *info = req->rq_disk->private_data;
 	unsigned long buffer_mfn;
 	struct blkif_request *ring_req;
-	struct bio *bio;
-	struct bio_vec *bvec;
-	int idx;
+	struct bio_vec bvec;
+	struct req_iterator idx;
 	unsigned long id;
 	unsigned int fsect, lsect;
 	int ref;
@@ -186,34 +185,32 @@ static int blkif_queue_request(struct re
 		ring_req->operation = BLKIF_OP_WRITE_BARRIER;
 
 	ring_req->nr_segments = 0;
-	rq_for_each_bio (bio, req) {
-		bio_for_each_segment (bvec, bio, idx) {
-			BUG_ON(ring_req->nr_segments
-			       == BLKIF_MAX_SEGMENTS_PER_REQUEST);
-			buffer_mfn = pfn_to_mfn(page_to_pfn(bvec->bv_page));
-			fsect = bvec->bv_offset >> 9;
-			lsect = fsect + (bvec->bv_len >> 9) - 1;
-			/* install a grant reference. */
-			ref = gnttab_claim_grant_reference(&gref_head);
-			BUG_ON(ref == -ENOSPC);
+	rq_for_each_segment(req, idx, bvec) {
+		BUG_ON(ring_req->nr_segments
+		       == BLKIF_MAX_SEGMENTS_PER_REQUEST);
+		buffer_mfn = pfn_to_mfn(page_to_pfn(bvec.bv_page));
+		fsect = bvec.bv_offset >> 9;
+		lsect = fsect + (bvec.bv_len >> 9) - 1;
+		/* install a grant reference. */
+		ref = gnttab_claim_grant_reference(&gref_head);
+		BUG_ON(ref == -ENOSPC);
+
+		gnttab_grant_foreign_access_ref(
+			ref,
+			info->xbdev->otherend_id,
+			buffer_mfn,
+			rq_data_dir(req) );
+
+		info->shadow[id].frame[ring_req->nr_segments] =
+			mfn_to_pfn(buffer_mfn);
+
+		ring_req->seg[ring_req->nr_segments] =
+			(struct blkif_request_segment) {
+				.gref       = ref,
+				.first_sect = fsect,
+				.last_sect  = lsect };
 
-			gnttab_grant_foreign_access_ref(
-				ref,
-				info->xbdev->otherend_id,
-				buffer_mfn,
-				rq_data_dir(req) );
-
-			info->shadow[id].frame[ring_req->nr_segments] =
-				mfn_to_pfn(buffer_mfn);
-
-			ring_req->seg[ring_req->nr_segments] =
-				(struct blkif_request_segment) {
-					.gref       = ref,
-					.first_sect = fsect,
-					.last_sect  = lsect };
-
-			ring_req->nr_segments++;
-		}
+		ring_req->nr_segments++;
 	}
 
 	info->ring.req_prod_pvt++;

diff .prev/drivers/ide/ide-floppy.c ./drivers/ide/ide-floppy.c
--- .prev/drivers/ide/ide-floppy.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/ide/ide-floppy.c	2007-08-16 15:02:31.000000000 +1000
@@ -605,27 +605,26 @@ static int idefloppy_do_end_request(ide_
 static void idefloppy_input_buffers (ide_drive_t *drive, idefloppy_pc_t *pc, unsigned int bcount)
 {
 	struct request *rq = pc->rq;
-	struct bio_vec *bvec;
-	struct bio *bio;
+	struct bio_vec bvec;
 	unsigned long flags;
 	char *data;
-	int count, i, done = 0;
+	int count;
+	int done = 0;
+	struct req_iterator i;
 
-	rq_for_each_bio(bio, rq) {
-		bio_for_each_segment(bvec, bio, i) {
-			if (!bcount)
-				break;
+	rq_for_each_segment(rq, i, bvec) {
+		if (!bcount)
+			break;
 
-			count = min(bvec->bv_len, bcount);
+		count = min(bvec.bv_len, bcount);
 
-			data = bvec_kmap_irq(bvec, &flags);
-			drive->hwif->atapi_input_bytes(drive, data, count);
-			bvec_kunmap_irq(data, &flags);
-
-			bcount -= count;
-			pc->b_count += count;
-			done += count;
-		}
+		data = bvec_kmap_irq(&bvec, &flags);
+		drive->hwif->atapi_input_bytes(drive, data, count);
+		bvec_kunmap_irq(data, &flags);
+
+		bcount -= count;
+		pc->b_count += count;
+		done += count;
 	}
 
 	idefloppy_do_end_request(drive, 1, done >> 9);
@@ -639,27 +638,26 @@ static void idefloppy_input_buffers (ide
 static void idefloppy_output_buffers (ide_drive_t *drive, idefloppy_pc_t *pc, unsigned int bcount)
 {
 	struct request *rq = pc->rq;
-	struct bio *bio;
-	struct bio_vec *bvec;
+	struct bio_vec bvec;
+	struct req_iterator i;
 	unsigned long flags;
-	int count, i, done = 0;
+	int count;
+	int done = 0;
 	char *data;
 
-	rq_for_each_bio(bio, rq) {
-		bio_for_each_segment(bvec, bio, i) {
-			if (!bcount)
-				break;
+	rq_for_each_segment(rq, i, bvec) {
+		if (!bcount)
+			break;
 
-			count = min(bvec->bv_len, bcount);
+		count = min(bvec.bv_len, bcount);
 
-			data = bvec_kmap_irq(bvec, &flags);
-			drive->hwif->atapi_output_bytes(drive, data, count);
-			bvec_kunmap_irq(data, &flags);
-
-			bcount -= count;
-			pc->b_count += count;
-			done += count;
-		}
+		data = bvec_kmap_irq(&bvec, &flags);
+		drive->hwif->atapi_output_bytes(drive, data, count);
+		bvec_kunmap_irq(data, &flags);
+
+		bcount -= count;
+		pc->b_count += count;
+		done += count;
 	}
 
 	idefloppy_do_end_request(drive, 1, done >> 9);

diff .prev/drivers/s390/block/dasd_diag.c ./drivers/s390/block/dasd_diag.c
--- .prev/drivers/s390/block/dasd_diag.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/s390/block/dasd_diag.c	2007-08-16 15:02:31.000000000 +1000
@@ -471,14 +471,13 @@ dasd_diag_build_cp(struct dasd_device * 
 	struct dasd_ccw_req *cqr;
 	struct dasd_diag_req *dreq;
 	struct dasd_diag_bio *dbio;
-	struct bio *bio;
-	struct bio_vec *bv;
+	struct bio_vec bv;
 	char *dst;
 	unsigned int count, datasize;
 	sector_t recid, first_rec, last_rec;
 	unsigned int blksize, off;
 	unsigned char rw_cmd;
-	int i;
+	struct req_iterator i;
 
 	if (rq_data_dir(req) == READ)
 		rw_cmd = MDSK_READ_REQ;
@@ -492,13 +491,11 @@ dasd_diag_build_cp(struct dasd_device * 
 	last_rec = (req->sector + req->nr_sectors - 1) >> device->s2b_shift;
 	/* Check struct bio and count the number of blocks for the request. */
 	count = 0;
-	rq_for_each_bio(bio, req) {
-		bio_for_each_segment(bv, bio, i) {
-			if (bv->bv_len & (blksize - 1))
-				/* Fba can only do full blocks. */
-				return ERR_PTR(-EINVAL);
-			count += bv->bv_len >> (device->s2b_shift + 9);
-		}
+	rq_for_each_segment(req, i, bv) {
+		if (bv.bv_len & (blksize - 1))
+			/* Fba can only do full blocks. */
+			return ERR_PTR(-EINVAL);
+		count += bv.bv_len >> (device->s2b_shift + 9);
 	}
 	/* Paranoia. */
 	if (count != last_rec - first_rec + 1)
@@ -515,20 +512,19 @@ dasd_diag_build_cp(struct dasd_device * 
 	dreq->block_count = count;
 	dbio = dreq->bio;
 	recid = first_rec;
-	rq_for_each_bio(bio, req) {
-		bio_for_each_segment(bv, bio, i) {
-			dst = page_address(bv->bv_page) + bv->bv_offset;
-			for (off = 0; off < bv->bv_len; off += blksize) {
-				memset(dbio, 0, sizeof (struct dasd_diag_bio));
-				dbio->type = rw_cmd;
-				dbio->block_number = recid + 1;
-				dbio->buffer = dst;
-				dbio++;
-				dst += blksize;
-				recid++;
-			}
+	rq_for_each_segment(req, i, bv) {
+		dst = page_address(bv.bv_page) + bv.bv_offset;
+		for (off = 0; off < bv.bv_len; off += blksize) {
+			memset(dbio, 0, sizeof(struct dasd_diag_bio));
+			dbio->type = rw_cmd;
+			dbio->block_number = recid + 1;
+			dbio->buffer = dst;
+			dbio++;
+			dst += blksize;
+			recid++;
 		}
 	}
+
 	cqr->retries = DIAG_MAX_RETRIES;
 	cqr->buildclk = get_clock();
 	if (req->cmd_flags & REQ_FAILFAST)

diff .prev/drivers/s390/block/dasd_eckd.c ./drivers/s390/block/dasd_eckd.c
--- .prev/drivers/s390/block/dasd_eckd.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/s390/block/dasd_eckd.c	2007-08-16 15:02:31.000000000 +1000
@@ -1176,8 +1176,7 @@ dasd_eckd_build_cp(struct dasd_device * 
 	struct LO_eckd_data *LO_data;
 	struct dasd_ccw_req *cqr;
 	struct ccw1 *ccw;
-	struct bio *bio;
-	struct bio_vec *bv;
+	struct bio_vec bv;
 	char *dst;
 	unsigned int blksize, blk_per_trk, off;
 	int count, cidaw, cplength, datasize;
@@ -1185,7 +1184,7 @@ dasd_eckd_build_cp(struct dasd_device * 
 	sector_t first_trk, last_trk;
 	unsigned int first_offs, last_offs;
 	unsigned char cmd, rcmd;
-	int i;
+	struct req_iterator i;
 
 	private = (struct dasd_eckd_private *) device->private;
 	if (rq_data_dir(req) == READ)
@@ -1206,18 +1205,16 @@ dasd_eckd_build_cp(struct dasd_device * 
 	/* Check struct bio and count the number of blocks for the request. */
 	count = 0;
 	cidaw = 0;
-	rq_for_each_bio(bio, req) {
-		bio_for_each_segment(bv, bio, i) {
-			if (bv->bv_len & (blksize - 1))
-				/* Eckd can only do full blocks. */
-				return ERR_PTR(-EINVAL);
-			count += bv->bv_len >> (device->s2b_shift + 9);
+	rq_for_each_segment(req, i, bv) {
+		if (bv.bv_len & (blksize - 1))
+			/* Eckd can only do full blocks. */
+			return ERR_PTR(-EINVAL);
+		count += bv.bv_len >> (device->s2b_shift + 9);
 #if defined(CONFIG_64BIT)
-			if (idal_is_needed (page_address(bv->bv_page),
-					    bv->bv_len))
-				cidaw += bv->bv_len >> (device->s2b_shift + 9);
+		if (idal_is_needed (page_address(bv.bv_page),
+				    bv.bv_len))
+			cidaw += bv.bv_len >> (device->s2b_shift + 9);
 #endif
-		}
 	}
 	/* Paranoia. */
 	if (count != last_rec - first_rec + 1)
@@ -1257,17 +1254,17 @@ dasd_eckd_build_cp(struct dasd_device * 
 		locate_record(ccw++, LO_data++, first_trk, first_offs + 1,
 			      last_rec - recid + 1, cmd, device, blksize);
 	}
-	rq_for_each_bio(bio, req) bio_for_each_segment(bv, bio, i) {
-		dst = page_address(bv->bv_page) + bv->bv_offset;
+	rq_for_each_segment(req, i, bv) {
+		dst = page_address(bv.bv_page) + bv.bv_offset;
 		if (dasd_page_cache) {
 			char *copy = kmem_cache_alloc(dasd_page_cache,
 						      GFP_DMA | __GFP_NOWARN);
 			if (copy && rq_data_dir(req) == WRITE)
-				memcpy(copy + bv->bv_offset, dst, bv->bv_len);
+				memcpy(copy + bv.bv_offset, dst, bv.bv_len);
 			if (copy)
-				dst = copy + bv->bv_offset;
+				dst = copy + bv.bv_offset;
 		}
-		for (off = 0; off < bv->bv_len; off += blksize) {
+		for (off = 0; off < bv.bv_len; off += blksize) {
 			sector_t trkid = recid;
 			unsigned int recoffs = sector_div(trkid, blk_per_trk);
 			rcmd = cmd;
@@ -1328,12 +1325,12 @@ dasd_eckd_free_cp(struct dasd_ccw_req *c
 {
 	struct dasd_eckd_private *private;
 	struct ccw1 *ccw;
-	struct bio *bio;
-	struct bio_vec *bv;
+	struct bio_vec bv;
 	char *dst, *cda;
 	unsigned int blksize, blk_per_trk, off;
 	sector_t recid;
-	int i, status;
+	int status;
+	struct req_iterator i;
 
 	if (!dasd_page_cache)
 		goto out;
@@ -1346,9 +1343,9 @@ dasd_eckd_free_cp(struct dasd_ccw_req *c
 	ccw++;
 	if (private->uses_cdl == 0 || recid > 2*blk_per_trk)
 		ccw++;
-	rq_for_each_bio(bio, req) bio_for_each_segment(bv, bio, i) {
-		dst = page_address(bv->bv_page) + bv->bv_offset;
-		for (off = 0; off < bv->bv_len; off += blksize) {
+	rq_for_each_segment(req, i, bv) {
+		dst = page_address(bv.bv_page) + bv.bv_offset;
+		for (off = 0; off < bv.bv_len; off += blksize) {
 			/* Skip locate record. */
 			if (private->uses_cdl && recid <= 2*blk_per_trk)
 				ccw++;
@@ -1359,7 +1356,7 @@ dasd_eckd_free_cp(struct dasd_ccw_req *c
 					cda = (char *)((addr_t) ccw->cda);
 				if (dst != cda) {
 					if (rq_data_dir(req) == READ)
-						memcpy(dst, cda, bv->bv_len);
+						memcpy(dst, cda, bv.bv_len);
 					kmem_cache_free(dasd_page_cache,
 					    (void *)((addr_t)cda & PAGE_MASK));
 				}

diff .prev/drivers/s390/block/dasd_fba.c ./drivers/s390/block/dasd_fba.c
--- .prev/drivers/s390/block/dasd_fba.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/s390/block/dasd_fba.c	2007-08-16 15:02:31.000000000 +1000
@@ -234,14 +234,13 @@ dasd_fba_build_cp(struct dasd_device * d
 	struct LO_fba_data *LO_data;
 	struct dasd_ccw_req *cqr;
 	struct ccw1 *ccw;
-	struct bio *bio;
-	struct bio_vec *bv;
+	struct bio_vec bv;
 	char *dst;
 	int count, cidaw, cplength, datasize;
 	sector_t recid, first_rec, last_rec;
 	unsigned int blksize, off;
 	unsigned char cmd;
-	int i;
+	struct req_iterator i;
 
 	private = (struct dasd_fba_private *) device->private;
 	if (rq_data_dir(req) == READ) {
@@ -257,18 +256,16 @@ dasd_fba_build_cp(struct dasd_device * d
 	/* Check struct bio and count the number of blocks for the request. */
 	count = 0;
 	cidaw = 0;
-	rq_for_each_bio(bio, req) {
-		bio_for_each_segment(bv, bio, i) {
-			if (bv->bv_len & (blksize - 1))
-				/* Fba can only do full blocks. */
-				return ERR_PTR(-EINVAL);
-			count += bv->bv_len >> (device->s2b_shift + 9);
+	rq_for_each_segment(req, i, bv) {
+		if (bv.bv_len & (blksize - 1))
+			/* Fba can only do full blocks. */
+			return ERR_PTR(-EINVAL);
+		count += bv.bv_len >> (device->s2b_shift + 9);
 #if defined(CONFIG_64BIT)
-			if (idal_is_needed (page_address(bv->bv_page),
-					    bv->bv_len))
-				cidaw += bv->bv_len / blksize;
+		if (idal_is_needed (page_address(bv.bv_page),
+				    bv.bv_len))
+			cidaw += bv.bv_len / blksize;
 #endif
-		}
 	}
 	/* Paranoia. */
 	if (count != last_rec - first_rec + 1)
@@ -304,17 +301,17 @@ dasd_fba_build_cp(struct dasd_device * d
 		locate_record(ccw++, LO_data++, rq_data_dir(req), 0, count);
 	}
 	recid = first_rec;
-	rq_for_each_bio(bio, req) bio_for_each_segment(bv, bio, i) {
-		dst = page_address(bv->bv_page) + bv->bv_offset;
+	rq_for_each_segment(req, i, bv) {
+		dst = page_address(bv.bv_page) + bv.bv_offset;
 		if (dasd_page_cache) {
 			char *copy = kmem_cache_alloc(dasd_page_cache,
 						      GFP_DMA | __GFP_NOWARN);
 			if (copy && rq_data_dir(req) == WRITE)
-				memcpy(copy + bv->bv_offset, dst, bv->bv_len);
+				memcpy(copy + bv.bv_offset, dst, bv.bv_len);
 			if (copy)
-				dst = copy + bv->bv_offset;
+				dst = copy + bv.bv_offset;
 		}
-		for (off = 0; off < bv->bv_len; off += blksize) {
+		for (off = 0; off < bv.bv_len; off += blksize) {
 			/* Locate record for stupid devices. */
 			if (private->rdc_data.mode.bits.data_chain == 0) {
 				ccw[-1].flags |= CCW_FLAG_CC;
@@ -359,11 +356,11 @@ dasd_fba_free_cp(struct dasd_ccw_req *cq
 {
 	struct dasd_fba_private *private;
 	struct ccw1 *ccw;
-	struct bio *bio;
-	struct bio_vec *bv;
+	struct bio_vec bv;
 	char *dst, *cda;
 	unsigned int blksize, off;
-	int i, status;
+	int status;
+	struct req_iterator i;
 
 	if (!dasd_page_cache)
 		goto out;
@@ -374,9 +371,9 @@ dasd_fba_free_cp(struct dasd_ccw_req *cq
 	ccw++;
 	if (private->rdc_data.mode.bits.data_chain != 0)
 		ccw++;
-	rq_for_each_bio(bio, req) bio_for_each_segment(bv, bio, i) {
-		dst = page_address(bv->bv_page) + bv->bv_offset;
-		for (off = 0; off < bv->bv_len; off += blksize) {
+	rq_for_each_segment(req, i, bv) {
+		dst = page_address(bv.bv_page) + bv.bv_offset;
+		for (off = 0; off < bv.bv_len; off += blksize) {
 			/* Skip locate record. */
 			if (private->rdc_data.mode.bits.data_chain == 0)
 				ccw++;
@@ -387,7 +384,7 @@ dasd_fba_free_cp(struct dasd_ccw_req *cq
 					cda = (char *)((addr_t) ccw->cda);
 				if (dst != cda) {
 					if (rq_data_dir(req) == READ)
-						memcpy(dst, cda, bv->bv_len);
+						memcpy(dst, cda, bv.bv_len);
 					kmem_cache_free(dasd_page_cache,
 					    (void *)((addr_t)cda & PAGE_MASK));
 				}

diff .prev/drivers/s390/char/tape_34xx.c ./drivers/s390/char/tape_34xx.c
--- .prev/drivers/s390/char/tape_34xx.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/s390/char/tape_34xx.c	2007-08-16 15:02:31.000000000 +1000
@@ -1134,21 +1134,18 @@ tape_34xx_bread(struct tape_device *devi
 {
 	struct tape_request *request;
 	struct ccw1 *ccw;
-	int count = 0, i;
+	int count = 0;
+	struct req_iterator i;
 	unsigned off;
 	char *dst;
-	struct bio_vec *bv;
-	struct bio *bio;
+	struct bio_vec bv;
 	struct tape_34xx_block_id *	start_block;
 
 	DBF_EVENT(6, "xBREDid:");
 
 	/* Count the number of blocks for the request. */
-	rq_for_each_bio(bio, req) {
-		bio_for_each_segment(bv, bio, i) {
-			count += bv->bv_len >> (TAPEBLOCK_HSEC_S2B + 9);
-		}
-	}
+	rq_for_each_segment(req, i, bv)
+		count += bv.bv_len >> (TAPEBLOCK_HSEC_S2B + 9);
 
 	/* Allocate the ccw request. */
 	request = tape_alloc_request(3+count+1, 8);
@@ -1175,18 +1172,16 @@ tape_34xx_bread(struct tape_device *devi
 	ccw = tape_ccw_cc(ccw, NOP, 0, NULL);
 	ccw = tape_ccw_cc(ccw, NOP, 0, NULL);
 
-	rq_for_each_bio(bio, req) {
-		bio_for_each_segment(bv, bio, i) {
-			dst = kmap(bv->bv_page) + bv->bv_offset;
-			for (off = 0; off < bv->bv_len;
-			     off += TAPEBLOCK_HSEC_SIZE) {
-				ccw->flags = CCW_FLAG_CC;
-				ccw->cmd_code = READ_FORWARD;
-				ccw->count = TAPEBLOCK_HSEC_SIZE;
-				set_normalized_cda(ccw, (void*) __pa(dst));
-				ccw++;
-				dst += TAPEBLOCK_HSEC_SIZE;
-			}
+	rq_for_each_segment(req, i, bv) {
+		dst = kmap(bv.bv_page) + bv.bv_offset;
+		for (off = 0; off < bv.bv_len;
+		     off += TAPEBLOCK_HSEC_SIZE) {
+			ccw->flags = CCW_FLAG_CC;
+			ccw->cmd_code = READ_FORWARD;
+			ccw->count = TAPEBLOCK_HSEC_SIZE;
+			set_normalized_cda(ccw, (void *) __pa(dst));
+			ccw++;
+			dst += TAPEBLOCK_HSEC_SIZE;
 		}
 	}
 

diff .prev/drivers/s390/char/tape_3590.c ./drivers/s390/char/tape_3590.c
--- .prev/drivers/s390/char/tape_3590.c	2007-08-16 14:57:22.000000000 +1000
+++ ./drivers/s390/char/tape_3590.c	2007-08-16 15:02:31.000000000 +1000
@@ -623,21 +623,20 @@ tape_3590_bread(struct tape_device *devi
 {
 	struct tape_request *request;
 	struct ccw1 *ccw;
-	int count = 0, start_block, i;
+	int count = 0;
+	int start_block;
+	struct req_iterator i;
 	unsigned off;
 	char *dst;
-	struct bio_vec *bv;
-	struct bio *bio;
+	struct bio_vec bv;
 
 	DBF_EVENT(6, "xBREDid:");
 	start_block = req->sector >> TAPEBLOCK_HSEC_S2B;
 	DBF_EVENT(6, "start_block = %i\n", start_block);
 
-	rq_for_each_bio(bio, req) {
-		bio_for_each_segment(bv, bio, i) {
-			count += bv->bv_len >> (TAPEBLOCK_HSEC_S2B + 9);
-		}
-	}
+	rq_for_each_segment(req, i, bv)
+		count += bv.bv_len >> (TAPEBLOCK_HSEC_S2B + 9);
+
 	request = tape_alloc_request(2 + count + 1, 4);
 	if (IS_ERR(request))
 		return request;
@@ -653,21 +652,19 @@ tape_3590_bread(struct tape_device *devi
 	 */
 	ccw = tape_ccw_cc(ccw, NOP, 0, NULL);
 
-	rq_for_each_bio(bio, req) {
-		bio_for_each_segment(bv, bio, i) {
-			dst = page_address(bv->bv_page) + bv->bv_offset;
-			for (off = 0; off < bv->bv_len;
-			     off += TAPEBLOCK_HSEC_SIZE) {
-				ccw->flags = CCW_FLAG_CC;
-				ccw->cmd_code = READ_FORWARD;
-				ccw->count = TAPEBLOCK_HSEC_SIZE;
-				set_normalized_cda(ccw, (void *) __pa(dst));
-				ccw++;
-				dst += TAPEBLOCK_HSEC_SIZE;
-			}
-			if (off > bv->bv_len)
-				BUG();
+	rq_for_each_segment(req, i, bv) {
+		dst = page_address(bv.bv_page) + bv.bv_offset;
+		for (off = 0; off < bv.bv_len;
+		     off += TAPEBLOCK_HSEC_SIZE) {
+			ccw->flags = CCW_FLAG_CC;
+			ccw->cmd_code = READ_FORWARD;
+			ccw->count = TAPEBLOCK_HSEC_SIZE;
+			set_normalized_cda(ccw, (void *) __pa(dst));
+			ccw++;
+			dst += TAPEBLOCK_HSEC_SIZE;
 		}
+		if (off > bv.bv_len)
+			BUG();
 	}
 	ccw = tape_ccw_end(ccw, NOP, 0, NULL);
 	DBF_EVENT(6, "xBREDccwg\n");

diff .prev/include/linux/blkdev.h ./include/linux/blkdev.h
--- .prev/include/linux/blkdev.h	2007-08-16 15:02:30.000000000 +1000
+++ ./include/linux/blkdev.h	2007-08-16 15:02:31.000000000 +1000
@@ -637,6 +637,24 @@ static inline void blk_queue_bounce(stru
 }
 #endif /* CONFIG_MMU */
 
+struct req_iterator {
+	int i;
+	struct bio *bio;
+};
+
+#define _bio_for_each_segment(bvec, bio, i)		\
+	for (i = (bio)->bi_idx;				\
+	     i < (bio)->bi_vcnt;			\
+	     i++) if ((bvec = *bio_iovec_idx(bio, i)), 1)
+
+#define rq_for_each_segment(rq, _iter, bvec)	\
+	for (_iter.bio = (rq)->bio;		\
+	     _iter.bio;				\
+	     _iter.bio = _iter.bio->bi_next)	\
+		_bio_for_each_segment(bvec, _iter.bio, _iter.i)
+
+#define rq_iter_last(rq, _iter) (_iter.bio->bi_next == NULL && 	\
+				 _iter.i == _iter.bio->bi_vcnt - 1)
 #define rq_for_each_bio(_bio, rq)	\
 	if ((rq->bio))			\
 		for (_bio = (rq)->bio; _bio; _bio = _bio->bi_next)


* [PATCH 005 of 5] Merge blk_recount_segments into blk_recalc_rq_segments
  2007-08-16  5:13 [PATCH 000 of 5] Introductory patches for bio refactor NeilBrown
                   ` (3 preceding siblings ...)
  2007-08-16  5:13 ` [PATCH 004 of 5] Introduce rq_for_each_segment replacing rq_for_each_bio NeilBrown
@ 2007-08-16  5:13 ` NeilBrown
  2007-08-16  6:37 ` [PATCH 000 of 5] Introductory patches for bio refactor Jens Axboe
  5 siblings, 0 replies; 14+ messages in thread
From: NeilBrown @ 2007-08-16  5:13 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tejun Heo


blk_recalc_rq_segments calls blk_recount_segments on each bio,
then does some extra calculations to handle segments that overlap
two bios.

If we merge the code from blk_recount_segments into
blk_recalc_rq_segments, we can process the whole request one bio_vec
at a time, and not need the messy cross-bio calculations.

Then blk_recount_segments can be implemented by calling
blk_recalc_rq_segments, passing it a simple on-stack request which
stores just the bio.  This function is only temporary and will go away
completely by the end of this patch series.

This allows us to remove rq_for_each_bio.


Signed-off-by: Neil Brown <neilb@suse.de>

### Diffstat output
 ./block/ll_rw_blk.c      |  125 ++++++++++++++++++++---------------------------
 ./include/linux/blkdev.h |    3 -
 2 files changed, 55 insertions(+), 73 deletions(-)

diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
--- .prev/block/ll_rw_blk.c	2007-08-16 15:02:31.000000000 +1000
+++ ./block/ll_rw_blk.c	2007-08-16 15:02:31.000000000 +1000
@@ -42,6 +42,7 @@ static void drive_stat_acct(struct reque
 static void init_request_from_bio(struct request *req, struct bio *bio);
 static int __make_request(struct request_queue *q, struct bio *bio);
 static struct io_context *current_io_context(gfp_t gfp_flags, int node);
+static void blk_recalc_rq_segments(struct request *rq);
 
 /*
  * For the allocated request tables
@@ -1209,65 +1210,91 @@ EXPORT_SYMBOL(blk_dump_rq_flags);
 
 void blk_recount_segments(struct request_queue *q, struct bio *bio)
 {
-	struct bio_vec *bv, *bvprv = NULL;
-	int i, nr_phys_segs, nr_hw_segs, seg_size, hw_seg_size, cluster;
+	struct request rq;
+	struct bio *nxt = bio->bi_next;
+	rq.q = q;
+	rq.bio = rq.biotail = bio;
+	bio->bi_next = NULL;
+	blk_recalc_rq_segments(&rq);
+	bio->bi_next = nxt;
+	bio->bi_phys_segments = rq.nr_phys_segments;
+	bio->bi_hw_segments = rq.nr_hw_segments;
+	bio->bi_flags |= (1 << BIO_SEG_VALID);
+}
+EXPORT_SYMBOL(blk_recount_segments);
+
+static void blk_recalc_rq_segments(struct request *rq)
+{
+	int nr_phys_segs;
+	int nr_hw_segs;
+	unsigned int phys_size;
+	unsigned int hw_size;
+	struct bio_vec bv;
+	struct bio_vec bvprv = {0};
+	int seg_size;
+	int hw_seg_size;
+	int cluster;
+	struct req_iterator i;
 	int high, highprv = 1;
+	struct request_queue *q = rq->q;
 
-	if (unlikely(!bio->bi_io_vec))
+	if (!rq->bio)
 		return;
 
 	cluster = q->queue_flags & (1 << QUEUE_FLAG_CLUSTER);
-	hw_seg_size = seg_size = nr_phys_segs = nr_hw_segs = 0;
-	bio_for_each_segment(bv, bio, i) {
+	hw_seg_size = seg_size = 0;
+	phys_size = hw_size = nr_phys_segs = nr_hw_segs = 0;
+	rq_for_each_segment(rq, i, bv) {
 		/*
 		 * the trick here is making sure that a high page is never
 		 * considered part of another segment, since that might
 		 * change with the bounce page.
 		 */
-		high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
+		high = page_to_pfn(bv.bv_page) > q->bounce_pfn;
 		if (high || highprv)
 			goto new_hw_segment;
 		if (cluster) {
-			if (seg_size + bv->bv_len > q->max_segment_size)
+			if (seg_size + bv.bv_len > q->max_segment_size)
 				goto new_segment;
-			if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
+			if (!BIOVEC_PHYS_MERGEABLE(&bvprv, &bv))
 				goto new_segment;
-			if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
+			if (!BIOVEC_SEG_BOUNDARY(q, &bvprv, &bv))
 				goto new_segment;
-			if (BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len))
+			if (BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv.bv_len))
 				goto new_hw_segment;
 
-			seg_size += bv->bv_len;
-			hw_seg_size += bv->bv_len;
+			seg_size += bv.bv_len;
+			hw_seg_size += bv.bv_len;
 			bvprv = bv;
 			continue;
 		}
 new_segment:
-		if (BIOVEC_VIRT_MERGEABLE(bvprv, bv) &&
-		    !BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv->bv_len)) {
-			hw_seg_size += bv->bv_len;
-		} else {
+		if (BIOVEC_VIRT_MERGEABLE(&bvprv, &bv) &&
+		    !BIOVEC_VIRT_OVERSIZE(hw_seg_size + bv.bv_len))
+			hw_seg_size += bv.bv_len;
+		else {
 new_hw_segment:
-			if (hw_seg_size > bio->bi_hw_front_size)
-				bio->bi_hw_front_size = hw_seg_size;
-			hw_seg_size = BIOVEC_VIRT_START_SIZE(bv) + bv->bv_len;
+			if (nr_hw_segs == 1 &&
+			    hw_seg_size > rq->bio->bi_hw_front_size)
+				rq->bio->bi_hw_front_size = hw_seg_size;
+			hw_seg_size = BIOVEC_VIRT_START_SIZE(&bv) + bv.bv_len;
 			nr_hw_segs++;
 		}
 
 		nr_phys_segs++;
 		bvprv = bv;
-		seg_size = bv->bv_len;
+		seg_size = bv.bv_len;
 		highprv = high;
 	}
-	if (hw_seg_size > bio->bi_hw_back_size)
-		bio->bi_hw_back_size = hw_seg_size;
-	if (nr_hw_segs == 1 && hw_seg_size > bio->bi_hw_front_size)
-		bio->bi_hw_front_size = hw_seg_size;
-	bio->bi_phys_segments = nr_phys_segs;
-	bio->bi_hw_segments = nr_hw_segs;
-	bio->bi_flags |= (1 << BIO_SEG_VALID);
+
+	if (nr_hw_segs == 1 &&
+	    hw_seg_size > rq->bio->bi_hw_front_size)
+		rq->bio->bi_hw_front_size = hw_seg_size;
+	if (hw_seg_size > rq->biotail->bi_hw_back_size)
+		rq->biotail->bi_hw_back_size = hw_seg_size;
+	rq->nr_phys_segments = nr_phys_segs;
+	rq->nr_hw_segments = nr_hw_segs;
 }
-EXPORT_SYMBOL(blk_recount_segments);
 
 static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 				   struct bio *nxt)
@@ -3313,48 +3340,6 @@ void submit_bio(int rw, struct bio *bio)
 
 EXPORT_SYMBOL(submit_bio);
 
-static void blk_recalc_rq_segments(struct request *rq)
-{
-	struct bio *bio, *prevbio = NULL;
-	int nr_phys_segs, nr_hw_segs;
-	unsigned int phys_size, hw_size;
-	struct request_queue *q = rq->q;
-
-	if (!rq->bio)
-		return;
-
-	phys_size = hw_size = nr_phys_segs = nr_hw_segs = 0;
-	rq_for_each_bio(bio, rq) {
-		/* Force bio hw/phys segs to be recalculated. */
-		bio->bi_flags &= ~(1 << BIO_SEG_VALID);
-
-		nr_phys_segs += bio_phys_segments(q, bio);
-		nr_hw_segs += bio_hw_segments(q, bio);
-		if (prevbio) {
-			int pseg = phys_size + prevbio->bi_size + bio->bi_size;
-			int hseg = hw_size + prevbio->bi_size + bio->bi_size;
-
-			if (blk_phys_contig_segment(q, prevbio, bio) &&
-			    pseg <= q->max_segment_size) {
-				nr_phys_segs--;
-				phys_size += prevbio->bi_size + bio->bi_size;
-			} else
-				phys_size = 0;
-
-			if (blk_hw_contig_segment(q, prevbio, bio) &&
-			    hseg <= q->max_segment_size) {
-				nr_hw_segs--;
-				hw_size += prevbio->bi_size + bio->bi_size;
-			} else
-				hw_size = 0;
-		}
-		prevbio = bio;
-	}
-
-	rq->nr_phys_segments = nr_phys_segs;
-	rq->nr_hw_segments = nr_hw_segs;
-}
-
 static void blk_recalc_rq_sectors(struct request *rq, int nsect)
 {
 	if (blk_fs_request(rq)) {

diff .prev/include/linux/blkdev.h ./include/linux/blkdev.h
--- .prev/include/linux/blkdev.h	2007-08-16 15:02:31.000000000 +1000
+++ ./include/linux/blkdev.h	2007-08-16 15:02:31.000000000 +1000
@@ -655,9 +655,6 @@ struct req_iterator {
 
 #define rq_iter_last(rq, _iter) (_iter.bio->bi_next == NULL && 	\
 				 _iter.i == _iter.bio->bi_vcnt - 1)
-#define rq_for_each_bio(_bio, rq)	\
-	if ((rq->bio))			\
-		for (_bio = (rq)->bio; _bio; _bio = _bio->bi_next)
 
 extern int blk_register_queue(struct gendisk *disk);
 extern void blk_unregister_queue(struct gendisk *disk);


* Re: [PATCH 000 of 5] Introductory patches for bio refactor.
  2007-08-16  5:13 [PATCH 000 of 5] Introductory patches for bio refactor NeilBrown
                   ` (4 preceding siblings ...)
  2007-08-16  5:13 ` [PATCH 005 of 5] Merge blk_recount_segments into blk_recalc_rq_segments NeilBrown
@ 2007-08-16  6:37 ` Jens Axboe
  5 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2007-08-16  6:37 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-kernel, Tejun Heo

On Thu, Aug 16 2007, NeilBrown wrote:
> 
> Hi Jens,
>  I wonder if you would accept these patches into the block layer.
>  They are, as far as I can tell, quite uncontroversial and provide
>  good cleanups.
> 
>  The first is a minor bug-fix.
>  The next two replace helper functions that take a bio (always the first
>  bio of a request) with versions that take a request.
> 
>  The last two combine to replace rq_for_each_bio with
>  rq_for_each_segment which makes blk_recalc_rq_segments a lot
>  cleaner, and improves (to a lesser degree) every other place that
>  used rq_for_each_bio.
> 
>  The net result is a decrease in object code size (for my x86-64 compile)
>  for block/built-in.o of 96 bytes, though there is some growth out in
>  drivers, making an over-all decrease of only 48 bytes.

Thanks for separating these out, Neil, will go over them this morning!

-- 
Jens Axboe



* Re: [PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge.
  2007-08-16  5:13 ` [PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge NeilBrown
@ 2007-08-16  7:01   ` Jens Axboe
  0 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2007-08-16  7:01 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-kernel, Tejun Heo

On Thu, Aug 16 2007, NeilBrown wrote:
> 
> ll_merge_requests_fn can update bi_hw_*_size in one case where we end
> up not merging.  This is wrong.

This looks good, obviously. Applied.

-- 
Jens Axboe



* Re: [PATCH 002 of 5] Replace bio_data with blk_rq_data
  2007-08-16  5:13 ` [PATCH 002 of 5] Replace bio_data with blk_rq_data NeilBrown
@ 2007-08-16  7:02   ` Jens Axboe
  2007-08-16  7:15     ` Neil Brown
  0 siblings, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2007-08-16  7:02 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-kernel, Tejun Heo

On Thu, Aug 16 2007, NeilBrown wrote:
> 
> Almost every call to bio_data is for the first bio
> in a request.  A future patch will add some accounting
> information to 'struct request' which will need to be
> used to find the start of the request in the bio.
> So replace bio_data with blk_rq_data which takes a 'struct request *'
> 
> The one exception is in dm-emc where using
>    page_address(bio->bi_io_vec[0].bv_page);
> is appropriate.

This (and 3+4) just look like preparatory patches if we want to merge
the full patchset, not bug fixes. I seem to recall you had more bug
fixes or cleanups in your patchset, maybe I was mistaken. So nak for now
for 2-4, I'd apply 5 but it depends on the previous.

-- 
Jens Axboe



* Re: [PATCH 002 of 5] Replace bio_data with blk_rq_data
  2007-08-16  7:02   ` Jens Axboe
@ 2007-08-16  7:15     ` Neil Brown
  2007-08-16  7:21       ` Jens Axboe
  0 siblings, 1 reply; 14+ messages in thread
From: Neil Brown @ 2007-08-16  7:15 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tejun Heo

On Thursday August 16, jens.axboe@oracle.com wrote:
> On Thu, Aug 16 2007, NeilBrown wrote:
> > 
> > Almost every call to bio_data is for the first bio
> > in a request.  A future patch will add some accounting
> > information to 'struct request' which will need to be
> > used to find the start of the request in the bio.
> > So replace bio_data with blk_rq_data which takes a 'struct request *'
> > 
> > The one exception is in dm-emc where using
> >    page_address(bio->bi_io_vec[0].bv_page);
> > is appropriate.
> 
> This (and 3+4) just look like preparatory patches if we want to merge
> the full patchset, not bug fixes. I seem to recall you had more bug
> fixes or cleanups in your patchset, maybe I was mistaken. So nak for now
> for 2-4, I'd apply 5 but it depends on the previous.
> 

I don't remember other bug fixes, but I'll look through and check.

2 and 3 are very simple changes that - I think - make it clearer what
is happening.

And I felt 5 was a sufficient improvement to justify it and 4...

I'll go hunting and see what other preliminaries I can find.

NeilBrown


* Re: [PATCH 002 of 5] Replace bio_data with blk_rq_data
  2007-08-16  7:15     ` Neil Brown
@ 2007-08-16  7:21       ` Jens Axboe
  2007-08-16 11:15         ` Neil Brown
  0 siblings, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2007-08-16  7:21 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-kernel, Tejun Heo

On Thu, Aug 16 2007, Neil Brown wrote:
> On Thursday August 16, jens.axboe@oracle.com wrote:
> > On Thu, Aug 16 2007, NeilBrown wrote:
> > > 
> > > Almost every call to bio_data is for the first bio
> > > in a request.  A future patch will add some accounting
> > > information to 'struct request' which will need to be
> > > used to find the start of the request in the bio.
> > > So replace bio_data with blk_rq_data which takes a 'struct request *'
> > > 
> > > The one exception is in dm-emc where using
> > >    page_address(bio->bi_io_vec[0].bv_page);
> > > is appropriate.
> > 
> > This (and 3+4) just look like preparatory patches if we want to merge
> > the full patchset, not bug fixes. I seem to recall you had more bug
> > fixes or cleanups in your patchset, maybe I was mistaken. So nak for now
> > for 2-4, I'd apply 5 but it depends on the previous.
> > 
> 
> I don't remember other bug fixes, but I'll look through and check.

OK, I'm probably remembering incorrectly then.

> 2 and 3 are very simple changes that - I think - make it clearer what
> is happening.

To be honest, I don't see much win in using blk_rq_data() over
bio_data() at all. I'd much much rather just see it go away!

> And I felt 5 was a sufficient improvement to justify it and 4...

5 is nice, I would like to apply that :-)

> I'll go hunting and see what other preliminaries I can find.

Thanks!

-- 
Jens Axboe



* Re: [PATCH 002 of 5] Replace bio_data with blk_rq_data
  2007-08-16  7:21       ` Jens Axboe
@ 2007-08-16 11:15         ` Neil Brown
  2007-08-16 11:19           ` Jens Axboe
  2007-08-16 11:22           ` Jens Axboe
  0 siblings, 2 replies; 14+ messages in thread
From: Neil Brown @ 2007-08-16 11:15 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, Tejun Heo

On Thursday August 16, jens.axboe@oracle.com wrote:
> > 2 and 3 are very simple changes that - I think - make it clearer what
> > is happening.
> 
> To be honest, I don't see much win in using blk_rq_data() over
> bio_data() at all. I'd much much rather just see it go away!
> 

Well, as it is always the data at the (current) head of the request...

But I can make it go away if you like... What does it become?
bvec_kmap ??

> > And I felt 5 was a sufficient improvement to justify it and 4...
> 
> 5 is nice, I would like to apply that :-)
> 

OK, here comes '5' at the top of a small set of patches.  Take it and
any of the following that you like.

NeilBrown


* Re: [PATCH 002 of 5] Replace bio_data with blk_rq_data
  2007-08-16 11:15         ` Neil Brown
@ 2007-08-16 11:19           ` Jens Axboe
  2007-08-16 11:22           ` Jens Axboe
  1 sibling, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2007-08-16 11:19 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-kernel, Tejun Heo

On Thu, Aug 16 2007, Neil Brown wrote:
> On Thursday August 16, jens.axboe@oracle.com wrote:
> > > 2 and 3 are very simple changes that - I think - make it clearer what
> > > is happening.
> > 
> > To be honest, I don't see much win in using blk_rq_data() over
> > bio_data() at all. I'd much much rather just see it go away!
> > 
> 
> Well, as it is always the data at the (current) head of the request...
> 
> But I can make it go away if you like... What does it become?
> bvec_kmap ??

Yeah, it then just becomes a kmap. And ->buffer goes away completely.

> > > And I felt 5 was a sufficient improvement to justify it and 4...
> > 
> > 5 is nice, I would like to apply that :-)
> > 
> 
> OK, here comes '5' at the top of a small set of patches.  Take it and
> any of the following that you like.

Thanks.

-- 
Jens Axboe



* Re: [PATCH 002 of 5] Replace bio_data with blk_rq_data
  2007-08-16 11:15         ` Neil Brown
  2007-08-16 11:19           ` Jens Axboe
@ 2007-08-16 11:22           ` Jens Axboe
  1 sibling, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2007-08-16 11:22 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-kernel, Tejun Heo

On Thu, Aug 16 2007, Neil Brown wrote:
> On Thursday August 16, jens.axboe@oracle.com wrote:
> > > 2 and 3 are very simple changes that - I think - make it clearer what
> > > is happening.
> > 
> > To be honest, I don't see much win in using blk_rq_data() over
> > bio_data() at all. I'd much much rather just see it go away!
> > 
> 
> Well, as it is always the data at the (current) head of the request...

So is bio_data(bio), and bio_data(rq->bio) is by definition the head of
the request. If anything, you are making it less flexible. Hence my
suggestion to just kill it off completely, it's a long over due cleanup.

-- 
Jens Axboe



end of thread, other threads:[~2007-08-16 11:24 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-08-16  5:13 [PATCH 000 of 5] Introductory patches for bio refactor NeilBrown
2007-08-16  5:13 ` [PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge NeilBrown
2007-08-16  7:01   ` Jens Axboe
2007-08-16  5:13 ` [PATCH 002 of 5] Replace bio_data with blk_rq_data NeilBrown
2007-08-16  7:02   ` Jens Axboe
2007-08-16  7:15     ` Neil Brown
2007-08-16  7:21       ` Jens Axboe
2007-08-16 11:15         ` Neil Brown
2007-08-16 11:19           ` Jens Axboe
2007-08-16 11:22           ` Jens Axboe
2007-08-16  5:13 ` [PATCH 003 of 5] Replace bio_cur_sectors with blk_rq_cur_sectors NeilBrown
2007-08-16  5:13 ` [PATCH 004 of 5] Introduce rq_for_each_segment replacing rq_for_each_bio NeilBrown
2007-08-16  5:13 ` [PATCH 005 of 5] Merge blk_recount_segments into blk_recalc_rq_segments NeilBrown
2007-08-16  6:37 ` [PATCH 000 of 5] Introductory patches for bio refactor Jens Axboe

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox