* [PATCH 0/13] Chaining sg lists for bio IO commands v3
@ 2007-05-10 10:21 Jens Axboe
2007-05-10 10:21 ` [PATCH 1/13] crypto: don't pollute the global namespace with sg_next() Jens Axboe
` (12 more replies)
0 siblings, 13 replies; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw)
To: linux-kernel
Hi,
Third version of the patchset. Changes since v2:
- Get rid of ->next and use the ->page field as a next pointer.
This saves some space in the scatterlist structure, at the cost
of one extra sg segment per chained list. Should definitely be
a space win.
- Cleanup linux/scatterlist.h, most of the stuff is outside
the ARCH_HAS_SG_CHAIN ifdef now.
- Convert the x86-64 iommu code, enable it on x86-64.
- Convert all (hopefully) SCSI drivers to use the sg helpers so
  chaining can be supported globally. I collated all the SCSI
  driver patches into the 13/13 patch, so as not to post a huge
  patch series. I would much appreciate some eyeballs on this part,
  catching any dumb mistakes I may have made. The changes are usually
  trivial, but some are not and I do clown around occasionally :-)
Like v2, you still need to enable large commands manually for a
device, like so:
# cd /sys/block/sda/queue
# echo 1024 > max_segments
# cat max_hw_sectors_kb > max_sectors_kb
which would limit you to 1024 segments (effectively 8 scatterlists
chained), and should give you IOs of at least 4MB. You can go larger
than 1024; there's no real limit.
arch/ia64/hp/sim/simscsi.c | 23 +-
arch/x86_64/kernel/pci-calgary.c | 25 +-
arch/x86_64/kernel/pci-gart.c | 44 ++--
arch/x86_64/kernel/pci-nommu.c | 5
block/ll_rw_blk.c | 41 +++-
crypto/digest.c | 2
crypto/scatterwalk.c | 2
crypto/scatterwalk.h | 2
drivers/ata/libata-core.c | 30 +--
drivers/infiniband/ulp/srp/ib_srp.c | 22 +-
drivers/scsi/3w-9xxx.c | 8
drivers/scsi/3w-xxxx.c | 8
drivers/scsi/53c700.c | 16 -
drivers/scsi/BusLogic.c | 7
drivers/scsi/NCR53c406a.c | 18 +-
drivers/scsi/a100u2w.c | 9 -
drivers/scsi/aacraid/aachba.c | 29 +--
drivers/scsi/advansys.c | 21 +-
drivers/scsi/aha1542.c | 21 +-
drivers/scsi/aha1740.c | 8
drivers/scsi/aic7xxx/aic79xx_osm.c | 3
drivers/scsi/aic7xxx/aic7xxx_osm.c | 12 -
drivers/scsi/aic94xx/aic94xx_task.c | 6
drivers/scsi/arcmsr/arcmsr_hba.c | 11 -
drivers/scsi/dc395x.c | 7
drivers/scsi/dpt_i2o.c | 13 -
drivers/scsi/eata.c | 8
drivers/scsi/esp_scsi.c | 5
drivers/scsi/gdth.c | 45 ++---
drivers/scsi/hptiop.c | 8
drivers/scsi/ibmmca.c | 11 -
drivers/scsi/ibmvscsi/ibmvscsi.c | 4
drivers/scsi/ide-scsi.c | 31 ++-
drivers/scsi/initio.c | 12 -
drivers/scsi/ipr.c | 9 -
drivers/scsi/ips.c | 74 ++++----
drivers/scsi/iscsi_tcp.c | 43 ++--
drivers/scsi/jazz_esp.c | 27 +--
drivers/scsi/lpfc/lpfc_scsi.c | 9 -
drivers/scsi/mac53c94.c | 9 -
drivers/scsi/megaraid.c | 13 -
drivers/scsi/megaraid/megaraid_mbox.c | 7
drivers/scsi/megaraid/megaraid_sas.c | 16 -
drivers/scsi/mesh.c | 12 -
drivers/scsi/ncr53c8xx.c | 7
drivers/scsi/nsp32.c | 9 -
drivers/scsi/pcmcia/sym53c500_cs.c | 18 +-
drivers/scsi/qla1280.c | 66 ++++---
drivers/scsi/qla2xxx/qla_iocb.c | 9 -
drivers/scsi/qla4xxx/ql4_iocb.c | 8
drivers/scsi/qlogicfas408.c | 9 -
drivers/scsi/qlogicpti.c | 15 -
drivers/scsi/scsi_debug.c | 14 -
drivers/scsi/scsi_lib.c | 230 +++++++++++++++++++-------
drivers/scsi/scsi_tgt_lib.c | 4
drivers/scsi/sym53c416.c | 9 -
drivers/scsi/sym53c8xx_2/sym_glue.c | 7
drivers/scsi/u14-34f.c | 10 -
drivers/scsi/ultrastor.c | 10 -
drivers/scsi/wd7000.c | 7
include/asm-i386/dma-mapping.h | 13 -
include/asm-i386/scatterlist.h | 2
include/asm-x86_64/dma-mapping.h | 3
include/asm-x86_64/scatterlist.h | 2
include/linux/libata.h | 16 +
include/linux/scatterlist.h | 55 ++++++
include/scsi/scsi.h | 7
include/scsi/scsi_cmnd.h | 3
68 files changed, 753 insertions(+), 516 deletions(-)
^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH 1/13] crypto: don't pollute the global namespace with sg_next()
  2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe
@ 2007-05-10 10:21 ` Jens Axboe
  2007-05-10 10:21 ` [PATCH 2/13] Add sg helpers for iterating over a scatterlist table Jens Axboe
  ` (11 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jens Axboe

It's a subsystem function, prefix it as such.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 crypto/digest.c      | 2 +-
 crypto/scatterwalk.c | 2 +-
 crypto/scatterwalk.h | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/crypto/digest.c b/crypto/digest.c
index 1bf7414..e56de67 100644
--- a/crypto/digest.c
+++ b/crypto/digest.c
@@ -77,7 +77,7 @@ static int update2(struct hash_desc *desc,
 		if (!nbytes)
 			break;

-		sg = sg_next(sg);
+		sg = scatterwalk_sg_next(sg);
 	}

 	return 0;
diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 81afd17..2e51f82 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -70,7 +70,7 @@ static void scatterwalk_pagedone(struct scatter_walk *walk, int out,
 		walk->offset += PAGE_SIZE - 1;
 		walk->offset &= PAGE_MASK;
 		if (walk->offset >= walk->sg->offset + walk->sg->length)
-			scatterwalk_start(walk, sg_next(walk->sg));
+			scatterwalk_start(walk, scatterwalk_sg_next(walk->sg));
 	}
 }
diff --git a/crypto/scatterwalk.h b/crypto/scatterwalk.h
index f1592cc..e049c62 100644
--- a/crypto/scatterwalk.h
+++ b/crypto/scatterwalk.h
@@ -20,7 +20,7 @@

 #include "internal.h"

-static inline struct scatterlist *sg_next(struct scatterlist *sg)
+static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg)
 {
 	return (++sg)->length ? sg : (void *)sg->page;
 }
-- 
1.5.2.rc1

^ permalink raw reply related	[flat|nested] 28+ messages in thread
* [PATCH 2/13] Add sg helpers for iterating over a scatterlist table
  2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe
  2007-05-10 10:21 ` [PATCH 1/13] crypto: don't pollute the global namespace with sg_next() Jens Axboe
@ 2007-05-10 10:21 ` Jens Axboe
  2007-05-10 10:39   ` Andrew Morton
  2007-05-10 10:21 ` [PATCH 3/13] libata: convert to using sg helpers Jens Axboe
  ` (10 subsequent siblings)
  12 siblings, 1 reply; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jens Axboe

First step to being able to change the scatterlist setup without
having to modify drivers (a lot :-)

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/linux/scatterlist.h | 9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 4efbd9c..c5bffde 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -20,4 +20,13 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
 	sg_set_buf(sg, buf, buflen);
 }

+#define sg_next(sg)		((sg) + 1)
+#define sg_last(sg, nents)	(&(sg[nents - 1]))
+
+/*
+ * Loop over each sg element, following the pointer to a new list if necessary
+ */
+#define for_each_sg(sglist, sg, nr, __i)	\
+	for (__i = 0, sg = (sglist); __i < nr; __i++, sg = sg_next(sg))
+
 #endif /* _LINUX_SCATTERLIST_H */
-- 
1.5.2.rc1

^ permalink raw reply related	[flat|nested] 28+ messages in thread
* Re: [PATCH 2/13] Add sg helpers for iterating over a scatterlist table
  2007-05-10 10:21 ` [PATCH 2/13] Add sg helpers for iterating over a scatterlist table Jens Axboe
@ 2007-05-10 10:39   ` Andrew Morton
  2007-05-10 10:42     ` Jens Axboe
  0 siblings, 1 reply; 28+ messages in thread
From: Andrew Morton @ 2007-05-10 10:39 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel

On Thu, 10 May 2007 12:21:44 +0200 Jens Axboe <jens.axboe@oracle.com> wrote:

> First step to being able to change the scatterlist setup without
> having to modify drivers (a lot :-)
> 
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  include/linux/scatterlist.h | 9 +++++++++
>  1 files changed, 9 insertions(+), 0 deletions(-)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 4efbd9c..c5bffde 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -20,4 +20,13 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
>  	sg_set_buf(sg, buf, buflen);
>  }
> 
> +#define sg_next(sg)		((sg) + 1)
> +#define sg_last(sg, nents)	(&(sg[nents - 1]))

Looks a bit underparenthesised.

> +/*
> + * Loop over each sg element, following the pointer to a new list if necessary
> + */
> +#define for_each_sg(sglist, sg, nr, __i)	\
> +	for (__i = 0, sg = (sglist); __i < nr; __i++, sg = sg_next(sg))
> +

So does this.

I don't see how it "follows the pointer to a new list".  All it's doing is
iterating across an array?

^ permalink raw reply	[flat|nested] 28+ messages in thread
* Re: [PATCH 2/13] Add sg helpers for iterating over a scatterlist table
  2007-05-10 10:39   ` Andrew Morton
@ 2007-05-10 10:42     ` Jens Axboe
  0 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:42 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

On Thu, May 10 2007, Andrew Morton wrote:
> On Thu, 10 May 2007 12:21:44 +0200 Jens Axboe <jens.axboe@oracle.com> wrote:
> 
> > First step to being able to change the scatterlist setup without
> > having to modify drivers (a lot :-)
> > 
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > ---
> >  include/linux/scatterlist.h | 9 +++++++++
> >  1 files changed, 9 insertions(+), 0 deletions(-)
> > 
> > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> > index 4efbd9c..c5bffde 100644
> > --- a/include/linux/scatterlist.h
> > +++ b/include/linux/scatterlist.h
> > @@ -20,4 +20,13 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
> >  	sg_set_buf(sg, buf, buflen);
> >  }
> > 
> > +#define sg_next(sg)		((sg) + 1)
> > +#define sg_last(sg, nents)	(&(sg[nents - 1]))
> 
> Looks a bit underparenthesised.
> 
> > +/*
> > + * Loop over each sg element, following the pointer to a new list if necessary
> > + */
> > +#define for_each_sg(sglist, sg, nr, __i)	\
> > +	for (__i = 0, sg = (sglist); __i < nr; __i++, sg = sg_next(sg))
> > +
> 
> So does this.

Yeah I know, both of these are fixed up when the chain support is added
(patch 07). So I didn't bother fixing these up, but I will make a note
of it for the first round.

> I don't see how it "follows the pointer to a new list".  All it's doing is
> iterating across an array?

It doesn't, this first patch just allows you to convert drivers to using
for_each_sg() to loop over sg elements. Then you can later introduce sg
chaining behind their back, they don't have to know about that. Patch 07
is the one that enables sg chaining for x86 and includes the generic
bits for linux/scatterlist.h.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 28+ messages in thread
* [PATCH 3/13] libata: convert to using sg helpers
  2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe
  2007-05-10 10:21 ` [PATCH 1/13] crypto: don't pollute the global namespace with sg_next() Jens Axboe
  2007-05-10 10:21 ` [PATCH 2/13] Add sg helpers for iterating over a scatterlist table Jens Axboe
@ 2007-05-10 10:21 ` Jens Axboe
  2007-05-10 10:21 ` [PATCH 4/13] block: " Jens Axboe
  ` (9 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jens Axboe

This converts libata to using the sg helpers for looking up sg
elements, instead of doing it manually.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/ata/libata-core.c | 30 ++++++++++++++++--------------
 include/linux/libata.h    | 16 ++++++++++------
 2 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index 4595d1f..fabb1f4 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -1370,7 +1370,7 @@ static void ata_qc_complete_internal(struct ata_queued_cmd *qc)
  */
 unsigned ata_exec_internal_sg(struct ata_device *dev,
			      struct ata_taskfile *tf, const u8 *cdb,
-			      int dma_dir, struct scatterlist *sg,
+			      int dma_dir, struct scatterlist *sgl,
			      unsigned int n_elem)
 {
 	struct ata_port *ap = dev->ap;
@@ -1428,11 +1428,12 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
 	qc->dma_dir = dma_dir;
 	if (dma_dir != DMA_NONE) {
 		unsigned int i, buflen = 0;
+		struct scatterlist *sg;

-		for (i = 0; i < n_elem; i++)
-			buflen += sg[i].length;
+		for_each_sg(sgl, sg, n_elem, i)
+			buflen += sg->length;

-		ata_sg_init(qc, sg, n_elem);
+		ata_sg_init(qc, sgl, n_elem);
 		qc->nbytes = buflen;
 	}

@@ -3982,7 +3983,7 @@ void ata_sg_clean(struct ata_queued_cmd *qc)
 	if (qc->n_elem)
 		dma_unmap_sg(ap->dev, sg, qc->n_elem, dir);
 	/* restore last sg */
-	sg[qc->orig_n_elem - 1].length += qc->pad_len;
+	sg_last(sg, qc->orig_n_elem)->length += qc->pad_len;
 	if (pad_buf) {
 		struct scatterlist *psg = &qc->pad_sgent;
 		void *addr = kmap_atomic(psg->page, KM_IRQ0);
@@ -4141,6 +4142,7 @@ void ata_sg_init_one(struct ata_queued_cmd *qc, void *buf, unsigned int buflen)
 	qc->orig_n_elem = 1;
 	qc->buf_virt = buf;
 	qc->nbytes = buflen;
+	qc->cursg = qc->__sg;

 	sg_init_one(&qc->sgent, buf, buflen);
 }
@@ -4166,6 +4168,7 @@ void ata_sg_init(struct ata_queued_cmd *qc, struct scatterlist *sg,
 	qc->__sg = sg;
 	qc->n_elem = n_elem;
 	qc->orig_n_elem = n_elem;
+	qc->cursg = qc->__sg;
 }

 /**
@@ -4255,7 +4258,7 @@ static int ata_sg_setup(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct scatterlist *sg = qc->__sg;
-	struct scatterlist *lsg = &sg[qc->n_elem - 1];
+	struct scatterlist *lsg = sg_last(qc->__sg, qc->n_elem);
 	int n_elem, pre_n_elem, dir, trim_sg = 0;

 	VPRINTK("ENTER, ata%u\n", ap->print_id);
@@ -4419,7 +4422,6 @@ void ata_data_xfer_noirq(struct ata_device *adev, unsigned char *buf,
 static void ata_pio_sector(struct ata_queued_cmd *qc)
 {
 	int do_write = (qc->tf.flags & ATA_TFLAG_WRITE);
-	struct scatterlist *sg = qc->__sg;
 	struct ata_port *ap = qc->ap;
 	struct page *page;
 	unsigned int offset;
@@ -4428,8 +4430,8 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
 	if (qc->curbytes == qc->nbytes - qc->sect_size)
 		ap->hsm_task_state = HSM_ST_LAST;

-	page = sg[qc->cursg].page;
-	offset = sg[qc->cursg].offset + qc->cursg_ofs;
+	page = qc->cursg->page;
+	offset = qc->cursg->offset + qc->cursg_ofs;

 	/* get the current page and offset */
 	page = nth_page(page, (offset >> PAGE_SHIFT));
@@ -4457,8 +4459,8 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
 	qc->curbytes += qc->sect_size;
 	qc->cursg_ofs += qc->sect_size;

-	if (qc->cursg_ofs == (&sg[qc->cursg])->length) {
-		qc->cursg++;
+	if (qc->cursg_ofs == qc->cursg->length) {
+		qc->cursg = sg_next(qc->cursg);
 		qc->cursg_ofs = 0;
 	}
 }
@@ -4551,7 +4553,7 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes)
 		ap->hsm_task_state = HSM_ST_LAST;

 next_sg:
-	if (unlikely(qc->cursg >= qc->n_elem)) {
+	if (unlikely(qc->cursg == sg_last(qc->__sg, qc->n_elem))) {
 		/*
 		 * The end of qc->sg is reached and the device expects
 		 * more data to transfer. In order not to overrun qc->sg
@@ -4574,7 +4576,7 @@ next_sg:
 		return;
 	}

-	sg = &qc->__sg[qc->cursg];
+	sg = qc->cursg;

 	page = sg->page;
 	offset = sg->offset + qc->cursg_ofs;
@@ -4613,7 +4615,7 @@ next_sg:
 	qc->cursg_ofs += count;

 	if (qc->cursg_ofs == sg->length) {
-		qc->cursg++;
+		qc->cursg = sg_next(qc->cursg);
 		qc->cursg_ofs = 0;
 	}

diff --git a/include/linux/libata.h b/include/linux/libata.h
index 7906d75..8fad10e 100644
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -30,7 +30,7 @@
 #include <linux/interrupt.h>
 #include <linux/pci.h>
 #include <linux/dma-mapping.h>
-#include <asm/scatterlist.h>
+#include <linux/scatterlist.h>
 #include <linux/io.h>
 #include <linux/ata.h>
 #include <linux/workqueue.h>
@@ -388,6 +388,7 @@ struct ata_queued_cmd {
 	unsigned long		flags;		/* ATA_QCFLAG_xxx */
 	unsigned int		tag;
 	unsigned int		n_elem;
+	unsigned int		n_iter;
 	unsigned int		orig_n_elem;

 	int			dma_dir;
@@ -398,7 +399,7 @@ struct ata_queued_cmd {
 	unsigned int		nbytes;
 	unsigned int		curbytes;

-	unsigned int		cursg;
+	struct scatterlist	*cursg;
 	unsigned int		cursg_ofs;

 	struct scatterlist	sgent;
@@ -935,7 +936,7 @@ ata_sg_is_last(struct scatterlist *sg, struct ata_queued_cmd *qc)
 		return 1;
 	if (qc->pad_len)
 		return 0;
-	if (((sg - qc->__sg) + 1) == qc->n_elem)
+	if (qc->n_iter == qc->n_elem)
 		return 1;
 	return 0;
 }
@@ -943,6 +944,7 @@ ata_sg_is_last(struct scatterlist *sg, struct ata_queued_cmd *qc)
 static inline struct scatterlist *
 ata_qc_first_sg(struct ata_queued_cmd *qc)
 {
+	qc->n_iter = 0;
 	if (qc->n_elem)
 		return qc->__sg;
 	if (qc->pad_len)
@@ -955,8 +957,8 @@ ata_qc_next_sg(struct scatterlist *sg, struct ata_queued_cmd *qc)
 {
 	if (sg == &qc->pad_sgent)
 		return NULL;
-	if (++sg - qc->__sg < qc->n_elem)
-		return sg;
+	if (++qc->n_iter < qc->n_elem)
+		return sg_next(sg);
 	if (qc->pad_len)
 		return &qc->pad_sgent;
 	return NULL;
@@ -1157,9 +1159,11 @@ static inline void ata_qc_reinit(struct ata_queued_cmd *qc)
 	qc->dma_dir = DMA_NONE;
 	qc->__sg = NULL;
 	qc->flags = 0;
-	qc->cursg = qc->cursg_ofs = 0;
+	qc->cursg = NULL;
+	qc->cursg_ofs = 0;
 	qc->nbytes = qc->curbytes = 0;
 	qc->n_elem = 0;
+	qc->n_iter = 0;
 	qc->err_mask = 0;
 	qc->pad_len = 0;
 	qc->sect_size = ATA_SECT_SIZE;
-- 
1.5.2.rc1

^ permalink raw reply related	[flat|nested] 28+ messages in thread
* [PATCH 4/13] block: convert to using sg helpers
  2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe
  ` (2 preceding siblings ...)
  2007-05-10 10:21 ` [PATCH 3/13] libata: convert to using sg helpers Jens Axboe
@ 2007-05-10 10:21 ` Jens Axboe
  2007-05-10 10:21 ` [PATCH 5/13] scsi: " Jens Axboe
  ` (8 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jens Axboe

Convert the main rq mapper (blk_rq_map_sg()) to the sg helper setup.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/ll_rw_blk.c | 19 ++++++++++++-------
 1 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c
index 17e1889..b01a5f2 100644
--- a/block/ll_rw_blk.c
+++ b/block/ll_rw_blk.c
@@ -30,6 +30,7 @@
 #include <linux/cpu.h>
 #include <linux/blktrace_api.h>
 #include <linux/fault-inject.h>
+#include <linux/scatterlist.h>

 /*
  * for max sense size
@@ -1307,9 +1308,11 @@ static int blk_hw_contig_segment(request_queue_t *q, struct bio *bio,
  * map a request to scatterlist, return number of sg entries setup. Caller
  * must make sure sg can hold rq->nr_phys_segments entries
  */
-int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg)
+int blk_rq_map_sg(request_queue_t *q, struct request *rq,
+		  struct scatterlist *sglist)
 {
 	struct bio_vec *bvec, *bvprv;
+	struct scatterlist *next_sg, *sg;
 	struct bio *bio;
 	int nsegs, i, cluster;

@@ -1320,6 +1323,7 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
 	 * for each bio in rq
 	 */
 	bvprv = NULL;
+	sg = next_sg = &sglist[0];
 	rq_for_each_bio(bio, rq) {
 		/*
 		 * for each segment in bio
@@ -1328,7 +1332,7 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
 			int nbytes = bvec->bv_len;

 			if (bvprv && cluster) {
-				if (sg[nsegs - 1].length + nbytes > q->max_segment_size)
+				if (sg->length + nbytes > q->max_segment_size)
 					goto new_segment;

 				if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
@@ -1336,14 +1340,15 @@ int blk_rq_map_sg(request_queue_t *q, struct request *rq, struct scatterlist *sg
 				if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
 					goto new_segment;

-				sg[nsegs - 1].length += nbytes;
+				sg->length += nbytes;
 			} else {
 new_segment:
-				memset(&sg[nsegs],0,sizeof(struct scatterlist));
-				sg[nsegs].page = bvec->bv_page;
-				sg[nsegs].length = nbytes;
-				sg[nsegs].offset = bvec->bv_offset;
+				sg = next_sg;
+				next_sg = sg_next(sg);

+				sg->page = bvec->bv_page;
+				sg->length = nbytes;
+				sg->offset = bvec->bv_offset;
 				nsegs++;
 			}
 			bvprv = bvec;
-- 
1.5.2.rc1

^ permalink raw reply related	[flat|nested] 28+ messages in thread
* [PATCH 5/13] scsi: convert to using sg helpers
  2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe
  ` (3 preceding siblings ...)
  2007-05-10 10:21 ` [PATCH 4/13] block: " Jens Axboe
@ 2007-05-10 10:21 ` Jens Axboe
  2007-05-10 10:21 ` [PATCH 6/13] i386 dma_map_sg: " Jens Axboe
  ` (7 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jens Axboe

This converts the SCSI mid layer to using the sg helpers for looking up
sg elements, instead of doing it manually.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/scsi/scsi_lib.c | 20 +++++++++++---------
 1 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 1f5a07b..f944690 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -302,14 +302,15 @@ static int scsi_req_map_sg(struct request *rq, struct scatterlist *sgl,
 	struct request_queue *q = rq->q;
 	int nr_pages = (bufflen + sgl[0].offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	unsigned int data_len = 0, len, bytes, off;
+	struct scatterlist *sg;
 	struct page *page;
 	struct bio *bio = NULL;
 	int i, err, nr_vecs = 0;

-	for (i = 0; i < nsegs; i++) {
-		page = sgl[i].page;
-		off = sgl[i].offset;
-		len = sgl[i].length;
+	for_each_sg(sgl, sg, nsegs, i) {
+		page = sg->page;
+		off = sg->offset;
+		len = sg->length;
 		data_len += len;

 		while (len > 0) {
@@ -2240,18 +2241,19 @@ EXPORT_SYMBOL_GPL(scsi_target_unblock);
 *
 * Returns virtual address of the start of the mapped page
 */
-void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count,
+void *scsi_kmap_atomic_sg(struct scatterlist *sgl, int sg_count,
			  size_t *offset, size_t *len)
 {
 	int i;
 	size_t sg_len = 0, len_complete = 0;
+	struct scatterlist *sg;
 	struct page *page;

 	WARN_ON(!irqs_disabled());

-	for (i = 0; i < sg_count; i++) {
+	for_each_sg(sgl, sg, sg_count, i) {
 		len_complete = sg_len; /* Complete sg-entries */
-		sg_len += sg[i].length;
+		sg_len += sg->length;
 		if (sg_len > *offset)
 			break;
 	}
@@ -2265,10 +2267,10 @@ void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count,
 	}

 	/* Offset starting from the beginning of first page in this sg-entry */
-	*offset = *offset - len_complete + sg[i].offset;
+	*offset = *offset - len_complete + sg->offset;

 	/* Assumption: contiguous pages can be accessed as "page + i" */
-	page = nth_page(sg[i].page, (*offset >> PAGE_SHIFT));
+	page = nth_page(sg->page, (*offset >> PAGE_SHIFT));
 	*offset &= ~PAGE_MASK;

 	/* Bytes in this sg-entry from *offset to the end of the page */
-- 
1.5.2.rc1

^ permalink raw reply related	[flat|nested] 28+ messages in thread
* [PATCH 6/13] i386 dma_map_sg: convert to using sg helpers
  2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe
  ` (4 preceding siblings ...)
  2007-05-10 10:21 ` [PATCH 5/13] scsi: " Jens Axboe
@ 2007-05-10 10:21 ` Jens Axboe
  2007-05-10 10:21 ` [PATCH 7/13] i386 sg: add support for chaining scatterlists Jens Axboe
  ` (6 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jens Axboe

The dma mapping helpers need to be converted to using sg helpers as
well, so they will work with a chained sglist setup.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/asm-i386/dma-mapping.h | 13 +++++++------
 1 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/asm-i386/dma-mapping.h b/include/asm-i386/dma-mapping.h
index 183eebe..a956ec1 100644
--- a/include/asm-i386/dma-mapping.h
+++ b/include/asm-i386/dma-mapping.h
@@ -2,10 +2,10 @@
 #define _ASM_I386_DMA_MAPPING_H

 #include <linux/mm.h>
+#include <linux/scatterlist.h>

 #include <asm/cache.h>
 #include <asm/io.h>
-#include <asm/scatterlist.h>
 #include <asm/bug.h>

 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
@@ -35,18 +35,19 @@ dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
 }

 static inline int
-dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
	   enum dma_data_direction direction)
 {
+	struct scatterlist *sg;
 	int i;

 	BUG_ON(!valid_dma_direction(direction));
-	WARN_ON(nents == 0 || sg[0].length == 0);
+	WARN_ON(nents == 0 || sglist[0].length == 0);

-	for (i = 0; i < nents; i++ ) {
-		BUG_ON(!sg[i].page);
+	for_each_sg(sglist, sg, nents, i) {
+		BUG_ON(!sg->page);

-		sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset;
+		sg->dma_address = page_to_phys(sg->page) + sg->offset;
 	}

 	flush_write_buffers();
-- 
1.5.2.rc1

^ permalink raw reply related	[flat|nested] 28+ messages in thread
* [PATCH 7/13] i386 sg: add support for chaining scatterlists
  2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe
  ` (5 preceding siblings ...)
  2007-05-10 10:21 ` [PATCH 6/13] i386 dma_map_sg: " Jens Axboe
@ 2007-05-10 10:21 ` Jens Axboe
  2007-05-10 10:43   ` Andrew Morton
  2007-05-10 10:59   ` Benny Halevy
  2007-05-10 10:21 ` [PATCH 8/13] x86-64: update iommu/dma mapping functions to sg helpers Jens Axboe
  ` (5 subsequent siblings)
  12 siblings, 2 replies; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jens Axboe

The core of the patch - allow the last sg element in a scatterlist
table to point to the start of a new table. We overload the LSB of
the page pointer to indicate whether this is a valid sg entry, or
merely a link to the next list.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 include/asm-i386/scatterlist.h |  2 +
 include/linux/scatterlist.h    | 52 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/include/asm-i386/scatterlist.h b/include/asm-i386/scatterlist.h
index d7e45a8..bd5164a 100644
--- a/include/asm-i386/scatterlist.h
+++ b/include/asm-i386/scatterlist.h
@@ -10,6 +10,8 @@ struct scatterlist {
 	unsigned int	length;
 };

+#define ARCH_HAS_SG_CHAIN
+
 /* These macros should be used after a pci_map_sg call has been done
  * to get bus addresses of each of the SG entries and their lengths.
  * You should only work with the number of sg entries pci_map_sg
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index c5bffde..bad6b9e 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -20,13 +20,59 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
 	sg_set_buf(sg, buf, buflen);
 }

-#define sg_next(sg)		((sg) + 1)
-#define sg_last(sg, nents)	(&(sg[nents - 1]))
+#define sg_is_chain(sg)		((unsigned long) (sg)->page & 0x01)
+#define sg_chain_ptr(sg)	\
+	((struct scatterlist *) ((unsigned long) (sg)->page & ~0x01))
+
+/*
+ * We overload the meaning of ->page for sg chaining. If the LSB is
+ * set, the page member contains a pointer to the next sgtable.
+ */
+static inline struct scatterlist *sg_next(struct scatterlist *sg)
+{
+	if (sg_is_chain(sg))
+		return sg_chain_ptr(sg);
+
+	return sg + 1;
+}

 /*
  * Loop over each sg element, following the pointer to a new list if necessary
  */
 #define for_each_sg(sglist, sg, nr, __i)	\
-	for (__i = 0, sg = (sglist); __i < nr; __i++, sg = sg_next(sg))
+	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
+
+/*
+ * We could improve this by passing in the maximum size of an sglist, so
+ * we could jump directly to the last table. That would eliminate this
+ * (potentially) lengthy scan.
+ */
+static inline struct scatterlist *sg_last(struct scatterlist *sgl,
+					  unsigned int nents)
+{
+#ifdef ARCH_HAS_SG_CHAIN
+	struct scatterlist *ret = &sgl[nents - 1];
+#else
+	struct scatterlist *sg, *ret = NULL;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		ret = sg;
+
+#endif
+	return ret;
+}
+
+/*
+ * Chain previous sglist to this one
+ */
+static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
+			    struct scatterlist *sgl)
+{
+#ifndef ARCH_HAS_SG_CHAIN
+	BUG();
+#endif
+	prv[prv_nents - 1].page = (struct page *) ((unsigned long) sgl | 0x01);
+}

 #endif /* _LINUX_SCATTERLIST_H */
-- 
1.5.2.rc1

^ permalink raw reply related	[flat|nested] 28+ messages in thread
* Re: [PATCH 7/13] i386 sg: add support for chaining scatterlists
  2007-05-10 10:21 ` [PATCH 7/13] i386 sg: add support for chaining scatterlists Jens Axboe
@ 2007-05-10 10:43   ` Andrew Morton
  2007-05-10 10:44     ` Jens Axboe
  2007-05-10 10:59   ` Benny Halevy
  1 sibling, 1 reply; 28+ messages in thread
From: Andrew Morton @ 2007-05-10 10:43 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel

On Thu, 10 May 2007 12:21:49 +0200 Jens Axboe <jens.axboe@oracle.com> wrote:

> The core of the patch - allow the last sg element in a scatterlist
> table to point to the start of a new table. We overload the LSB of
> the page pointer to indicate whether this is a valid sg entry, or
> merely a link to the next list.
> 
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
>  include/asm-i386/scatterlist.h |  2 +
>  include/linux/scatterlist.h    | 52 +++++++++++++++++++++++++++++++++++++--
>  2 files changed, 51 insertions(+), 3 deletions(-)
> 
> diff --git a/include/asm-i386/scatterlist.h b/include/asm-i386/scatterlist.h
> index d7e45a8..bd5164a 100644
> --- a/include/asm-i386/scatterlist.h
> +++ b/include/asm-i386/scatterlist.h
> @@ -10,6 +10,8 @@ struct scatterlist {
>  	unsigned int	length;
>  };
> 
> +#define ARCH_HAS_SG_CHAIN
> +
>  /* These macros should be used after a pci_map_sg call has been done
>   * to get bus addresses of each of the SG entries and their lengths.
>   * You should only work with the number of sg entries pci_map_sg
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index c5bffde..bad6b9e 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -20,13 +20,59 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
>  	sg_set_buf(sg, buf, buflen);
>  }
> 
> ...
> 
> /*
>  * Loop over each sg element, following the pointer to a new list if necessary
>  */
> #define for_each_sg(sglist, sg, nr, __i)	\
> -	for (__i = 0, sg = (sglist); __i < nr; __i++, sg = sg_next(sg))
> +	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
> +
> +/*
> + * We could improve this by passing in the maximum size of an sglist, so
> + * we could jump directly to the last table. That would eliminate this
> + * (potentially) lengthy scan.
> + */
> +static inline struct scatterlist *sg_last(struct scatterlist *sgl,
> +					  unsigned int nents)
> +{
> +#ifdef ARCH_HAS_SG_CHAIN
> +	struct scatterlist *ret = &sgl[nents - 1];
> +#else
> +	struct scatterlist *sg, *ret = NULL;
> +	int i;
> +
> +	for_each_sg(sgl, sg, nents, i)
> +		ret = sg;
> +
> +#endif
> +	return ret;
> +}

Looks too large to be inlined.

> +/*
> + * Chain previous sglist to this one
> + */
> +static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
> +			    struct scatterlist *sgl)
> +{
> +#ifndef ARCH_HAS_SG_CHAIN
> +	BUG();
> +#endif

Can use BUILD_BUG_ON here.  Or just #error.

> +	prv[prv_nents - 1].page = (struct page *) ((unsigned long) sgl | 0x01);
> +}

^ permalink raw reply	[flat|nested] 28+ messages in thread
* Re: [PATCH 7/13] i386 sg: add support for chaining scatterlists
  2007-05-10 10:43   ` Andrew Morton
@ 2007-05-10 10:44     ` Jens Axboe
  2007-05-10 10:46       ` Jens Axboe
  0 siblings, 1 reply; 28+ messages in thread
From: Jens Axboe @ 2007-05-10 10:44 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

On Thu, May 10 2007, Andrew Morton wrote:
> On Thu, 10 May 2007 12:21:49 +0200 Jens Axboe <jens.axboe@oracle.com> wrote:
> 
> > The core of the patch - allow the last sg element in a scatterlist
> > table to point to the start of a new table. We overload the LSB of
> > the page pointer to indicate whether this is a valid sg entry, or
> > merely a link to the next list.
> > 
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > ---
> >  include/asm-i386/scatterlist.h |  2 +
> >  include/linux/scatterlist.h    | 52 +++++++++++++++++++++++++++++++++++++--
> >  2 files changed, 51 insertions(+), 3 deletions(-)
> > 
> > diff --git a/include/asm-i386/scatterlist.h b/include/asm-i386/scatterlist.h
> > index d7e45a8..bd5164a 100644
> > --- a/include/asm-i386/scatterlist.h
> > +++ b/include/asm-i386/scatterlist.h
> > @@ -10,6 +10,8 @@ struct scatterlist {
> >  	unsigned int	length;
> >  };
> > 
> > +#define ARCH_HAS_SG_CHAIN
> > +
> >  /* These macros should be used after a pci_map_sg call has been done
> >   * to get bus addresses of each of the SG entries and their lengths.
> >   * You should only work with the number of sg entries pci_map_sg
> > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> > index c5bffde..bad6b9e 100644
> > --- a/include/linux/scatterlist.h
> > +++ b/include/linux/scatterlist.h
> > @@ -20,13 +20,59 @@ static inline void sg_init_one(struct scatterlist *sg, const void *buf,
> >  	sg_set_buf(sg, buf, buflen);
> >  }
> > 
> > ...
> > 
> > /*
> >  * Loop over each sg element, following the pointer to a new list if necessary
> >  */
> > #define for_each_sg(sglist, sg, nr, __i)	\
> > -	for (__i = 0, sg = (sglist); __i < nr; __i++, sg = sg_next(sg))
> > +	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
> > +
> > +/*
> > + * We could improve this by passing in the maximum size of an sglist, so
> > + * we could jump directly to the last table. That would eliminate this
> > + * (potentially) lengthy scan.
> > + */
> > +static inline struct scatterlist *sg_last(struct scatterlist *sgl,
> > +					  unsigned int nents)
> > +{
> > +#ifdef ARCH_HAS_SG_CHAIN
> > +	struct scatterlist *ret = &sgl[nents - 1];
> > +#else
> > +	struct scatterlist *sg, *ret = NULL;
> > +	int i;
> > +
> > +	for_each_sg(sgl, sg, nents, i)
> > +		ret = sg;
> > +
> > +#endif
> > +	return ret;
> > +}
> 
> Looks too large to be inlined.

Yeah, I'm inclined to agree. Perhaps it would be better to put this
stuff in lib/scatterlist.c or something like that instead?

> > +/*
> > + * Chain previous sglist to this one
> > + */
> > +static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
> > +			    struct scatterlist *sgl)
> > +{
> > +#ifndef ARCH_HAS_SG_CHAIN
> > +	BUG();
> > +#endif
> 
> Can use BUILD_BUG_ON here.  Or just #error.

Good idea, thanks!

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 28+ messages in thread
* Re: [PATCH 7/13] i386 sg: add support for chaining scatterlists 2007-05-10 10:44 ` Jens Axboe @ 2007-05-10 10:46 ` Jens Axboe 2007-05-10 10:52 ` Andrew Morton 0 siblings, 1 reply; 28+ messages in thread From: Jens Axboe @ 2007-05-10 10:46 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel On Thu, May 10 2007, Jens Axboe wrote: > Yeah, I'm inclined to agree. Perhaps it would be better to put this > stuff in lib/scatterlist.c or something like that instead? > > > > +/* > > > + * Chain previous sglist to this one > > > + */ > > > +static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents, > > > + struct scatterlist *sgl) > > > +{ > > > +#ifndef ARCH_HAS_SG_CHAIN > > > + BUG(); > > > +#endif > > > > Can use BUILD_BUG_ON here. Or just #error. > > Good idea, thanks! No wait a second, that wont work. The code is always being built in sg scsi_lib.c, it should just not be called unless we can do chaining. We will never have a large number of segments that require chaining without ARCH_HAS_SG_CHAIN, so it'll never be called in that case. So it has to remain as it is, a BUG(). -- Jens Axboe ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 7/13] i386 sg: add support for chaining scatterlists 2007-05-10 10:46 ` Jens Axboe @ 2007-05-10 10:52 ` Andrew Morton 2007-05-10 11:21 ` Jens Axboe 0 siblings, 1 reply; 28+ messages in thread From: Andrew Morton @ 2007-05-10 10:52 UTC (permalink / raw) To: Jens Axboe; +Cc: linux-kernel On Thu, 10 May 2007 12:46:53 +0200 Jens Axboe <jens.axboe@oracle.com> wrote: > On Thu, May 10 2007, Jens Axboe wrote: > > Yeah, I'm inclined to agree. Perhaps it would be better to put this > > stuff in lib/scatterlist.c or something like that instead? > > > > > > +/* > > > > + * Chain previous sglist to this one > > > > + */ > > > > +static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents, > > > > + struct scatterlist *sgl) > > > > +{ > > > > +#ifndef ARCH_HAS_SG_CHAIN > > > > + BUG(); > > > > +#endif > > > > > > Can use BUILD_BUG_ON here. Or just #error. > > > > Good idea, thanks! > > No wait a second, that wont work. The code is always being built in sg > scsi_lib.c, it should just not be called unless we can do chaining. We > will never have a large number of segments that require chaining without > ARCH_HAS_SG_CHAIN, so it'll never be called in that case. So it has to > remain as it is, a BUG(). Confused. If it should never be called, why does it even get compiled in? ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 7/13] i386 sg: add support for chaining scatterlists 2007-05-10 10:52 ` Andrew Morton @ 2007-05-10 11:21 ` Jens Axboe 0 siblings, 0 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 11:21 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel On Thu, May 10 2007, Andrew Morton wrote: > On Thu, 10 May 2007 12:46:53 +0200 Jens Axboe <jens.axboe@oracle.com> wrote: > > > On Thu, May 10 2007, Jens Axboe wrote: > > > Yeah, I'm inclined to agree. Perhaps it would be better to put this > > > stuff in lib/scatterlist.c or something like that instead? > > > > > > > > +/* > > > > > + * Chain previous sglist to this one > > > > > + */ > > > > > +static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents, > > > > > + struct scatterlist *sgl) > > > > > +{ > > > > > +#ifndef ARCH_HAS_SG_CHAIN > > > > > + BUG(); > > > > > +#endif > > > > > > > > Can use BUILD_BUG_ON here. Or just #error. > > > > > > Good idea, thanks! > > > > No wait a second, that wont work. The code is always being built in sg > > scsi_lib.c, it should just not be called unless we can do chaining. We > > will never have a large number of segments that require chaining without > > ARCH_HAS_SG_CHAIN, so it'll never be called in that case. So it has to > > remain as it is, a BUG(). > > Confused. If it should never be called, why does it even get compiled in? I can hide it behind ARCH_HAS_SG_CHAIN and provide something ala #define sg_chain(prv, x, sgl) BUG() for when that is not defined. It still needs to be visible. -- Jens Axboe ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 7/13] i386 sg: add support for chaining scatterlists 2007-05-10 10:21 ` [PATCH 7/13] i386 sg: add support for chaining scatterlists Jens Axboe 2007-05-10 10:43 ` Andrew Morton @ 2007-05-10 10:59 ` Benny Halevy 2007-05-10 11:23 ` Jens Axboe 1 sibling, 1 reply; 28+ messages in thread From: Benny Halevy @ 2007-05-10 10:59 UTC (permalink / raw) To: Jens Axboe; +Cc: linux-kernel Jens Axboe wrote: > +#define sg_is_chain(sg) ((unsigned long) (sg)->page & 0x01) > +#define sg_chain_ptr(sg) \ > + ((struct scatterlist *) ((unsigned long) (sg)->page & ~0x01)) > + > +/* > + * We overload the meaning of ->page for sg chaining. If the LSB is > + * set, the page member contains a pointer to the next sgtable. > + */ > +static inline struct scatterlist *sg_next(struct scatterlist *sg) > +{ > + if (sg_is_chain(sg)) > + return sg_chain_ptr(sg); > + > + return sg + 1; > +} Jens, should sg_next ever return the sg containing the chain link? If sg points at the entry right before the link entry, don't we want to skip it? E.g.: static inline struct scatterlist *sg_next(struct scatterlist *sg) { struct scatterlist *next; /* just in case, shouldn't really ever be here */ if (sg_is_chain(sg)) return sg_chain_ptr(sg); next = sg + 1; if (sg_is_chain(next)) return sg_chain_ptr(next); return next; } ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 7/13] i386 sg: add support for chaining scatterlists 2007-05-10 10:59 ` Benny Halevy @ 2007-05-10 11:23 ` Jens Axboe 0 siblings, 0 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 11:23 UTC (permalink / raw) To: Benny Halevy; +Cc: linux-kernel On Thu, May 10 2007, Benny Halevy wrote: > Jens Axboe wrote: > > +#define sg_is_chain(sg) ((unsigned long) (sg)->page & 0x01) > > +#define sg_chain_ptr(sg) \ > > + ((struct scatterlist *) ((unsigned long) (sg)->page & ~0x01)) > > + > > +/* > > + * We overload the meaning of ->page for sg chaining. If the LSB is > > + * set, the page member contains a pointer to the next sgtable. > > + */ > > +static inline struct scatterlist *sg_next(struct scatterlist *sg) > > +{ > > + if (sg_is_chain(sg)) > > + return sg_chain_ptr(sg); > > + > > + return sg + 1; > > +} > > Jens, should sg_next ever return the sg containing the chain link? > If sg points at the entry right before the link entry, don't we > want to skip it? > > E.g.: > > static inline struct scatterlist *sg_next(struct scatterlist *sg) > { > struct scatterlist *next; > > /* just in case, shouldn't really ever be here */ > if (sg_is_chain(sg)) > return sg_chain_ptr(sg); > > next = sg + 1; > > if (sg_is_chain(next)) > return sg_chain_ptr(next); > > return next; > } You are right, I got that mixed up, I changed the setup to overloading ->page right before posting it. So: static inline struct scatterlist *sg_next(struct scatterlist *sg) { sg++; if (unlikely(sg_is_chain(sg))) sg = sg_chain_ptr(sg); return sg; } -- Jens Axboe ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 8/13] x86-64: update iommu/dma mapping functions to sg helpers 2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe ` (6 preceding siblings ...) 2007-05-10 10:21 ` [PATCH 7/13] i386 sg: add support for chaining scatterlists Jens Axboe @ 2007-05-10 10:21 ` Jens Axboe 2007-05-10 10:21 ` [PATCH 9/13] [PATCH] x86-64: enable sg chaining Jens Axboe ` (4 subsequent siblings) 12 siblings, 0 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw) To: linux-kernel; +Cc: Jens Axboe This prepares x86-64 for sg chaining support. Signed-off-by: Jens Axboe <jens.axboe@oracle.com> --- arch/x86_64/kernel/pci-calgary.c | 25 ++++++++++++--------- arch/x86_64/kernel/pci-gart.c | 44 ++++++++++++++++++++++--------------- arch/x86_64/kernel/pci-nommu.c | 5 ++- 3 files changed, 43 insertions(+), 31 deletions(-) diff --git a/arch/x86_64/kernel/pci-calgary.c b/arch/x86_64/kernel/pci-calgary.c index 5bd20b5..c472b14 100644 --- a/arch/x86_64/kernel/pci-calgary.c +++ b/arch/x86_64/kernel/pci-calgary.c @@ -35,6 +35,7 @@ #include <linux/pci_ids.h> #include <linux/pci.h> #include <linux/delay.h> +#include <linux/scatterlist.h> #include <asm/proto.h> #include <asm/calgary.h> #include <asm/tce.h> @@ -341,17 +342,19 @@ static void iommu_free(struct iommu_table *tbl, dma_addr_t dma_addr, static void __calgary_unmap_sg(struct iommu_table *tbl, struct scatterlist *sglist, int nelems, int direction) { - while (nelems--) { + struct scatterlist *s; + int i; + + for_each_sg(sglist, s, nelems, i) { unsigned int npages; - dma_addr_t dma = sglist->dma_address; - unsigned int dmalen = sglist->dma_length; + dma_addr_t dma = s->dma_address; + unsigned int dmalen = s->dma_length; if (dmalen == 0) break; npages = num_dma_pages(dma, dmalen); __iommu_free(tbl, dma, npages); - sglist++; } } @@ -374,10 +377,10 @@ void calgary_unmap_sg(struct device *dev, struct scatterlist *sglist, static int calgary_nontranslate_map_sg(struct device* dev, struct
scatterlist *sg, int nelems, int direction) { + struct scatterlist *s; int i; - for (i = 0; i < nelems; i++ ) { - struct scatterlist *s = &sg[i]; + for_each_sg(sg, s, nelems, i) { BUG_ON(!s->page); s->dma_address = virt_to_bus(page_address(s->page) +s->offset); s->dma_length = s->length; @@ -389,6 +392,7 @@ int calgary_map_sg(struct device *dev, struct scatterlist *sg, int nelems, int direction) { struct iommu_table *tbl = to_pci_dev(dev)->bus->self->sysdata; + struct scatterlist *s; unsigned long flags; unsigned long vaddr; unsigned int npages; @@ -400,8 +404,7 @@ int calgary_map_sg(struct device *dev, struct scatterlist *sg, spin_lock_irqsave(&tbl->it_lock, flags); - for (i = 0; i < nelems; i++ ) { - struct scatterlist *s = &sg[i]; + for_each_sg(sg, s, nelems, i) { BUG_ON(!s->page); vaddr = (unsigned long)page_address(s->page) + s->offset; @@ -428,9 +431,9 @@ int calgary_map_sg(struct device *dev, struct scatterlist *sg, return nelems; error: __calgary_unmap_sg(tbl, sg, nelems, direction); - for (i = 0; i < nelems; i++) { - sg[i].dma_address = bad_dma_address; - sg[i].dma_length = 0; + for_each_sg(sg, s, nelems, i) { + s->dma_address = bad_dma_address; + s->dma_length = 0; } spin_unlock_irqrestore(&tbl->it_lock, flags); return 0; diff --git a/arch/x86_64/kernel/pci-gart.c b/arch/x86_64/kernel/pci-gart.c index 373ef66..8bc2ed7 100644 --- a/arch/x86_64/kernel/pci-gart.c +++ b/arch/x86_64/kernel/pci-gart.c @@ -23,6 +23,7 @@ #include <linux/interrupt.h> #include <linux/bitops.h> #include <linux/kdebug.h> +#include <linux/scatterlist.h> #include <asm/atomic.h> #include <asm/io.h> #include <asm/mtrr.h> @@ -277,10 +278,10 @@ void gart_unmap_single(struct device *dev, dma_addr_t dma_addr, */ void gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int dir) { + struct scatterlist *s; int i; - for (i = 0; i < nents; i++) { - struct scatterlist *s = &sg[i]; + for_each_sg(sg, s, nents, i) { if (!s->dma_length || !s->length) break; gart_unmap_single(dev, 
s->dma_address, s->dma_length, dir); @@ -291,14 +292,14 @@ void gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int di static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg, int nents, int dir) { + struct scatterlist *s; int i; #ifdef CONFIG_IOMMU_DEBUG printk(KERN_DEBUG "dma_map_sg overflow\n"); #endif - for (i = 0; i < nents; i++ ) { - struct scatterlist *s = &sg[i]; + for_each_sg(sg, s, nents, i) { unsigned long addr = page_to_phys(s->page) + s->offset; if (nonforced_iommu(dev, addr, s->length)) { addr = dma_map_area(dev, addr, s->length, dir); @@ -323,13 +324,17 @@ static int __dma_map_cont(struct scatterlist *sg, int start, int stopat, { unsigned long iommu_start = alloc_iommu(pages); unsigned long iommu_page = iommu_start; - int i; + struct scatterlist *s; + int i, nelems; if (iommu_start == -1) return -1; + + nelems = stopat - start; + while (start--) + sg = sg_next(sg); - for (i = start; i < stopat; i++) { - struct scatterlist *s = &sg[i]; + for_each_sg(sg, s, nelems, i) { unsigned long pages, addr; unsigned long phys_addr = s->dma_address; @@ -360,12 +365,14 @@ static inline int dma_map_cont(struct scatterlist *sg, int start, int stopat, struct scatterlist *sout, unsigned long pages, int need) { - if (!need) { + if (!need) { BUG_ON(stopat - start != 1); - *sout = sg[start]; - sout->dma_length = sg[start].length; + while (--start) + sg = sg_next(sg); + *sout = *sg; + sout->dma_length = sg->length; return 0; - } + } return __dma_map_cont(sg, start, stopat, sout, pages); } @@ -380,6 +387,7 @@ int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir) int start; unsigned long pages = 0; int need = 0, nextneed; + struct scatterlist *s, *ps; if (nents == 0) return 0; @@ -389,8 +397,8 @@ int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir) out = 0; start = 0; - for (i = 0; i < nents; i++) { - struct scatterlist *s = &sg[i]; + ps = NULL; /* shut up gcc */ + for_each_sg(sg, s, 
nents, i) { dma_addr_t addr = page_to_phys(s->page) + s->offset; s->dma_address = addr; BUG_ON(s->length == 0); @@ -399,7 +407,6 @@ int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir) /* Handle the previous not yet processed entries */ if (i > start) { - struct scatterlist *ps = &sg[i-1]; /* Can only merge when the last chunk ends on a page boundary and the new one doesn't have an offset. */ if (!iommu_merge || !nextneed || !need || s->offset || @@ -415,13 +422,14 @@ int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir) need = nextneed; pages += to_pages(s->offset, s->length); + ps = s; } if (dma_map_cont(sg, start, i, sg+out, pages, need) < 0) goto error; out++; flush_gart(); - if (out < nents) - sg[out].dma_length = 0; + if (out < nents) + ps->dma_length = 0; return out; error: @@ -436,8 +444,8 @@ error: if (panic_on_overflow) panic("dma_map_sg: overflow on %lu pages\n", pages); iommu_full(dev, pages << PAGE_SHIFT, dir); - for (i = 0; i < nents; i++) - sg[i].dma_address = bad_dma_address; + for_each_sg(sg, s, nents, i) + s->dma_address = bad_dma_address; return 0; } diff --git a/arch/x86_64/kernel/pci-nommu.c b/arch/x86_64/kernel/pci-nommu.c index 6dade0c..24c9faf 100644 --- a/arch/x86_64/kernel/pci-nommu.c +++ b/arch/x86_64/kernel/pci-nommu.c @@ -5,6 +5,7 @@ #include <linux/pci.h> #include <linux/string.h> #include <linux/dma-mapping.h> +#include <linux/scatterlist.h> #include <asm/proto.h> #include <asm/processor.h> @@ -57,10 +58,10 @@ void nommu_unmap_single(struct device *dev, dma_addr_t addr,size_t size, int nommu_map_sg(struct device *hwdev, struct scatterlist *sg, int nents, int direction) { + struct scatterlist *s; int i; - for (i = 0; i < nents; i++ ) { - struct scatterlist *s = &sg[i]; + for_each_sg(sg, s, nents, i) { BUG_ON(!s->page); s->dma_address = virt_to_bus(page_address(s->page) +s->offset); if (!check_addr("map_sg", hwdev, s->dma_address, s->length)) -- 1.5.2.rc1 ^ permalink raw reply related 
[flat|nested] 28+ messages in thread
* [PATCH 9/13] [PATCH] x86-64: enable sg chaining 2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe ` (7 preceding siblings ...) 2007-05-10 10:21 ` [PATCH 8/13] x86-64: update iommu/dma mapping functions to sg helpers Jens Axboe @ 2007-05-10 10:21 ` Jens Axboe 2007-05-10 10:21 ` [PATCH 10/13] scsi: simplify scsi_free_sgtable() Jens Axboe ` (3 subsequent siblings) 12 siblings, 0 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw) To: linux-kernel; +Cc: Jens Axboe Signed-off-by: Jens Axboe <jens.axboe@oracle.com> --- include/asm-x86_64/dma-mapping.h | 3 +-- include/asm-x86_64/scatterlist.h | 2 ++ 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h index 6897e2a..ecd0f61 100644 --- a/include/asm-x86_64/dma-mapping.h +++ b/include/asm-x86_64/dma-mapping.h @@ -6,8 +6,7 @@ * documentation. */ - -#include <asm/scatterlist.h> +#include <linux/scatterlist.h> #include <asm/io.h> #include <asm/swiotlb.h> diff --git a/include/asm-x86_64/scatterlist.h b/include/asm-x86_64/scatterlist.h index eaf7ada..ef3986b 100644 --- a/include/asm-x86_64/scatterlist.h +++ b/include/asm-x86_64/scatterlist.h @@ -11,6 +11,8 @@ struct scatterlist { unsigned int dma_length; }; +#define ARCH_HAS_SG_CHAIN + #define ISA_DMA_THRESHOLD (0x00ffffff) /* These macros should be used after a pci_map_sg call has been done -- 1.5.2.rc1 ^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 10/13] scsi: simplify scsi_free_sgtable() 2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe ` (8 preceding siblings ...) 2007-05-10 10:21 ` [PATCH 9/13] [PATCH] x86-64: enable sg chaining Jens Axboe @ 2007-05-10 10:21 ` Jens Axboe 2007-05-10 10:21 ` [PATCH 11/13] SCSI: support for allocating large scatterlists Jens Axboe ` (2 subsequent siblings) 12 siblings, 0 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw) To: linux-kernel; +Cc: Jens Axboe Just pass in the command, no point in passing in the scatterlist and scatterlist pool index separately. Signed-off-by: Jens Axboe <jens.axboe@oracle.com> --- drivers/scsi/scsi_lib.c | 9 +++++---- drivers/scsi/scsi_tgt_lib.c | 4 ++-- include/scsi/scsi_cmnd.h | 2 +- 3 files changed, 8 insertions(+), 7 deletions(-) diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index f944690..26236b1 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -745,13 +745,14 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) EXPORT_SYMBOL(scsi_alloc_sgtable); -void scsi_free_sgtable(struct scatterlist *sgl, int index) +void scsi_free_sgtable(struct scsi_cmnd *cmd) { + struct scatterlist *sgl = cmd->request_buffer; struct scsi_host_sg_pool *sgp; - BUG_ON(index >= SG_MEMPOOL_NR); + BUG_ON(cmd->sglist_len >= SG_MEMPOOL_NR); - sgp = scsi_sg_pools + index; + sgp = scsi_sg_pools + cmd->sglist_len; mempool_free(sgl, sgp->pool); } @@ -777,7 +778,7 @@ EXPORT_SYMBOL(scsi_free_sgtable); static void scsi_release_buffers(struct scsi_cmnd *cmd) { if (cmd->use_sg) - scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len); + scsi_free_sgtable(cmd); /* * Zero these out. 
They now point to freed memory, and it is diff --git a/drivers/scsi/scsi_tgt_lib.c b/drivers/scsi/scsi_tgt_lib.c index 2570f48..d6e58e5 100644 --- a/drivers/scsi/scsi_tgt_lib.c +++ b/drivers/scsi/scsi_tgt_lib.c @@ -329,7 +329,7 @@ static void scsi_tgt_cmd_done(struct scsi_cmnd *cmd) scsi_tgt_uspace_send_status(cmd, tcmd->tag); if (cmd->request_buffer) - scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len); + scsi_free_sgtable(cmd); queue_work(scsi_tgtd, &tcmd->work); } @@ -370,7 +370,7 @@ static int scsi_tgt_init_cmd(struct scsi_cmnd *cmd, gfp_t gfp_mask) } eprintk("cmd %p cnt %d\n", cmd, cmd->use_sg); - scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len); + scsi_free_sgtable(cmd); return -EINVAL; } diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h index a2e0c10..d7db992 100644 --- a/include/scsi/scsi_cmnd.h +++ b/include/scsi/scsi_cmnd.h @@ -133,6 +133,6 @@ extern void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count, extern void scsi_kunmap_atomic_sg(void *virt); extern struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *, gfp_t); -extern void scsi_free_sgtable(struct scatterlist *, int); +extern void scsi_free_sgtable(struct scsi_cmnd *); #endif /* _SCSI_SCSI_CMND_H */ -- 1.5.2.rc1 ^ permalink raw reply related [flat|nested] 28+ messages in thread
* [PATCH 11/13] SCSI: support for allocating large scatterlists 2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe ` (9 preceding siblings ...) 2007-05-10 10:21 ` [PATCH 10/13] scsi: simplify scsi_free_sgtable() Jens Axboe @ 2007-05-10 10:21 ` Jens Axboe 2007-05-10 10:48 ` Andrew Morton 2007-05-10 12:38 ` Alan Cox 2007-05-10 10:21 ` [PATCH 12/13] ll_rw_blk: temporarily enable max_segments tweaking Jens Axboe 2007-05-10 10:21 ` [PATCH 13/13] scsi drivers: sg chaining Jens Axboe 12 siblings, 2 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw) To: linux-kernel; +Cc: Jens Axboe This is what enables large commands. If we need to allocate an sgtable that doesn't fit in a single page, allocate several SCSI_MAX_SG_SEGMENTS sized tables and chain them together. We default to the safe setup of NOT chaining, for now. Signed-off-by: Jens Axboe <jens.axboe@oracle.com> --- drivers/scsi/scsi_lib.c | 201 +++++++++++++++++++++++++++++++++++----------- include/scsi/scsi.h | 7 -- include/scsi/scsi_cmnd.h | 1 + 3 files changed, 155 insertions(+), 54 deletions(-) diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 26236b1..c61a41f 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -29,39 +29,26 @@ #include "scsi_priv.h" #include "scsi_logging.h" +#include <linux/scatterlist.h> #define SG_MEMPOOL_NR ARRAY_SIZE(scsi_sg_pools) #define SG_MEMPOOL_SIZE 2 struct scsi_host_sg_pool { size_t size; - char *name; + char *name; struct kmem_cache *slab; mempool_t *pool; }; -#if (SCSI_MAX_PHYS_SEGMENTS < 32) -#error SCSI_MAX_PHYS_SEGMENTS is too small -#endif - -#define SP(x) { x, "sgpool-" #x } +#define SP(x) { x, "sgpool-" #x } static struct scsi_host_sg_pool scsi_sg_pools[] = { SP(8), SP(16), SP(32), -#if (SCSI_MAX_PHYS_SEGMENTS > 32) SP(64), -#if (SCSI_MAX_PHYS_SEGMENTS > 64) SP(128), -#if (SCSI_MAX_PHYS_SEGMENTS > 128) - SP(256), -#if (SCSI_MAX_PHYS_SEGMENTS > 256) -#error 
SCSI_MAX_PHYS_SEGMENTS is too large -#endif -#endif -#endif -#endif -}; +}; #undef SP static void scsi_run_queue(struct request_queue *q); @@ -702,45 +689,123 @@ static struct scsi_cmnd *scsi_end_request(struct scsi_cmnd *cmd, int uptodate, return NULL; } -struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) -{ - struct scsi_host_sg_pool *sgp; - struct scatterlist *sgl; +/* + * Should fit within a single page, and must be a power-of-2. + */ +#define SCSI_MAX_SG_SEGMENTS 128 - BUG_ON(!cmd->use_sg); +static inline unsigned int scsi_sgtable_index(unsigned short nents) +{ + unsigned int index; - switch (cmd->use_sg) { + switch (nents) { case 1 ... 8: - cmd->sglist_len = 0; + index = 0; break; case 9 ... 16: - cmd->sglist_len = 1; + index = 1; break; case 17 ... 32: - cmd->sglist_len = 2; + index = 2; break; -#if (SCSI_MAX_PHYS_SEGMENTS > 32) case 33 ... 64: - cmd->sglist_len = 3; + index = 3; break; -#if (SCSI_MAX_PHYS_SEGMENTS > 64) - case 65 ... 128: - cmd->sglist_len = 4; + case 65 ... SCSI_MAX_SG_SEGMENTS: + index = 4; break; -#if (SCSI_MAX_PHYS_SEGMENTS > 128) - case 129 ... 256: - cmd->sglist_len = 5; - break; -#endif -#endif -#endif default: - return NULL; + printk(KERN_ERR "scsi: bad segment count=%d\n", nents); + BUG(); } - sgp = scsi_sg_pools + cmd->sglist_len; - sgl = mempool_alloc(sgp->pool, gfp_mask); - return sgl; + return index; +} + +struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) +{ + struct scsi_host_sg_pool *sgp; + struct scatterlist *sgl, *prev, *ret; + unsigned int index; + int this, left; + + BUG_ON(!cmd->use_sg); + + left = cmd->use_sg; + ret = prev = NULL; + do { + this = left; + if (this > SCSI_MAX_SG_SEGMENTS) { + this = SCSI_MAX_SG_SEGMENTS; + index = SG_MEMPOOL_NR - 1; + } else + index = scsi_sgtable_index(this); + + left -= this; + + /* + * if we have more entries after this round, reserve a slot + * for the chain pointer. 
+ */ + if (left) + left++; + + sgp = scsi_sg_pools + index; + + sgl = mempool_alloc(sgp->pool, gfp_mask); + if (unlikely(!sgl)) + goto enomem; + + memset(sgl, 0, sizeof(*sgl) * sgp->size); + + /* + * first loop through, set initial index and return value + */ + if (!ret) { + cmd->sglist_len = index; + ret = sgl; + } + + /* + * chain previous sglist, if any. we know the previous + * sglist must be the biggest one, or we would not have + * ended up doing another loop. + */ + if (prev) + sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl); + + /* + * don't allow subsequent mempool allocs to sleep, it would + * violate the mempool principle. + */ + gfp_mask &= ~__GFP_WAIT; + prev = sgl; + } while (left); + + /* + * ->use_sg may get modified after dma mapping has potentially + * shrunk the number of segments, so keep a copy of it for free. + */ + cmd->__use_sg = cmd->use_sg; + return ret; +enomem: + if (ret) { + /* + * Free entries chained off ret. Since we were trying to + * allocate another sglist, we know that all entries are of + * the max size. 
+ */ + sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1; + prev = &ret[SCSI_MAX_SG_SEGMENTS - 1]; + + while ((sgl = sg_chain_ptr(ret)) != NULL) { + ret = &sgl[SCSI_MAX_SG_SEGMENTS - 1]; + mempool_free(sgl, sgp->pool); + } + + mempool_free(prev, sgp->pool); + } + return NULL; } EXPORT_SYMBOL(scsi_alloc_sgtable); @@ -752,6 +817,42 @@ void scsi_free_sgtable(struct scsi_cmnd *cmd) BUG_ON(cmd->sglist_len >= SG_MEMPOOL_NR); + /* + * if this is the biggest size sglist, check if we have + * chained parts we need to free + */ + if (cmd->__use_sg > SCSI_MAX_SG_SEGMENTS) { + unsigned short this, left; + struct scatterlist *next; + unsigned int index; + + left = cmd->__use_sg - SCSI_MAX_SG_SEGMENTS; + next = sg_chain_ptr(&sgl[SCSI_MAX_SG_SEGMENTS - 1]); + do { + sgl = next; + this = left; + if (this > SCSI_MAX_SG_SEGMENTS) { + this = SCSI_MAX_SG_SEGMENTS; + index = SG_MEMPOOL_NR - 1; + } else + index = scsi_sgtable_index(this); + + left -= this; + + sgp = scsi_sg_pools + index; + + if (left) + next = sg_chain_ptr(&sgl[sgp->size - 1]); + + mempool_free(sgl, sgp->pool); + } while (left); + + /* + * Restore original, will be freed below + */ + sgl = cmd->request_buffer; + } + sgp = scsi_sg_pools + cmd->sglist_len; mempool_free(sgl, sgp->pool); } @@ -993,7 +1094,6 @@ EXPORT_SYMBOL(scsi_io_completion); static int scsi_init_io(struct scsi_cmnd *cmd) { struct request *req = cmd->request; - struct scatterlist *sgpnt; int count; /* @@ -1006,14 +1106,13 @@ static int scsi_init_io(struct scsi_cmnd *cmd) /* * If sg table allocation fails, requeue request later. 
*/ - sgpnt = scsi_alloc_sgtable(cmd, GFP_ATOMIC); - if (unlikely(!sgpnt)) { + cmd->request_buffer = scsi_alloc_sgtable(cmd, GFP_ATOMIC); + if (unlikely(!cmd->request_buffer)) { scsi_unprep_request(req); return BLKPREP_DEFER; } req->buffer = NULL; - cmd->request_buffer = (char *) sgpnt; if (blk_pc_request(req)) cmd->request_bufflen = req->data_len; else @@ -1577,8 +1676,16 @@ struct request_queue *__scsi_alloc_queue(struct Scsi_Host *shost, if (!q) return NULL; + /* + * this limit is imposed by hardware restrictions + */ blk_queue_max_hw_segments(q, shost->sg_tablesize); - blk_queue_max_phys_segments(q, SCSI_MAX_PHYS_SEGMENTS); + + /* + * we can chain scatterlists, so this limit is fairly arbitrary + */ + blk_queue_max_phys_segments(q, SCSI_MAX_SG_SEGMENTS); + blk_queue_max_sectors(q, shost->max_sectors); blk_queue_bounce_limit(q, scsi_calculate_bounce_limit(shost)); blk_queue_segment_boundary(q, shost->dma_boundary); diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h index 9f8f80a..702fcfe 100644 --- a/include/scsi/scsi.h +++ b/include/scsi/scsi.h @@ -11,13 +11,6 @@ #include <linux/types.h> /* - * The maximum sg list length SCSI can cope with - * (currently must be a power of 2 between 32 and 256) - */ -#define SCSI_MAX_PHYS_SEGMENTS MAX_PHYS_SEGMENTS - - -/* * SCSI command lengths */ diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h index d7db992..fc649af 100644 --- a/include/scsi/scsi_cmnd.h +++ b/include/scsi/scsi_cmnd.h @@ -72,6 +72,7 @@ struct scsi_cmnd { /* These elements define the operation we ultimately want to perform */ unsigned short use_sg; /* Number of pieces of scatter-gather */ unsigned short sglist_len; /* size of malloc'd scatter-gather list */ + unsigned short __use_sg; unsigned underflow; /* Return error if less than this amount is transferred */ -- 1.5.2.rc1 ^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH 11/13] SCSI: support for allocating large scatterlists 2007-05-10 10:21 ` [PATCH 11/13] SCSI: support for allocating large scatterlists Jens Axboe @ 2007-05-10 10:48 ` Andrew Morton 2007-05-10 10:52 ` Jens Axboe 2007-05-10 12:38 ` Alan Cox 1 sibling, 1 reply; 28+ messages in thread From: Andrew Morton @ 2007-05-10 10:48 UTC (permalink / raw) To: Jens Axboe; +Cc: linux-kernel On Thu, 10 May 2007 12:21:53 +0200 Jens Axboe <jens.axboe@oracle.com> wrote: > This is what enables large commands. If we need to allocate an > sgtable that doesn't fit in a single page, allocate several > SCSI_MAX_SG_SEGMENTS sized tables and chain them together. > > We default to the safe setup of NOT chaining, for now. > > ... > > +/* > + * Should fit within a single page, and must be a power-of-2. > + */ > +#define SCSI_MAX_SG_SEGMENTS 128 But what units is it in? Bytes? sizeof(void*)? > +struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) > +{ > + struct scsi_host_sg_pool *sgp; > + struct scatterlist *sgl, *prev, *ret; > + unsigned int index; > + int this, left; > + > + BUG_ON(!cmd->use_sg); > + > + left = cmd->use_sg; > + ret = prev = NULL; > + do { > + this = left; > + if (this > SCSI_MAX_SG_SEGMENTS) { > + this = SCSI_MAX_SG_SEGMENTS; > + index = SG_MEMPOOL_NR - 1; > + } else > + index = scsi_sgtable_index(this); > + > + left -= this; > + > + /* > + * if we have more entries after this round, reserve a slot > + * for the chain pointer. > + */ > + if (left) > + left++; > + > + sgp = scsi_sg_pools + index; > + > + sgl = mempool_alloc(sgp->pool, gfp_mask); > + if (unlikely(!sgl)) > + goto enomem; > + > + memset(sgl, 0, sizeof(*sgl) * sgp->size); > + > + /* > + * first loop through, set initial index and return value > + */ > + if (!ret) { > + cmd->sglist_len = index; > + ret = sgl; > + } > + > + /* > + * chain previous sglist, if any. we know the previous > + * sglist must be the biggest one, or we would not have > + * ended up doing another loop. 
> + */ > + if (prev) > + sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl); > + > + /* > + * don't allow subsequent mempool allocs to sleep, it would > + * violate the mempool principle. > + */ > + gfp_mask &= ~__GFP_WAIT; hrm. Might want to set __GFP_HIGH here too. > + prev = sgl; > + } while (left); > + > + /* > + * ->use_sg may get modified after dma mapping has potentially > + * shrunk the number of segments, so keep a copy of it for free. > + */ > + cmd->__use_sg = cmd->use_sg; > + return ret; > +enomem: > + if (ret) { > + /* > + * Free entries chained off ret. Since we were trying to > + * allocate another sglist, we know that all entries are of > + * the max size. > + */ > + sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1; > + prev = &ret[SCSI_MAX_SG_SEGMENTS - 1]; > + > + while ((sgl = sg_chain_ptr(ret)) != NULL) { > + ret = &sgl[SCSI_MAX_SG_SEGMENTS - 1]; > + mempool_free(sgl, sgp->pool); > + } > + > + mempool_free(prev, sgp->pool); > + } > + return NULL; > } ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 11/13] SCSI: support for allocating large scatterlists 2007-05-10 10:48 ` Andrew Morton @ 2007-05-10 10:52 ` Jens Axboe 0 siblings, 0 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 10:52 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel On Thu, May 10 2007, Andrew Morton wrote: > On Thu, 10 May 2007 12:21:53 +0200 Jens Axboe <jens.axboe@oracle.com> wrote: > > > This is what enables large commands. If we need to allocate an > > sgtable that doesn't fit in a single page, allocate several > > SCSI_MAX_SG_SEGMENTS sized tables and chain them together. > > > > We default to the safe setup of NOT chaining, for now. > > > > ... > > > > +/* > > + * Should fit within a single page, and must be a power-of-2. > > + */ > > +#define SCSI_MAX_SG_SEGMENTS 128 > > But what units is it in? Bytes? sizeof(void*)? sg elements. It's not new, just juggled around a bit. The comment is a bit stale now though, it doesn't have to be a pow-of-2. Ideally we just want to make sure that sizeof(struct scatterlist) * SCSI_MAX_SG_SEGMENTS <= PAGE_SIZE to avoid 2^1 allocations. > > +struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) > > +{ > > + struct scsi_host_sg_pool *sgp; > > + struct scatterlist *sgl, *prev, *ret; > > + unsigned int index; > > + int this, left; > > + > > + BUG_ON(!cmd->use_sg); > > + > > + left = cmd->use_sg; > > + ret = prev = NULL; > > + do { > > + this = left; > > + if (this > SCSI_MAX_SG_SEGMENTS) { > > + this = SCSI_MAX_SG_SEGMENTS; > > + index = SG_MEMPOOL_NR - 1; > > + } else > > + index = scsi_sgtable_index(this); > > + > > + left -= this; > > + > > + /* > > + * if we have more entries after this round, reserve a slot > > + * for the chain pointer. 
> > + */ > > + if (left) > > + left++; > > + > > + sgp = scsi_sg_pools + index; > > + > > + sgl = mempool_alloc(sgp->pool, gfp_mask); > > + if (unlikely(!sgl)) > > + goto enomem; > > + > > + memset(sgl, 0, sizeof(*sgl) * sgp->size); > > + > > + /* > > + * first loop through, set initial index and return value > > + */ > > + if (!ret) { > > + cmd->sglist_len = index; > > + ret = sgl; > > + } > > + > > + /* > > + * chain previous sglist, if any. we know the previous > > + * sglist must be the biggest one, or we would not have > > + * ended up doing another loop. > > + */ > > + if (prev) > > + sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl); > > + > > + /* > > + * don't allow subsequent mempool allocs to sleep, it would > > + * violate the mempool principle. > > + */ > > + gfp_mask &= ~__GFP_WAIT; > > hrm. > > Might want to set __GFP_HIGH here too. Agree, I did consider that (clear wait, set ATOMIC to get the __HIGH). -- Jens Axboe ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 11/13] SCSI: support for allocating large scatterlists 2007-05-10 10:21 ` [PATCH 11/13] SCSI: support for allocating large scatterlists Jens Axboe 2007-05-10 10:48 ` Andrew Morton @ 2007-05-10 12:38 ` Alan Cox 1 sibling, 0 replies; 28+ messages in thread From: Alan Cox @ 2007-05-10 12:38 UTC (permalink / raw) To: Jens Axboe; +Cc: linux-kernel, Jens Axboe > sgtable that doesn't fit in a single page, allocate several > SCSI_MAX_SG_SEGMENTS sized tables and chain them together. > > We default to the safe setup of NOT chaining, for now. Presumably you'll be submitting patches to fix the current I/O failure retry in the SCSI layer before enabling it so that we retry most of a problem I/O and don't throw 64MB of database data into the wind on a single bad disk block ? ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 12/13] ll_rw_blk: temporarily enable max_segments tweaking 2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe ` (10 preceding siblings ...) 2007-05-10 10:21 ` [PATCH 11/13] SCSI: support for allocating large scatterlists Jens Axboe @ 2007-05-10 10:21 ` Jens Axboe 2007-05-10 10:49 ` Andrew Morton 2007-05-10 10:21 ` [PATCH 13/13] scsi drivers: sg chaining Jens Axboe 12 siblings, 1 reply; 28+ messages in thread From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw) To: linux-kernel; +Cc: Jens Axboe Expose this setting for now, so that users can play with enabling large commands without defaulting it to on globally. Signed-off-by: Jens Axboe <jens.axboe@oracle.com> --- block/ll_rw_blk.c | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c index b01a5f2..cf05396 100644 --- a/block/ll_rw_blk.c +++ b/block/ll_rw_blk.c @@ -3930,7 +3930,22 @@ static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page) return queue_var_show(max_hw_sectors_kb, (page)); } +static ssize_t queue_max_segments_show(struct request_queue *q, char *page) +{ + return queue_var_show(q->max_phys_segments, page); +} + +static ssize_t queue_max_segments_store(struct request_queue *q, const char *page, size_t count) +{ + unsigned long segments; + ssize_t ret = queue_var_store(&segments, page, count); + spin_lock_irq(q->queue_lock); + q->max_phys_segments = segments; + spin_unlock_irq(q->queue_lock); + + return ret; +} static struct queue_sysfs_entry queue_requests_entry = { .attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR }, .show = queue_requests_show, @@ -3954,6 +3969,12 @@ static struct queue_sysfs_entry queue_max_hw_sectors_entry = { .show = queue_max_hw_sectors_show, }; +static struct queue_sysfs_entry queue_max_segments_entry = { + .attr = {.name = "max_segments", .mode = S_IRUGO |S_IWUSR }, + .show = queue_max_segments_show, + .store = 
queue_max_segments_store, +}; + static struct queue_sysfs_entry queue_iosched_entry = { .attr = {.name = "scheduler", .mode = S_IRUGO | S_IWUSR }, .show = elv_iosched_show, @@ -3965,6 +3986,7 @@ static struct attribute *default_attrs[] = { &queue_ra_entry.attr, &queue_max_hw_sectors_entry.attr, &queue_max_sectors_entry.attr, + &queue_max_segments_entry.attr, &queue_iosched_entry.attr, NULL, }; -- 1.5.2.rc1 ^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH 12/13] ll_rw_blk: temporarily enable max_segments tweaking 2007-05-10 10:21 ` [PATCH 12/13] ll_rw_blk: temporarily enable max_segments tweaking Jens Axboe @ 2007-05-10 10:49 ` Andrew Morton 2007-05-10 11:20 ` Jens Axboe 0 siblings, 1 reply; 28+ messages in thread From: Andrew Morton @ 2007-05-10 10:49 UTC (permalink / raw) To: Jens Axboe; +Cc: linux-kernel On Thu, 10 May 2007 12:21:54 +0200 Jens Axboe <jens.axboe@oracle.com> wrote: > Expose this setting for now, so that users can play with enabling > large commands without defaulting it to on globally. > > Signed-off-by: Jens Axboe <jens.axboe@oracle.com> > --- > block/ll_rw_blk.c | 22 ++++++++++++++++++++++ > 1 files changed, 22 insertions(+), 0 deletions(-) > > diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c > index b01a5f2..cf05396 100644 > --- a/block/ll_rw_blk.c > +++ b/block/ll_rw_blk.c > @@ -3930,7 +3930,22 @@ static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page) > return queue_var_show(max_hw_sectors_kb, (page)); > } > > +static ssize_t queue_max_segments_show(struct request_queue *q, char *page) > +{ > + return queue_var_show(q->max_phys_segments, page); > +} > + > +static ssize_t queue_max_segments_store(struct request_queue *q, const char *page, size_t count) 100-col xterm? > +{ > + unsigned long segments; > + ssize_t ret = queue_var_store(&segments, page, count); > > + spin_lock_irq(q->queue_lock); > + q->max_phys_segments = segments; > + spin_unlock_irq(q->queue_lock); Fishy locking? > + return ret; > +} > static struct queue_sysfs_entry queue_requests_entry = { > .attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR }, > .show = queue_requests_show, > @@ -3954,6 +3969,12 @@ static struct queue_sysfs_entry queue_max_hw_sectors_entry = { > .show = queue_max_hw_sectors_show, > }; > > +static struct queue_sysfs_entry queue_max_segments_entry = { > + .attr = {.name = "max_segments", .mode = S_IRUGO |S_IWUSR }, whitespace went funny. 
> + .show = queue_max_segments_show, > + .store = queue_max_segments_store, > +}; > + > static struct queue_sysfs_entry queue_iosched_entry = { > .attr = {.name = "scheduler", .mode = S_IRUGO | S_IWUSR }, > .show = elv_iosched_show, > @@ -3965,6 +3986,7 @@ static struct attribute *default_attrs[] = { > &queue_ra_entry.attr, > &queue_max_hw_sectors_entry.attr, > &queue_max_sectors_entry.attr, > + &queue_max_segments_entry.attr, > &queue_iosched_entry.attr, > NULL, > }; ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 12/13] ll_rw_blk: temporarily enable max_segments tweaking 2007-05-10 10:49 ` Andrew Morton @ 2007-05-10 11:20 ` Jens Axboe 0 siblings, 0 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 11:20 UTC (permalink / raw) To: Andrew Morton; +Cc: linux-kernel On Thu, May 10 2007, Andrew Morton wrote: > On Thu, 10 May 2007 12:21:54 +0200 Jens Axboe <jens.axboe@oracle.com> wrote: > > > Expose this setting for now, so that users can play with enabling > > large commands without defaulting it to on globally. > > > > Signed-off-by: Jens Axboe <jens.axboe@oracle.com> > > --- > > block/ll_rw_blk.c | 22 ++++++++++++++++++++++ > > 1 files changed, 22 insertions(+), 0 deletions(-) > > > > diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c > > index b01a5f2..cf05396 100644 > > --- a/block/ll_rw_blk.c > > +++ b/block/ll_rw_blk.c > > @@ -3930,7 +3930,22 @@ static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page) > > return queue_var_show(max_hw_sectors_kb, (page)); > > } > > > > +static ssize_t queue_max_segments_show(struct request_queue *q, char *page) > > +{ > > + return queue_var_show(q->max_phys_segments, page); > > +} > > + > > +static ssize_t queue_max_segments_store(struct request_queue *q, const char *page, size_t count) > > 100-col xterm? It's a debug thing, it'll go away for the final versions. So I didn't pay much attention to details. -- Jens Axboe ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 13/13] scsi drivers: sg chaining 2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe ` (11 preceding siblings ...) 2007-05-10 10:21 ` [PATCH 12/13] ll_rw_blk: temporarily enable max_segments tweaking Jens Axboe @ 2007-05-10 10:21 ` Jens Axboe 12 siblings, 0 replies; 28+ messages in thread From: Jens Axboe @ 2007-05-10 10:21 UTC (permalink / raw) To: linux-kernel; +Cc: Jens Axboe Convert SCSI drivers to using the proper sg helpers. Signed-off-by: Jens Axboe <jens.axboe@oracle.com> --- arch/ia64/hp/sim/simscsi.c | 23 ++++++---- drivers/infiniband/ulp/srp/ib_srp.c | 22 +++++----- drivers/scsi/3w-9xxx.c | 8 +-- drivers/scsi/3w-xxxx.c | 8 +-- drivers/scsi/53c700.c | 16 +++---- drivers/scsi/BusLogic.c | 7 +-- drivers/scsi/NCR53c406a.c | 18 ++++---- drivers/scsi/a100u2w.c | 9 ++-- drivers/scsi/aacraid/aachba.c | 29 +++++-------- drivers/scsi/advansys.c | 21 ++++----- drivers/scsi/aha1542.c | 21 ++++----- drivers/scsi/aha1740.c | 8 +-- drivers/scsi/aic7xxx/aic79xx_osm.c | 3 - drivers/scsi/aic7xxx/aic7xxx_osm.c | 12 ++--- drivers/scsi/aic94xx/aic94xx_task.c | 6 +- drivers/scsi/arcmsr/arcmsr_hba.c | 11 ++--- drivers/scsi/dc395x.c | 7 +-- drivers/scsi/dpt_i2o.c | 13 +++-- drivers/scsi/eata.c | 8 +-- drivers/scsi/esp_scsi.c | 5 +- drivers/scsi/gdth.c | 45 ++++++++++---------- drivers/scsi/hptiop.c | 8 +-- drivers/scsi/ibmmca.c | 11 ++--- drivers/scsi/ibmvscsi/ibmvscsi.c | 4 - drivers/scsi/ide-scsi.c | 31 ++++++++------ drivers/scsi/initio.c | 12 +++-- drivers/scsi/ipr.c | 9 +--- drivers/scsi/ips.c | 74 ++++++++++++++++++---------------- drivers/scsi/iscsi_tcp.c | 43 ++++++++++--------- drivers/scsi/jazz_esp.c | 27 ++++++------ drivers/scsi/lpfc/lpfc_scsi.c | 9 +--- drivers/scsi/mac53c94.c | 9 +--- drivers/scsi/megaraid.c | 13 ++--- drivers/scsi/megaraid/megaraid_mbox.c | 7 +-- drivers/scsi/megaraid/megaraid_sas.c | 16 +++---- drivers/scsi/mesh.c | 12 ++--- drivers/scsi/ncr53c8xx.c | 7 +-- drivers/scsi/nsp32.c | 9 +--- 
drivers/scsi/pcmcia/sym53c500_cs.c | 18 ++++---- drivers/scsi/qla1280.c | 66 +++++++++++++++++------------- drivers/scsi/qla2xxx/qla_iocb.c | 9 +--- drivers/scsi/qla4xxx/ql4_iocb.c | 8 +-- drivers/scsi/qlogicfas408.c | 9 ++-- drivers/scsi/qlogicpti.c | 15 +++--- drivers/scsi/scsi_debug.c | 14 +++--- drivers/scsi/sym53c416.c | 9 +--- drivers/scsi/sym53c8xx_2/sym_glue.c | 7 +-- drivers/scsi/u14-34f.c | 10 ++-- drivers/scsi/ultrastor.c | 10 ++-- drivers/scsi/wd7000.c | 7 +-- 50 files changed, 406 insertions(+), 377 deletions(-) diff --git a/arch/ia64/hp/sim/simscsi.c b/arch/ia64/hp/sim/simscsi.c index bb87682..291e7f4 100644 --- a/arch/ia64/hp/sim/simscsi.c +++ b/arch/ia64/hp/sim/simscsi.c @@ -173,7 +173,7 @@ simscsi_sg_readwrite (struct scsi_cmnd *sc, int mode, unsigned long offset) return; } offset += sl->length; - sl++; + sl = sg_next(sl); list_len--; } sc->result = GOOD; @@ -239,18 +239,23 @@ simscsi_readwrite10 (struct scsi_cmnd *sc, int mode) static void simscsi_fillresult(struct scsi_cmnd *sc, char *buf, unsigned len) { - int scatterlen = sc->use_sg; - struct scatterlist *slp; + int scatterlen = sc->use_sg, i; + struct scatterlist *slp, *sg; if (scatterlen == 0) memcpy(sc->request_buffer, buf, len); - else for (slp = (struct scatterlist *)sc->request_buffer; - scatterlen-- > 0 && len > 0; slp++) { - unsigned thislen = min(len, slp->length); + else { + slp = sc->request_buffer; + for_each_sg(slp, sg, scatterlen, i) { + unsigned thislen; - memcpy(page_address(slp->page) + slp->offset, buf, thislen); - slp++; - len -= thislen; + if (len <= 0) + break; + + thislen = min(len, slp->length); + memcpy(page_address(sg->page) + sg->offset, buf, thislen); + len -= thislen; + } } } diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 39bf057..c31d50d 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -595,6 +595,7 @@ static int srp_map_fmr(struct srp_target_port *target, struct scatterlist 
*scat, int ret; struct srp_device *dev = target->srp_host->dev; struct ib_device *ibdev = dev->dev; + struct scatterlist *sg; if (!dev->fmr_pool) return -ENODEV; @@ -604,16 +605,16 @@ static int srp_map_fmr(struct srp_target_port *target, struct scatterlist *scat, return -EINVAL; len = page_cnt = 0; - for (i = 0; i < sg_cnt; ++i) { - unsigned int dma_len = ib_sg_dma_len(ibdev, &scat[i]); + for_each_sg(scat, sg, sg_cnt, i) { + unsigned int dma_len = ib_sg_dma_len(ibdev, sg); - if (ib_sg_dma_address(ibdev, &scat[i]) & ~dev->fmr_page_mask) { + if (ib_sg_dma_address(ibdev, sg) & ~dev->fmr_page_mask) { if (i > 0) return -EINVAL; else ++page_cnt; } - if ((ib_sg_dma_address(ibdev, &scat[i]) + dma_len) & + if ((ib_sg_dma_address(ibdev, sg) + dma_len) & ~dev->fmr_page_mask) { if (i < sg_cnt - 1) return -EINVAL; @@ -633,12 +634,12 @@ static int srp_map_fmr(struct srp_target_port *target, struct scatterlist *scat, return -ENOMEM; page_cnt = 0; - for (i = 0; i < sg_cnt; ++i) { - unsigned int dma_len = ib_sg_dma_len(ibdev, &scat[i]); + for_each_sg(scat, sg, sg_cnt, i) { + unsigned int dma_len = ib_sg_dma_len(ibdev, sg); for (j = 0; j < dma_len; j += dev->fmr_page_size) dma_pages[page_cnt++] = - (ib_sg_dma_address(ibdev, &scat[i]) & + (ib_sg_dma_address(ibdev, sg) & dev->fmr_page_mask) + j; } @@ -724,6 +725,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target, * descriptor. 
*/ struct srp_indirect_buf *buf = (void *) cmd->add_data; + struct scatterlist *sg; u32 datalen = 0; int i; @@ -732,11 +734,11 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_target_port *target, sizeof (struct srp_indirect_buf) + count * sizeof (struct srp_direct_buf); - for (i = 0; i < count; ++i) { - unsigned int dma_len = ib_sg_dma_len(ibdev, &scat[i]); + for_each_sg(scat, sg, count, i) { + unsigned int dma_len = ib_sg_dma_len(ibdev, sg); buf->desc_list[i].va = - cpu_to_be64(ib_sg_dma_address(ibdev, &scat[i])); + cpu_to_be64(ib_sg_dma_address(ibdev, sg)); buf->desc_list[i].key = cpu_to_be32(dev->mr->rkey); buf->desc_list[i].len = cpu_to_be32(dma_len); diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c index eb766c3..293cfd2 100644 --- a/drivers/scsi/3w-9xxx.c +++ b/drivers/scsi/3w-9xxx.c @@ -1815,7 +1815,7 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, u32 num_sectors = 0x0; int i, sg_count; struct scsi_cmnd *srb = NULL; - struct scatterlist *sglist = NULL; + struct scatterlist *sglist = NULL, *sg; dma_addr_t buffaddr = 0x0; int retval = 1; @@ -1893,9 +1893,9 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, if (sg_count == 0) goto out; - for (i = 0; i < sg_count; i++) { - command_packet->sg_list[i].address = TW_CPU_TO_SGL(sg_dma_address(&sglist[i])); - command_packet->sg_list[i].length = cpu_to_le32(sg_dma_len(&sglist[i])); + for_each_sg(sglist, sg, sg_count, i) { + command_packet->sg_list[i].address = TW_CPU_TO_SGL(sg_dma_address(sg)); + command_packet->sg_list[i].length = cpu_to_le32(sg_dma_len(sg)); if (command_packet->sg_list[i].address & TW_CPU_TO_SGL(TW_ALIGNMENT_9000_SGL)) { TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2e, "Found unaligned sgl address during execute scsi"); goto out; diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c index 656bdb1..3d005cf 100644 --- a/drivers/scsi/3w-xxxx.c +++ b/drivers/scsi/3w-xxxx.c @@ -1767,7 +1767,7 @@ static int 
tw_scsiop_read_write(TW_Device_Extension *tw_dev, int request_id) u32 lba = 0x0, num_sectors = 0x0, buffaddr = 0x0; int i, use_sg; struct scsi_cmnd *srb; - struct scatterlist *sglist; + struct scatterlist *sglist, *sg; dprintk(KERN_NOTICE "3w-xxxx: tw_scsiop_read_write()\n"); @@ -1837,9 +1837,9 @@ static int tw_scsiop_read_write(TW_Device_Extension *tw_dev, int request_id) if (use_sg == 0) return 1; - for (i=0;i<use_sg; i++) { - command_packet->byte8.io.sgl[i].address = sg_dma_address(&sglist[i]); - command_packet->byte8.io.sgl[i].length = sg_dma_len(&sglist[i]); + for_each_sg(sglist, sg, use_sg, i) { + command_packet->byte8.io.sgl[i].address = sg_dma_address(sg); + command_packet->byte8.io.sgl[i].length = sg_dma_len(sg); command_packet->size+=2; } } diff --git a/drivers/scsi/53c700.c b/drivers/scsi/53c700.c index cb02656..821163d 100644 --- a/drivers/scsi/53c700.c +++ b/drivers/scsi/53c700.c @@ -1887,6 +1887,7 @@ NCR_700_queuecommand(struct scsi_cmnd *SCp, void (*done)(struct scsi_cmnd *)) int i; int sg_count; dma_addr_t vPtr = 0; + struct scatterlist *sgl, *sg; __u32 count = 0; if(SCp->use_sg) { @@ -1902,15 +1903,12 @@ NCR_700_queuecommand(struct scsi_cmnd *SCp, void (*done)(struct scsi_cmnd *)) slot->dma_handle = vPtr; sg_count = 1; } - - - for(i = 0; i < sg_count; i++) { - - if(SCp->use_sg) { - struct scatterlist *sg = SCp->request_buffer; - - vPtr = sg_dma_address(&sg[i]); - count = sg_dma_len(&sg[i]); + + sgl = SCp->request_buffer; + for_each_sg(sgl, sg, sg_count, i) { + if (SCp->use_sg) { + vPtr = sg_dma_address(sg); + count = sg_dma_len(sg); } slot->SG[i].ins = bS_to_host(move_ins | count); diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c index 96f4cab..7fd1cca 100644 --- a/drivers/scsi/BusLogic.c +++ b/drivers/scsi/BusLogic.c @@ -2862,6 +2862,7 @@ static int BusLogic_QueueCommand(struct scsi_cmnd *Command, void (*CompletionRou Command->sc_data_direction); } else if (SegmentCount != 0) { struct scatterlist *ScatterList = (struct scatterlist *) 
BufferPointer; + struct scatterlist *sg; int Segment, Count; Count = pci_map_sg(HostAdapter->PCI_Device, ScatterList, SegmentCount, @@ -2872,9 +2873,9 @@ static int BusLogic_QueueCommand(struct scsi_cmnd *Command, void (*CompletionRou CCB->DataPointer = (unsigned int) CCB->DMA_Handle + ((unsigned long) &CCB->ScatterGatherList - (unsigned long) CCB); else CCB->DataPointer = Virtual_to_32Bit_Virtual(CCB->ScatterGatherList); - for (Segment = 0; Segment < Count; Segment++) { - CCB->ScatterGatherList[Segment].SegmentByteCount = sg_dma_len(ScatterList + Segment); - CCB->ScatterGatherList[Segment].SegmentDataPointer = sg_dma_address(ScatterList + Segment); + for_each_sg(ScatterList, sg, Count, Segment) { + CCB->ScatterGatherList[Segment].SegmentByteCount = sg_dma_len(sg); + CCB->ScatterGatherList[Segment].SegmentDataPointer = sg_dma_address(sg); } } else { CCB->Opcode = BusLogic_InitiatorCCB; diff --git a/drivers/scsi/NCR53c406a.c b/drivers/scsi/NCR53c406a.c index 7c0b17f..9402dd9 100644 --- a/drivers/scsi/NCR53c406a.c +++ b/drivers/scsi/NCR53c406a.c @@ -875,12 +875,13 @@ static void NCR53c406a_intr(void *dev_id) if (!current_SC->use_sg) /* Don't use scatter-gather */ NCR53c406a_pio_write(current_SC->request_buffer, current_SC->request_bufflen); else { /* use scatter-gather */ + struct scatterlist *sg; + int i; + sgcount = current_SC->use_sg; sglist = current_SC->request_buffer; - while (sgcount--) { - NCR53c406a_pio_write(page_address(sglist->page) + sglist->offset, sglist->length); - sglist++; - } + for_each_sg(sglist, sg, sgcount, i) + NCR53c406a_pio_write(page_address(sg->page) + sg->offset, sg->length); } REG0; #endif /* USE_PIO */ @@ -902,12 +903,13 @@ static void NCR53c406a_intr(void *dev_id) if (!current_SC->use_sg) /* Don't use scatter-gather */ NCR53c406a_pio_read(current_SC->request_buffer, current_SC->request_bufflen); else { /* Use scatter-gather */ + struct scatterlist *sg; + int i; + sgcount = current_SC->use_sg; sglist = current_SC->request_buffer; - while 
(sgcount--) { - NCR53c406a_pio_read(page_address(sglist->page) + sglist->offset, sglist->length); - sglist++; - } + for_each_sg(sglist, sg, sgcount, i) + NCR53c406a_pio_read(page_address(sg->page) + sg->offset, sg->length); } REG0; #endif /* USE_PIO */ diff --git a/drivers/scsi/a100u2w.c b/drivers/scsi/a100u2w.c index 7f4241b..1353900 100644 --- a/drivers/scsi/a100u2w.c +++ b/drivers/scsi/a100u2w.c @@ -796,7 +796,7 @@ static void orc_interrupt( *****************************************************************************/ static void inia100BuildSCB(ORC_HCS * pHCB, ORC_SCB * pSCB, struct scsi_cmnd * SCpnt) { /* Create corresponding SCB */ - struct scatterlist *pSrbSG; + struct scatterlist *pSrbSG, *sg; ORC_SG *pSG; /* Pointer to SG list */ int i, count_sg; ESCB *pEScb; @@ -820,9 +820,10 @@ static void inia100BuildSCB(ORC_HCS * pHCB, ORC_SCB * pSCB, struct scsi_cmnd * S count_sg = pci_map_sg(pHCB->pdev, pSrbSG, SCpnt->use_sg, SCpnt->sc_data_direction); pSCB->SCB_SGLen = (U32) (count_sg * 8); - for (i = 0; i < count_sg; i++, pSG++, pSrbSG++) { - pSG->SG_Ptr = (U32) sg_dma_address(pSrbSG); - pSG->SG_Len = (U32) sg_dma_len(pSrbSG); + for_each_sg(pSrbSG, sg, count_sg, i) { + pSG->SG_Ptr = (U32) sg_dma_address(sg); + pSG->SG_Len = (U32) sg_dma_len(sg); + pSG++; } } else if (SCpnt->request_bufflen != 0) {/* Non SG */ pSCB->SCB_SGLen = 0x8; diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c index 1e82c69..7fcac4e 100644 --- a/drivers/scsi/aacraid/aachba.c +++ b/drivers/scsi/aacraid/aachba.c @@ -2347,7 +2347,7 @@ static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* psg) psg->sg[0].addr = 0; psg->sg[0].count = 0; if (scsicmd->use_sg) { - struct scatterlist *sg; + struct scatterlist *sg, *s; int i; int sg_count; sg = (struct scatterlist *) scsicmd->request_buffer; @@ -2356,11 +2356,10 @@ static unsigned long aac_build_sg(struct scsi_cmnd* scsicmd, struct sgmap* psg) scsicmd->sc_data_direction); psg->count = cpu_to_le32(sg_count); 
- for (i = 0; i < sg_count; i++) { - psg->sg[i].addr = cpu_to_le32(sg_dma_address(sg)); - psg->sg[i].count = cpu_to_le32(sg_dma_len(sg)); - byte_count += sg_dma_len(sg); - sg++; + for_each_sg(sg, s, sg_count, i) { + psg->sg[i].addr = cpu_to_le32(sg_dma_address(s)); + psg->sg[i].count = cpu_to_le32(sg_dma_len(s)); + byte_count += sg_dma_len(s); } /* hba wants the size to be exact */ if(byte_count > scsicmd->request_bufflen){ @@ -2404,7 +2403,7 @@ static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* p psg->sg[0].addr[1] = 0; psg->sg[0].count = 0; if (scsicmd->use_sg) { - struct scatterlist *sg; + struct scatterlist *sg, *s; int i; int sg_count; sg = (struct scatterlist *) scsicmd->request_buffer; @@ -2412,14 +2411,13 @@ static unsigned long aac_build_sg64(struct scsi_cmnd* scsicmd, struct sgmap64* p sg_count = pci_map_sg(dev->pdev, sg, scsicmd->use_sg, scsicmd->sc_data_direction); - for (i = 0; i < sg_count; i++) { - int count = sg_dma_len(sg); - addr = sg_dma_address(sg); + for_each_sg(sg, s, sg_count, i) { + int count = sg_dma_len(s); + addr = sg_dma_address(s); psg->sg[i].addr[0] = cpu_to_le32(addr & 0xffffffff); psg->sg[i].addr[1] = cpu_to_le32(addr>>32); psg->sg[i].count = cpu_to_le32(count); byte_count += count; - sg++; } psg->count = cpu_to_le32(sg_count); /* hba wants the size to be exact */ @@ -2465,7 +2463,7 @@ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw* psg->sg[0].count = 0; psg->sg[0].flags = 0; if (scsicmd->use_sg) { - struct scatterlist *sg; + struct scatterlist *sg, *s; int i; int sg_count; sg = (struct scatterlist *) scsicmd->request_buffer; @@ -2473,9 +2471,9 @@ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw* sg_count = pci_map_sg(dev->pdev, sg, scsicmd->use_sg, scsicmd->sc_data_direction); - for (i = 0; i < sg_count; i++) { - int count = sg_dma_len(sg); - u64 addr = sg_dma_address(sg); + for_each_sg(sg, s, sg_count, i) { + int count = sg_dma_len(s); + u64 
addr = sg_dma_address(s); psg->sg[i].next = 0; psg->sg[i].prev = 0; psg->sg[i].addr[1] = cpu_to_le32((u32)(addr>>32)); @@ -2483,7 +2481,6 @@ static unsigned long aac_build_sgraw(struct scsi_cmnd* scsicmd, struct sgmapraw* psg->sg[i].count = cpu_to_le32(count); psg->sg[i].flags = 0; byte_count += count; - sg++; } psg->count = cpu_to_le32(sg_count); /* hba wants the size to be exact */ diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c index 9b3303b..06bf40c 100644 --- a/drivers/scsi/advansys.c +++ b/drivers/scsi/advansys.c @@ -6468,7 +6468,7 @@ asc_build_req(asc_board_t *boardp, struct scsi_cmnd *scp) */ int sgcnt; int use_sg; - struct scatterlist *slp; + struct scatterlist *slp, *sg; slp = (struct scatterlist *)scp->request_buffer; use_sg = dma_map_sg(dev, slp, scp->use_sg, scp->sc_data_direction); @@ -6502,10 +6502,10 @@ asc_build_req(asc_board_t *boardp, struct scsi_cmnd *scp) /* * Convert scatter-gather list into ASC_SG_HEAD list. */ - for (sgcnt = 0; sgcnt < use_sg; sgcnt++, slp++) { - asc_sg_head.sg_list[sgcnt].addr = cpu_to_le32(sg_dma_address(slp)); - asc_sg_head.sg_list[sgcnt].bytes = cpu_to_le32(sg_dma_len(slp)); - ASC_STATS_ADD(scp->device->host, sg_xfer, ASC_CEILING(sg_dma_len(slp), 512)); + for_each_sg(slp, sg, use_sg, sgcnt) { + asc_sg_head.sg_list[sgcnt].addr = cpu_to_le32(sg_dma_address(sg)); + asc_sg_head.sg_list[sgcnt].bytes = cpu_to_le32(sg_dma_len(sg)); + ASC_STATS_ADD(scp->device->host, sg_xfer, ASC_CEILING(sg_dma_len(sg), 512)); } } @@ -6700,7 +6700,7 @@ adv_get_sglist(asc_board_t *boardp, adv_req_t *reqp, struct scsi_cmnd *scp, int { adv_sgblk_t *sgblkp; ADV_SCSI_REQ_Q *scsiqp; - struct scatterlist *slp; + struct scatterlist *slp, *sg; int sg_elem_cnt; ADV_SG_BLOCK *sg_block, *prev_sg_block; ADV_PADDR sg_block_paddr; @@ -6778,11 +6778,11 @@ adv_get_sglist(asc_board_t *boardp, adv_req_t *reqp, struct scsi_cmnd *scp, int } } - for (i = 0; i < NO_OF_SG_PER_BLOCK; i++) + for_each_sg(slp, sg, NO_OF_SG_PER_BLOCK, i) { - 
sg_block->sg_list[i].sg_addr = cpu_to_le32(sg_dma_address(slp)); - sg_block->sg_list[i].sg_count = cpu_to_le32(sg_dma_len(slp)); - ASC_STATS_ADD(scp->device->host, sg_xfer, ASC_CEILING(sg_dma_len(slp), 512)); + sg_block->sg_list[i].sg_addr = cpu_to_le32(sg_dma_address(sg)); + sg_block->sg_list[i].sg_count = cpu_to_le32(sg_dma_len(sg)); + ASC_STATS_ADD(scp->device->host, sg_xfer, ASC_CEILING(sg_dma_len(sg), 512)); if (--sg_elem_cnt == 0) { /* Last ADV_SG_BLOCK and scatter-gather entry. */ @@ -6790,7 +6790,6 @@ adv_get_sglist(asc_board_t *boardp, adv_req_t *reqp, struct scsi_cmnd *scp, int sg_block->sg_ptr = 0L; /* Last ADV_SG_BLOCK in list. */ return ADV_SUCCESS; } - slp++; } sg_block->sg_cnt = NO_OF_SG_PER_BLOCK; prev_sg_block = sg_block; diff --git a/drivers/scsi/aha1542.c b/drivers/scsi/aha1542.c index cbbfbc9..0a4d34b 100644 --- a/drivers/scsi/aha1542.c +++ b/drivers/scsi/aha1542.c @@ -691,7 +691,7 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) memcpy(ccb[mbo].cdb, cmd, ccb[mbo].cdblen); if (SCpnt->use_sg) { - struct scatterlist *sgpnt; + struct scatterlist *sgpnt, *sg; struct chain *cptr; #ifdef DEBUG unsigned char *ptr; @@ -706,16 +706,15 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) HOSTDATA(SCpnt->device->host)->SCint[mbo] = NULL; return SCSI_MLQUEUE_HOST_BUSY; } - for (i = 0; i < SCpnt->use_sg; i++) { - if (sgpnt[i].length == 0 || SCpnt->use_sg > 16 || - (((int) sgpnt[i].offset) & 1) || (sgpnt[i].length & 1)) { + for_each_sg(sgpnt, sg, SCpnt->use_sg, i) { + if (sg->length == 0 || SCpnt->use_sg > 16 || + (((int) sg->offset) & 1) || (sg->length & 1)) { unsigned char *ptr; printk(KERN_CRIT "Bad segment list supplied to aha1542.c (%d, %d)\n", SCpnt->use_sg, i); - for (i = 0; i < SCpnt->use_sg; i++) { + for_each_sg(sgpnt, sg, SCpnt->use_sg, i) { printk(KERN_CRIT "%d: %p %d\n", i, - (page_address(sgpnt[i].page) + - sgpnt[i].offset), - sgpnt[i].length); + (page_address(sg->page) + + 
sg->offset), sg->length); }; printk(KERN_CRIT "cptr %x: ", (unsigned int) cptr); ptr = (unsigned char *) &cptr[i]; @@ -723,10 +722,10 @@ static int aha1542_queuecommand(Scsi_Cmnd * SCpnt, void (*done) (Scsi_Cmnd *)) printk("%02x ", ptr[i]); panic("Foooooooood fight!"); }; - any2scsi(cptr[i].dataptr, SCSI_SG_PA(&sgpnt[i])); - if (SCSI_SG_PA(&sgpnt[i]) + sgpnt[i].length - 1 > ISA_DMA_THRESHOLD) + any2scsi(cptr[i].dataptr, SCSI_SG_PA(sg)); + if (SCSI_SG_PA(sg) + sg->length - 1 > ISA_DMA_THRESHOLD) BAD_SG_DMA(SCpnt, sgpnt, SCpnt->use_sg, i); - any2scsi(cptr[i].datalen, sgpnt[i].length); + any2scsi(cptr[i].datalen, sg->length); }; any2scsi(ccb[mbo].datalen, SCpnt->use_sg * sizeof(struct chain)); any2scsi(ccb[mbo].dataptr, SCSI_BUF_PA(cptr)); diff --git a/drivers/scsi/aha1740.c b/drivers/scsi/aha1740.c index d7af9c6..2641c24 100644 --- a/drivers/scsi/aha1740.c +++ b/drivers/scsi/aha1740.c @@ -425,7 +425,7 @@ static int aha1740_queuecommand(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *)) sgptr->sg_dma_addr = sg_dma; if (SCpnt->use_sg) { - struct scatterlist * sgpnt; + struct scatterlist * sgpnt, * sg; struct aha1740_chain * cptr; int i, count; DEB(unsigned char * ptr); @@ -436,9 +436,9 @@ static int aha1740_queuecommand(Scsi_Cmnd * SCpnt, void (*done)(Scsi_Cmnd *)) cptr = sgptr->sg_chain; count = dma_map_sg (&host->edev->dev, sgpnt, SCpnt->use_sg, SCpnt->sc_data_direction); - for(i=0; i < count; i++) { - cptr[i].datalen = sg_dma_len (sgpnt + i); - cptr[i].dataptr = sg_dma_address (sgpnt + i); + for_each_sg(sgpnt, sg, count, i) { + cptr[i].datalen = sg_dma_len (sg); + cptr[i].dataptr = sg_dma_address (sg); } host->ecb[ecbno].datalen = count*sizeof(struct aha1740_chain); host->ecb[ecbno].dataptr = sg_dma; diff --git a/drivers/scsi/aic7xxx/aic79xx_osm.c b/drivers/scsi/aic7xxx/aic79xx_osm.c index 6054881..e7ab15b 100644 --- a/drivers/scsi/aic7xxx/aic79xx_osm.c +++ b/drivers/scsi/aic7xxx/aic79xx_osm.c @@ -1505,7 +1505,7 @@ ahd_linux_run_command(struct ahd_softc *ahd, struct 
ahd_linux_device *dev, nseg = pci_map_sg(ahd->dev_softc, cur_seg, cmd->use_sg, dir); scb->platform_data->xfer_len = 0; - for (sg = scb->sg_list; nseg > 0; nseg--, cur_seg++) { + for (sg = scb->sg_list; nseg > 0; nseg--) { dma_addr_t addr; bus_size_t len; @@ -1514,6 +1514,7 @@ ahd_linux_run_command(struct ahd_softc *ahd, struct ahd_linux_device *dev, scb->platform_data->xfer_len += len; sg = ahd_sg_setup(ahd, scb, sg, addr, len, /*last*/nseg == 1); + cur_seg = sg_next(cur_seg); } } else if (cmd->request_bufflen != 0) { void *sg; diff --git a/drivers/scsi/aic7xxx/aic7xxx_osm.c b/drivers/scsi/aic7xxx/aic7xxx_osm.c index 660f26e..bf85297 100644 --- a/drivers/scsi/aic7xxx/aic7xxx_osm.c +++ b/drivers/scsi/aic7xxx/aic7xxx_osm.c @@ -1475,20 +1475,19 @@ ahc_linux_run_command(struct ahc_softc *ahc, struct ahc_linux_device *dev, if (cmd->use_sg != 0) { struct ahc_dma_seg *sg; struct scatterlist *cur_seg; - struct scatterlist *end_seg; - int nseg; + struct scatterlist *sgl; + int nseg, i; - cur_seg = (struct scatterlist *)cmd->request_buffer; - nseg = pci_map_sg(ahc->dev_softc, cur_seg, cmd->use_sg, + sgl = (struct scatterlist *)cmd->request_buffer; + nseg = pci_map_sg(ahc->dev_softc, sgl, cmd->use_sg, cmd->sc_data_direction); - end_seg = cur_seg + nseg; /* Copy the segments into the SG list. */ sg = scb->sg_list; /* * The sg_count may be larger than nseg if * a transfer crosses a 32bit page. 
*/ - while (cur_seg < end_seg) { + for_each_sg(sgl, cur_seg, nseg, i) { dma_addr_t addr; bus_size_t len; int consumed; @@ -1499,7 +1498,6 @@ ahc_linux_run_command(struct ahc_softc *ahc, struct ahc_linux_device *dev, sg, addr, len); sg += consumed; scb->sg_count += consumed; - cur_seg++; } sg--; sg->len |= ahc_htole32(AHC_DMA_LAST_SEG); diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c index e2ad5be..1327281 100644 --- a/drivers/scsi/aic94xx/aic94xx_task.c +++ b/drivers/scsi/aic94xx/aic94xx_task.c @@ -89,7 +89,7 @@ static inline int asd_map_scatterlist(struct sas_task *task, res = -ENOMEM; goto err_unmap; } - for (sc = task->scatter, i = 0; i < num_sg; i++, sc++) { + for_each_sg(task->scatter, sc, num_sg, i) { struct sg_el *sg = &((struct sg_el *)ascb->sg_arr->vaddr)[i]; sg->bus_addr = cpu_to_le64((u64)sg_dma_address(sc)); @@ -98,7 +98,7 @@ static inline int asd_map_scatterlist(struct sas_task *task, sg->flags |= ASD_SG_EL_LIST_EOL; } - for (sc = task->scatter, i = 0; i < 2; i++, sc++) { + for_each_sg(task->scatter, sc, 2, i) { sg_arr[i].bus_addr = cpu_to_le64((u64)sg_dma_address(sc)); sg_arr[i].size = cpu_to_le32((u32)sg_dma_len(sc)); @@ -110,7 +110,7 @@ static inline int asd_map_scatterlist(struct sas_task *task, sg_arr[2].bus_addr=cpu_to_le64((u64)ascb->sg_arr->dma_handle); } else { int i; - for (sc = task->scatter, i = 0; i < num_sg; i++, sc++) { + for_each_sg(task->scatter, sc, num_sg, i) { sg_arr[i].bus_addr = cpu_to_le64((u64)sg_dma_address(sc)); sg_arr[i].size = cpu_to_le32((u32)sg_dma_len(sc)); diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c index 8b46158..c2e2e95 100644 --- a/drivers/scsi/arcmsr/arcmsr_hba.c +++ b/drivers/scsi/arcmsr/arcmsr_hba.c @@ -563,18 +563,18 @@ static void arcmsr_build_ccb(struct AdapterControlBlock *acb, memcpy(arcmsr_cdb->Cdb, pcmd->cmnd, pcmd->cmd_len); if (pcmd->use_sg) { int length, sgcount, i, cdb_sgcount = 0; - struct scatterlist *sl; + struct scatterlist 
*sl, *sg; /* Get Scatter Gather List from scsiport. */ sl = (struct scatterlist *) pcmd->request_buffer; sgcount = pci_map_sg(acb->pdev, sl, pcmd->use_sg, pcmd->sc_data_direction); /* map stor port SG list to our iop SG List. */ - for (i = 0; i < sgcount; i++) { + for_each_sg(sl, sg, sgcount, i) { /* Get the physical address of the current data pointer */ - length = cpu_to_le32(sg_dma_len(sl)); - address_lo = cpu_to_le32(dma_addr_lo32(sg_dma_address(sl))); - address_hi = cpu_to_le32(dma_addr_hi32(sg_dma_address(sl))); + length = cpu_to_le32(sg_dma_len(sg)); + address_lo = cpu_to_le32(dma_addr_lo32(sg_dma_address(sg))); + address_hi = cpu_to_le32(dma_addr_hi32(sg_dma_address(sg))); if (address_hi == 0) { struct SG32ENTRY *pdma_sg = (struct SG32ENTRY *)psge; @@ -591,7 +591,6 @@ static void arcmsr_build_ccb(struct AdapterControlBlock *acb, psge += sizeof (struct SG64ENTRY); arccdbsize += sizeof (struct SG64ENTRY); } - sl++; cdb_sgcount++; } arcmsr_cdb->sgcount = (uint8_t)cdb_sgcount; diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c index 564ea90..4e0f073 100644 --- a/drivers/scsi/dc395x.c +++ b/drivers/scsi/dc395x.c @@ -1010,6 +1010,7 @@ static void build_srb(struct scsi_cmnd *cmd, struct DeviceCtlBlk *dcb, u32 reqlen = cmd->request_bufflen; struct scatterlist *sl = (struct scatterlist *) cmd->request_buffer; + struct scatterlist *sg; struct SGentry *sgp = srb->segment_x; srb->sg_count = pci_map_sg(dcb->acb->dev, sl, cmd->use_sg, dir); @@ -1018,9 +1019,9 @@ static void build_srb(struct scsi_cmnd *cmd, struct DeviceCtlBlk *dcb, reqlen, cmd->request_buffer, cmd->use_sg, srb->sg_count); - for (i = 0; i < srb->sg_count; i++) { - u32 busaddr = (u32)sg_dma_address(&sl[i]); - u32 seglen = (u32)sl[i].length; + for_each_sg(sl, sg, srb->sg_count, i) { + u32 busaddr = (u32)sg_dma_address(sg); + u32 seglen = (u32)sg->length; sgp[i].address = busaddr; sgp[i].length = seglen; srb->total_xfer_length += seglen; diff --git a/drivers/scsi/dpt_i2o.c b/drivers/scsi/dpt_i2o.c 
index 8c7d2bb..b63d4b2 100644 --- a/drivers/scsi/dpt_i2o.c +++ b/drivers/scsi/dpt_i2o.c @@ -2141,20 +2141,21 @@ static s32 adpt_scsi_to_i2o(adpt_hba* pHba, struct scsi_cmnd* cmd, struct adpt_d reqlen = 14; // SINGLE SGE /* Now fill in the SGList and command */ if(cmd->use_sg) { - struct scatterlist *sg = (struct scatterlist *)cmd->request_buffer; - int sg_count = pci_map_sg(pHba->pDev, sg, cmd->use_sg, + struct scatterlist *sgl = (struct scatterlist *)cmd->request_buffer; + struct scatterlist *sg; + int sg_count = pci_map_sg(pHba->pDev, sgl, cmd->use_sg, cmd->sc_data_direction); len = 0; - for(i = 0 ; i < sg_count; i++) { + for_each_sg(sgl, sg, sg_count, i) { *mptr++ = direction|0x10000000|sg_dma_len(sg); len+=sg_dma_len(sg); *mptr++ = sg_dma_address(sg); - sg++; + /* Make this an end of list */ + if (i == sg_count - 1) + mptr[-2] = direction|0xD0000000|sg_dma_len(sg); } - /* Make this an end of list */ - mptr[-2] = direction|0xD0000000|sg_dma_len(sg-1); reqlen = mptr - msg; *lenptr = len; diff --git a/drivers/scsi/eata.c b/drivers/scsi/eata.c index 2d38025..9526fa9 100644 --- a/drivers/scsi/eata.c +++ b/drivers/scsi/eata.c @@ -1610,7 +1610,7 @@ static int eata2x_detect(struct scsi_host_template *tpnt) static void map_dma(unsigned int i, struct hostdata *ha) { unsigned int k, count, pci_dir; - struct scatterlist *sgpnt; + struct scatterlist *sgpnt, *sg; struct mscp *cpp; struct scsi_cmnd *SCpnt; @@ -1646,9 +1646,9 @@ static void map_dma(unsigned int i, struct hostdata *ha) sgpnt = (struct scatterlist *)SCpnt->request_buffer; count = pci_map_sg(ha->pdev, sgpnt, SCpnt->use_sg, pci_dir); - for (k = 0; k < count; k++) { - cpp->sglist[k].address = H2DEV(sg_dma_address(&sgpnt[k])); - cpp->sglist[k].num_bytes = H2DEV(sg_dma_len(&sgpnt[k])); + for_each_sg(sgpnt, sg, count, k) { + cpp->sglist[k].address = H2DEV(sg_dma_address(sg)); + cpp->sglist[k].num_bytes = H2DEV(sg_dma_len(sg)); } cpp->sg = 1; diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c index 
ec71061..4c29d29 100644 --- a/drivers/scsi/esp_scsi.c +++ b/drivers/scsi/esp_scsi.c @@ -325,6 +325,7 @@ static void esp_map_dma(struct esp *esp, struct scsi_cmnd *cmd) { struct esp_cmd_priv *spriv = ESP_CMD_PRIV(cmd); struct scatterlist *sg = cmd->request_buffer; + struct scatterlist *s; int dir = cmd->sc_data_direction; int total, i; @@ -339,8 +340,8 @@ static void esp_map_dma(struct esp *esp, struct scsi_cmnd *cmd) spriv->cur_sg = sg; total = 0; - for (i = 0; i < spriv->u.num_sg; i++) - total += sg_dma_len(&sg[i]); + for_each_sg(sg, s, spriv->u.num_sg, i) + total += sg_dma_len(s); spriv->tot_residue = total; } diff --git a/drivers/scsi/gdth.c b/drivers/scsi/gdth.c index 60446b8..3efc084 100644 --- a/drivers/scsi/gdth.c +++ b/drivers/scsi/gdth.c @@ -2656,7 +2656,7 @@ static void gdth_copy_internal_data(int hanum,Scsi_Cmnd *scp, { ushort cpcount,i; ushort cpsum,cpnow; - struct scatterlist *sl; + struct scatterlist *sl, *sg; gdth_ha_str *ha; char *address; @@ -2665,29 +2665,30 @@ static void gdth_copy_internal_data(int hanum,Scsi_Cmnd *scp, if (scp->use_sg) { sl = (struct scatterlist *)scp->request_buffer; - for (i=0,cpsum=0; i<scp->use_sg; ++i,++sl) { + cpsum = 0; + for_each_sg(sl, sg, scp->use_sg, i) { unsigned long flags; - cpnow = (ushort)sl->length; + cpnow = (ushort)sg->length; TRACE(("copy_internal() now %d sum %d count %d %d\n", cpnow,cpsum,cpcount,(ushort)scp->bufflen)); if (cpsum+cpnow > cpcount) cpnow = cpcount - cpsum; cpsum += cpnow; - if (!sl->page) { + if (!sg->page) { printk("GDT-HA %d: invalid sc/gt element in gdth_copy_internal_data()\n", hanum); return; } local_irq_save(flags); #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,0) - address = kmap_atomic(sl->page, KM_BIO_SRC_IRQ) + sl->offset; + address = kmap_atomic(sg->page, KM_BIO_SRC_IRQ) + sg->offset; memcpy(address,buffer,cpnow); - flush_dcache_page(sl->page); + flush_dcache_page(sg->page); kunmap_atomic(address, KM_BIO_SRC_IRQ); #else - address = kmap_atomic(sl->page, KM_BH_IRQ) + sl->offset; + 
address = kmap_atomic(sg->page, KM_BH_IRQ) + sg->offset; memcpy(address,buffer,cpnow); - flush_dcache_page(sl->page); + flush_dcache_page(sg->page); kunmap_atomic(address, KM_BH_IRQ); #endif local_irq_restore(flags); @@ -2807,7 +2808,7 @@ static int gdth_fill_cache_cmd(int hanum,Scsi_Cmnd *scp,ushort hdrive) { register gdth_ha_str *ha; register gdth_cmd_str *cmdp; - struct scatterlist *sl; + struct scatterlist *sl, *sg; ulong32 cnt, blockcnt; ulong64 no, blockno; dma_addr_t phys_addr; @@ -2913,25 +2914,25 @@ static int gdth_fill_cache_cmd(int hanum,Scsi_Cmnd *scp,ushort hdrive) if (mode64) { cmdp->u.cache64.DestAddr= (ulong64)-1; cmdp->u.cache64.sg_canz = sgcnt; - for (i=0; i<sgcnt; ++i,++sl) { - cmdp->u.cache64.sg_lst[i].sg_ptr = sg_dma_address(sl); + for_each_sg(sl, sg, sgcnt, i) { + cmdp->u.cache64.sg_lst[i].sg_ptr = sg_dma_address(sg); #ifdef GDTH_DMA_STATISTICS if (cmdp->u.cache64.sg_lst[i].sg_ptr > (ulong64)0xffffffff) ha->dma64_cnt++; else ha->dma32_cnt++; #endif - cmdp->u.cache64.sg_lst[i].sg_len = sg_dma_len(sl); + cmdp->u.cache64.sg_lst[i].sg_len = sg_dma_len(sg); } } else { cmdp->u.cache.DestAddr= 0xffffffff; cmdp->u.cache.sg_canz = sgcnt; - for (i=0; i<sgcnt; ++i,++sl) { - cmdp->u.cache.sg_lst[i].sg_ptr = sg_dma_address(sl); + for_each_sg(sl, sg, sgcnt, i) { + cmdp->u.cache.sg_lst[i].sg_ptr = sg_dma_address(sg); #ifdef GDTH_DMA_STATISTICS ha->dma32_cnt++; #endif - cmdp->u.cache.sg_lst[i].sg_len = sg_dma_len(sl); + cmdp->u.cache.sg_lst[i].sg_len = sg_dma_len(sg); } } @@ -3017,7 +3018,7 @@ static int gdth_fill_raw_cmd(int hanum,Scsi_Cmnd *scp,unchar b) { register gdth_ha_str *ha; register gdth_cmd_str *cmdp; - struct scatterlist *sl; + struct scatterlist *sl, *sg; ushort i; dma_addr_t phys_addr, sense_paddr; int cmd_index, sgcnt, mode64; @@ -3120,25 +3121,25 @@ static int gdth_fill_raw_cmd(int hanum,Scsi_Cmnd *scp,unchar b) if (mode64) { cmdp->u.raw64.sdata = (ulong64)-1; cmdp->u.raw64.sg_ranz = sgcnt; - for (i=0; i<sgcnt; ++i,++sl) { - 
cmdp->u.raw64.sg_lst[i].sg_ptr = sg_dma_address(sl); + for_each_sg(sl, sg, sgcnt, i) { + cmdp->u.raw64.sg_lst[i].sg_ptr = sg_dma_address(sg); #ifdef GDTH_DMA_STATISTICS if (cmdp->u.raw64.sg_lst[i].sg_ptr > (ulong64)0xffffffff) ha->dma64_cnt++; else ha->dma32_cnt++; #endif - cmdp->u.raw64.sg_lst[i].sg_len = sg_dma_len(sl); + cmdp->u.raw64.sg_lst[i].sg_len = sg_dma_len(sg); } } else { cmdp->u.raw.sdata = 0xffffffff; cmdp->u.raw.sg_ranz = sgcnt; - for (i=0; i<sgcnt; ++i,++sl) { - cmdp->u.raw.sg_lst[i].sg_ptr = sg_dma_address(sl); + for_each_sg(sl, sg, sgcnt, i) { + cmdp->u.raw.sg_lst[i].sg_ptr = sg_dma_address(sg); #ifdef GDTH_DMA_STATISTICS ha->dma32_cnt++; #endif - cmdp->u.raw.sg_lst[i].sg_len = sg_dma_len(sl); + cmdp->u.raw.sg_lst[i].sg_len = sg_dma_len(sg); } } diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c index bec83cb..5b1c210 100644 --- a/drivers/scsi/hptiop.c +++ b/drivers/scsi/hptiop.c @@ -449,6 +449,7 @@ static int hptiop_buildsgl(struct scsi_cmnd *scp, struct hpt_iopsg *psg) struct Scsi_Host *host = scp->device->host; struct hptiop_hba *hba = (struct hptiop_hba *)host->hostdata; struct scatterlist *sglist = (struct scatterlist *)scp->request_buffer; + struct scatterlist *sg; /* * though we'll not get non-use_sg fields anymore, @@ -463,10 +464,9 @@ static int hptiop_buildsgl(struct scsi_cmnd *scp, struct hpt_iopsg *psg) HPT_SCP(scp)->mapped = 1; BUG_ON(HPT_SCP(scp)->sgcnt > hba->max_sg_descriptors); - for (idx = 0; idx < HPT_SCP(scp)->sgcnt; idx++) { - psg[idx].pci_address = - cpu_to_le64(sg_dma_address(&sglist[idx])); - psg[idx].size = cpu_to_le32(sg_dma_len(&sglist[idx])); + for_each_sg(sglist, sg, HPT_SCP(scp)->sgcnt, idx) { + psg[idx].pci_address = cpu_to_le64(sg_dma_address(sg)); + psg[idx].size = cpu_to_le32(sg_dma_len(sg)); psg[idx].eot = (idx == HPT_SCP(scp)->sgcnt - 1) ? 
cpu_to_le32(1) : 0; } diff --git a/drivers/scsi/ibmmca.c b/drivers/scsi/ibmmca.c index 0e57fb6..f6c2581 100644 --- a/drivers/scsi/ibmmca.c +++ b/drivers/scsi/ibmmca.c @@ -1808,7 +1808,7 @@ static int ibmmca_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) int host_index; int max_pun; int i; - struct scatterlist *sl; + struct scatterlist *sl, *sg; shpnt = cmd->device->host; /* search for the right hostadapter */ @@ -1938,13 +1938,12 @@ static int ibmmca_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *)) scsi_cmd = cmd->cmnd[0]; if (cmd->use_sg) { - i = cmd->use_sg; sl = (struct scatterlist *) (cmd->request_buffer); - if (i > 16) + if (cmd->use_sg > 16) panic("IBM MCA SCSI: scatter-gather list too long.\n"); - while (--i >= 0) { - ld(host_index)[ldn].sge[i].address = (void *) (isa_page_to_bus(sl[i].page) + sl[i].offset); - ld(host_index)[ldn].sge[i].byte_length = sl[i].length; + for_each_sg(sl, sg, cmd->use_sg, i) { + ld(host_index)[ldn].sge[i].address = (void *) (isa_page_to_bus(sg->page) + sg->offset); + ld(host_index)[ldn].sge[i].byte_length = sg->length; } scb->enable |= IM_POINTER_TO_LIST; scb->sys_buf_adr = isa_virt_to_bus(&(ld(host_index)[ldn].sge[0])); diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c index b10eefe..943a01e 100644 --- a/drivers/scsi/ibmvscsi/ibmvscsi.c +++ b/drivers/scsi/ibmvscsi/ibmvscsi.c @@ -359,10 +359,10 @@ static int map_sg_list(int num_entries, { int i; u64 total_length = 0; + struct scatterlist *sg_entry; - for (i = 0; i < num_entries; ++i) { + for_each_sg(sg, sg_entry, num_entries, i) { struct srp_direct_buf *descr = md + i; - struct scatterlist *sg_entry = &sg[i]; descr->va = sg_dma_address(sg_entry); descr->len = sg_dma_len(sg_entry); descr->key = 0; diff --git a/drivers/scsi/ide-scsi.c b/drivers/scsi/ide-scsi.c index 8263f75..9101928 100644 --- a/drivers/scsi/ide-scsi.c +++ b/drivers/scsi/ide-scsi.c @@ -70,6 +70,7 @@ typedef struct idescsi_pc_s { u8 *buffer; /* Data buffer */ u8 
*current_position; /* Pointer into the above buffer */ struct scatterlist *sg; /* Scatter gather table */ + struct scatterlist *last_sg; /* Last sg element */ int b_count; /* Bytes transferred from current entry */ struct scsi_cmnd *scsi_cmd; /* SCSI command */ void (*done)(struct scsi_cmnd *); /* Scsi completion routine */ @@ -175,11 +176,6 @@ static void idescsi_input_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsigne char *buf; while (bcount) { - if (pc->sg - (struct scatterlist *) pc->scsi_cmd->request_buffer > pc->scsi_cmd->use_sg) { - printk (KERN_ERR "ide-scsi: scatter gather table too small, discarding data\n"); - idescsi_discard_data (drive, bcount); - return; - } count = min(pc->sg->length - pc->b_count, bcount); if (PageHighMem(pc->sg->page)) { unsigned long flags; @@ -198,10 +194,17 @@ static void idescsi_input_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsigne } bcount -= count; pc->b_count += count; if (pc->b_count == pc->sg->length) { - pc->sg++; + if (pc->sg == pc->last_sg) + break; + pc->sg = sg_next(pc->sg); pc->b_count = 0; } } + + if (bcount) { + printk (KERN_ERR "ide-scsi: scatter gather table too small, discarding data\n"); + idescsi_discard_data (drive, bcount); + } } static void idescsi_output_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsigned int bcount) @@ -210,11 +213,6 @@ static void idescsi_output_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsign char *buf; while (bcount) { - if (pc->sg - (struct scatterlist *) pc->scsi_cmd->request_buffer > pc->scsi_cmd->use_sg) { - printk (KERN_ERR "ide-scsi: scatter gather table too small, padding with zeros\n"); - idescsi_output_zeros (drive, bcount); - return; - } count = min(pc->sg->length - pc->b_count, bcount); if (PageHighMem(pc->sg->page)) { unsigned long flags; @@ -233,10 +231,17 @@ static void idescsi_output_buffers (ide_drive_t *drive, idescsi_pc_t *pc, unsign } bcount -= count; pc->b_count += count; if (pc->b_count == pc->sg->length) { - pc->sg++; + if (pc->sg == 
pc->last_sg) + break; + pc->sg = sg_next(pc->sg); pc->b_count = 0; } } + + if (bcount) { + printk (KERN_ERR "ide-scsi: scatter gather table too small, padding with zeros\n"); + idescsi_output_zeros (drive, bcount); + } } /* @@ -910,9 +915,11 @@ static int idescsi_queue (struct scsi_cmnd *cmd, if (cmd->use_sg) { pc->buffer = NULL; pc->sg = cmd->request_buffer; + pc->last_sg = sg_last(pc->sg, cmd->use_sg); } else { pc->buffer = cmd->request_buffer; pc->sg = NULL; + pc->last_sg = NULL; } pc->b_count = 0; pc->request_transfer = pc->buffer_size = cmd->request_bufflen; diff --git a/drivers/scsi/initio.c b/drivers/scsi/initio.c index 7e7635c..300d33d 100644 --- a/drivers/scsi/initio.c +++ b/drivers/scsi/initio.c @@ -2882,7 +2882,7 @@ static int i91u_detect(struct scsi_host_template * tpnt) static void i91uBuildSCB(HCS * pHCB, SCB * pSCB, struct scsi_cmnd * SCpnt) { /* Create corresponding SCB */ - struct scatterlist *pSrbSG; + struct scatterlist *pSrbSG, *sg; SG *pSG; /* Pointer to SG list */ int i; long TotalLen; @@ -2926,10 +2926,12 @@ static void i91uBuildSCB(HCS * pHCB, SCB * pSCB, struct scsi_cmnd * SCpnt) SCpnt->use_sg, SCpnt->sc_data_direction); pSCB->SCB_Flags |= SCF_SG; /* Turn on SG list flag */ - for (i = 0, TotalLen = 0, pSG = &pSCB->SCB_SGList[0]; /* 1.01g */ - i < pSCB->SCB_SGLen; i++, pSG++, pSrbSG++) { - pSG->SG_Ptr = cpu_to_le32((u32)sg_dma_address(pSrbSG)); - TotalLen += pSG->SG_Len = cpu_to_le32((u32)sg_dma_len(pSrbSG)); + pSG = &pSCB->SCB_SGList[0]; + TotalLen = 0; + for_each_sg(pSrbSG, sg, pSCB->SCB_SGLen, i) { + pSG->SG_Ptr = cpu_to_le32((u32)sg_dma_address(sg)); + TotalLen += pSG->SG_Len = cpu_to_le32((u32)sg_dma_len(sg)); + pSG++; } pSCB->SCB_BufLen = (SCpnt->request_bufflen > TotalLen) ? 
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c index 4baa79e..01b0476 100644 --- a/drivers/scsi/ipr.c +++ b/drivers/scsi/ipr.c @@ -4286,7 +4286,7 @@ static int ipr_build_ioadl(struct ipr_ioa_cfg *ioa_cfg, struct ipr_cmnd *ipr_cmd) { int i; - struct scatterlist *sglist; + struct scatterlist *sglist, *sg; u32 length; u32 ioadl_flags = 0; struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd; @@ -4327,11 +4327,10 @@ static int ipr_build_ioadl(struct ipr_ioa_cfg *ioa_cfg, ioarcb->read_ioadl_addr = ioarcb->write_ioadl_addr; } - for (i = 0; i < ipr_cmd->dma_use_sg; i++) { + for_each_sg(sglist, sg, ipr_cmd->dma_use_sg, i) { ioadl[i].flags_and_data_len = - cpu_to_be32(ioadl_flags | sg_dma_len(&sglist[i])); - ioadl[i].address = - cpu_to_be32(sg_dma_address(&sglist[i])); + cpu_to_be32(ioadl_flags | sg_dma_len(sg)); + ioadl[i].address = cpu_to_be32(sg_dma_address(sg)); } if (likely(ipr_cmd->dma_use_sg)) { diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c index 8b704f7..ba1957a 100644 --- a/drivers/scsi/ips.c +++ b/drivers/scsi/ips.c @@ -1687,10 +1687,11 @@ ips_make_passthru(ips_ha_t *ha, struct scsi_cmnd *SC, ips_scb_t *scb, int intr) if (!SC->use_sg) { length = SC->request_bufflen; } else { - struct scatterlist *sg = SC->request_buffer; + struct scatterlist *sgl = SC->request_buffer; + struct scatterlist *sg; int i; - for (i = 0; i < SC->use_sg; i++) - length += sg[i].length; + for_each_sg(sgl, sg, SC->use_sg, i) + length += sg->length; } if (length < sizeof (ips_passthru_t)) { /* wrong size */ @@ -2868,17 +2869,17 @@ ips_next(ips_ha_t * ha, int intr) /* Now handle the data buffer */ if (SC->use_sg) { - struct scatterlist *sg; + struct scatterlist *sg, *sgl; int i; - sg = SC->request_buffer; - scb->sg_count = pci_map_sg(ha->pcidev, sg, SC->use_sg, + sgl = SC->request_buffer; + scb->sg_count = pci_map_sg(ha->pcidev, sgl, SC->use_sg, SC->sc_data_direction); scb->flags |= IPS_SCB_MAP_SG; - for (i = 0; i < scb->sg_count; i++) { + for_each_sg(sgl, sg, scb->sg_count, i) { if 
(ips_fill_scb_sg_single - (ha, sg_dma_address(&sg[i]), scb, i, - sg_dma_len(&sg[i])) < 0) + (ha, sg_dma_address(sg), scb, i, + sg_dma_len(sg)) < 0) break; } scb->dcdb.transfer_length = scb->data_len; @@ -3382,32 +3383,31 @@ ips_done(ips_ha_t * ha, ips_scb_t * scb) if (scb->sg_count) { /* S/G request */ - struct scatterlist *sg; + struct scatterlist *sg, *sgl; int ips_sg_index = 0; - int sg_dma_index; + int sg_dma_index, left, i; - sg = scb->scsi_cmd->request_buffer; + sgl = scb->scsi_cmd->request_buffer; /* Spin forward to last dma chunk */ sg_dma_index = scb->breakup; + sg = sg_last(sgl, sg_dma_index); /* Take care of possible partial on last chunk */ ips_fill_scb_sg_single(ha, - sg_dma_address(&sg - [sg_dma_index]), + sg_dma_address(sg), scb, ips_sg_index++, - sg_dma_len(&sg - [sg_dma_index])); + sg_dma_len(sg)); - for (; sg_dma_index < scb->sg_count; - sg_dma_index++) { + sgl = sg; + left = scb->sg_count - sg_dma_index; + for_each_sg(sgl, sg, left, i) { if (ips_fill_scb_sg_single (ha, - sg_dma_address(&sg[sg_dma_index]), + sg_dma_address(sg), scb, ips_sg_index++, - sg_dma_len(&sg[sg_dma_index])) < 0) + sg_dma_len(sg)) < 0) break; - } } else { @@ -3659,17 +3659,21 @@ ips_scmd_buf_write(struct scsi_cmnd *scmd, void *data, unsigned int count) char *cdata = (char *) data; unsigned char *buffer; unsigned long flags; - struct scatterlist *sg = scmd->request_buffer; - for (i = 0, xfer_cnt = 0; - (i < scmd->use_sg) && (xfer_cnt < count); i++) { - min_cnt = min(count - xfer_cnt, sg[i].length); + struct scatterlist *sgl = scmd->request_buffer; + struct scatterlist *sg; + + xfer_cnt = 0; + for_each_sg(sgl, sg, scmd->use_sg, i) { + if (xfer_cnt >= count) + break; + min_cnt = min(count - xfer_cnt, sg->length); /* kmap_atomic() ensures addressability of the data buffer.*/ /* local_irq_save() protects the KM_IRQ0 address slot. 
*/ local_irq_save(flags); - buffer = kmap_atomic(sg[i].page, KM_IRQ0) + sg[i].offset; + buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; memcpy(buffer, &cdata[xfer_cnt], min_cnt); - kunmap_atomic(buffer - sg[i].offset, KM_IRQ0); + kunmap_atomic(buffer - sg->offset, KM_IRQ0); local_irq_restore(flags); xfer_cnt += min_cnt; @@ -3697,17 +3701,21 @@ ips_scmd_buf_read(struct scsi_cmnd *scmd, void *data, unsigned int count) char *cdata = (char *) data; unsigned char *buffer; unsigned long flags; - struct scatterlist *sg = scmd->request_buffer; - for (i = 0, xfer_cnt = 0; - (i < scmd->use_sg) && (xfer_cnt < count); i++) { - min_cnt = min(count - xfer_cnt, sg[i].length); + struct scatterlist *sgl = scmd->request_buffer; + struct scatterlist *sg; + + xfer_cnt = 0; + for_each_sg(sgl, sg, scmd->use_sg, i) { + if (xfer_cnt >= count) + break; + min_cnt = min(count - xfer_cnt, sg->length); /* kmap_atomic() ensures addressability of the data buffer.*/ /* local_irq_save() protects the KM_IRQ0 address slot. */ local_irq_save(flags); - buffer = kmap_atomic(sg[i].page, KM_IRQ0) + sg[i].offset; + buffer = kmap_atomic(sg->page, KM_IRQ0) + sg->offset; memcpy(&cdata[xfer_cnt], buffer, min_cnt); - kunmap_atomic(buffer - sg[i].offset, KM_IRQ0); + kunmap_atomic(buffer - sg->offset, KM_IRQ0); local_irq_restore(flags); xfer_cnt += min_cnt; diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c index c9a3abf..8f709fa 100644 --- a/drivers/scsi/iscsi_tcp.c +++ b/drivers/scsi/iscsi_tcp.c @@ -310,10 +310,11 @@ iscsi_solicit_data_init(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask, if (sc->use_sg) { int i, sg_count = 0; - struct scatterlist *sg = sc->request_buffer; + struct scatterlist *sgl = sc->request_buffer; + struct scatterlist *sg; r2t->sg = NULL; - for (i = 0; i < sc->use_sg; i++, sg += 1) { + for_each_sg(sgl, sg, sc->use_sg, i) { /* FIXME: prefetch ? 
*/ if (sg_count + sg->length > r2t->data_offset) { int page_offset; @@ -329,7 +330,7 @@ iscsi_solicit_data_init(struct iscsi_conn *conn, struct iscsi_cmd_task *ctask, r2t->sendbuf.sg.length -= page_offset; /* xmit logic will continue with next one */ - r2t->sg = sg + 1; + r2t->sg = sg_next(sg); break; } sg_count += sg->length; @@ -702,7 +703,7 @@ static int iscsi_scsi_data_in(struct iscsi_conn *conn) struct iscsi_cmd_task *ctask = tcp_conn->in.ctask; struct iscsi_tcp_cmd_task *tcp_ctask = ctask->dd_data; struct scsi_cmnd *sc = ctask->sc; - struct scatterlist *sg; + struct scatterlist *sg, *sgl; int i, offset, rc = 0; BUG_ON((void*)ctask != sc->SCp.ptr); @@ -725,21 +726,21 @@ static int iscsi_scsi_data_in(struct iscsi_conn *conn) } offset = tcp_ctask->data_offset; - sg = sc->request_buffer; + sgl = sc->request_buffer; if (tcp_ctask->data_offset) - for (i = 0; i < tcp_ctask->sg_count; i++) - offset -= sg[i].length; + for_each_sg(sgl, sg, tcp_ctask->sg_count, i) + offset -= sg->length; /* we've passed through partial sg*/ if (offset < 0) offset = 0; - for (i = tcp_ctask->sg_count; i < sc->use_sg; i++) { + for_each_sg(sgl, sg, tcp_ctask->sg_count, i) { char *dest; - dest = kmap_atomic(sg[i].page, KM_SOFTIRQ0); - rc = iscsi_ctask_copy(tcp_conn, ctask, dest + sg[i].offset, - sg[i].length, offset); + dest = kmap_atomic(sg->page, KM_SOFTIRQ0); + rc = iscsi_ctask_copy(tcp_conn, ctask, dest + sg->offset, + sg->length, offset); kunmap_atomic(dest, KM_SOFTIRQ0); if (rc == -EAGAIN) /* continue with the next SKB/PDU */ @@ -749,13 +750,13 @@ static int iscsi_scsi_data_in(struct iscsi_conn *conn) if (!offset) crypto_hash_update( &tcp_conn->rx_hash, - &sg[i], sg[i].length); + sg, sg->length); else partial_sg_digest_update( &tcp_conn->rx_hash, - &sg[i], - sg[i].offset + offset, - sg[i].length - offset); + sg, + sg->offset + offset, + sg->length - offset); } offset = 0; tcp_ctask->sg_count++; @@ -767,9 +768,9 @@ static int iscsi_scsi_data_in(struct iscsi_conn *conn) * data-in is 
complete, but buffer not... */ partial_sg_digest_update(&tcp_conn->rx_hash, - &sg[i], - sg[i].offset, - sg[i].length-rc); + sg, + sg->offset, + sg->length-rc); rc = 0; break; } @@ -1294,8 +1295,8 @@ iscsi_tcp_cmd_init(struct iscsi_cmd_task *ctask) struct scatterlist *sg = sc->request_buffer; iscsi_buf_init_sg(&tcp_ctask->sendbuf, sg); - tcp_ctask->sg = sg + 1; - tcp_ctask->bad_sg = sg + sc->use_sg; + tcp_ctask->sg = sg_next(sg); + tcp_ctask->bad_sg = sg_last(sg, sc->use_sg); } else { iscsi_buf_init_iov(&tcp_ctask->sendbuf, sc->request_buffer, @@ -1522,7 +1523,7 @@ iscsi_send_data(struct iscsi_cmd_task *ctask, struct iscsi_buf *sendbuf, buf_sent); if (!iscsi_buf_left(sendbuf) && *sg != tcp_ctask->bad_sg) { iscsi_buf_init_sg(sendbuf, *sg); - *sg = *sg + 1; + *sg = sg_next(*sg); } if (rc) diff --git a/drivers/scsi/jazz_esp.c b/drivers/scsi/jazz_esp.c index 19dd4b9..d742c73 100644 --- a/drivers/scsi/jazz_esp.c +++ b/drivers/scsi/jazz_esp.c @@ -15,6 +15,7 @@ #include <linux/blkdev.h> #include <linux/proc_fs.h> #include <linux/stat.h> +#include <linux/dma-mapping.h> #include "scsi.h" #include <scsi/scsi_host.h> @@ -240,12 +241,13 @@ static void dma_mmu_get_scsi_one (struct NCR_ESP *esp, struct scsi_cmnd *sp) static void dma_mmu_get_scsi_sgl (struct NCR_ESP *esp, struct scsi_cmnd *sp) { int sz = sp->SCp.buffers_residual; - struct scatterlist *sg = (struct scatterlist *) sp->SCp.buffer; - - while (sz >= 0) { - sg[sz].dma_address = vdma_alloc(CPHYSADDR(page_address(sg[sz].page) + sg[sz].offset), sg[sz].length); - sz--; - } + struct scatterlist *sgl = (struct scatterlist *) sp->SCp.buffer; + struct scatterlist *sg; + int i; + + for_each_sg(sgl, sg, sz, i) + sg->dma_address = vdma_alloc(CPHYSADDR(page_address(sg->page) + sg->offset), sg->length); + sp->SCp.ptr=(char *)(sp->SCp.buffer->dma_address); } @@ -256,13 +258,12 @@ static void dma_mmu_release_scsi_one (struct NCR_ESP *esp, struct scsi_cmnd *sp) static void dma_mmu_release_scsi_sgl (struct NCR_ESP *esp, struct scsi_cmnd 
*sp) { - int sz = sp->use_sg - 1; - struct scatterlist *sg = (struct scatterlist *)sp->request_buffer; - - while(sz >= 0) { - vdma_free(sg[sz].dma_address); - sz--; - } + struct scatterlist *sgl = (struct scatterlist *)sp->request_buffer; + struct scatterlist *sg; + int i; + + for_each_sg(sgl, sg, sp->use_sg, i) + vdma_free(sg->dma_address); } static void dma_advance_sg (struct scsi_cmnd *sp) diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c index 9a12d05..d1f6654 100644 --- a/drivers/scsi/lpfc/lpfc_scsi.c +++ b/drivers/scsi/lpfc/lpfc_scsi.c @@ -169,7 +169,7 @@ static int lpfc_scsi_prep_dma_buf(struct lpfc_hba * phba, struct lpfc_scsi_buf * lpfc_cmd) { struct scsi_cmnd *scsi_cmnd = lpfc_cmd->pCmd; - struct scatterlist *sgel = NULL; + struct scatterlist *sgel = NULL, *sg; struct fcp_cmnd *fcp_cmnd = lpfc_cmd->fcp_cmnd; struct ulp_bde64 *bpl = lpfc_cmd->fcp_bpl; IOCB_t *iocb_cmd = &lpfc_cmd->cur_iocbq.iocb; @@ -214,18 +214,17 @@ lpfc_scsi_prep_dma_buf(struct lpfc_hba * phba, struct lpfc_scsi_buf * lpfc_cmd) * single scsi command. Just run through the seg_cnt and format * the bde's. 
 	 */
-	for (i = 0; i < lpfc_cmd->seg_cnt; i++) {
-		physaddr = sg_dma_address(sgel);
+	for_each_sg(sgel, sg, lpfc_cmd->seg_cnt, i) {
+		physaddr = sg_dma_address(sg);
 		bpl->addrLow = le32_to_cpu(putPaddrLow(physaddr));
 		bpl->addrHigh = le32_to_cpu(putPaddrHigh(physaddr));
-		bpl->tus.f.bdeSize = sg_dma_len(sgel);
+		bpl->tus.f.bdeSize = sg_dma_len(sg);
 		if (datadir == DMA_TO_DEVICE)
 			bpl->tus.f.bdeFlags = 0;
 		else
 			bpl->tus.f.bdeFlags = BUFF_USE_RCV;
 		bpl->tus.w = le32_to_cpu(bpl->tus.w);
 		bpl++;
-		sgel++;
 		num_bde++;
 	}
 } else if (scsi_cmnd->request_buffer && scsi_cmnd->request_bufflen) {
diff --git a/drivers/scsi/mac53c94.c b/drivers/scsi/mac53c94.c
index 5806ede..cb569c6 100644
--- a/drivers/scsi/mac53c94.c
+++ b/drivers/scsi/mac53c94.c
@@ -366,7 +366,7 @@ static void cmd_done(struct fsc_state *state, int result)
 static void set_dma_cmds(struct fsc_state *state, struct scsi_cmnd *cmd)
 {
 	int i, dma_cmd, total;
-	struct scatterlist *scl;
+	struct scatterlist *scl, *sg;
 	struct dbdma_cmd *dcmds;
 	dma_addr_t dma_addr;
 	u32 dma_len;
@@ -381,9 +381,9 @@ static void set_dma_cmds(struct fsc_state *state, struct scsi_cmnd *cmd)
 		scl = (struct scatterlist *) cmd->request_buffer;
 		nseg = pci_map_sg(state->pdev, scl, cmd->use_sg,
 				  cmd->sc_data_direction);
-		for (i = 0; i < nseg; ++i) {
-			dma_addr = sg_dma_address(scl);
-			dma_len = sg_dma_len(scl);
+		for_each_sg(scl, sg, nseg, i) {
+			dma_addr = sg_dma_address(sg);
+			dma_len = sg_dma_len(sg);
 			if (dma_len > 0xffff)
 				panic("mac53c94: scatterlist element >= 64k");
 			total += dma_len;
@@ -391,7 +391,6 @@ static void set_dma_cmds(struct fsc_state *state, struct scsi_cmnd *cmd)
 			st_le16(&dcmds->command, dma_cmd);
 			st_le32(&dcmds->phy_addr, dma_addr);
 			dcmds->xfer_status = 0;
-			++scl;
 			++dcmds;
 		}
 	} else {
diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
index 3cce75d..0ea9ef4 100644
--- a/drivers/scsi/megaraid.c
+++ b/drivers/scsi/megaraid.c
@@ -1767,7 +1767,7 @@ __mega_busywait_mbox (adapter_t *adapter)
 static int
 mega_build_sglist(adapter_t *adapter, scb_t *scb, u32 *buf, u32 *len)
 {
-	struct scatterlist *sgl;
+	struct scatterlist *sgl, *sg;
 	struct page *page;
 	unsigned long offset;
 	unsigned int length;
@@ -1832,15 +1832,14 @@ mega_build_sglist(adapter_t *adapter, scb_t *scb, u32 *buf, u32 *len)
 	*len = 0;
 
-	for( idx = 0; idx < sgcnt; idx++, sgl++ ) {
-
+	for_each_sg(sgl, sg, sgcnt, idx) {
 		if( adapter->has_64bit_addr ) {
-			scb->sgl64[idx].address = sg_dma_address(sgl);
-			*len += scb->sgl64[idx].length = sg_dma_len(sgl);
+			scb->sgl64[idx].address = sg_dma_address(sg);
+			*len += scb->sgl64[idx].length = sg_dma_len(sg);
 		} else {
-			scb->sgl[idx].address = sg_dma_address(sgl);
-			*len += scb->sgl[idx].length = sg_dma_len(sgl);
+			scb->sgl[idx].address = sg_dma_address(sg);
+			*len += scb->sgl[idx].length = sg_dma_len(sg);
 		}
 	}
diff --git a/drivers/scsi/megaraid/megaraid_mbox.c b/drivers/scsi/megaraid/megaraid_mbox.c
index 04d0b69..6b1ee7f 100644
--- a/drivers/scsi/megaraid/megaraid_mbox.c
+++ b/drivers/scsi/megaraid/megaraid_mbox.c
@@ -1377,6 +1377,7 @@ static int
 megaraid_mbox_mksgl(adapter_t *adapter, scb_t *scb)
 {
 	struct scatterlist	*sgl;
+	struct scatterlist	*sg;
 	mbox_ccb_t		*ccb;
 	struct page		*page;
 	unsigned long		offset;
@@ -1429,9 +1430,9 @@ megaraid_mbox_mksgl(adapter_t *adapter, scb_t *scb)
 	scb->dma_type = MRAID_DMA_WSG;
 
-	for (i = 0; i < sgcnt; i++, sgl++) {
-		ccb->sgl64[i].address	= sg_dma_address(sgl);
-		ccb->sgl64[i].length	= sg_dma_len(sgl);
+	for_each_sg(sgl, sg, sgcnt, i) {
+		ccb->sgl64[i].address	= sg_dma_address(sg);
+		ccb->sgl64[i].length	= sg_dma_len(sg);
 	}
 
 	// Return count of SG nodes
diff --git a/drivers/scsi/megaraid/megaraid_sas.c b/drivers/scsi/megaraid/megaraid_sas.c
index 7a81267..5260bc6 100644
--- a/drivers/scsi/megaraid/megaraid_sas.c
+++ b/drivers/scsi/megaraid/megaraid_sas.c
@@ -431,7 +431,7 @@ megasas_make_sgl32(struct megasas_instance *instance, struct scsi_cmnd *scp,
 {
 	int i;
 	int sge_count;
-	struct scatterlist *os_sgl;
+	struct scatterlist *os_sgl, *sg;
 
 	/*
 	 * Return 0 if there is no data transfer
@@ -456,9 +456,9 @@ megasas_make_sgl32(struct megasas_instance *instance, struct scsi_cmnd *scp,
 	sge_count = pci_map_sg(instance->pdev, os_sgl, scp->use_sg,
 			       scp->sc_data_direction);
 
-	for (i = 0; i < sge_count; i++, os_sgl++) {
-		mfi_sgl->sge32[i].length = sg_dma_len(os_sgl);
-		mfi_sgl->sge32[i].phys_addr = sg_dma_address(os_sgl);
+	for_each_sg(os_sgl, sg, sge_count, i) {
+		mfi_sgl->sge32[i].length = sg_dma_len(sg);
+		mfi_sgl->sge32[i].phys_addr = sg_dma_address(sg);
 	}
 
 	return sge_count;
@@ -479,7 +479,7 @@ megasas_make_sgl64(struct megasas_instance *instance, struct scsi_cmnd *scp,
 {
 	int i;
 	int sge_count;
-	struct scatterlist *os_sgl;
+	struct scatterlist *os_sgl, *sg;
 
 	/*
 	 * Return 0 if there is no data transfer
@@ -505,9 +505,9 @@ megasas_make_sgl64(struct megasas_instance *instance, struct scsi_cmnd *scp,
 	sge_count = pci_map_sg(instance->pdev, os_sgl, scp->use_sg,
 			       scp->sc_data_direction);
 
-	for (i = 0; i < sge_count; i++, os_sgl++) {
-		mfi_sgl->sge64[i].length = sg_dma_len(os_sgl);
-		mfi_sgl->sge64[i].phys_addr = sg_dma_address(os_sgl);
+	for_each_sg(os_sgl, sg, sge_count, i) {
+		mfi_sgl->sge64[i].length = sg_dma_len(sg);
+		mfi_sgl->sge64[i].phys_addr = sg_dma_address(sg);
 	}
 
 	return sge_count;
diff --git a/drivers/scsi/mesh.c b/drivers/scsi/mesh.c
index e64d1a1..66519aa 100644
--- a/drivers/scsi/mesh.c
+++ b/drivers/scsi/mesh.c
@@ -1257,7 +1257,7 @@ static void handle_msgin(struct mesh_state *ms)
 static void set_dma_cmds(struct mesh_state *ms, struct scsi_cmnd *cmd)
 {
 	int i, dma_cmd, total, off, dtot;
-	struct scatterlist *scl;
+	struct scatterlist *scl, *sg;
 	struct dbdma_cmd *dcmds;
 
 	dma_cmd = ms->tgts[ms->conn_tgt].data_goes_out?
@@ -1267,17 +1267,17 @@ static void set_dma_cmds(struct mesh_state *ms, struct scsi_cmnd *cmd)
 	if (cmd) {
 		cmd->SCp.this_residual = cmd->request_bufflen;
 		if (cmd->use_sg > 0) {
-			int nseg;
+			int nseg, i;
 			total = 0;
 			scl = (struct scatterlist *) cmd->request_buffer;
 			off = ms->data_ptr;
 			nseg = pci_map_sg(ms->pdev, scl, cmd->use_sg,
 					  cmd->sc_data_direction);
-			for (i = 0; i <nseg; ++i, ++scl) {
-				u32 dma_addr = sg_dma_address(scl);
-				u32 dma_len = sg_dma_len(scl);
+			for_each_sg(scl, sg, nseg, i) {
+				u32 dma_addr = sg_dma_address(sg);
+				u32 dma_len = sg_dma_len(sg);
 
-				total += scl->length;
+				total += sg->length;
 				if (off >= dma_len) {
 					off -= dma_len;
 					continue;
diff --git a/drivers/scsi/ncr53c8xx.c b/drivers/scsi/ncr53c8xx.c
index bbf521c..e6210c6 100644
--- a/drivers/scsi/ncr53c8xx.c
+++ b/drivers/scsi/ncr53c8xx.c
@@ -7700,6 +7700,7 @@ static int ncr_scatter(struct ncb *np, struct ccb *cp, struct scsi_cmnd *cmd)
 		segment = ncr_scatter_no_sglist(np, cp, cmd);
 	else if ((use_sg = map_scsi_sg_data(np, cmd)) > 0) {
 		struct scatterlist *scatter = (struct scatterlist *)cmd->request_buffer;
+		struct scatterlist *sg;
 		struct scr_tblmove *data;
 
 		if (use_sg > MAX_SCATTER) {
@@ -7709,9 +7710,9 @@ static int ncr_scatter(struct ncb *np, struct ccb *cp, struct scsi_cmnd *cmd)
 		data = &cp->phys.data[MAX_SCATTER - use_sg];
 
-		for (segment = 0; segment < use_sg; segment++) {
-			dma_addr_t baddr = sg_dma_address(&scatter[segment]);
-			unsigned int len = sg_dma_len(&scatter[segment]);
+		for_each_sg(scatter, sg, use_sg, segment) {
+			dma_addr_t baddr = sg_dma_address(sg);
+			unsigned int len = sg_dma_len(sg);
 
 			ncr_build_sge(np, &data[segment], baddr, len);
 			cp->data_len += len;
diff --git a/drivers/scsi/nsp32.c b/drivers/scsi/nsp32.c
index f6f561d..32426a9 100644
--- a/drivers/scsi/nsp32.c
+++ b/drivers/scsi/nsp32.c
@@ -888,7 +888,7 @@ static int nsp32_reselection(struct scsi_cmnd *SCpnt, unsigned char newlun)
 static int nsp32_setup_sg_table(struct scsi_cmnd *SCpnt)
 {
 	nsp32_hw_data *data = (nsp32_hw_data *)SCpnt->device->host->hostdata;
-	struct scatterlist *sgl;
+	struct scatterlist *sgl, *sg;
 	nsp32_sgtable *sgt = data->cur_lunt->sglun->sgt;
 	int num, i;
 	u32_le l;
@@ -906,13 +906,12 @@ static int nsp32_setup_sg_table(struct scsi_cmnd *SCpnt)
 		sgl = (struct scatterlist *)SCpnt->request_buffer;
 		num = pci_map_sg(data->Pci, sgl, SCpnt->use_sg,
 				 SCpnt->sc_data_direction);
-		for (i = 0; i < num; i++) {
+		for_each_sg(sgl, sg, num, i) {
 			/*
 			 * Build nsp32_sglist, substitute sg dma addresses.
 			 */
-			sgt[i].addr = cpu_to_le32(sg_dma_address(sgl));
-			sgt[i].len  = cpu_to_le32(sg_dma_len(sgl));
-			sgl++;
+			sgt[i].addr = cpu_to_le32(sg_dma_address(sg));
+			sgt[i].len  = cpu_to_le32(sg_dma_len(sg));
 
 			if (le32_to_cpu(sgt[i].len) > 0x10000) {
 				nsp32_msg(KERN_ERR,
diff --git a/drivers/scsi/pcmcia/sym53c500_cs.c b/drivers/scsi/pcmcia/sym53c500_cs.c
index ffe75c4..b1e9270 100644
--- a/drivers/scsi/pcmcia/sym53c500_cs.c
+++ b/drivers/scsi/pcmcia/sym53c500_cs.c
@@ -442,12 +442,13 @@ SYM53C500_intr(int irq, void *dev_id)
 		if (!curSC->use_sg)	/* Don't use scatter-gather */
 			SYM53C500_pio_write(fast_pio, port_base, curSC->request_buffer, curSC->request_bufflen);
 		else {	/* use scatter-gather */
+			struct scatterlist *sg;
+			int i;
+
 			sgcount = curSC->use_sg;
 			sglist = curSC->request_buffer;
-			while (sgcount--) {
-				SYM53C500_pio_write(fast_pio, port_base, page_address(sglist->page) + sglist->offset, sglist->length);
-				sglist++;
-			}
+			for_each_sg(sglist, sg, sgcount, i)
+				SYM53C500_pio_write(fast_pio, port_base, page_address(sg->page) + sg->offset, sg->length);
 		}
 		REG0(port_base);
 	}
@@ -463,12 +464,13 @@ SYM53C500_intr(int irq, void *dev_id)
 		if (!curSC->use_sg)	/* Don't use scatter-gather */
 			SYM53C500_pio_read(fast_pio, port_base, curSC->request_buffer, curSC->request_bufflen);
 		else {	/* Use scatter-gather */
+			struct scatterlist *sg;
+			int i;
+
 			sgcount = curSC->use_sg;
 			sglist = curSC->request_buffer;
-			while (sgcount--) {
-				SYM53C500_pio_read(fast_pio, port_base, page_address(sglist->page) + sglist->offset, sglist->length);
-				sglist++;
-			}
+			for_each_sg(sglist, sg, sgcount, i)
+				SYM53C500_pio_read(fast_pio, port_base, page_address(sg->page) + sg->offset, sg->length);
 		}
 		REG0(port_base);
 	}
diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c
index 54d8bdf..bd805ec 100644
--- a/drivers/scsi/qla1280.c
+++ b/drivers/scsi/qla1280.c
@@ -2775,7 +2775,7 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 	struct device_reg __iomem *reg = ha->iobase;
 	struct scsi_cmnd *cmd = sp->cmd;
 	cmd_a64_entry_t *pkt;
-	struct scatterlist *sg = NULL;
+	struct scatterlist *sg = NULL, *s;
 	__le32 *dword_ptr;
 	dma_addr_t dma_handle;
 	int status = 0;
@@ -2889,13 +2889,16 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 	 * Load data segments.
 	 */
 	if (seg_cnt) {	/* If data transfer. */
+		int remseg = seg_cnt;
 		/* Setup packet address segment pointer. */
 		dword_ptr = (u32 *)&pkt->dseg_0_address;
 
 		if (cmd->use_sg) {	/* If scatter gather */
 			/* Load command entry data segments. */
-			for (cnt = 0; cnt < 2 && seg_cnt; cnt++, seg_cnt--) {
-				dma_handle = sg_dma_address(sg);
+			for_each_sg(sg, s, seg_cnt, cnt) {
+				if (cnt == 2)
+					break;
+				dma_handle = sg_dma_address(s);
 #if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_SGI_SN2)
 				if (ha->flags.use_pci_vchannel)
 					sn_pci_set_vchan(ha->pdev,
@@ -2906,12 +2909,12 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 					cpu_to_le32(pci_dma_lo32(dma_handle));
 				*dword_ptr++ =
 					cpu_to_le32(pci_dma_hi32(dma_handle));
-				*dword_ptr++ = cpu_to_le32(sg_dma_len(sg));
-				sg++;
+				*dword_ptr++ = cpu_to_le32(sg_dma_len(s));
 				dprintk(3, "S/G Segment phys_addr=%x %x, len=0x%x\n",
 					cpu_to_le32(pci_dma_hi32(dma_handle)),
 					cpu_to_le32(pci_dma_lo32(dma_handle)),
-					cpu_to_le32(sg_dma_len(sg)));
+					cpu_to_le32(sg_dma_len(sg_next(s))));
+				remseg--;
 			}
 			dprintk(5, "qla1280_64bit_start_scsi: Scatter/gather "
 				"command packet data - b %i, t %i, l %i \n",
@@ -2926,7 +2929,9 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 			dprintk(3, "S/G Building Continuation...seg_cnt=0x%x "
 				"remains\n", seg_cnt);
 
-			while (seg_cnt > 0) {
+			while (remseg > 0) {
+				/* Update sg start */
+				sg = s;
 				/* Adjust ring index. */
 				ha->req_ring_index++;
 				if (ha->req_ring_index == REQUEST_ENTRY_CNT) {
@@ -2952,9 +2957,10 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 					(u32 *)&((struct cont_a64_entry *) pkt)->dseg_0_address;
 
 				/* Load continuation entry data segments. */
-				for (cnt = 0; cnt < 5 && seg_cnt;
-				     cnt++, seg_cnt--) {
-					dma_handle = sg_dma_address(sg);
+				for_each_sg(sg, s, remseg, cnt) {
+					if (cnt == 5)
+						break;
+					dma_handle = sg_dma_address(s);
 #if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_SGI_SN2)
 					if (ha->flags.use_pci_vchannel)
 						sn_pci_set_vchan(ha->pdev,
@@ -2966,12 +2972,12 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 					*dword_ptr++ =
 						cpu_to_le32(pci_dma_hi32(dma_handle));
 					*dword_ptr++ =
-						cpu_to_le32(sg_dma_len(sg));
+						cpu_to_le32(sg_dma_len(s));
 					dprintk(3, "S/G Segment Cont. phys_addr=%x %x, len=0x%x\n",
 						cpu_to_le32(pci_dma_hi32(dma_handle)),
 						cpu_to_le32(pci_dma_lo32(dma_handle)),
-						cpu_to_le32(sg_dma_len(sg)));
-					sg++;
+						cpu_to_le32(sg_dma_len(s)));
+					remseg--;
 				}
 				dprintk(5, "qla1280_64bit_start_scsi: "
 					"continuation packet data - b %i, t "
@@ -3062,7 +3068,7 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 	struct device_reg __iomem *reg = ha->iobase;
 	struct scsi_cmnd *cmd = sp->cmd;
 	struct cmd_entry *pkt;
-	struct scatterlist *sg = NULL;
+	struct scatterlist *sg = NULL, *s;
 	__le32 *dword_ptr;
 	int status = 0;
 	int cnt;
@@ -3188,6 +3194,7 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 	 * Load data segments.
 	 */
 	if (seg_cnt) {
+		int remseg = seg_cnt;
 		/* Setup packet address segment pointer. */
 		dword_ptr = &pkt->dseg_0_address;
 
@@ -3196,22 +3203,25 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 			qla1280_dump_buffer(1, (char *)sg, 4 * 16);
 
 			/* Load command entry data segments. */
-			for (cnt = 0; cnt < 4 && seg_cnt; cnt++, seg_cnt--) {
+			for_each_sg(sg, s, seg_cnt, cnt) {
+				if (cnt == 4)
+					break;
 				*dword_ptr++ =
-					cpu_to_le32(pci_dma_lo32(sg_dma_address(sg)));
-				*dword_ptr++ =
-					cpu_to_le32(sg_dma_len(sg));
+					cpu_to_le32(pci_dma_lo32(sg_dma_address(s)));
+				*dword_ptr++ = cpu_to_le32(sg_dma_len(s));
 				dprintk(3, "S/G Segment phys_addr=0x%lx, len=0x%x\n",
-					(pci_dma_lo32(sg_dma_address(sg))),
-					(sg_dma_len(sg)));
-				sg++;
+					(pci_dma_lo32(sg_dma_address(s))),
+					(sg_dma_len(s)));
+				remseg--;
 			}
 			/*
 			 * Build continuation packets.
 			 */
 			dprintk(3, "S/G Building Continuation"
 				"...seg_cnt=0x%x remains\n", seg_cnt);
-			while (seg_cnt > 0) {
+			while (remseg > 0) {
+				/* Continue from end point */
+				sg = s;
 				/* Adjust ring index. */
 				ha->req_ring_index++;
 				if (ha->req_ring_index == REQUEST_ENTRY_CNT) {
@@ -3239,18 +3249,16 @@ qla1280_32bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
 					&((struct cont_entry *) pkt)->dseg_0_address;
 
 				/* Load continuation entry data segments. */
-				for (cnt = 0; cnt < 7 && seg_cnt;
-				     cnt++, seg_cnt--) {
+				for_each_sg(sg, s, remseg, cnt) {
+					if (cnt == 7)
+						break;
 					*dword_ptr++ =
-						cpu_to_le32(pci_dma_lo32(sg_dma_address(sg)));
+						cpu_to_le32(pci_dma_lo32(sg_dma_address(s)));
 					*dword_ptr++ =
-						cpu_to_le32(sg_dma_len(sg));
+						cpu_to_le32(sg_dma_len(s));
 					dprintk(1, "S/G Segment Cont. phys_addr=0x%x, "
 						"len=0x%x\n",
-						cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))),
-						cpu_to_le32(sg_dma_len(sg)));
-					sg++;
+						cpu_to_le32(pci_dma_lo32(sg_dma_address(s))),
+						cpu_to_le32(sg_dma_len(s)));
+					remseg--;
 				}
 				dprintk(5, "qla1280_32bit_start_scsi: "
 					"continuation packet data - "
diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
index c5b3c61..10251bf 100644
--- a/drivers/scsi/qla2xxx/qla_iocb.c
+++ b/drivers/scsi/qla2xxx/qla_iocb.c
@@ -671,12 +671,11 @@ qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt,
 
 	/* Load data segments */
 	if (cmd->use_sg != 0) {
+		struct scatterlist *sgl = cmd->request_buffer;
 		struct scatterlist *cur_seg;
-		struct scatterlist *end_seg;
+		int i;
 
-		cur_seg = (struct scatterlist *)cmd->request_buffer;
-		end_seg = cur_seg + tot_dsds;
-		while (cur_seg < end_seg) {
+		for_each_sg(sgl, cur_seg, tot_dsds, i) {
 			dma_addr_t	sle_dma;
 			cont_a64_entry_t *cont_pkt;
 
@@ -696,8 +695,6 @@ qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt,
 			*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
 			*cur_dsd++ = cpu_to_le32(sg_dma_len(cur_seg));
 			avail_dsds--;
-
-			cur_seg++;
 		}
 	} else {
 		*cur_dsd++ = cpu_to_le32(LSD(sp->dma_handle));
diff --git a/drivers/scsi/qla4xxx/ql4_iocb.c b/drivers/scsi/qla4xxx/ql4_iocb.c
index a216a17..82a894b 100644
--- a/drivers/scsi/qla4xxx/ql4_iocb.c
+++ b/drivers/scsi/qla4xxx/ql4_iocb.c
@@ -156,12 +156,11 @@ static void qla4xxx_build_scsi_iocbs(struct srb *srb,
 
 	/* Load data segments */
 	if (cmd->use_sg) {
+		struct scatterlist *sgl = cmd->request_buffer;
 		struct scatterlist *cur_seg;
-		struct scatterlist *end_seg;
+		int i;
 
-		cur_seg = (struct scatterlist *)cmd->request_buffer;
-		end_seg = cur_seg + tot_dsds;
-		while (cur_seg < end_seg) {
+		for_each_sg(sgl, cur_seg, tot_dsds, i) {
 			dma_addr_t sle_dma;
 
 			/* Allocate additional continuation packets? */
@@ -182,7 +181,6 @@ static void qla4xxx_build_scsi_iocbs(struct srb *srb,
 			avail_dsds--;
 
 			cur_dsd++;
-			cur_seg++;
 		}
 	} else {
 		cur_dsd->base.addrLow = cpu_to_le32(LSDW(srb->dma_handle));
diff --git a/drivers/scsi/qlogicfas408.c b/drivers/scsi/qlogicfas408.c
index 2e7db18..ccc7d3f 100644
--- a/drivers/scsi/qlogicfas408.c
+++ b/drivers/scsi/qlogicfas408.c
@@ -315,18 +315,19 @@ static unsigned int ql_pcmd(struct scsi_cmnd *cmd)
 			ql_pdma(priv, phase, cmd->request_buffer,
 				cmd->request_bufflen);
 		else {
+			struct scatterlist *sg;
+
 			sgcount = cmd->use_sg;
 			sglist = cmd->request_buffer;
-			while (sgcount--) {
+			for_each_sg(sglist, sg, sgcount, i) {
 				if (priv->qabort) {
 					REG0;
 					return ((priv->qabort == 1 ?
 						 DID_ABORT : DID_RESET) << 16);
 				}
-				buf = page_address(sglist->page) + sglist->offset;
-				if (ql_pdma(priv, phase, buf, sglist->length))
+				buf = page_address(sg->page) + sg->offset;
+				if (ql_pdma(priv, phase, buf, sg->length))
 					break;
-				sglist++;
 			}
 		}
 		REG0;
diff --git a/drivers/scsi/qlogicpti.c b/drivers/scsi/qlogicpti.c
index c4195ea..e36e6cd 100644
--- a/drivers/scsi/qlogicpti.c
+++ b/drivers/scsi/qlogicpti.c
@@ -868,7 +868,7 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
 			   struct qlogicpti *qpti, u_int in_ptr, u_int out_ptr)
 {
 	struct dataseg *ds;
-	struct scatterlist *sg;
+	struct scatterlist *sg, *s;
 	int i, n;
 
 	if (Cmnd->use_sg) {
@@ -884,11 +884,12 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
 		n = sg_count;
 		if (n > 4)
 			n = 4;
-		for (i = 0; i < n; i++, sg++) {
-			ds[i].d_base = sg_dma_address(sg);
-			ds[i].d_count = sg_dma_len(sg);
+		for_each_sg(sg, s, n, i) {
+			ds[i].d_base = sg_dma_address(s);
+			ds[i].d_count = sg_dma_len(s);
 		}
 		sg_count -= 4;
+		sg = s;
 		while (sg_count > 0) {
 			struct Continuation_Entry *cont;
@@ -907,9 +908,9 @@ static inline int load_cmd(struct scsi_cmnd *Cmnd, struct Command_Entry *cmd,
 			n = sg_count;
 			if (n > 7)
 				n = 7;
-			for (i = 0; i < n; i++, sg++) {
-				ds[i].d_base = sg_dma_address(sg);
-				ds[i].d_count = sg_dma_len(sg);
+			for_each_sg(sg, s, n, i) {
+				ds[i].d_base = sg_dma_address(s);
+				ds[i].d_count = sg_dma_len(s);
 			}
 			sg_count -= n;
+			sg = s;
 		}
diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 06229f2..08ad9ea 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -38,6 +38,7 @@
 #include <linux/proc_fs.h>
 #include <linux/vmalloc.h>
 #include <linux/moduleparam.h>
+#include <linux/scatterlist.h>
 #include <linux/blkdev.h>
 
 #include "scsi.h"
@@ -600,7 +601,7 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 	int k, req_len, act_len, len, active;
 	void * kaddr;
 	void * kaddr_off;
-	struct scatterlist * sgpnt;
+	struct scatterlist * sgpnt, * sg;
 
 	if (0 == scp->request_bufflen)
 		return 0;
@@ -621,14 +622,15 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 	}
 	sgpnt = (struct scatterlist *)scp->request_buffer;
 	active = 1;
-	for (k = 0, req_len = 0, act_len = 0; k < scp->use_sg; ++k, ++sgpnt) {
+	req_len = act_len = 0;
+	for_each_sg(sgpnt, sg, scp->use_sg, k) {
 		if (active) {
 			kaddr = (unsigned char *)
-				kmap_atomic(sgpnt->page, KM_USER0);
+				kmap_atomic(sg->page, KM_USER0);
 			if (NULL == kaddr)
 				return (DID_ERROR << 16);
-			kaddr_off = (unsigned char *)kaddr + sgpnt->offset;
-			len = sgpnt->length;
+			kaddr_off = (unsigned char *)kaddr + sg->offset;
+			len = sg->length;
 			if ((req_len + len) > arr_len) {
 				active = 0;
 				len = arr_len - req_len;
@@ -637,7 +639,7 @@ static int fill_from_dev_buffer(struct scsi_cmnd * scp, unsigned char * arr,
 			kunmap_atomic(kaddr, KM_USER0);
 			act_len += len;
 		}
-		req_len += sgpnt->length;
+		req_len += sg->length;
 	}
 	if (scp->resid)
 		scp->resid -= act_len;
diff --git a/drivers/scsi/sym53c416.c b/drivers/scsi/sym53c416.c
index 2ca9505..06c765c 100644
--- a/drivers/scsi/sym53c416.c
+++ b/drivers/scsi/sym53c416.c
@@ -332,7 +332,7 @@ static irqreturn_t sym53c416_intr_handle(int irq, void *dev_id)
 	int i;
 	unsigned long flags = 0;
 	unsigned char status_reg, pio_int_reg, int_reg;
-	struct scatterlist *sglist;
+	struct scatterlist *sglist, *sg;
 	unsigned int sgcount;
 	unsigned int tot_trans = 0;
@@ -437,11 +437,8 @@ static irqreturn_t sym53c416_intr_handle(int irq, void *dev_id)
 			{
 				sgcount = current_command->use_sg;
 				sglist = current_command->request_buffer;
-				while(sgcount--)
-				{
-					tot_trans += sym53c416_write(base, SG_ADDRESS(sglist), sglist->length);
-					sglist++;
-				}
+				for_each_sg(sglist, sg, sgcount, i)
+					tot_trans += sym53c416_write(base, SG_ADDRESS(sg), sg->length);
 			}
 			if(tot_trans < current_command->underflow)
 				printk(KERN_WARNING "sym53c416: Underflow, wrote %d bytes, request for %d bytes.\n", tot_trans, current_command->underflow);
diff --git a/drivers/scsi/sym53c8xx_2/sym_glue.c b/drivers/scsi/sym53c8xx_2/sym_glue.c
index 4d78c7e..f39e3ec 100644
--- a/drivers/scsi/sym53c8xx_2/sym_glue.c
+++ b/drivers/scsi/sym53c8xx_2/sym_glue.c
@@ -371,6 +371,7 @@ static int sym_scatter(struct sym_hcb *np, struct sym_ccb *cp, struct scsi_cmnd
 		segment = sym_scatter_no_sglist(np, cp, cmd);
 	else if ((use_sg = map_scsi_sg_data(np, cmd)) > 0) {
 		struct scatterlist *scatter = (struct scatterlist *)cmd->request_buffer;
+		struct scatterlist *sg;
 		struct sym_tcb *tp = &np->target[cp->target];
 		struct sym_tblmove *data;
 
@@ -381,9 +382,9 @@ static int sym_scatter(struct sym_hcb *np, struct sym_ccb *cp, struct scsi_cmnd
 		data = &cp->phys.data[SYM_CONF_MAX_SG - use_sg];
 
-		for (segment = 0; segment < use_sg; segment++) {
-			dma_addr_t baddr = sg_dma_address(&scatter[segment]);
-			unsigned int len = sg_dma_len(&scatter[segment]);
+		for_each_sg(scatter, sg, use_sg, segment) {
+			dma_addr_t baddr = sg_dma_address(sg);
+			unsigned int len = sg_dma_len(sg);
 
 			if ((len & 1) && (tp->head.wval & EWS)) {
 				len++;
diff --git a/drivers/scsi/u14-34f.c b/drivers/scsi/u14-34f.c
index 3de08a1..db17ba4 100644
--- a/drivers/scsi/u14-34f.c
+++ b/drivers/scsi/u14-34f.c
@@ -1111,7 +1111,7 @@ static int u14_34f_detect(struct scsi_host_template *tpnt) {
 static void map_dma(unsigned int i, unsigned int j) {
    unsigned int data_len = 0;
    unsigned int k, count, pci_dir;
-   struct scatterlist *sgpnt;
+   struct scatterlist *sgpnt, *sg;
    struct mscp *cpp;
    struct scsi_cmnd *SCpnt;
@@ -1140,10 +1140,10 @@ static void map_dma(unsigned int i, unsigned int j) {
    sgpnt = (struct scatterlist *) SCpnt->request_buffer;
    count = pci_map_sg(HD(j)->pdev, sgpnt, SCpnt->use_sg, pci_dir);
 
-   for (k = 0; k < count; k++) {
-      cpp->sglist[k].address = H2DEV(sg_dma_address(&sgpnt[k]));
-      cpp->sglist[k].num_bytes = H2DEV(sg_dma_len(&sgpnt[k]));
-      data_len += sgpnt[k].length;
+   for_each_sg(sgpnt, sg, count, k) {
+      cpp->sglist[k].address = H2DEV(sg_dma_address(sg));
+      cpp->sglist[k].num_bytes = H2DEV(sg_dma_len(sg));
+      data_len += sg->length;
    }
 
    cpp->sg = TRUE;
diff --git a/drivers/scsi/ultrastor.c b/drivers/scsi/ultrastor.c
index 56906ab..1b721ec 100644
--- a/drivers/scsi/ultrastor.c
+++ b/drivers/scsi/ultrastor.c
@@ -675,16 +675,16 @@ static const char *ultrastor_info(struct Scsi_Host * shpnt)
 static inline void build_sg_list(struct mscp *mscp, struct scsi_cmnd *SCpnt)
 {
-	struct scatterlist *sl;
+	struct scatterlist *sl, *sg;
 	long transfer_length = 0;
 	int i, max;
 
 	sl = (struct scatterlist *) SCpnt->request_buffer;
 	max = SCpnt->use_sg;
-	for (i = 0; i < max; i++) {
-		mscp->sglist[i].address = isa_page_to_bus(sl[i].page) + sl[i].offset;
-		mscp->sglist[i].num_bytes = sl[i].length;
-		transfer_length += sl[i].length;
+	for_each_sg(sl, sg, max, i) {
+		mscp->sglist[i].address = isa_page_to_bus(sg->page) + sg->offset;
+		mscp->sglist[i].num_bytes = sg->length;
+		transfer_length += sg->length;
 	}
 	mscp->number_of_sg_list = max;
 	mscp->transfer_data = isa_virt_to_bus(mscp->sglist);
diff --git a/drivers/scsi/wd7000.c b/drivers/scsi/wd7000.c
index 30be765..85aaa5e 100644
--- a/drivers/scsi/wd7000.c
+++ b/drivers/scsi/wd7000.c
@@ -1108,6 +1108,7 @@ static int wd7000_queuecommand(struct scsi_cmnd *SCpnt,
 
 	if (SCpnt->use_sg) {
 		struct scatterlist *sg = (struct scatterlist *) SCpnt->request_buffer;
+		struct scatterlist *s;
 		unsigned i;
 
 		if (SCpnt->device->host->sg_tablesize == SG_NONE) {
@@ -1120,9 +1121,9 @@ static int wd7000_queuecommand(struct scsi_cmnd *SCpnt,
 		any2scsi(scb->dataptr, (int) sgb);
 		any2scsi(scb->maxlen, SCpnt->use_sg * sizeof(Sgb));
 
-		for (i = 0; i < SCpnt->use_sg; i++) {
-			any2scsi(sgb[i].ptr, isa_page_to_bus(sg[i].page) + sg[i].offset);
-			any2scsi(sgb[i].len, sg[i].length);
+		for_each_sg(sg, s, SCpnt->use_sg, i) {
+			any2scsi(sgb[i].ptr, isa_page_to_bus(s->page) + s->offset);
+			any2scsi(sgb[i].len, s->length);
 		}
 	} else {
 		scb->op = 0;
end of thread, other threads:[~2007-05-10 12:35 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-05-10 10:21 [PATCH 0/13] Chaining sg lists for bio IO commands v3 Jens Axboe
2007-05-10 10:21 ` [PATCH 1/13] crypto: don't pollute the global namespace with sg_next() Jens Axboe
2007-05-10 10:21 ` [PATCH 2/13] Add sg helpers for iterating over a scatterlist table Jens Axboe
2007-05-10 10:39   ` Andrew Morton
2007-05-10 10:42     ` Jens Axboe
2007-05-10 10:21 ` [PATCH 3/13] libata: convert to using sg helpers Jens Axboe
2007-05-10 10:21 ` [PATCH 4/13] block: " Jens Axboe
2007-05-10 10:21 ` [PATCH 5/13] scsi: " Jens Axboe
2007-05-10 10:21 ` [PATCH 6/13] i386 dma_map_sg: " Jens Axboe
2007-05-10 10:21 ` [PATCH 7/13] i386 sg: add support for chaining scatterlists Jens Axboe
2007-05-10 10:43   ` Andrew Morton
2007-05-10 10:44     ` Jens Axboe
2007-05-10 10:46       ` Jens Axboe
2007-05-10 10:52         ` Andrew Morton
2007-05-10 11:21           ` Jens Axboe
2007-05-10 10:59   ` Benny Halevy
2007-05-10 11:23     ` Jens Axboe
2007-05-10 10:21 ` [PATCH 8/13] x86-64: update iommu/dma mapping functions to sg helpers Jens Axboe
2007-05-10 10:21 ` [PATCH 9/13] [PATCH] x86-64: enable sg chaining Jens Axboe
2007-05-10 10:21 ` [PATCH 10/13] scsi: simplify scsi_free_sgtable() Jens Axboe
2007-05-10 10:21 ` [PATCH 11/13] SCSI: support for allocating large scatterlists Jens Axboe
2007-05-10 10:48   ` Andrew Morton
2007-05-10 10:52     ` Jens Axboe
2007-05-10 12:38       ` Alan Cox
2007-05-10 10:21 ` [PATCH 12/13] ll_rw_blk: temporarily enable max_segments tweaking Jens Axboe
2007-05-10 10:49   ` Andrew Morton
2007-05-10 11:20     ` Jens Axboe
2007-05-10 10:21 ` [PATCH 13/13] scsi drivers: sg chaining Jens Axboe