* [PATCHSET #upstream] block/libata: update and use block layer padding and draining
From: Tejun Heo @ 2008-02-05  7:53 UTC
  To: jeff, linux-ide, linux-scsi, fujita.tomonori, Jens.Axboe,
	James.Bottomley


This patchset updates block layer padding and draining support and
makes libata use it.  It's based on James Bottomley's initial work and,
of the five, the last two patches are from James with some
modifications.

Please read the following thread for more info.

  http://thread.gmane.org/gmane.linux.scsi/37185

This patchset is on top of

  upstream (a6af42fc9a12165136d82206ad52f18c5955ce87)
+ kill-n_iter-and-fix-fsl patch [1]

 block/blk-core.c              |    2
 block/blk-map.c               |   19 +++++
 block/blk-merge.c             |    3
 block/blk-settings.c          |    7 +-
 block/bsg.c                   |    8 +-
 block/scsi_ioctl.c            |    3
 drivers/ata/ahci.c            |    5 -
 drivers/ata/libata-core.c     |  145 ++----------------------------------------
 drivers/ata/libata-scsi.c     |   54 ++++++++++-----
 drivers/ata/pata_icside.c     |    8 --
 drivers/ata/sata_fsl.c        |   13 ---
 drivers/ata/sata_mv.c         |    6 -
 drivers/ata/sata_sil24.c      |    5 -
 drivers/scsi/ipr.c            |    4 -
 drivers/scsi/libsas/sas_ata.c |    4 -
 drivers/scsi/scsi_lib.c       |    8 +-
 include/linux/blkdev.h        |    8 +-
 include/linux/libata.h        |   28 --------
 18 files changed, 95 insertions(+), 235 deletions(-)

Thanks.

--
tejun

[1] http://article.gmane.org/gmane.linux.ide/28038


* [PATCH 1/5] block: update bio according to DMA alignment padding
From: Tejun Heo @ 2008-02-05  7:53 UTC
  To: jeff, linux-ide, linux-scsi, fujita.tomonori, Jens.Axboe
  Cc: Tejun Heo, James Bottomley

DMA start address and transfer size alignment for PC requests are
achieved using bio_copy_user() instead of bio_map_user().  This works
because bio_copy_user() always uses full pages and block DMA alignment
isn't allowed to go over PAGE_SIZE.

However, the implementation didn't update the last bio of the request
to make this padding visible to lower layers.  This patch makes
blk_rq_map_user() extend the last bio such that it includes the
padding area and the size of the area pointed to by the request is
properly aligned.
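
As an aside, since the mask arithmetic in the hunk below is terse:
queue_dma_alignment() returns alignment minus one, i.e. a mask, and
(mask & ~len) + 1 is the number of bytes needed to round len up to the
next alignment boundary.  A minimal userspace sketch (illustrative
only; pad_to_alignment() is an invented name):

#include <assert.h>

/* Mirrors the padding arithmetic added below; @mask is what
 * queue_dma_alignment() returns, i.e. alignment - 1.
 */
static unsigned int pad_to_alignment(unsigned int len, unsigned int mask)
{
        if (len & mask)
                len += (mask & ~len) + 1;  /* next multiple of mask + 1 */
        return len;
}

int main(void)
{
        assert(pad_to_alignment(5, 3) == 8);        /* 4-byte DMA alignment */
        assert(pad_to_alignment(8, 3) == 8);        /* already aligned */
        assert(pad_to_alignment(513, 511) == 1024); /* 512-byte default */
        return 0;
}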

Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
---
 block/blk-map.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 955d75c..103b1df 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -139,6 +139,23 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
 		ubuf += ret;
 	}
 
+	/*
+	 * __blk_rq_map_user() copies the buffers if starting address
+	 * or length isn't aligned.  As the copied buffer is always
+	 * page aligned, we know that there's enough room for padding.
+	 * Extend the last bio and update rq->data_len accordingly.
+	 *
+	 * On unmap, bio_uncopy_user() will use unmodified
+	 * bio_map_data pointed to by bio->bi_private.
+	 */
+	if (len & queue_dma_alignment(q)) {
+		unsigned int pad_len = (queue_dma_alignment(q) & ~len) + 1;
+		struct bio *bio = rq->biotail;
+
+		bio->bi_io_vec[bio->bi_vcnt - 1].bv_len += pad_len;
+		bio->bi_size += pad_len;
+	}
+
 	rq->buffer = rq->data = NULL;
 	return 0;
 unmap_rq:
-- 
1.5.2.4



* [PATCH 2/5] block: add request->raw_data_len
From: Tejun Heo @ 2008-02-05  7:53 UTC
  To: jeff, linux-ide, linux-scsi, fujita.tomonori, Jens.Axboe
  Cc: Tejun Heo, James Bottomley

With padding and draining moved into it, the block layer may now
extend requests as directed by queue parameters, so a request has two
sizes - the original request size and the extended size, which matches
the size of the area pointed to by bios and later by sgs.  The latter size
is what lower layers are primarily interested in when allocating,
filling up DMA tables and setting up the controller.

Both padding and draining extend the data area to accommodate
controller characteristics.  As any controller which speaks SCSI can
handle underflows, feeding it a larger data area is safe.

So, this patch makes the primary data length field, request->data_len,
indicate the size of the full data area and adds a separate length
field, request->raw_data_len, for the unmodified request size.  The
latter is used for reporting to the higher layer (userland) and
wherever the original request size should be fed to the controller or
device.
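
In other words (a sketch, not kernel code; struct req_lens and the
helpers are invented for illustration, only the two field names come
from the patch):

#include <assert.h>

/* Illustrative model of the two lengths this patch introduces. */
struct req_lens {
        unsigned int raw_data_len;      /* what the submitter asked for */
        unsigned int data_len;          /* raw_data_len + pad + drain */
};

/* lower layers size DMA tables from the extended length */
static unsigned int dma_len(const struct req_lens *r)
{
        return r->data_len;
}

/* residue reporting back to userland is based on the raw length */
static unsigned int reported_len(const struct req_lens *r)
{
        return r->raw_data_len;
}

int main(void)
{
        /* e.g. a 510-byte ATAPI transfer padded to 512, plus a
         * hypothetical 4k drain buffer appended by the queue */
        struct req_lens r = { .raw_data_len = 510, .data_len = 512 + 4096 };

        assert(dma_len(&r) >= reported_len(&r));
        return 0;
}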

Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
---
 block/blk-core.c        |    2 ++
 block/blk-map.c         |    2 ++
 block/blk-merge.c       |    1 +
 block/bsg.c             |    8 ++++----
 block/scsi_ioctl.c      |    3 ++-
 drivers/scsi/scsi_lib.c |    8 ++++----
 include/linux/blkdev.h  |    1 +
 7 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 4afb39c..8b004e1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -116,6 +116,7 @@ void rq_init(struct request_queue *q, struct request *rq)
 	rq->ref_count = 1;
 	rq->q = q;
 	rq->special = NULL;
+	rq->raw_data_len = 0;
 	rq->data_len = 0;
 	rq->data = NULL;
 	rq->nr_phys_segments = 0;
@@ -1982,6 +1983,7 @@ void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
 	rq->hard_cur_sectors = rq->current_nr_sectors;
 	rq->hard_nr_sectors = rq->nr_sectors = bio_sectors(bio);
 	rq->buffer = bio_data(bio);
+	rq->raw_data_len = bio->bi_size;
 	rq->data_len = bio->bi_size;
 
 	rq->bio = rq->biotail = bio;
diff --git a/block/blk-map.c b/block/blk-map.c
index 103b1df..1588ea3 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -19,6 +19,7 @@ int blk_rq_append_bio(struct request_queue *q, struct request *rq,
 		rq->biotail->bi_next = bio;
 		rq->biotail = bio;
 
+		rq->raw_data_len += bio->bi_size;
 		rq->data_len += bio->bi_size;
 	}
 	return 0;
@@ -154,6 +155,7 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
 
 		bio->bi_io_vec[bio->bi_vcnt - 1].bv_len += pad_len;
 		bio->bi_size += pad_len;
+		rq->data_len += pad_len;
 	}
 
 	rq->buffer = rq->data = NULL;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 845ef81..480d2bc 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -228,6 +228,7 @@ new_segment:
 			    ((unsigned long)q->dma_drain_buffer) &
 			    (PAGE_SIZE - 1));
 		nsegs++;
+		rq->data_len += q->dma_drain_size;
 	}
 
 	if (sg)
diff --git a/block/bsg.c b/block/bsg.c
index 8917c51..7f3c095 100644
--- a/block/bsg.c
+++ b/block/bsg.c
@@ -437,14 +437,14 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
 	}
 
 	if (rq->next_rq) {
-		hdr->dout_resid = rq->data_len;
-		hdr->din_resid = rq->next_rq->data_len;
+		hdr->dout_resid = rq->raw_data_len;
+		hdr->din_resid = rq->next_rq->raw_data_len;
 		blk_rq_unmap_user(bidi_bio);
 		blk_put_request(rq->next_rq);
 	} else if (rq_data_dir(rq) == READ)
-		hdr->din_resid = rq->data_len;
+		hdr->din_resid = rq->raw_data_len;
 	else
-		hdr->dout_resid = rq->data_len;
+		hdr->dout_resid = rq->raw_data_len;
 
 	/*
 	 * If the request generated a negative error number, return it
diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index 9675b34..e993cac 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -266,7 +266,7 @@ static int blk_complete_sghdr_rq(struct request *rq, struct sg_io_hdr *hdr,
 	hdr->info = 0;
 	if (hdr->masked_status || hdr->host_status || hdr->driver_status)
 		hdr->info |= SG_INFO_CHECK;
-	hdr->resid = rq->data_len;
+	hdr->resid = rq->raw_data_len;
 	hdr->sb_len_wr = 0;
 
 	if (rq->sense_len && hdr->sbp) {
@@ -528,6 +528,7 @@ static int __blk_send_generic(struct request_queue *q, struct gendisk *bd_disk,
 	rq = blk_get_request(q, WRITE, __GFP_WAIT);
 	rq->cmd_type = REQ_TYPE_BLOCK_PC;
 	rq->data = NULL;
+	rq->raw_data_len = 0;
 	rq->data_len = 0;
 	rq->timeout = BLK_DEFAULT_SG_TIMEOUT;
 	memset(rq->cmd, 0, sizeof(rq->cmd));
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index b12fb31..810f87d 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1015,10 +1015,6 @@ static int scsi_init_sgtable(struct request *req, struct scsi_data_buffer *sdb,
 	}
 
 	req->buffer = NULL;
-	if (blk_pc_request(req))
-		sdb->length = req->data_len;
-	else
-		sdb->length = req->nr_sectors << 9;
 
 	/* 
 	 * Next, walk the list, and fill in the addresses and sizes of
@@ -1027,6 +1023,10 @@ static int scsi_init_sgtable(struct request *req, struct scsi_data_buffer *sdb,
 	count = blk_rq_map_sg(req->q, req, sdb->table.sgl);
 	BUG_ON(count > sdb->table.nents);
 	sdb->table.nents = count;
+	if (blk_pc_request(req))
+		sdb->length = req->data_len;
+	else
+		sdb->length = req->nr_sectors << 9;
 	return BLKPREP_OK;
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 90392a9..ee0b021 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -214,6 +214,7 @@ struct request {
 	unsigned int cmd_len;
 	unsigned char cmd[BLK_MAX_CDB];
 
+	unsigned int raw_data_len;
 	unsigned int data_len;
 	unsigned int sense_len;
 	void *data;
-- 
1.5.2.4



* [PATCH 3/5] block: implement request_queue->dma_drain_needed
From: Tejun Heo @ 2008-02-05  7:53 UTC
  To: jeff, linux-ide, linux-scsi, fujita.tomonori, Jens.Axboe
  Cc: Tejun Heo, James Bottomley

Draining shouldn't be done for commands where an overflow may
indicate a data integrity issue.  Add a dma_drain_needed callback to
request_queue.  The drain buffer is appended iff this function returns
non-zero.
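
Registration from a driver then looks roughly like the sketch below
(hypothetical code; my_drain_needed and MY_DRAIN_SIZE are invented
here, see patch 5/5 for libata's real atapi_drain_needed()):

/* hypothetical driver code using the interface added below */
static int my_drain_needed(struct request *rq)
{
        /* drain only packet commands; for fs requests an overflow
         * could mask a real data integrity problem */
        return blk_pc_request(rq);
}

static int my_init_queue(struct request_queue *q)
{
        void *buf = kmalloc(MY_DRAIN_SIZE, GFP_KERNEL);

        if (!buf)
                return -ENOMEM;
        return blk_queue_dma_drain(q, my_drain_needed, buf, MY_DRAIN_SIZE);
}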

Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
---
 block/blk-merge.c      |    2 +-
 block/blk-settings.c   |    7 +++++--
 include/linux/blkdev.h |    7 +++++--
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 480d2bc..d50cfc8 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -220,7 +220,7 @@ new_segment:
 		bvprv = bvec;
 	} /* segments in rq */
 
-	if (q->dma_drain_size) {
+	if (q->dma_drain_size && q->dma_drain_needed(rq)) {
 		sg->page_link &= ~0x02;
 		sg = sg_next(sg);
 		sg_set_page(sg, virt_to_page(q->dma_drain_buffer),
diff --git a/block/blk-settings.c b/block/blk-settings.c
index c8d0c57..0a0b3a4 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -296,6 +296,7 @@ EXPORT_SYMBOL(blk_queue_stack_limits);
  * blk_queue_dma_drain - Set up a drain buffer for excess dma.
  *
  * @q:  the request queue for the device
+ * @dma_drain_needed: fn which returns non-zero if drain is necessary
  * @buf:	physically contiguous buffer
  * @size:	size of the buffer in bytes
  *
@@ -315,14 +316,16 @@ EXPORT_SYMBOL(blk_queue_stack_limits);
  * device can support otherwise there won't be room for the drain
  * buffer.
  */
-int blk_queue_dma_drain(struct request_queue *q, void *buf,
-				unsigned int size)
+int blk_queue_dma_drain(struct request_queue *q,
+			dma_drain_needed_fn *dma_drain_needed,
+			void *buf, unsigned int size)
 {
 	if (q->max_hw_segments < 2 || q->max_phys_segments < 2)
 		return -EINVAL;
 	/* make room for appending the drain */
 	--q->max_hw_segments;
 	--q->max_phys_segments;
+	q->dma_drain_needed = dma_drain_needed;
 	q->dma_drain_buffer = buf;
 	q->dma_drain_size = size;
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ee0b021..3912c5d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -257,6 +257,7 @@ struct bio_vec;
 typedef int (merge_bvec_fn) (struct request_queue *, struct bio *, struct bio_vec *);
 typedef void (prepare_flush_fn) (struct request_queue *, struct request *);
 typedef void (softirq_done_fn)(struct request *);
+typedef int (dma_drain_needed_fn)(struct request *);
 
 enum blk_queue_state {
 	Queue_down,
@@ -293,6 +294,7 @@ struct request_queue
 	merge_bvec_fn		*merge_bvec_fn;
 	prepare_flush_fn	*prepare_flush_fn;
 	softirq_done_fn		*softirq_done_fn;
+	dma_drain_needed_fn	*dma_drain_needed;
 
 	/*
 	 * Dispatch queue sorting
@@ -697,8 +699,9 @@ extern void blk_queue_max_hw_segments(struct request_queue *, unsigned short);
 extern void blk_queue_max_segment_size(struct request_queue *, unsigned int);
 extern void blk_queue_hardsect_size(struct request_queue *, unsigned short);
 extern void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b);
-extern int blk_queue_dma_drain(struct request_queue *q, void *buf,
-			       unsigned int size);
+extern int blk_queue_dma_drain(struct request_queue *q,
+			       dma_drain_needed_fn *dma_drain_needed,
+			       void *buf, unsigned int size);
 extern void blk_queue_segment_boundary(struct request_queue *, unsigned long);
 extern void blk_queue_prep_rq(struct request_queue *, prep_rq_fn *pfn);
 extern void blk_queue_merge_bvec(struct request_queue *, merge_bvec_fn *);
-- 
1.5.2.4



* [PATCH 4/5] libata: eliminate the home grown dma padding in favour of that provided by the block layer
From: Tejun Heo @ 2008-02-05  7:53 UTC
  To: jeff, linux-ide, linux-scsi, fujita.tomonori, Jens.Axboe
  Cc: James Bottomley, Tejun Heo

From: James Bottomley <James.Bottomley@HansenPartnership.com>

ATA requires that all DMA transfers begin and end on word boundaries.
Because of this, a large amount of machinery grew up in ide to adjust
scatterlists on this basis.  However, as of 2.5, the block layer has a
dma_alignment variable which ensures both the beginning and length of a
DMA transfer are aligned on the dma_alignment boundary.  Although the
block layer does adjust the beginning of the transfer to ensure this
happens, it doesn't actually adjust the length; it merely makes sure
that space is allocated for transfers beyond the declared length.  The
upshot of this is that scatterlists may be padded to any size between
the actual length and the length rounded up to the dma_alignment, in
the safe knowledge that memory is allocated in this region.

Right at the moment, SCSI takes the default dma_alignment, which is on
a 512-byte boundary.  Note that this alignment only applies to
transfers coming in from user space.  However, since all kernel
allocations are automatically aligned on a minimum of 32-byte
boundaries, it is safe to adjust them in this manner as well.
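
For example (values illustrative): with the default 512-byte
dma_alignment, a 1000-byte transfer from user space has memory
allocated through byte 1024, so any sg length in [1000, 1024] may be
fed to the hardware:

#include <stdio.h>

int main(void)
{
        unsigned int mask = 511;        /* default dma_alignment: 512 - 1 */
        unsigned int len = 1000;        /* bytes actually requested */
        unsigned int padded = (len + mask) & ~mask;

        /* every length in [len, padded] is backed by allocated memory */
        printf("safe sg length window: [%u, %u]\n", len, padded);
        return 0;
}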

tj: * Adjusting sg after padding is done in block layer.  Make libata
      set queue alignment correctly for ATAPI devices and drop broken
      sg mangling from ata_sg_setup().
    * Use request->raw_data_len for ATAPI transfer chunk size.
    * Killed qc->raw_nbytes.
    * Separated out killing qc->n_iter.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Tejun Heo <htejun@gmail.com>
---
 drivers/ata/ahci.c            |    5 --
 drivers/ata/libata-core.c     |  145 +++--------------------------------------
 drivers/ata/libata-scsi.c     |   23 ++-----
 drivers/ata/pata_icside.c     |    8 --
 drivers/ata/sata_fsl.c        |   13 ----
 drivers/ata/sata_mv.c         |    6 +--
 drivers/ata/sata_sil24.c      |    5 --
 drivers/scsi/ipr.c            |    4 +-
 drivers/scsi/libsas/sas_ata.c |    4 +-
 include/linux/libata.h        |   28 +--------
 10 files changed, 21 insertions(+), 220 deletions(-)

diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index 27c8d56..e75966b 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -1979,16 +1979,11 @@ static int ahci_port_start(struct ata_port *ap)
 	struct ahci_port_priv *pp;
 	void *mem;
 	dma_addr_t mem_dma;
-	int rc;
 
 	pp = devm_kzalloc(dev, sizeof(*pp), GFP_KERNEL);
 	if (!pp)
 		return -ENOMEM;
 
-	rc = ata_pad_alloc(ap, dev);
-	if (rc)
-		return rc;
-
 	mem = dmam_alloc_coherent(dev, AHCI_PORT_PRIV_DMA_SZ, &mem_dma,
 				  GFP_KERNEL);
 	if (!mem)
diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index bdbd55a..490e8d4 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -4476,30 +4476,13 @@ void ata_sg_clean(struct ata_queued_cmd *qc)
 	struct ata_port *ap = qc->ap;
 	struct scatterlist *sg = qc->sg;
 	int dir = qc->dma_dir;
-	void *pad_buf = NULL;
 
 	WARN_ON(sg == NULL);
 
-	VPRINTK("unmapping %u sg elements\n", qc->mapped_n_elem);
+	VPRINTK("unmapping %u sg elements\n", qc->n_elem);
 
-	/* if we padded the buffer out to 32-bit bound, and data
-	 * xfer direction is from-device, we must copy from the
-	 * pad buffer back into the supplied buffer
-	 */
-	if (qc->pad_len && !(qc->tf.flags & ATA_TFLAG_WRITE))
-		pad_buf = ap->pad + (qc->tag * ATA_DMA_PAD_SZ);
-
-	if (qc->mapped_n_elem)
-		dma_unmap_sg(ap->dev, sg, qc->mapped_n_elem, dir);
-	/* restore last sg */
-	if (qc->last_sg)
-		*qc->last_sg = qc->saved_last_sg;
-	if (pad_buf) {
-		struct scatterlist *psg = &qc->extra_sg[1];
-		void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
-		memcpy(addr + psg->offset, pad_buf, qc->pad_len);
-		kunmap_atomic(addr, KM_IRQ0);
-	}
+	if (qc->n_elem)
+		dma_unmap_sg(ap->dev, sg, qc->n_elem, dir);
 
 	qc->flags &= ~ATA_QCFLAG_DMAMAP;
 	qc->sg = NULL;
@@ -4765,97 +4748,6 @@ void ata_sg_init(struct ata_queued_cmd *qc, struct scatterlist *sg,
 	qc->cursg = qc->sg;
 }
 
-static unsigned int ata_sg_setup_extra(struct ata_queued_cmd *qc,
-				       unsigned int *n_elem_extra,
-				       unsigned int *nbytes_extra)
-{
-	struct ata_port *ap = qc->ap;
-	unsigned int n_elem = qc->n_elem;
-	struct scatterlist *lsg, *copy_lsg = NULL, *tsg = NULL, *esg = NULL;
-
-	*n_elem_extra = 0;
-	*nbytes_extra = 0;
-
-	/* needs padding? */
-	qc->pad_len = qc->nbytes & 3;
-
-	if (likely(!qc->pad_len))
-		return n_elem;
-
-	/* locate last sg and save it */
-	lsg = sg_last(qc->sg, n_elem);
-	qc->last_sg = lsg;
-	qc->saved_last_sg = *lsg;
-
-	sg_init_table(qc->extra_sg, ARRAY_SIZE(qc->extra_sg));
-
-	if (qc->pad_len) {
-		struct scatterlist *psg = &qc->extra_sg[1];
-		void *pad_buf = ap->pad + (qc->tag * ATA_DMA_PAD_SZ);
-		unsigned int offset;
-
-		WARN_ON(qc->dev->class != ATA_DEV_ATAPI);
-
-		memset(pad_buf, 0, ATA_DMA_PAD_SZ);
-
-		/* psg->page/offset are used to copy to-be-written
-		 * data in this function or read data in ata_sg_clean.
-		 */
-		offset = lsg->offset + lsg->length - qc->pad_len;
-		sg_set_page(psg, nth_page(sg_page(lsg), offset >> PAGE_SHIFT),
-			    qc->pad_len, offset_in_page(offset));
-
-		if (qc->tf.flags & ATA_TFLAG_WRITE) {
-			void *addr = kmap_atomic(sg_page(psg), KM_IRQ0);
-			memcpy(pad_buf, addr + psg->offset, qc->pad_len);
-			kunmap_atomic(addr, KM_IRQ0);
-		}
-
-		sg_dma_address(psg) = ap->pad_dma + (qc->tag * ATA_DMA_PAD_SZ);
-		sg_dma_len(psg) = ATA_DMA_PAD_SZ;
-
-		/* Trim the last sg entry and chain the original and
-		 * padding sg lists.
-		 *
-		 * Because chaining consumes one sg entry, one extra
-		 * sg entry is allocated and the last sg entry is
-		 * copied to it if the length isn't zero after padded
-		 * amount is removed.
-		 *
-		 * If the last sg entry is completely replaced by
-		 * padding sg entry, the first sg entry is skipped
-		 * while chaining.
-		 */
-		lsg->length -= qc->pad_len;
-		if (lsg->length) {
-			copy_lsg = &qc->extra_sg[0];
-			tsg = &qc->extra_sg[0];
-		} else {
-			n_elem--;
-			tsg = &qc->extra_sg[1];
-		}
-
-		esg = &qc->extra_sg[1];
-
-		(*n_elem_extra)++;
-		(*nbytes_extra) += 4 - qc->pad_len;
-	}
-
-	if (copy_lsg)
-		sg_set_page(copy_lsg, sg_page(lsg), lsg->length, lsg->offset);
-
-	sg_chain(lsg, 1, tsg);
-	sg_mark_end(esg);
-
-	/* sglist can't start with chaining sg entry, fast forward */
-	if (qc->sg == lsg) {
-		qc->sg = tsg;
-		qc->cursg = tsg;
-	}
-
-	return n_elem;
-}
-
 /**
  *	ata_sg_setup - DMA-map the scatter-gather table associated with a command.
  *	@qc: Command with scatter-gather table to be mapped.
@@ -4872,26 +4764,17 @@ static unsigned int ata_sg_setup_extra(struct ata_queued_cmd *qc,
 static int ata_sg_setup(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
-	unsigned int n_elem, n_elem_extra, nbytes_extra;
+	unsigned int n_elem;
 
 	VPRINTK("ENTER, ata%u\n", ap->print_id);
 
-	n_elem = ata_sg_setup_extra(qc, &n_elem_extra, &nbytes_extra);
+	n_elem = dma_map_sg(ap->dev, qc->sg, qc->n_elem, qc->dma_dir);
+	if (n_elem < 1)
+		return -1;
 
-	if (n_elem) {
-		n_elem = dma_map_sg(ap->dev, qc->sg, n_elem, qc->dma_dir);
-		if (n_elem < 1) {
-			/* restore last sg */
-			if (qc->last_sg)
-				*qc->last_sg = qc->saved_last_sg;
-			return -1;
-		}
-		DPRINTK("%d sg elements mapped\n", n_elem);
-	}
+	DPRINTK("%d sg elements mapped\n", n_elem);
 
-	qc->n_elem = qc->mapped_n_elem = n_elem;
-	qc->n_elem += n_elem_extra;
-	qc->nbytes += nbytes_extra;
+	qc->n_elem = n_elem;
 	qc->flags |= ATA_QCFLAG_DMAMAP;
 
 	return 0;
@@ -5955,9 +5838,6 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
 	 */
 	BUG_ON(ata_is_data(prot) && (!qc->sg || !qc->n_elem || !qc->nbytes));
 
-	/* ata_sg_setup() may update nbytes */
-	qc->raw_nbytes = qc->nbytes;
-
 	if (ata_is_dma(prot) || (ata_is_pio(prot) &&
 				 (ap->flags & ATA_FLAG_PIO_DMA)))
 		if (ata_sg_setup(qc))
@@ -6566,19 +6446,12 @@ void ata_host_resume(struct ata_host *host)
 int ata_port_start(struct ata_port *ap)
 {
 	struct device *dev = ap->dev;
-	int rc;
 
 	ap->prd = dmam_alloc_coherent(dev, ATA_PRD_TBL_SZ, &ap->prd_dma,
 				      GFP_KERNEL);
 	if (!ap->prd)
 		return -ENOMEM;
 
-	rc = ata_pad_alloc(ap, dev);
-	if (rc)
-		return rc;
-
-	DPRINTK("prd alloc, virt %p, dma %llx\n", ap->prd,
-		(unsigned long long)ap->prd_dma);
 	return 0;
 }
 
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index c02c490..e54ee6e 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -832,24 +832,16 @@ static void ata_scsi_dev_config(struct scsi_device *sdev,
 	/* configure max sectors */
 	blk_queue_max_sectors(sdev->request_queue, dev->max_sectors);
 
-	/* SATA DMA transfers must be multiples of 4 byte, so
-	 * we need to pad ATAPI transfers using an extra sg.
-	 * Decrement max hw segments accordingly.
-	 */
-	if (dev->class == ATA_DEV_ATAPI) {
-		struct request_queue *q = sdev->request_queue;
-		blk_queue_max_hw_segments(q, q->max_hw_segments - 1);
-
+	if (dev->class == ATA_DEV_ATAPI)
 		/* set the min alignment */
 		blk_queue_update_dma_alignment(sdev->request_queue,
 					       ATA_DMA_PAD_SZ - 1);
-	} else
+	else {
 		/* ATA devices must be sector aligned */
 		blk_queue_update_dma_alignment(sdev->request_queue,
 					       ATA_SECT_SIZE - 1);
-
-	if (dev->class == ATA_DEV_ATA)
 		sdev->manage_start_stop = 1;
+	}
 
 	if (dev->flags & ATA_DFLAG_AN)
 		set_bit(SDEV_EVT_MEDIA_CHANGE, sdev->supported_events);
@@ -2500,7 +2492,7 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc)
 	 * want to set it properly, and for DMA where it is
 	 * effectively meaningless.
 	 */
-	nbytes = min(qc->nbytes, (unsigned int)63 * 1024);
+	nbytes = min(scmd->request->raw_data_len, (unsigned int)63 * 1024);
 
 	/* Most ATAPI devices which honor transfer chunk size don't
 	 * behave according to the spec when odd chunk size which
@@ -3555,7 +3547,7 @@ EXPORT_SYMBOL_GPL(ata_sas_port_alloc);
  *	@ap: Port to initialize
  *
  *	Called just after data structures for each port are
- *	initialized.  Allocates DMA pad.
+ *	initialized.
  *
  *	May be used as the port_start() entry in ata_port_operations.
  *
@@ -3564,7 +3556,7 @@ EXPORT_SYMBOL_GPL(ata_sas_port_alloc);
  */
 int ata_sas_port_start(struct ata_port *ap)
 {
-	return ata_pad_alloc(ap, ap->dev);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(ata_sas_port_start);
 
@@ -3572,8 +3564,6 @@ EXPORT_SYMBOL_GPL(ata_sas_port_start);
  *	ata_port_stop - Undo ata_sas_port_start()
  *	@ap: Port to shut down
  *
- *	Frees the DMA pad.
- *
  *	May be used as the port_stop() entry in ata_port_operations.
  *
  *	LOCKING:
@@ -3582,7 +3572,6 @@ EXPORT_SYMBOL_GPL(ata_sas_port_start);
 
 void ata_sas_port_stop(struct ata_port *ap)
 {
-	ata_pad_free(ap, ap->dev);
 }
 EXPORT_SYMBOL_GPL(ata_sas_port_stop);
 
diff --git a/drivers/ata/pata_icside.c b/drivers/ata/pata_icside.c
index 5b8586d..f97068b 100644
--- a/drivers/ata/pata_icside.c
+++ b/drivers/ata/pata_icside.c
@@ -304,12 +304,6 @@ static int icside_dma_init(struct pata_icside_info *info)
 }
 
 
-static int pata_icside_port_start(struct ata_port *ap)
-{
-	/* No PRD to alloc */
-	return ata_pad_alloc(ap, ap->dev);
-}
-
 static struct scsi_host_template pata_icside_sht = {
 	.module			= THIS_MODULE,
 	.name			= DRV_NAME,
@@ -389,8 +383,6 @@ static struct ata_port_operations pata_icside_port_ops = {
 	.irq_clear		= ata_dummy_noret,
 	.irq_on			= ata_irq_on,
 
-	.port_start		= pata_icside_port_start,
-
 	.bmdma_stop		= pata_icside_bmdma_stop,
 	.bmdma_status		= pata_icside_bmdma_status,
 };
diff --git a/drivers/ata/sata_fsl.c b/drivers/ata/sata_fsl.c
index efcb66b..9323dd0 100644
--- a/drivers/ata/sata_fsl.c
+++ b/drivers/ata/sata_fsl.c
@@ -601,21 +601,9 @@ static int sata_fsl_port_start(struct ata_port *ap)
 	if (!pp)
 		return -ENOMEM;
 
-	/*
-	 * allocate per command dma alignment pad buffer, which is used
-	 * internally by libATA to ensure that all transfers ending on
-	 * unaligned boundaries are padded, to align on Dword boundaries
-	 */
-	retval = ata_pad_alloc(ap, dev);
-	if (retval) {
-		kfree(pp);
-		return retval;
-	}
-
 	mem = dma_alloc_coherent(dev, SATA_FSL_PORT_PRIV_DMA_SZ, &mem_dma,
 				 GFP_KERNEL);
 	if (!mem) {
-		ata_pad_free(ap, dev);
 		kfree(pp);
 		return -ENOMEM;
 	}
@@ -694,7 +682,6 @@ static void sata_fsl_port_stop(struct ata_port *ap)
 	dma_free_coherent(dev, SATA_FSL_PORT_PRIV_DMA_SZ,
 			  pp->cmdslot, pp->cmdslot_paddr);
 
-	ata_pad_free(ap, dev);
 	kfree(pp);
 }
 
diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
index 1e97a33..f93722a 100644
--- a/drivers/ata/sata_mv.c
+++ b/drivers/ata/sata_mv.c
@@ -1157,17 +1157,13 @@ static int mv_port_start(struct ata_port *ap)
 	struct mv_port_priv *pp;
 	void __iomem *port_mmio = mv_ap_base(ap);
 	unsigned long flags;
-	int tag, rc;
+	int tag;
 
 	pp = devm_kzalloc(dev, sizeof(*pp), GFP_KERNEL);
 	if (!pp)
 		return -ENOMEM;
 	ap->private_data = pp;
 
-	rc = ata_pad_alloc(ap, dev);
-	if (rc)
-		return rc;
-
 	pp->crqb = dma_pool_alloc(hpriv->crqb_pool, GFP_KERNEL, &pp->crqb_dma);
 	if (!pp->crqb)
 		return -ENOMEM;
diff --git a/drivers/ata/sata_sil24.c b/drivers/ata/sata_sil24.c
index b4b1f91..df7988d 100644
--- a/drivers/ata/sata_sil24.c
+++ b/drivers/ata/sata_sil24.c
@@ -1234,7 +1234,6 @@ static int sil24_port_start(struct ata_port *ap)
 	union sil24_cmd_block *cb;
 	size_t cb_size = sizeof(*cb) * SIL24_MAX_CMDS;
 	dma_addr_t cb_dma;
-	int rc;
 
 	pp = devm_kzalloc(dev, sizeof(*pp), GFP_KERNEL);
 	if (!pp)
@@ -1247,10 +1246,6 @@ static int sil24_port_start(struct ata_port *ap)
 		return -ENOMEM;
 	memset(cb, 0, cb_size);
 
-	rc = ata_pad_alloc(ap, dev);
-	if (rc)
-		return rc;
-
 	pp->cmd_block = cb;
 	pp->cmd_block_dma = cb_dma;
 
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index 73270ff..6762e89 100644
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -5140,7 +5140,7 @@ static void ipr_build_ata_ioadl(struct ipr_cmnd *ipr_cmd,
 	struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
 	struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
 	struct ipr_ioadl_desc *last_ioadl = NULL;
-	int len = qc->nbytes + qc->pad_len;
+	int len = qc->nbytes;
 	struct scatterlist *sg;
 	unsigned int si;
 
@@ -5206,7 +5206,7 @@ static unsigned int ipr_qc_issue(struct ata_queued_cmd *qc)
 	ioarcb->cmd_pkt.request_type = IPR_RQTYPE_ATA_PASSTHRU;
 	ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_LINK_DESC;
 	ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_ULEN_CHK;
-	ipr_cmd->dma_use_sg = qc->pad_len ? qc->n_elem + 1 : qc->n_elem;
+	ipr_cmd->dma_use_sg = qc->n_elem;
 
 	ipr_build_ata_ioadl(ipr_cmd, qc);
 	regs->flags |= IPR_ATA_FLAG_STATUS_ON_GOOD_COMPLETION;
diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
index 0996f86..7cd05b5 100644
--- a/drivers/scsi/libsas/sas_ata.c
+++ b/drivers/scsi/libsas/sas_ata.c
@@ -178,8 +178,8 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
 	task->uldd_task = qc;
 	if (ata_is_atapi(qc->tf.protocol)) {
 		memcpy(task->ata_task.atapi_packet, qc->cdb, qc->dev->cdb_len);
-		task->total_xfer_len = qc->nbytes + qc->pad_len;
-		task->num_scatter = qc->pad_len ? qc->n_elem + 1 : qc->n_elem;
+		task->total_xfer_len = qc->nbytes;
+		task->num_scatter = qc->n_elem;
 	} else {
 		for_each_sg(qc->sg, sg, qc->n_elem, si)
 			xfer += sg->length;
diff --git a/include/linux/libata.h b/include/linux/libata.h
index bc5a8d0..2e098f9 100644
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -278,7 +278,6 @@ enum {
 
 	/* size of buffer to pad xfers ending on unaligned boundaries */
 	ATA_DMA_PAD_SZ		= 4,
-	ATA_DMA_PAD_BUF_SZ	= ATA_DMA_PAD_SZ * ATA_MAX_QUEUE,
 
 	/* ering size */
 	ATA_ERING_SIZE		= 32,
@@ -457,24 +456,18 @@ struct ata_queued_cmd {
 	unsigned long		flags;		/* ATA_QCFLAG_xxx */
 	unsigned int		tag;
 	unsigned int		n_elem;
-	unsigned int		mapped_n_elem;
 
 	int			dma_dir;
 
-	unsigned int		pad_len;
 	unsigned int		sect_size;
 
 	unsigned int		nbytes;
-	unsigned int		raw_nbytes;
 	unsigned int		curbytes;
 
 	struct scatterlist	*cursg;
 	unsigned int		cursg_ofs;
 
-	struct scatterlist	*last_sg;
-	struct scatterlist	saved_last_sg;
 	struct scatterlist	sgent;
-	struct scatterlist	extra_sg[2];
 
 	struct scatterlist	*sg;
 
@@ -619,9 +612,6 @@ struct ata_port {
 	struct ata_prd		*prd;	 /* our SG list */
 	dma_addr_t		prd_dma; /* and its DMA mapping */
 
-	void			*pad;	/* array of DMA pad buffers */
-	dma_addr_t		pad_dma;
-
 	struct ata_ioports	ioaddr;	/* ATA cmd/ctl/dma register blocks */
 
 	u8			ctl;	/* cache of ATA control register */
@@ -1363,12 +1353,9 @@ static inline void ata_qc_reinit(struct ata_queued_cmd *qc)
 	qc->flags = 0;
 	qc->cursg = NULL;
 	qc->cursg_ofs = 0;
-	qc->nbytes = qc->raw_nbytes = qc->curbytes = 0;
+	qc->nbytes = qc->curbytes = 0;
 	qc->n_elem = 0;
-	qc->mapped_n_elem = 0;
 	qc->err_mask = 0;
-	qc->pad_len = 0;
-	qc->last_sg = NULL;
 	qc->sect_size = ATA_SECT_SIZE;
 
 	ata_tf_init(qc->dev, &qc->tf);
@@ -1423,19 +1410,6 @@ static inline unsigned int __ac_err_mask(u8 status)
 	return mask;
 }
 
-static inline int ata_pad_alloc(struct ata_port *ap, struct device *dev)
-{
-	ap->pad_dma = 0;
-	ap->pad = dmam_alloc_coherent(dev, ATA_DMA_PAD_BUF_SZ,
-				      &ap->pad_dma, GFP_KERNEL);
-	return (ap->pad == NULL) ? -ENOMEM : 0;
-}
-
-static inline void ata_pad_free(struct ata_port *ap, struct device *dev)
-{
-	dmam_free_coherent(dev, ATA_DMA_PAD_BUF_SZ, ap->pad, ap->pad_dma);
-}
-
 static inline struct ata_port *ata_shost_to_port(struct Scsi_Host *host)
 {
 	return *(struct ata_port **)&host->hostdata[0];
-- 
1.5.2.4



* [PATCH 5/5] libata: implement drain buffers
From: Tejun Heo @ 2008-02-05  7:53 UTC
  To: jeff, linux-ide, linux-scsi, fujita.tomonori, Jens.Axboe
  Cc: James Bottomley, Tejun Heo

From: James Bottomley <James.Bottomley@HansenPartnership.com>

This just updates the libata slave configure routine to take advantage
of the block layer drain buffers.  It also adjusts the transfer lengths
in the atapi code to add the drain buffer to the DMA length so the
driver knows it can rely on it.

I suspect I should also be checking for AHCI as well as ATA_DEV_ATAPI,
but I couldn't see how to do that easily.

tj: * atapi_drain_needed() added such that draining is applied only
      to misc ATAPI commands.
    * q->bounce_gfp used when allocating drain buffer.
    * ata_dev_printk() used instead of sdev_printk().
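
For a drained command, the net effect on the lengths can be sketched
as follows (userspace illustration; DRAIN_SZ is a stand-in for
ATAPI_MAX_DRAIN, whose actual value the hunks below don't show):

#include <assert.h>

enum {
        PAD_MASK = 3,           /* ATA_DMA_PAD_SZ - 1 */
        DRAIN_SZ = 4096,        /* stand-in for ATAPI_MAX_DRAIN */
};

int main(void)
{
        unsigned int raw = 510;         /* odd-sized ATAPI transfer */
        unsigned int padded = (raw + PAD_MASK) & ~PAD_MASK;     /* 512 */
        unsigned int dma = padded + DRAIN_SZ;   /* length the device sees */

        /* extra bytes the device sends land in pad + drain space */
        assert(dma - raw == 2 + DRAIN_SZ);
        return 0;
}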

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Tejun Heo <htejun@gmail.com>
---
 drivers/ata/libata-scsi.c |   41 +++++++++++++++++++++++++++++++++++------
 1 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index e54ee6e..185d699 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -826,17 +826,38 @@ static void ata_scsi_sdev_config(struct scsi_device *sdev)
 	sdev->max_device_blocked = 1;
 }
 
-static void ata_scsi_dev_config(struct scsi_device *sdev,
-				struct ata_device *dev)
+static int atapi_drain_needed(struct request *rq)
+{
+	if (likely(!blk_pc_request(rq)))
+		return 0;
+
+	return atapi_cmd_type(rq->cmd[0]) == ATAPI_MISC;
+}
+
+static int ata_scsi_dev_config(struct scsi_device *sdev,
+			       struct ata_device *dev)
 {
 	/* configure max sectors */
 	blk_queue_max_sectors(sdev->request_queue, dev->max_sectors);
 
-	if (dev->class == ATA_DEV_ATAPI)
+	if (dev->class == ATA_DEV_ATAPI) {
+		struct request_queue *q = sdev->request_queue;
+		void *buf;
+
 		/* set the min alignment */
 		blk_queue_update_dma_alignment(sdev->request_queue,
 					       ATA_DMA_PAD_SZ - 1);
-	else {
+
+		/* configure draining */
+		buf = kmalloc(ATAPI_MAX_DRAIN, q->bounce_gfp | GFP_KERNEL);
+		if (!buf) {
+			ata_dev_printk(dev, KERN_ERR,
+				       "drain buffer allocation failed\n");
+			return -ENOMEM;
+		}
+
+		blk_queue_dma_drain(q, atapi_drain_needed, buf, ATAPI_MAX_DRAIN);
+	} else {
 		/* ATA devices must be sector aligned */
 		blk_queue_update_dma_alignment(sdev->request_queue,
 					       ATA_SECT_SIZE - 1);
@@ -853,6 +874,8 @@ static void ata_scsi_dev_config(struct scsi_device *sdev,
 		depth = min(ATA_MAX_QUEUE - 1, depth);
 		scsi_adjust_queue_depth(sdev, MSG_SIMPLE_TAG, depth);
 	}
+
+	return 0;
 }
 
 /**
@@ -871,13 +894,14 @@ int ata_scsi_slave_config(struct scsi_device *sdev)
 {
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
 	struct ata_device *dev = __ata_scsi_find_dev(ap, sdev);
+	int rc = 0;
 
 	ata_scsi_sdev_config(sdev);
 
 	if (dev)
-		ata_scsi_dev_config(sdev, dev);
+		rc = ata_scsi_dev_config(sdev, dev);
 
-	return 0;
+	return rc;
 }
 
 /**
@@ -897,6 +921,7 @@ int ata_scsi_slave_config(struct scsi_device *sdev)
 void ata_scsi_slave_destroy(struct scsi_device *sdev)
 {
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
+	struct request_queue *q = sdev->request_queue;
 	unsigned long flags;
 	struct ata_device *dev;
 
@@ -912,6 +937,10 @@ void ata_scsi_slave_destroy(struct scsi_device *sdev)
 		ata_port_schedule_eh(ap);
 	}
 	spin_unlock_irqrestore(ap->lock, flags);
+
+	kfree(q->dma_drain_buffer);
+	q->dma_drain_buffer = NULL;
+	q->dma_drain_size = 0;
 }
 
 /**
-- 
1.5.2.4



* Re: [PATCHSET #upstream] block/libata: update and use block layer padding and draining
From: James Bottomley @ 2008-02-06 23:31 UTC
  To: Tejun Heo; +Cc: jeff, linux-ide, linux-scsi, fujita.tomonori, Jens.Axboe


On Tue, 2008-02-05 at 16:53 +0900, Tejun Heo wrote:
> This patchset updates block layer padding and draining support and
> makes libata use it.  It's based on James Bottomley's initial work and,
> of the five, the last two patches are from James with some
> modifications.
> 
> Please read the following thread for more info.
> 
>   http://thread.gmane.org/gmane.linux.scsi/37185
> 
> This patchset is on top of
> 
>   upstream (a6af42fc9a12165136d82206ad52f18c5955ce87)
> + kill-n_iter-and-fix-fsl patch [1]

This certainly fixes the SATAPI panic on aic94xx (finally got the
machine with it plugged into the expanders to boot).

James




* Re: [PATCHSET #upstream] block/libata: update and use block layer padding and draining
From: Jeff Garzik @ 2008-02-07  0:09 UTC
  To: Tejun Heo
  Cc: linux-ide, linux-scsi, fujita.tomonori, Jens.Axboe,
	James.Bottomley, Andrew Morton

Tejun Heo wrote:
> This patchset updates block layer padding and draining support and
> makes libata use it.  It's based on James Bottomley's initial work and,
> of the five, the last two patches are from James with some
> modifications.
> 
> Please read the following thread for more info.
> 
>   http://thread.gmane.org/gmane.linux.scsi/37185
> 
> This patchset is on top of
> 
>   upstream (a6af42fc9a12165136d82206ad52f18c5955ce87)
> + kill-n_iter-and-fix-fsl patch [1]

ACK patchset...  let's definitely get these fixes upstream.

Once Jens is happy, I would prefer to merge the lot upstream, if that
is OK with everyone involved?

	Jeff





* Re: [PATCHSET #upstream] block/libata: update and use block layer padding and draining
From: Jeff Garzik @ 2008-02-08 20:56 UTC
  To: Jens.Axboe
  Cc: Tejun Heo, linux-ide, linux-scsi, fujita.tomonori,
	James.Bottomley, Andrew Morton

Jeff Garzik wrote:
> Tejun Heo wrote:
>> This patchset updates block layer padding and draining support and
>> makes libata use it.  It's based on James Bottomley's initial work and,
>> of the five, the last two patches are from James with some
>> modifications.
>>
>> Please read the following thread for more info.
>>
>>   http://thread.gmane.org/gmane.linux.scsi/37185
>>
>> This patchset is on top of
>>
>>   upstream (a6af42fc9a12165136d82206ad52f18c5955ce87)
>> + kill-n_iter-and-fix-fsl patch [1]
> 
> ACK patchset...  let's definitely get these fixes upstream.
> 
> Once Jens is happy, I would prefer to merge the lot upstream, if that
> is OK with everyone involved?

Jens, ping?

It's a bug fix, so it would be nice to get this in soonish.  As noted, 
if all looks good, I would prefer to merge via libata-dev...

Thanks,

	Jeff




* Re: [PATCHSET #upstream] block/libata: update and use block layer padding and draining
From: Jens Axboe @ 2008-02-08 21:04 UTC
  To: Jeff Garzik
  Cc: Tejun Heo, linux-ide, linux-scsi, fujita.tomonori,
	James.Bottomley, Andrew Morton

On Fri, Feb 08 2008, Jeff Garzik wrote:
> Jeff Garzik wrote:
> >Tejun Heo wrote:
> >>This patchset updates block layer padding and draining support and
> >>makes libata use it.  It's based on James Bottomley's initial work and,
> >>of the five, the last two patches are from James with some
> >>modifications.
> >>
> >>Please read the following thread for more info.
> >>
> >>  http://thread.gmane.org/gmane.linux.scsi/37185
> >>
> >>This patchset is on top of
> >>
> >>  upstream (a6af42fc9a12165136d82206ad52f18c5955ce87)
> >>+ kill-n_iter-and-fix-fsl patch [1]
> >
> >ACK patchset...  let's definitely get these fixes upstream.
> >
> >Once Jens is happy, I would prefer to merge the lot upstream, if that
> >is OK with everyone involved?
> 
> Jens, ping?
> 
> It's a bug fix, so it would be nice to get this in soonish.  As noted, 
> if all looks good, I would prefer to merge via libata-dev...

I'm OK with it, but let's please merge the block bits through the block
repo, since they are not trivial.  Won't be until the week after next,
though.

-- 
Jens Axboe



* Re: [PATCHSET #upstream] block/libata: update and use block layer padding and draining
From: Jeff Garzik @ 2008-02-08 21:48 UTC
  To: Jens Axboe
  Cc: Tejun Heo, linux-ide, linux-scsi, fujita.tomonori,
	James.Bottomley, Andrew Morton

Jens Axboe wrote:
> On Fri, Feb 08 2008, Jeff Garzik wrote:
>> Jeff Garzik wrote:
>>> Tejun Heo wrote:
>>>> This patchset updates block layer padding and draining support and
>>>> makes libata use it.  It's based on James Bottomley's initial work and,
>>>> of the five, the last two patches are from James with some
>>>> modifications.
>>>>
>>>> Please read the following thread for more info.
>>>>
>>>>  http://thread.gmane.org/gmane.linux.scsi/37185
>>>>
>>>> This patchset is on top of
>>>>
>>>>  upstream (a6af42fc9a12165136d82206ad52f18c5955ce87)
>>>> + kill-n_iter-and-fix-fsl patch [1]
>>> ACK patchset...  let's definitely get these fixes upstream.
>>>
>>> Once Jens is happy, I would prefer to merge the lot upstream, if that
>>> is OK with everyone involved?
>> Jens, ping?
>>
>> It's a bug fix, so it would be nice to get this in soonish.  As noted, 
>> if all looks good, I would prefer to merge via libata-dev...
> 
> I'm OK with it, but let's please merge the block bits through the block
> repo, since they are not trivial.  Won't be until the week after next,
> though.

Hmmm, rather than delaying the bug fixes for two weeks, since you're OK
with it, we can push upstream now and apply further fixes if problems
arise during testing?

I would rather get these fixes out into wide testing sooner rather than 
later.

	Jeff






* Re: [PATCHSET #upstream] block/libata: update and use block layer padding and draining
From: Tejun Heo @ 2008-02-09  0:22 UTC
  To: Jeff Garzik
  Cc: Jens Axboe, linux-ide, linux-scsi, fujita.tomonori,
	James.Bottomley, Andrew Morton

Jeff Garzik wrote:
> Jens Axboe wrote:
>> On Fri, Feb 08 2008, Jeff Garzik wrote:
>>> Jeff Garzik wrote:
>>>> Tejun Heo wrote:
>>>>> This patchset updates block layer padding and draining support and
>>>>> makes libata use it.  It's based on James Bottomley's initial work and,
>>>>> of the five, the last two patches are from James with some
>>>>> modifications.
>>>>>
>>>>> Please read the following thread for more info.
>>>>>
>>>>>  http://thread.gmane.org/gmane.linux.scsi/37185
>>>>>
>>>>> This patchset is on top of
>>>>>
>>>>>  upstream (a6af42fc9a12165136d82206ad52f18c5955ce87)
>>>>> + kill-n_iter-and-fix-fsl patch [1]
>>>> ACK patchset...  let's definitely get these fixes upstream.
>>>>
>>>> Once Jens is happy, I would prefer to merge the lot upstream, if
>>>> that is OK with everyone involved?
>>> Jens, ping?
>>>
>>> It's a bug fix, so it would be nice to get this in soonish.  As
>>> noted, if all looks good, I would prefer to merge via libata-dev...
>>
>> I'm OK with it, but let's please merge the block bits through the block
>> repo, since they are not trivial.  Won't be until the week after next,
>> though.
> 
> Hmmm, rather than delaying the bug fixes for two weeks, since you're OK
> with it, we can push upstream now and apply further fixes if problems
> arise during testing?
> 
> I would rather get these fixes out into wide testing sooner rather than
> later.

I have an updated version.  Please standby a bit.

Thanks.

-- 
tejun

