qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH v2 0/5] block: preparatory patches for scatter/gather support
@ 2011-09-16 14:39 Paolo Bonzini
From: Paolo Bonzini @ 2011-09-16 14:39 UTC
  To: qemu-devel

These patches are preparatory work for supporting scatter/gather in
the SCSI subsystem.  Since no HBA would actually be using it yet,
I am posting just the cleanups, together with the fix for CVE-2011-3346
(a buffer overflow in the handling of READ CAPACITY(16)) that comes
for free with the last patch.

v1->v2: made to_dev a bool; fixes in patch 3

Paolo Bonzini (5):
  dma-helpers: rename is_write to to_dev
  dma-helpers: allow including from target-independent code
  dma-helpers: rewrite completion/cancellation
  scsi-disk: commonize iovec creation between reads and writes
  scsi-disk: lazily allocate bounce buffer

 dma-helpers.c  |   58 +++++++++++++++++++++++++-------------
 dma.h          |   10 ++++--
 hw/ide/core.c  |    2 +-
 hw/ide/macio.c |    2 +-
 hw/scsi-disk.c |   84 +++++++++++++++++++++++++++++++++----------------------
 qemu-common.h  |    1 +
 6 files changed, 98 insertions(+), 59 deletions(-)

-- 
1.7.6


* [Qemu-devel] [PATCH v2 1/5] dma-helpers: rename is_write to to_dev
From: Paolo Bonzini @ 2011-09-16 14:40 UTC
  To: qemu-devel

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 dma-helpers.c  |   14 +++++++-------
 dma.h          |    2 +-
 hw/ide/core.c  |    2 +-
 hw/ide/macio.c |    2 +-
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index 4610ea0..717e384 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -42,7 +42,7 @@ typedef struct {
     BlockDriverAIOCB *acb;
     QEMUSGList *sg;
     uint64_t sector_num;
-    int is_write;
+    bool to_dev;
     int sg_cur_index;
     target_phys_addr_t sg_cur_byte;
     QEMUIOVector iov;
@@ -75,7 +75,7 @@ static void dma_bdrv_unmap(DMAAIOCB *dbs)
 
     for (i = 0; i < dbs->iov.niov; ++i) {
         cpu_physical_memory_unmap(dbs->iov.iov[i].iov_base,
-                                  dbs->iov.iov[i].iov_len, !dbs->is_write,
+                                  dbs->iov.iov[i].iov_len, !dbs->to_dev,
                                   dbs->iov.iov[i].iov_len);
     }
 }
@@ -101,7 +101,7 @@ static void dma_bdrv_cb(void *opaque, int ret)
     while (dbs->sg_cur_index < dbs->sg->nsg) {
         cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
         cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
-        mem = cpu_physical_memory_map(cur_addr, &cur_len, !dbs->is_write);
+        mem = cpu_physical_memory_map(cur_addr, &cur_len, !dbs->to_dev);
         if (!mem)
             break;
         qemu_iovec_add(&dbs->iov, mem, cur_len);
@@ -143,7 +143,7 @@ static AIOPool dma_aio_pool = {
 BlockDriverAIOCB *dma_bdrv_io(
     BlockDriverState *bs, QEMUSGList *sg, uint64_t sector_num,
     DMAIOFunc *io_func, BlockDriverCompletionFunc *cb,
-    void *opaque, int is_write)
+    void *opaque, bool to_dev)
 {
     DMAAIOCB *dbs = qemu_aio_get(&dma_aio_pool, bs, cb, opaque);
 
@@ -153,7 +153,7 @@ BlockDriverAIOCB *dma_bdrv_io(
     dbs->sector_num = sector_num;
     dbs->sg_cur_index = 0;
     dbs->sg_cur_byte = 0;
-    dbs->is_write = is_write;
+    dbs->to_dev = to_dev;
     dbs->io_func = io_func;
     dbs->bh = NULL;
     qemu_iovec_init(&dbs->iov, sg->nsg);
@@ -170,12 +170,12 @@ BlockDriverAIOCB *dma_bdrv_read(BlockDriverState *bs,
                                 QEMUSGList *sg, uint64_t sector,
                                 void (*cb)(void *opaque, int ret), void *opaque)
 {
-    return dma_bdrv_io(bs, sg, sector, bdrv_aio_readv, cb, opaque, 0);
+    return dma_bdrv_io(bs, sg, sector, bdrv_aio_readv, cb, opaque, false);
 }
 
 BlockDriverAIOCB *dma_bdrv_write(BlockDriverState *bs,
                                  QEMUSGList *sg, uint64_t sector,
                                  void (*cb)(void *opaque, int ret), void *opaque)
 {
-    return dma_bdrv_io(bs, sg, sector, bdrv_aio_writev, cb, opaque, 1);
+    return dma_bdrv_io(bs, sg, sector, bdrv_aio_writev, cb, opaque, true);
 }
diff --git a/dma.h b/dma.h
index a6db5ba..b222346 100644
--- a/dma.h
+++ b/dma.h
@@ -39,7 +39,7 @@ typedef BlockDriverAIOCB *DMAIOFunc(BlockDriverState *bs, int64_t sector_num,
 BlockDriverAIOCB *dma_bdrv_io(BlockDriverState *bs,
                               QEMUSGList *sg, uint64_t sector_num,
                               DMAIOFunc *io_func, BlockDriverCompletionFunc *cb,
-                              void *opaque, int is_write);
+                              void *opaque, bool to_dev);
 BlockDriverAIOCB *dma_bdrv_read(BlockDriverState *bs,
                                 QEMUSGList *sg, uint64_t sector,
                                 BlockDriverCompletionFunc *cb, void *opaque);
diff --git a/hw/ide/core.c b/hw/ide/core.c
index e06f442..e61ab77 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -602,7 +602,7 @@ handle_rw_error:
         break;
     case IDE_DMA_TRIM:
         s->bus->dma->aiocb = dma_bdrv_io(s->bs, &s->sg, sector_num,
-                                         ide_issue_trim, ide_dma_cb, s, 1);
+                                         ide_issue_trim, ide_dma_cb, s, true);
         break;
     }
 
diff --git a/hw/ide/macio.c b/hw/ide/macio.c
index 11a145c..bfb49e6 100644
--- a/hw/ide/macio.c
+++ b/hw/ide/macio.c
@@ -157,7 +157,7 @@ static void pmac_ide_transfer_cb(void *opaque, int ret)
         break;
     case IDE_DMA_TRIM:
         m->aiocb = dma_bdrv_io(s->bs, &s->sg, sector_num,
-                               ide_issue_trim, pmac_ide_transfer_cb, s, 1);
+                               ide_issue_trim, pmac_ide_transfer_cb, s, true);
         break;
     }
 
-- 
1.7.6


* [Qemu-devel] [PATCH v2 2/5] dma-helpers: allow including from target-independent code
From: Paolo Bonzini @ 2011-09-16 14:40 UTC
  To: qemu-devel

Target-independent code cannot construct sglists, but it can take
them from the outside as a black box.  Allow this.
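
As a sketch of what this enables (the structure and function names below
are made up for illustration, not part of the patch): target-independent
code can now store and hand around a QEMUSGList using only the forward
declaration from qemu-common.h, while the full definition and the
qemu_sglist_*() helpers stay behind the TARGET_PHYS_ADDR_BITS guard in
dma.h.

    /* Added to qemu-common.h by this patch: */
    typedef struct QEMUSGList QEMUSGList;

    /* Hypothetical target-independent code: the sglist is a black box. */
    typedef struct ExampleRequest {
        QEMUSGList *sg;              /* stored, never dereferenced here */
    } ExampleRequest;

    static void example_start_transfer(ExampleRequest *req, QEMUSGList *sg)
    {
        req->sg = sg;                /* taken from the outside... */
        /* ...and later passed to dma_bdrv_read()/dma_bdrv_write(), whose
         * implementation is built per target and sees the full layout.  */
    }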

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 dma.h         |    8 ++++++--
 qemu-common.h |    1 +
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/dma.h b/dma.h
index b222346..2bdc236 100644
--- a/dma.h
+++ b/dma.h
@@ -15,10 +15,13 @@
 #include "hw/hw.h"
 #include "block.h"
 
-typedef struct {
+typedef struct ScatterGatherEntry ScatterGatherEntry;
+
+#if defined(TARGET_PHYS_ADDR_BITS)
+struct ScatterGatherEntry {
     target_phys_addr_t base;
     target_phys_addr_t len;
-} ScatterGatherEntry;
+};
 
 struct QEMUSGList {
     ScatterGatherEntry *sg;
@@ -31,6 +34,7 @@ void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint);
 void qemu_sglist_add(QEMUSGList *qsg, target_phys_addr_t base,
                      target_phys_addr_t len);
 void qemu_sglist_destroy(QEMUSGList *qsg);
+#endif
 
 typedef BlockDriverAIOCB *DMAIOFunc(BlockDriverState *bs, int64_t sector_num,
                                  QEMUIOVector *iov, int nb_sectors,
diff --git a/qemu-common.h b/qemu-common.h
index 404c421..ef9a2bb 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -18,6 +18,7 @@ typedef struct DeviceState DeviceState;
 
 struct Monitor;
 typedef struct Monitor Monitor;
+typedef struct QEMUSGList QEMUSGList;
 
 /* we put basic includes here to avoid repeating them in device drivers */
 #include <stdlib.h>
-- 
1.7.6


* [Qemu-devel] [PATCH v2 3/5] dma-helpers: rewrite completion/cancellation
From: Paolo Bonzini @ 2011-09-16 14:40 UTC
  To: qemu-devel

This fixes various problems with completion/cancellation:

* if the io_func fails to return an AIOCB, the callback was never
  called;

* if DMA encounters a bounce buffer conflict and the operation is
  canceled before the bottom half fires, bad things happen;

* memory is not unmapped after cancellation, again causing problems
  when doing DMA to I/O areas;

* cancellation could leak the iovec;

and probably more that I've missed.  The patch fixes them by sharing
the cleanup code between completion and cancellation: both paths now
funnel into a new dma_complete() function, which unmaps the memory,
invokes the callback, frees the iovec and releases the AIOCB.

Most of these are basically impossible in practice, but it is better
to be tidy...
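
The cancellation path is the subtle one: bdrv_aio_cancel() may complete
the outstanding request synchronously, re-entering dma_complete() while
dma_aio_cancel() still references the AIOCB.  The new in_cancel flag
defers the release in that case.  Condensed from the diff below, with
explanatory comments added, the cancellation flow is:

    static void dma_aio_cancel(BlockDriverAIOCB *acb)
    {
        DMAAIOCB *dbs = container_of(acb, DMAAIOCB, common);

        if (dbs->acb) {
            BlockDriverAIOCB *acb = dbs->acb;
            dbs->acb = NULL;
            dbs->in_cancel = true;   /* a completion triggered by the cancel
                                        must not release the AIOCB yet */
            bdrv_aio_cancel(acb);    /* may run dma_bdrv_cb -> dma_complete */
            dbs->in_cancel = false;
        }
        dbs->common.cb = NULL;       /* request canceled: no callback */
        dma_complete(dbs, 0);        /* unmap, free the iovec, release AIOCB */
    }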

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
	v1->v2: call the callback when the io_func fails.  Add
	in_cancel.

 dma-helpers.c |   44 +++++++++++++++++++++++++++++++-------------
 1 files changed, 31 insertions(+), 13 deletions(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index 717e384..86d2d0a 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -43,6 +43,7 @@ typedef struct {
     QEMUSGList *sg;
     uint64_t sector_num;
     bool to_dev;
+    bool in_cancel;
     int sg_cur_index;
     target_phys_addr_t sg_cur_byte;
     QEMUIOVector iov;
@@ -58,7 +59,7 @@ static void reschedule_dma(void *opaque)
 
     qemu_bh_delete(dbs->bh);
     dbs->bh = NULL;
-    dma_bdrv_cb(opaque, 0);
+    dma_bdrv_cb(dbs, 0);
 }
 
 static void continue_after_map_failure(void *opaque)
@@ -78,6 +79,26 @@ static void dma_bdrv_unmap(DMAAIOCB *dbs)
                                   dbs->iov.iov[i].iov_len, !dbs->to_dev,
                                   dbs->iov.iov[i].iov_len);
     }
+    qemu_iovec_reset(&dbs->iov);
+}
+
+static void dma_complete(DMAAIOCB *dbs, int ret)
+{
+    dma_bdrv_unmap(dbs);
+    if (dbs->common.cb) {
+        dbs->common.cb(dbs->common.opaque, ret);
+    }
+    qemu_iovec_destroy(&dbs->iov);
+    if (dbs->bh) {
+        qemu_bh_delete(dbs->bh);
+        dbs->bh = NULL;
+    }
+    if (!dbs->in_cancel) {
+        /* Requests may complete while dma_aio_cancel is in progress.  In
+         * this case, the AIOCB should not be released because it is still
+         * referenced by dma_aio_cancel.  */
+        qemu_aio_release(dbs);
+    }
 }
 
 static void dma_bdrv_cb(void *opaque, int ret)
@@ -89,12 +110,9 @@ static void dma_bdrv_cb(void *opaque, int ret)
     dbs->acb = NULL;
     dbs->sector_num += dbs->iov.size / 512;
     dma_bdrv_unmap(dbs);
-    qemu_iovec_reset(&dbs->iov);
 
     if (dbs->sg_cur_index == dbs->sg->nsg || ret < 0) {
-        dbs->common.cb(dbs->common.opaque, ret);
-        qemu_iovec_destroy(&dbs->iov);
-        qemu_aio_release(dbs);
+        dma_complete(dbs, ret);
         return;
     }
 
@@ -120,9 +138,7 @@ static void dma_bdrv_cb(void *opaque, int ret)
     dbs->acb = dbs->io_func(dbs->bs, dbs->sector_num, &dbs->iov,
                             dbs->iov.size / 512, dma_bdrv_cb, dbs);
     if (!dbs->acb) {
-        dma_bdrv_unmap(dbs);
-        qemu_iovec_destroy(&dbs->iov);
-        return;
+        dma_complete(dbs, -EIO);
     }
 }
 
@@ -131,8 +147,14 @@ static void dma_aio_cancel(BlockDriverAIOCB *acb)
     DMAAIOCB *dbs = container_of(acb, DMAAIOCB, common);
 
     if (dbs->acb) {
-        bdrv_aio_cancel(dbs->acb);
+        BlockDriverAIOCB *acb = dbs->acb;
+        dbs->acb = NULL;
+        dbs->in_cancel = true;
+        bdrv_aio_cancel(acb);
+        dbs->in_cancel = false;
     }
+    dbs->common.cb = NULL;
+    dma_complete(dbs, 0);
 }
 
 static AIOPool dma_aio_pool = {
@@ -158,10 +180,6 @@ BlockDriverAIOCB *dma_bdrv_io(
     dbs->bh = NULL;
     qemu_iovec_init(&dbs->iov, sg->nsg);
     dma_bdrv_cb(dbs, 0);
-    if (!dbs->acb) {
-        qemu_aio_release(dbs);
-        return NULL;
-    }
     return &dbs->common;
 }
 
-- 
1.7.6


* [Qemu-devel] [PATCH v2 4/5] scsi-disk: commonize iovec creation between reads and writes
From: Paolo Bonzini @ 2011-09-16 14:40 UTC
  To: qemu-devel

Also, consistently use qiov.size instead of iov.iov_len.
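
For reference, this is the helper the diff below introduces, shared by
the read and write paths (comments added here for clarity):

    static uint32_t scsi_init_iovec(SCSIDiskReq *r)
    {
        /* Clamp the transfer to the bounce buffer and publish it through
         * the request's QEMUIOVector, so callers can rely on qiov.size. */
        r->iov.iov_len = MIN(r->sector_count * 512, SCSI_DMA_BUF_SIZE);
        qemu_iovec_init_external(&r->qiov, &r->iov, 1);
        return r->qiov.size / 512;   /* sectors covered by this transfer */
    }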

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/scsi-disk.c |   42 ++++++++++++++++++------------------------
 1 files changed, 18 insertions(+), 24 deletions(-)

diff --git a/hw/scsi-disk.c b/hw/scsi-disk.c
index 8f33249..7654349 100644
--- a/hw/scsi-disk.c
+++ b/hw/scsi-disk.c
@@ -107,6 +107,13 @@ static void scsi_cancel_io(SCSIRequest *req)
     r->req.aiocb = NULL;
 }
 
+static uint32_t scsi_init_iovec(SCSIDiskReq *r)
+{
+    r->iov.iov_len = MIN(r->sector_count * 512, SCSI_DMA_BUF_SIZE);
+    qemu_iovec_init_external(&r->qiov, &r->iov, 1);
+    return r->qiov.size / 512;
+}
+
 static void scsi_read_complete(void * opaque, int ret)
 {
     SCSIDiskReq *r = (SCSIDiskReq *)opaque;
@@ -124,12 +131,12 @@ static void scsi_read_complete(void * opaque, int ret)
         }
     }
 
-    DPRINTF("Data ready tag=0x%x len=%zd\n", r->req.tag, r->iov.iov_len);
+    DPRINTF("Data ready tag=0x%x len=%zd\n", r->req.tag, r->qiov.size);
 
-    n = r->iov.iov_len / 512;
+    n = r->qiov.size / 512;
     r->sector += n;
     r->sector_count -= n;
-    scsi_req_data(&r->req, r->iov.iov_len);
+    scsi_req_data(&r->req, r->qiov.size);
 }
 
 static void scsi_flush_complete(void * opaque, int ret)
@@ -180,13 +187,7 @@ static void scsi_read_data(SCSIRequest *req)
         return;
     }
 
-    n = r->sector_count;
-    if (n > SCSI_DMA_BUF_SIZE / 512)
-        n = SCSI_DMA_BUF_SIZE / 512;
-
-    r->iov.iov_len = n * 512;
-    qemu_iovec_init_external(&r->qiov, &r->iov, 1);
-
+    n = scsi_init_iovec(r);
     bdrv_acct_start(s->bs, &r->acct, n * BDRV_SECTOR_SIZE, BDRV_ACCT_READ);
     r->req.aiocb = bdrv_aio_readv(s->bs, r->sector, &r->qiov, n,
                               scsi_read_complete, r);
@@ -235,7 +236,6 @@ static void scsi_write_complete(void * opaque, int ret)
 {
     SCSIDiskReq *r = (SCSIDiskReq *)opaque;
     SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev);
-    uint32_t len;
     uint32_t n;
 
     if (r->req.aiocb != NULL) {
@@ -249,19 +249,15 @@ static void scsi_write_complete(void * opaque, int ret)
         }
     }
 
-    n = r->iov.iov_len / 512;
+    n = r->qiov.size / 512;
     r->sector += n;
     r->sector_count -= n;
     if (r->sector_count == 0) {
         scsi_req_complete(&r->req, GOOD);
     } else {
-        len = r->sector_count * 512;
-        if (len > SCSI_DMA_BUF_SIZE) {
-            len = SCSI_DMA_BUF_SIZE;
-        }
-        r->iov.iov_len = len;
-        DPRINTF("Write complete tag=0x%x more=%d\n", r->req.tag, len);
-        scsi_req_data(&r->req, len);
+        scsi_init_iovec(r);
+        DPRINTF("Write complete tag=0x%x more=%d\n", r->req.tag, r->qiov.size);
+        scsi_req_data(&r->req, r->qiov.size);
     }
 }
 
@@ -280,18 +276,16 @@ static void scsi_write_data(SCSIRequest *req)
         return;
     }
 
-    n = r->iov.iov_len / 512;
+    n = r->qiov.size / 512;
     if (n) {
-        qemu_iovec_init_external(&r->qiov, &r->iov, 1);
-
         bdrv_acct_start(s->bs, &r->acct, n * BDRV_SECTOR_SIZE, BDRV_ACCT_WRITE);
         r->req.aiocb = bdrv_aio_writev(s->bs, r->sector, &r->qiov, n,
-                                   scsi_write_complete, r);
+                                       scsi_write_complete, r);
         if (r->req.aiocb == NULL) {
             scsi_write_complete(r, -ENOMEM);
         }
     } else {
-        /* Invoke completion routine to fetch data from host.  */
+        /* Called for the first time.  Ask the driver to send us more data.  */
         scsi_write_complete(r, 0);
     }
 }
-- 
1.7.6


* [Qemu-devel] [PATCH v2 5/5] scsi-disk: lazily allocate bounce buffer
From: Paolo Bonzini @ 2011-09-16 14:40 UTC
  To: qemu-devel

The bounce buffer will not be needed for reads and writes if the HBA
provides an sglist.  In addition, this lets scsi-disk refuse commands
with an excessive allocation length, and it keeps memory usage low for
typical, well-behaved guests.
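
This is also where the CVE-2011-3346 fix mentioned in the cover letter
falls out: the buffer used by emulated commands is now sized from the
command's ALLOCATION LENGTH, and requests with an excessive length are
rejected instead of being served from a fixed-size buffer allocated up
front.  Condensed from the diff below, with comments added:

    if (!r->iov.iov_base) {
        if (req->cmd.xfer > 65536) {
            goto illegal_request;    /* refuse excessive ALLOCATION LENGTHs */
        }
        r->buflen = MAX(4096, req->cmd.xfer);
        r->iov.iov_base = qemu_blockalign(s->bs, r->buflen);
    }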

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/scsi-disk.c |   44 +++++++++++++++++++++++++++++++++-----------
 1 files changed, 33 insertions(+), 11 deletions(-)

diff --git a/hw/scsi-disk.c b/hw/scsi-disk.c
index 7654349..30eed19 100644
--- a/hw/scsi-disk.c
+++ b/hw/scsi-disk.c
@@ -54,6 +54,7 @@ typedef struct SCSIDiskReq {
     /* Both sector and sector_count are in terms of qemu 512 byte blocks.  */
     uint64_t sector;
     uint32_t sector_count;
+    uint32_t buflen;
     struct iovec iov;
     QEMUIOVector qiov;
     uint32_t status;
@@ -77,13 +78,15 @@ struct SCSIDiskState
 };
 
 static int scsi_handle_rw_error(SCSIDiskReq *r, int error, int type);
-static int scsi_disk_emulate_command(SCSIDiskReq *r, uint8_t *outbuf);
+static int scsi_disk_emulate_command(SCSIDiskReq *r);
 
 static void scsi_free_request(SCSIRequest *req)
 {
     SCSIDiskReq *r = DO_UPCAST(SCSIDiskReq, req, req);
 
-    qemu_vfree(r->iov.iov_base);
+    if (r->iov.iov_base) {
+        qemu_vfree(r->iov.iov_base);
+    }
 }
 
 /* Helper function for command completion with sense.  */
@@ -109,7 +112,13 @@ static void scsi_cancel_io(SCSIRequest *req)
 
 static uint32_t scsi_init_iovec(SCSIDiskReq *r)
 {
-    r->iov.iov_len = MIN(r->sector_count * 512, SCSI_DMA_BUF_SIZE);
+    SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, r->req.dev);
+
+    if (!r->iov.iov_base) {
+        r->buflen = SCSI_DMA_BUF_SIZE;
+        r->iov.iov_base = qemu_blockalign(s->bs, r->buflen);
+    }
+    r->iov.iov_len = MIN(r->sector_count * 512, r->buflen);
     qemu_iovec_init_external(&r->qiov, &r->iov, 1);
     return r->qiov.size / 512;
 }
@@ -316,7 +325,7 @@ static void scsi_dma_restart_bh(void *opaque)
                 scsi_write_data(&r->req);
                 break;
             case SCSI_REQ_STATUS_RETRY_FLUSH:
-                ret = scsi_disk_emulate_command(r, r->iov.iov_base);
+                ret = scsi_disk_emulate_command(r);
                 if (ret == 0) {
                     scsi_req_complete(&r->req, GOOD);
                 }
@@ -1043,13 +1052,31 @@ static int scsi_disk_emulate_read_toc(SCSIRequest *req, uint8_t *outbuf)
     return toclen;
 }
 
-static int scsi_disk_emulate_command(SCSIDiskReq *r, uint8_t *outbuf)
+static int scsi_disk_emulate_command(SCSIDiskReq *r)
 {
     SCSIRequest *req = &r->req;
     SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, req->dev);
     uint64_t nb_sectors;
+    uint8_t *outbuf;
     int buflen = 0;
 
+    if (!r->iov.iov_base) {
+        /*
+         * FIXME: we shouldn't return anything bigger than 4k, but the code
+         * requires the buffer to be as big as req->cmd.xfer in several
+         * places.  So, do not allow CDBs with a very large ALLOCATION
+         * LENGTH.  The real fix would be to modify scsi_read_data and
+         * dma_buf_read, so that they return data beyond the buflen
+         * as all zeros.
+         */
+        if (req->cmd.xfer > 65536) {
+            goto illegal_request;
+        }
+        r->buflen = MAX(4096, req->cmd.xfer);
+        r->iov.iov_base = qemu_blockalign(s->bs, r->buflen);
+    }
+
+    outbuf = r->iov.iov_base;
     switch (req->cmd.buf[0]) {
     case TEST_UNIT_READY:
         if (!bdrv_is_inserted(s->bs))
@@ -1213,11 +1240,9 @@ static int32_t scsi_send_command(SCSIRequest *req, uint8_t *buf)
     SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, req->dev);
     int32_t len;
     uint8_t command;
-    uint8_t *outbuf;
     int rc;
 
     command = buf[0];
-    outbuf = (uint8_t *)r->iov.iov_base;
     DPRINTF("Command: lun=%d tag=0x%x data=0x%02x", req->lun, req->tag, buf[0]);
 
 #ifdef DEBUG_SCSI
@@ -1249,7 +1274,7 @@ static int32_t scsi_send_command(SCSIRequest *req, uint8_t *buf)
     case MECHANISM_STATUS:
     case SERVICE_ACTION_IN_16:
     case VERIFY_10:
-        rc = scsi_disk_emulate_command(r, outbuf);
+        rc = scsi_disk_emulate_command(r);
         if (rc < 0) {
             return 0;
         }
@@ -1512,11 +1537,8 @@ static SCSIRequest *scsi_new_request(SCSIDevice *d, uint32_t tag,
 {
     SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, d);
     SCSIRequest *req;
-    SCSIDiskReq *r;
 
     req = scsi_req_alloc(&scsi_disk_reqops, &s->qdev, tag, lun, hba_private);
-    r = DO_UPCAST(SCSIDiskReq, req, req);
-    r->iov.iov_base = qemu_blockalign(s->bs, SCSI_DMA_BUF_SIZE);
     return req;
 }
 
-- 
1.7.6


* Re: [Qemu-devel] [PATCH v2 0/5] block: preparatory patches for scatter/gather support
From: Kevin Wolf @ 2011-09-19 10:08 UTC
  To: Paolo Bonzini; +Cc: qemu-devel

On 16.09.2011 16:39, Paolo Bonzini wrote:
> These patches are preparatory work for supporting scatter/gather in
> the SCSI subsystem.  Since no HBA would actually be using it yet,
> I am posting just the cleanups, together with the fix for CVE-2011-3346
> (a buffer overflow in the handling of READ CAPACITY(16)) that comes
> for free with the last patch.
> 
> v1->v2: made to_dev a bool; fixes in patch 3
> 
> Paolo Bonzini (5):
>   dma-helpers: rename is_write to to_dev
>   dma-helpers: allow including from target-independent code
>   dma-helpers: rewrite completion/cancellation
>   scsi-disk: commonize iovec creation between reads and writes
>   scsi-disk: lazily allocate bounce buffer

Thanks, applied all to the block branch.

Kevin
