From: Paolo Bonzini <pbonzini@redhat.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 5/8] scsi-disk: don't call scsi_req_complete twice.
Date: Mon, 21 Nov 2011 14:49:57 +0100 [thread overview]
Message-ID: <4ECA5705.4080105@redhat.com> (raw)
In-Reply-To: <1321882802-3337-6-git-send-email-kraxel@redhat.com>
On 11/21/2011 02:39 PM, Gerd Hoffmann wrote:
> In case the guest sends a SYNCHRONIZE_CACHE command scsi_req_complete()
> is called twice: Once because there is no data to transfer and
> scsi-disk thinks it is done with the command, and once when the flush is
> actually finished ...
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
> hw/scsi-disk.c | 5 +++--
> 1 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/hw/scsi-disk.c b/hw/scsi-disk.c
> index 62f538f..f3c75b3 100644
> --- a/hw/scsi-disk.c
> +++ b/hw/scsi-disk.c
> @@ -291,7 +291,7 @@ static void scsi_write_complete(void * opaque, int ret)
> scsi_req_complete(&r->req, GOOD);
> } else {
> scsi_init_iovec(r);
> - DPRINTF("Write complete tag=0x%x more=%d\n", r->req.tag, r->qiov.size);
> + DPRINTF("Write complete tag=0x%x more=%zd\n", r->req.tag, r->qiov.size);
> scsi_req_data(&r->req, r->qiov.size);
> }
>
> @@ -1421,7 +1421,8 @@ static int32_t scsi_send_command(SCSIRequest *req, uint8_t *buf)
> scsi_check_condition(r, SENSE_CODE(LBA_OUT_OF_RANGE));
> return 0;
> }
> - if (r->sector_count == 0 && r->iov.iov_len == 0) {
> + if (r->sector_count == 0 && r->iov.iov_len == 0 &&
> + command != SYNCHRONIZE_CACHE) {
> scsi_req_complete(&r->req, GOOD);
> }
> len = r->sector_count * 512 + r->iov.iov_len;
/me is confused :)
case SYNCHRONIZE_CACHE:
/* The request is used as the AIO opaque value, so add a ref. */
scsi_req_ref(&r->req);
bdrv_acct_start(s->qdev.conf.bs, &r->acct, 0, BDRV_ACCT_FLUSH);
r->req.aiocb = bdrv_aio_flush(s->qdev.conf.bs, scsi_flush_complete, r);
if (r->req.aiocb == NULL) {
scsi_flush_complete(r, -EIO);
}
return 0;
Paolo