Date: Wed, 14 Jan 2009 19:06:49 +0100
From: Andrea Arcangeli
Subject: [Qemu-devel] [PATCH] ide_dma_cancel will result in partial DMA transfer
To: qemu-devel@nongnu.org
Message-ID: <20090114180648.GP9779@random.random>
In-Reply-To: <20080901104356.GD25764@duo.random>
References: <20080829135249.GI24884@duo.random> <20080901104356.GD25764@duo.random>
List-Id: qemu-devel.nongnu.org

The reason for not actually canceling the I/O is that with virtualization
and lots of VMs running, a guest filesystem may mistake an overload of the
host for an IDE timeout. So rather than canceling the I/O, it's safer to
wait for I/O completion and simulate that the I/O completed just before
the guest requested the cancellation. This way, if ntfs or an application
writes data without checking the -EIO return value and assumes the write
succeeded, it's less likely to run into trouble. Similar issues apply to
reads.

Furthermore, because the DMA operation is split into many synchronous
aio_read/write calls when there is more than one entry in the SG table,
without this patch the DMA would be cancelled in the middle of the
transfer; we have no idea whether that can happen on real hardware or not.
Overall this seems a great risk for zero gain.

This approach is surely safer than the previous code, given that we can't
expect all guest filesystem code out there to check for errors and replay
the DMA if it completed only partially, and given that a timeout would
never materialize on a real hard disk unless there are defective blocks
(and defective blocks are practically only an issue for reads, never for
writes, on any recent hardware, as writing to the blocks is the way to fix
them) or the hard disk breaks as a whole.

Signed-off-by: Andrea Arcangeli
---
This is a resubmit of an old patch in my queue. I wonder if it'll ever be
merged. I think it's obviously safer (especially once we have
preadv/pwritev-driven I/O), even if it's effectively a no-op.
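To make the failure mode concrete, here is a minimal standalone sketch of
why a cancel can hit mid-transfer when every scatter/gather entry becomes
its own request. The struct and helper names are illustrative only, not the
actual ide.c code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch, not the real ide.c code: the emulation walks the
 * scatter/gather (PRD) table and turns each entry into its own request,
 * so an abrupt cancel between entries leaves earlier entries on the disk
 * and later ones missing. */
struct sg_entry { uint64_t addr; uint32_t len; };

/* stand-in for the per-entry submission done by the real code */
static void submit_one_request(const struct sg_entry *e, int64_t sector)
{
    printf("request: guest addr 0x%" PRIx64 ", %" PRIu32 " bytes at sector %" PRId64 "\n",
           e->addr, e->len, sector);
}

static void dma_walk_sg(const struct sg_entry *sg, int nents, int64_t sector,
                        const volatile int *cancelled)
{
    for (int i = 0; i < nents; i++) {
        if (*cancelled) {
            /* aborting here is exactly the partial transfer the patch
             * avoids: entries 0..i-1 already reached storage, the rest
             * never will */
            return;
        }
        submit_one_request(&sg[i], sector);
        sector += sg[i].len >> 9;   /* 512-byte sectors */
    }
}
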
Index: hw/ide.c
===================================================================
--- hw/ide.c	(revision 6296)
+++ hw/ide.c	(working copy)
@@ -2878,8 +2878,28 @@
     printf("%s: 0x%08x\n", __func__, val);
 #endif
     if (!(val & BM_CMD_START)) {
-        /* XXX: do it better */
-        ide_dma_cancel(bm);
+        /*
+         * We can't cancel Scatter Gather DMA in the middle of the
+         * operation or a partial (not full) DMA transfer would reach
+         * the storage, so we wait for completion instead (we behave
+         * as if the DMA was completed by the time the guest tried to
+         * cancel DMA with bmdma_cmd_writeb with BM_CMD_START not
+         * set).
+         *
+         * In the future we'll be able to safely cancel the I/O if the
+         * whole DMA operation is submitted to the disk with a single
+         * aio operation in the form of aio_readv/aio_writev
+         * (supported by the Linux kernel AIO but not by the glibc
+         * pthread aio lib).
+         */
+        if (bm->aiocb) {
+            QEMU_WARN("qemu_aio_flush called");
+            qemu_aio_flush();
+            if (bm->aiocb)
+                QEMU_WARN("aiocb still pending");
+            if (bm->status & BM_STATUS_DMAING)
+                QEMU_WARN("BM_STATUS_DMAING still pending");
+        }
         bm->cmd = val & 0x09;
     } else {
         if (!(bm->status & BM_STATUS_DMAING)) {
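
For reference, a hedged sketch of the direction hinted at in the comment
above: build one iovec covering the whole SG list and issue a single
vectored request (plain preadv here; the struct and helper names are made
up for illustration), so there is one request to wait for, or eventually to
cancel, instead of one request per PRD entry:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

struct sg_entry { void *host_addr; uint32_t len; };

/* Sketch only: one iovec per SG entry, one preadv() for the whole
 * transfer.  The real fix would go through an aio_readv-style interface,
 * but the submission pattern is the same. */
static ssize_t dma_read_whole_sg(int fd, const struct sg_entry *sg, int nents,
                                 int64_t sector)
{
    struct iovec *iov = calloc(nents, sizeof(*iov));
    ssize_t ret;
    int i;

    if (!iov)
        return -1;
    for (i = 0; i < nents; i++) {
        iov[i].iov_base = sg[i].host_addr;
        iov[i].iov_len  = sg[i].len;
    }
    ret = preadv(fd, iov, nents, sector * 512LL);   /* may still return short */
    free(iov);
    return ret;
}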