Date: Mon, 1 Sep 2008 12:43:56 +0200
From: Andrea Arcangeli
Message-ID: <20080901104356.GD25764@duo.random>
References: <20080829135249.GI24884@duo.random>
In-Reply-To: <20080829135249.GI24884@duo.random>
Subject: [Qemu-devel] [PATCH 1/2] ide_dma_cancel will result in partial DMA transfer
Reply-To: qemu-devel@nongnu.org
To: qemu-devel@nongnu.org

The reason for not actually canceling the I/O is that, with virtualization and lots of VMs running, a guest fs may mistake an overload of the host for an IDE timeout. So rather than canceling the I/O, it's safer to wait for I/O completion and simulate that the I/O completed just before the cancellation was requested by the guest. This way, if ntfs or an app writes data without checking the -EIO retval and thinks the write succeeded, it's less likely to run into trouble. Similar issues apply to reads.
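As a rough sketch of the idea (this is not QEMU code; the struct and function names below are made up for illustration), "draining" the pending request so it runs to completion, instead of dropping it mid-way, means the guest can never observe a partial transfer:

```c
#include <assert.h>
#include <stdio.h>

/*
 * Miniature model of "drain instead of cancel".  A pending host aio
 * request is tracked by how many bytes it has transferred so far.
 * The names aio_request, drain, and drain_instead_of_cancel are
 * hypothetical, not QEMU's API.
 */
struct aio_request {
    int in_flight;    /* nonzero while host I/O is still pending */
    int bytes_done;   /* bytes transferred so far */
    int bytes_total;  /* full length of the transfer */
};

/*
 * Run the pending request to completion, the way waiting on
 * outstanding host aio (e.g. a flush) would before returning.
 */
static void drain(struct aio_request *req)
{
    req->bytes_done = req->bytes_total;
    req->in_flight = 0;
}

/*
 * Guest asked to stop DMA: instead of aborting the request mid-way
 * and leaving a partial transfer, drain it first.  After this call
 * the request always looks fully completed.
 */
static int drain_instead_of_cancel(struct aio_request *req)
{
    if (req->in_flight)
        drain(req);
    return req->bytes_done;
}
```

With a request that is halfway done, `drain_instead_of_cancel()` reports the full length rather than the partial count, which is the bug-compatibility the patch description argues for.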
Furthermore, because the DMA operation is split into many synchronous aio_read/write calls when there's more than one entry in the SG table, without this patch the DMA would be cancelled in the middle, and we have no idea whether that can happen on real hardware too or not. Overall this seems like a great deal of risk of generating data corruption, for absolutely _zero_ gain at runtime. So regardless of whether this turns out to fix the fs corruption (so far so good: no corruption reproduced, but it takes several days of heavy workload to reproduce, so nothing is sure yet), I think it's a good idea to apply. This approach is surely safer than the previous code, given that we can't expect all guest fs code out there to check for errors and replay the DMA if it completed partially, and given that a timeout would never materialize on a real hard disk unless there are defective blocks (and defective blocks are practically only an issue for reads, never for writes, on any recent hardware, since writing to a block is the way to fix it) or the disk breaks as a whole.

Signed-off-by: Andrea Arcangeli

---

This is an update of the DMA cancellation fix.

Index: hw/ide.c
===================================================================
--- hw/ide.c	(revision 5119)
+++ hw/ide.c	(working copy)
@@ -2894,8 +2894,24 @@
     printf("%s: 0x%08x\n", __func__, val);
 #endif
     if (!(val & BM_CMD_START)) {
-        /* XXX: do it better */
-        ide_dma_cancel(bm);
+        /*
+         * If the guest tries to cancel the DMA, we behave as if the
+         * DMA had completed by the time the guest tried to cancel it
+         * with bmdma_cmd_writeb with BM_CMD_START not set. This is
+         * safer than cancelling whatever partial DMA is in flight,
+         * because it has a chance to be bug-compatible if a guest fs
+         * isn't checking for I/O errors triggered by guest I/O
+         * timeouts when the host is overloaded.
+         */
+        if (bm->aiocb) {
+            qemu_aio_flush();
+#ifdef DEBUG_IDE
+            if (bm->aiocb)
+                printf("aiocb still pending\n");
+            if (bm->status & BM_STATUS_DMAING)
+                printf("BM_STATUS_DMAING still pending\n");
+#endif
+        }
         bm->cmd = val & 0x09;
     } else {
         if (!(bm->status & BM_STATUS_DMAING)) {