From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 11 Oct 2010 11:37:44 +0100
From: Stefan Hajnoczi
Message-ID: <20101011103743.GB4078@stefan-thinkpad.transitives.com>
References: <1286552914-27014-1-git-send-email-stefanha@linux.vnet.ibm.com> <1286552914-27014-7-git-send-email-stefanha@linux.vnet.ibm.com> <4CB182F7.8090100@redhat.com>
In-Reply-To: <4CB182F7.8090100@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Subject: [Qemu-devel] Re: [PATCH v2 6/7] qed: Read/write support
List-Id: qemu-devel.nongnu.org
To: Avi Kivity
Cc: Kevin Wolf, Anthony Liguori, qemu-devel@nongnu.org, Christoph Hellwig

On Sun, Oct 10, 2010 at 11:10:15AM +0200, Avi Kivity wrote:
> On 10/08/2010 05:48 PM, Stefan Hajnoczi
> wrote:
> > This patch implements the read/write state machine.  Operations are
> > fully asynchronous and multiple operations may be active at any time.
> >
> > Allocating writes lock tables to ensure metadata updates do not
> > interfere with each other.  If two allocating writes need to update the
> > same L2 table they will run sequentially.  If two allocating writes need
> > to update different L2 tables they will run in parallel.
>
> Shouldn't there be a flush between an allocating write and an L2
> update?  Otherwise a reuse of a cluster can move logical sectors
> from one place to another, causing a data disclosure.
>
> Can be skipped if the new cluster is beyond the physical image size.

Currently clusters are never reused and new clusters are always beyond
the physical image size.  The only exception I can think of is when the
image file size is not a multiple of the cluster size and we round down
to the start of the cluster.  In an implementation that supports TRIM or
otherwise reuses clusters this flush would be a cost.

> > +
> > +/*
> > + * Table locking works as follows:
> > + *
> > + * Reads and non-allocating writes do not acquire locks because they do not
> > + * modify tables and only see committed L2 cache entries.
>
> What about a non-allocating write that follows an allocating write?
>
> 1 Guest writes to sector 0
> 2 Host reads backing image (or supplies zeros), sectors 1-127
> 3 Host writes sectors 0-127
> 4 Guest writes sector 1
> 5 Host writes sector 1
>
> There needs to be a barrier that prevents the host and the disk from
> reordering operations 3 and 5, or guest operation 4 is lost.  As far
> as the guest is concerned no overlapping writes were issued, so it
> isn't required to provide any barriers.
>
> (based on the comment only, haven't read the code)

There is no barrier between operations 3 and 5.  However, operation 5
only starts after operation 3 has completed because of table locking.
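The per-L2-table serialization described above can be sketched roughly
like this (hypothetical names, not the actual QED code -- just the idea
that allocating writes hitting the same L2 table serialize while writes
to different tables run in parallel):

```python
import threading

# Sketch only: one lock per L2 table, created up front so lock lookup
# itself is race-free.  An allocating write holds its table's lock for
# the whole metadata update.
l2_locks = {0: threading.Lock(), 1: threading.Lock()}
log = []
log_lock = threading.Lock()

def allocating_write(l2_index, cluster):
    # Writes to the same l2_index serialize here; writes to a
    # different l2_index are not blocked by this lock.
    with l2_locks[l2_index]:
        with log_lock:
            log.append((l2_index, cluster))

threads = [threading.Thread(target=allocating_write, args=a)
           for a in [(0, "A"), (0, "B"), (1, "C")]]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every write completed exactly once, regardless of interleaving.
print(sorted(log))
```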
It is my understanding that *independent* requests may be reordered, but
two writes to the *same* sector will not be reordered if write A
completes before write B is issued.

Imagine a test program that uses pwrite() to rewrite a counter many
times on disk.  When the program finishes it prints the counter
variable's last value.  This scenario is like operations 3 and 5 above.
If we read the counter back from disk it will be the final value, not
some intermediate value.  The writes will not be reordered.

Stefan
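P.S. For concreteness, the counter test described above could look like
the following sketch (assuming POSIX pwrite/pread semantics via
Python's os module; file name and counter format are arbitrary):

```python
import os
import struct
import tempfile

# Rewrite a 4-byte counter at offset 0 many times.  Each pwrite()
# completes before the next is issued, so the final on-disk value must
# be the last one written: the kernel may reorder independent requests,
# but not dependent ones like these.
fd, path = tempfile.mkstemp()
try:
    for counter in range(100):
        os.pwrite(fd, struct.pack("<I", counter), 0)
    os.fsync(fd)
    last = struct.unpack("<I", os.pread(fd, 4, 0))[0]
    print(last)  # 99, the final value, never an intermediate one
finally:
    os.close(fd)
    os.remove(path)
```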