From: Avi Kivity
Date: Fri, 27 Mar 2009 12:52:15 +0300
Subject: Re: [Qemu-devel] [PATCH] honor IDE_DMA_BUF_SECTORS
To: qemu-devel@nongnu.org
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
Message-ID: <49CCA1CF.4000003@redhat.com>
In-Reply-To: <20090326231801.GE5458@const.famille.thibault.fr>
References: <49CB599D.6000701@eu.citrix.com> <49CB5FA0.10101@redhat.com>
 <49CB6AF7.3080604@eu.citrix.com> <49CB70AC.3060900@redhat.com>
 <20090326124719.GL5642@const.bordeaux.inria.fr> <49CB7C0F.7010507@redhat.com>
 <20090326153021.GP5642@const.bordeaux.inria.fr> <49CBCA3C.2030909@redhat.com>
 <20090326184814.GC5458@const.famille.thibault.fr> <49CBDA22.3070600@redhat.com>
 <20090326231801.GE5458@const.famille.thibault.fr>

Samuel Thibault wrote:
> Ah, I thought you understood that the posix driver has the same kind of
> limitation

It's not the same limitation.  The posix driver has no limit on DMA size;
it will happily transfer a gigabyte of data if you ask it to.

> (and qemu is actually _bugged_ in that regard).
>

It has a bug in that it does not correctly interpret the return value of
pread()/pwrite().  It's a minor bug, since no system supported by qemu
will actually return a short read or write (I think), and in any case we
hope disk errors are rare.  Nevertheless it should be fixed (it's an easy
fix too).

However, implementing DMA limits as you propose (IDE_DMA_BUF_SECTORS)
will not fix the bug, only reduce performance.

>>> I'm here just pointing out that the problem is not
>>> _only_ in the xen-specific driver, but also in the posix driver, on any
>>> OS that doesn't necessarily do all the work the caller asked for (which
>>> is _allowed_ by POSIX).
>>>
>> But that's not limited DMA (or at least, not limited up-front). And
>> it's easily corrected, place a while loop around preadv/pwritev, no need
>> to split a request a priori somewhere up the stack.
>>
> Sure, and I could do the same in the block-vbd driver, thus then my
> original remark "it should be centralized in the block layer instead of
> placing the burden on all block format drivers". Just to make sure: I'm
> _not_ saying that should be done in the DMA code. I said it should be
> done in the block layer, shared by all block drivers.
>

A generic fix will have to issue a new aio request.  block-raw-posix
need not do that, just a while loop.
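A minimal sketch of that while loop, assuming a plain POSIX file
descriptor (full_pread()/full_pwrite() are illustrative names here, not
the actual block-raw-posix functions):

/* Minimal sketch, not the actual block-raw-posix code: retry loop that
 * handles short reads/writes and EINTR for a plain POSIX fd. */
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

static ssize_t full_pread(int fd, void *buf, size_t count, off_t offset)
{
    size_t done = 0;

    while (done < count) {
        ssize_t ret = pread(fd, (char *)buf + done, count - done,
                            offset + done);
        if (ret < 0) {
            if (errno == EINTR) {
                continue;           /* interrupted, retry */
            }
            return -1;              /* real I/O error */
        }
        if (ret == 0) {
            break;                  /* EOF: return the short count */
        }
        done += ret;                /* short read: keep going */
    }
    return done;
}

static ssize_t full_pwrite(int fd, const void *buf, size_t count, off_t offset)
{
    size_t done = 0;

    while (done < count) {
        ssize_t ret = pwrite(fd, (const char *)buf + done, count - done,
                             offset + done);
        if (ret < 0) {
            if (errno == EINTR) {
                continue;
            }
            return -1;
        }
        if (ret == 0) {
            break;                  /* nothing written (e.g. device full) */
        }
        done += ret;                /* short write: keep going */
    }
    return done;
}

The aio paths can't block in a loop like this, which is why a generic fix
in the block layer has to reissue a request instead.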
>> And it wouldn't be right for block-vbd - you should split your requests
>> as late as possible, IMO.
>>
>
> Why making it "late"?  Exposing the lower limits to let upper layers
> decide how they should manage fragmentation usually gets better
> performance.  (Note that in my case there is no system involved, so it's
> really _not_ costly to do the fragmentation on the qemu side).
>

If ring entries can be more than a page (if the request is contiguous),
then the limit can be expanded.  In other words, it's a worst-case limit,
not a hard limit.  Exposing the worst-case limit will lead to pessimistic
choices.

That's how virtio-blk works; I don't know about xen vbd (it might not
work, due to the need to transfer grants?).

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.