From: Kevin Wolf
Date: Tue, 08 Jan 2013 12:19:42 +0100
Subject: Re: [Qemu-devel] [PATCH] sheepdog: implement direct write semantics
To: Liu Yuan
Cc: Stefan Hajnoczi, qemu-devel@nongnu.org, MORITA Kazutaka, Paolo Bonzini
Message-ID: <50EC00CE.80205@redhat.com>
In-Reply-To: <50EBFE20.9010100@gmail.com>

On 08.01.2013 12:08, Liu Yuan wrote:
> On 01/08/2013 06:51 PM, Kevin Wolf wrote:
>> On 08.01.2013 11:39, Liu Yuan wrote:
>>> This also explains why I saw a regression in write performance: old
>>> QEMU could issue multiple write requests in one go, but now the
>>> requests are sent one by one (even with cache=writeback set), which
>>> makes Sheepdog write performance drop a lot. Is it possible to
>>> issue multiple requests in one go as old QEMU did?
>>
>> Huh? We didn't change anything in that respect, or at least nothing
>> that I'm aware of. qemu has always had only a single-request
>> bdrv_co_writev, so if anything, that batching must have happened
>> inside the Sheepdog code? Do you know what makes it not batch
>> requests any more?
>>
>
> QEMU v1.1.x works well with batched write requests. The Sheepdog
> block driver doesn't do any batching trick as far as I know; it just
> sends requests as they are fed to it. There are no noticeable changes
> between v1.1.x and the current master with regard to sheepdog.c.
>
> To detail the different behaviour, as seen from the Sheepdog daemon
> which receives the requests from QEMU:
> old: can receive many requests at virtually the same time and handle
> them concurrently
> now: only receives one request, handles it, replies, and gets the
> next one.
>
> So I think the problem is that current QEMU waits for the write
> response before sending another request.

I can't see a reason why it would do that. Can you bisect this?
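
For reference, this is the shape of the only write path the block
layer has ever offered a driver; a minimal sketch, not the actual
sheepdog.c code. BlockDriverState, QEMUIOVector, coroutine_fn and
qemu_coroutine_yield() are real block layer interfaces; ExampleReq,
example_send_request() and the reply handler that fills req.ret and
re-enters the coroutine are hypothetical names:

/* Minimal sketch of a driver write callback: one request per call,
 * invoked from a per-request coroutine.  Concurrency comes from the
 * block layer running many such coroutines, not from the driver
 * batching. */
static int coroutine_fn example_co_writev(BlockDriverState *bs,
                                          int64_t sector_num,
                                          int nb_sectors,
                                          QEMUIOVector *qiov)
{
    ExampleReq req = { .ret = -EINPROGRESS };

    /* Send the request to the server without waiting for the
     * reply... */
    example_send_request(bs, &req, sector_num, nb_sectors, qiov);

    /* ...and yield.  Other request coroutines can send their own
     * writes while this one is in flight; requests only serialize if
     * the driver blocks here instead of yielding. */
    while (req.ret == -EINPROGRESS) {
        qemu_coroutine_yield();
    }

    return req.ret;
}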

>>> It seems it is hard to restore the old semantics of the cache
>>> flags with the new design of the QEMU block layer. So would you
>>> accept adding a 'flags' field to BlockDriverState which carries the
>>> user's cache flags, to keep backward compatibility?
>>
>> No, going back to the old behaviour would break guest-toggled WCE.
>>
>
> Guest-toggled WCE only works with IDE, and it seems that virtio-blk
> doesn't support it, no? And I think there is a huge number of
> virtio-blk users.

It works with virtio-blk and SCSI as well.

> I didn't mean to break WCE. What I meant is to allow backward
> compatibility. For example, the Sheepdog driver could make use of
> this dedicated cache flags field to implement its own cache control
> without affecting other drivers at all.

How would you do it? With a WCE that changes during runtime, the idea
of a flag that is passed to bdrv_open() and stays valid as long as the
BlockDriverState exists doesn't match reality any more.

Kevin
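
To illustrate the runtime-WCE point: the guest can flip its disk's
write cache at any time, and that reaches the block layer through
bdrv_set_enable_write_cache(). A rough sketch, where the two
bdrv_*_write_cache() calls are the real block.h interface and
guest_toggles_wce() is a hypothetical name for the device-side caller:

/* Rough sketch of the path a guest-initiated cache toggle takes.  Any
 * cache flag a driver latched at bdrv_open() time is stale after the
 * first such call. */
static void guest_toggles_wce(BlockDriverState *bs, bool wce)
{
    /* Invoked when the guest flips the WCE bit of an IDE/SCSI disk
     * (or the virtio-blk equivalent), possibly long after bdrv_open()
     * returned. */
    bdrv_set_enable_write_cache(bs, wce);

    if (!bdrv_enable_write_cache(bs)) {
        /* Writethrough from now on: the block layer flushes after
         * each completed write, whatever cache= option the image was
         * opened with. */
    }
}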