From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4B4AE1BD.4000400@redhat.com>
Date: Mon, 11 Jan 2010 10:30:53 +0200
From: Avi Kivity
MIME-Version: 1.0
References: <1263195647.2005.44.camel@localhost>
In-Reply-To: <1263195647.2005.44.camel@localhost>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Subject: [Qemu-devel] Re: [RFC][PATCH] performance improvement for windows guests, running on top of virtio block device
List-Id: qemu-devel.nongnu.org
To: Vadim Rozenfeld
Cc: Dor Laor, qemu-devel

On 01/11/2010 09:40 AM, Vadim Rozenfeld wrote:
> The following patch allows us to improve Windows virtio
> block driver performance on small-size requests.
> Additionally, it reduces CPU usage on write I/Os.

Note, this is not an improvement for Windows specifically.
> diff --git a/hw/virtio-blk.c b/hw/virtio-blk.c
> index a2f0639..0e3a8d5 100644
> --- a/hw/virtio-blk.c
> +++ b/hw/virtio-blk.c
> @@ -28,6 +28,7 @@ typedef struct VirtIOBlock
>      char serial_str[BLOCK_SERIAL_STRLEN + 1];
>      QEMUBH *bh;
>      size_t config_size;
> +    unsigned int pending;
>  } VirtIOBlock;
>
>  static VirtIOBlock *to_virtio_blk(VirtIODevice *vdev)
> @@ -87,6 +88,8 @@ typedef struct VirtIOBlockReq
>      struct VirtIOBlockReq *next;
>  } VirtIOBlockReq;
>
> +static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq);
> +
>  static void virtio_blk_req_complete(VirtIOBlockReq *req, int status)
>  {
>      VirtIOBlock *s = req->dev;
> @@ -95,6 +98,11 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, int status)
>      virtqueue_push(s->vq, &req->elem, req->qiov.size + sizeof(*req->in));
>      virtio_notify(&s->vdev, s->vq);
>
> +    if(--s->pending == 0) {
> +        virtio_queue_set_notification(s->vq, 1);
> +        virtio_blk_handle_output(&s->vdev, s->vq);
> +    }
> +

Coding style: space after if. See the CODING_STYLE file.

> @@ -340,6 +348,9 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
>          exit(1);
>      }
>
> +    if(++s->pending == 1)
> +        virtio_queue_set_notification(s->vq, 0);
> +
>      req->out = (void *)req->elem.out_sg[0].iov_base;
>      req->in = (void *)req->elem.in_sg[req->elem.in_num - 1].iov_base;

Coding style: space after if, braces after if.

Your patch is word-wrapped; please resend it correctly. Easiest is using git send-email.

The patch has the potential to reduce performance on volumes with multiple spindles. Consider two processes issuing sequential reads into a RAID array: with this patch, the reads will be executed sequentially rather than in parallel. So I think a follow-on patch making the minimum depth a parameter (set by the guest? the host?) would be helpful.

-- 
error compiling committee.c: too many arguments to function