From: Dor Laor <dlaor@redhat.com>
Cc: qemu-devel <qemu-devel@nongnu.org>, Avi Kivity <avi@redhat.com>,
	Vadim Rozenfeld <vrozenfe@redhat.com>
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for windows guests, running on top of virtio block device
Date: Mon, 11 Jan 2010 11:19:21 +0200	[thread overview]
Message-ID: <4B4AED19.3060401@redhat.com> (raw)
In-Reply-To: <4B4AE95D.7080305@redhat.com>

On 01/11/2010 11:03 AM, Dor Laor wrote:
> On 01/11/2010 10:30 AM, Avi Kivity wrote:
>> On 01/11/2010 09:40 AM, Vadim Rozenfeld wrote:
>>> The following patch allows us to improve Windows virtio
>>> block driver performance on small-sized requests.
>>> Additionally, it reduces cpu usage on write IOs.
>>>
>>
>> Note, this is not an improvement for Windows specifically.
>>
>>> diff --git a/hw/virtio-blk.c b/hw/virtio-blk.c
>>> index a2f0639..0e3a8d5 100644
>>> --- a/hw/virtio-blk.c
>>> +++ b/hw/virtio-blk.c
>>> @@ -28,6 +28,7 @@ typedef struct VirtIOBlock
>>> char serial_str[BLOCK_SERIAL_STRLEN + 1];
>>> QEMUBH *bh;
>>> size_t config_size;
>>> + unsigned int pending;
>>> } VirtIOBlock;
>>>
>>> static VirtIOBlock *to_virtio_blk(VirtIODevice *vdev)
>>> @@ -87,6 +88,8 @@ typedef struct VirtIOBlockReq
>>> struct VirtIOBlockReq *next;
>>> } VirtIOBlockReq;
>>>
>>> +static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue
>>> *vq);
>>> +
>>> static void virtio_blk_req_complete(VirtIOBlockReq *req, int status)
>>> {
>>> VirtIOBlock *s = req->dev;
>>> @@ -95,6 +98,11 @@ static void virtio_blk_req_complete(VirtIOBlockReq
>>> *req, int status)
>>> virtqueue_push(s->vq,&req->elem, req->qiov.size +
>>> sizeof(*req->in));
>>> virtio_notify(&s->vdev, s->vq);
>>>
>>> + if(--s->pending == 0) {
>>> + virtio_queue_set_notification(s->vq, 1);
>>> + virtio_blk_handle_output(&s->vdev, s->vq);
>
> The above line should be moved out of the 'if'.
>
> Attached are results with RHEL 5.4 (qemu 0.11) for a win2k8 32-bit guest.
> Note the drastic reduction in cpu consumption.

The attachment did not survive the email server, so you'll have to trust 
me that cpu consumption dropped from 65% to 40% for reads and from 
80% to 30% for writes.
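
To spell out the suggestion above: with that call moved out of the 'if',
the completion path would look roughly like this (untested sketch, only
the lines relevant to the change are shown):

    static void virtio_blk_req_complete(VirtIOBlockReq *req, int status)
    {
        VirtIOBlock *s = req->dev;

        /* ... status handling unchanged ... */
        virtqueue_push(s->vq, &req->elem, req->qiov.size + sizeof(*req->in));
        virtio_notify(&s->vdev, s->vq);

        /* Re-enable guest->host notifications once the last in-flight
         * request completes ... */
        if (--s->pending == 0) {
            virtio_queue_set_notification(s->vq, 1);
        }
        /* ... but always drain requests the guest queued while
         * notifications were disabled, not only when pending hits 0. */
        virtio_blk_handle_output(&s->vdev, s->vq);
    }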

>
>>> + }
>>> +
>>
>> Coding style: space after if. See the CODING_STYLE file.
>>
>>> @@ -340,6 +348,9 @@ static void virtio_blk_handle_output(VirtIODevice
>>> *vdev, VirtQueue *vq)
>>> exit(1);
>>> }
>>>
>>> + if(++s->pending == 1)
>>> + virtio_queue_set_notification(s->vq, 0);
>>> +
>>> req->out = (void *)req->elem.out_sg[0].iov_base;
>>> req->in = (void *)req->elem.in_sg[req->elem.in_num -
>>> 1].iov_base;
>>>
>>
>> Coding style: space after if, braces after if.
>>
>> Your patch is word wrapped, please send it correctly. Easiest using git
>> send-email.
>>
>> The patch has potential to reduce performance on volumes with multiple
>> spindles. Consider two processes issuing sequential reads into a RAID
>> array. With this patch, the reads will be executed sequentially rather
>> than in parallel, so I think a follow-on patch to make the minimum depth
>> a parameter (set by the guest? the host?) would be helpful.
>>
>

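Regarding the minimum-depth idea above: as a strawman, the hard-coded
threshold of 1 could become a field on VirtIOBlock ('min_pending' below
is a made-up name, and whether it is set by the guest, the host or a
device property is left open):

    /* submission path: stop taking guest notifications only once enough
     * requests are already in flight to keep all spindles busy */
    if (++s->pending >= s->min_pending) {
        virtio_queue_set_notification(s->vq, 0);
    }

    /* completion path: resume notifications as soon as we drop back
     * below the threshold, and drain whatever was queued meanwhile */
    if (--s->pending < s->min_pending) {
        virtio_queue_set_notification(s->vq, 1);
    }
    virtio_blk_handle_output(&s->vdev, s->vq);

With min_pending == 1 this behaves like the patch as posted (plus the
fix discussed above), modulo a few redundant set_notification calls.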