From: Zang Hongyong <zanghongyong@huawei.com>
To: "Michael S. Tsirkin" <mst@redhat.com>, <jasowang@redhat.com>
Cc: Qinchuanyu <qinchuanyu@huawei.com>,
"rusty@rustcorp.com.au" <rusty@rustcorp.com.au>,
"nab@linux-iscsi.org" <nab@linux-iscsi.org>,
"(netdev@vger.kernel.org)" <netdev@vger.kernel.org>,
"(kvm@vger.kernel.org)" <kvm@vger.kernel.org>,
"Zhangjie (HZ)" <zhang.zhangjie@huawei.com>
Subject: Re: provide vhost thread per virtqueue for forwarding scenario
Date: Wed, 22 May 2013 17:59:03 +0800
Message-ID: <519C96E7.2030704@huawei.com>
In-Reply-To: <20130520074300.GA27848@redhat.com>
On 2013/5/20 15:43, Michael S. Tsirkin wrote:
> On Mon, May 20, 2013 at 02:11:19AM +0000, Qinchuanyu wrote:
>> A vhost thread provides both tx and rx service for virtio-net.
>> In forwarding scenarios, tx and rx share the same vhost thread, so throughput is limited by that single thread.
>>
>> So I wrote a patch that provides a vhost thread per virtqueue, rather than per vhost_net.
>>
>> Of course, multi-queue virtio-net is the final solution, but it requires a new virtio-net driver in the guest.
>> If you have to support guests such as SUSE 10/11 or Red Hat 5.x and want to improve forwarding throughput,
>> a vhost thread per queue seems to be the only option.
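(For reference, a minimal sketch of the per-virtqueue idea, assuming the
kthread-based vhost worker model; vhost_vq_worker(),
vhost_dev_start_workers() and the vq->worker field are illustrative
names, not the real vhost layout:)

    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>
    #include "vhost.h"

    /* Illustrative per-vq worker loop (would service this vq only). */
    static int vhost_vq_worker(void *data);

    static int vhost_dev_start_workers(struct vhost_dev *dev)
    {
            int i;

            for (i = 0; i < dev->nvqs; i++) {
                    struct vhost_virtqueue *vq = &dev->vqs[i];
                    struct task_struct *t;

                    /* One kthread per virtqueue instead of one per
                     * vhost_net device. */
                    t = kthread_create(vhost_vq_worker, vq,
                                       "vhost-%d-vq%d", current->pid, i);
                    if (IS_ERR(t))
                            return PTR_ERR(t);
                    vq->worker = t;         /* illustrative field */
                    wake_up_process(t);
            }
            return 0;
    }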
> Why is that? If multi-queue works well for you, just update the drivers in
> the guests that you care about. Backporting the guest driver is not so hard.
>
> In my testing, performance of thread per vq varies: some workloads might
> gain throughput but you get more IPIs and more scheduling overhead, so
> you waste more host CPU per byte. As you create more VMs, this stops
> being a win.
>
>> I tested with kernel 3.0.27 and qemu-1.4.0, with a SUSE 11 SP2 guest; two vhost threads provided
>> double the tx/rx forwarding throughput of a single vhost thread.
>> vhost_blk has only one virtqueue, so it still uses a single vhost thread, unchanged.
>>
>> Is there something wrong with this solution? If not, I will post the patch later.
>>
>> Best regards
>> King
> Yes, I don't think we want to create threads even more aggressively
> in all cases. I'm worried about scalability as it is.
> I think we should explore a flexible approach: use a thread pool
> (for example, a workqueue) to share threads between virtqueues, and
> switch to a separate thread only if there's a free CPU and the existing
> threads are busy. Hopefully we can share threads between vhost instances too.
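(Such a shared pool could be built on the kernel workqueue API; a rough
sketch, where vhost_wq, vhost_work_queue_shared() and the ->wq_work
member are illustrative, not existing vhost code:)

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/workqueue.h>
    #include "vhost.h"

    static struct workqueue_struct *vhost_wq;

    static int __init vhost_wq_init(void)
    {
            /* WQ_UNBOUND lets the workqueue core run work items on any
             * idle CPU instead of the submitting one; max_active 0
             * picks the default concurrency limit. */
            vhost_wq = alloc_workqueue("vhost", WQ_UNBOUND, 0);
            return vhost_wq ? 0 : -ENOMEM;
    }

    static void vhost_work_queue_shared(struct vhost_work *work)
    {
            /* Hand the work item to the shared pool instead of waking
             * a dedicated vhost kthread.  ->wq_work would be a
             * struct work_struct embedded in struct vhost_work. */
            queue_work(vhost_wq, &work->wq_work);
    }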
On the Xen platform, the network backend PV driver model has already
evolved toward this kind of shared pool: netbacks from all DomUs share
a thread pool, and the number of threads equals the number of CPU cores.
Is there any plan for the KVM platform?
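(For comparison, a pool in that netback style, one worker per online
CPU shared by all instances, might be set up roughly as below;
vhost_pool_worker() and vhost_pool[] are illustrative:)

    #include <linux/cpumask.h>
    #include <linux/err.h>
    #include <linux/kthread.h>

    /* Illustrative worker loop shared by all vhost instances. */
    static int vhost_pool_worker(void *unused);

    static struct task_struct *vhost_pool[NR_CPUS];

    static int vhost_pool_init(void)
    {
            int cpu;

            for_each_online_cpu(cpu) {
                    struct task_struct *t;

                    t = kthread_create(vhost_pool_worker, NULL,
                                       "vhost-pool/%d", cpu);
                    if (IS_ERR(t))
                            return PTR_ERR(t);
                    kthread_bind(t, cpu);   /* pin one worker per core */
                    wake_up_process(t);
                    vhost_pool[cpu] = t;
            }
            return 0;
    }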
Thread overview: 7+ messages in thread
2013-05-20 2:11 provide vhost thread per virtqueue for forwarding scenario Qinchuanyu
2013-05-20 7:43 ` Michael S. Tsirkin
2013-05-20 8:16 ` Abel Gordon
2013-05-22 9:59 ` Zang Hongyong [this message]
2013-05-22 10:07 ` Jason Wang
2013-05-23 4:13 ` Rusty Russell
2013-05-22 10:10 ` Michael S. Tsirkin