netdev.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* provide vhost thread per virtqueue for forwarding scenario
@ 2013-05-20  2:11 Qinchuanyu
  2013-05-20  7:43 ` Michael S. Tsirkin
  0 siblings, 1 reply; 7+ messages in thread
From: Qinchuanyu @ 2013-05-20  2:11 UTC (permalink / raw)
  To: mst@redhat.com, rusty@rustcorp.com.au, nab@linux-iscsi.org
  Cc:  (netdev@vger.kernel.org),  (kvm@vger.kernel.org), Zanghongyong,
	Zhangjie (HZ)

Vhost thread provide both tx and rx ability for virtio-net. 
In the forwarding scenarios, tx and rx share the vhost thread, and throughput is limited by single thread.

So I did a patch for provide vhost thread per virtqueue, not per vhost_net.

Of course, multi-queue virtio-net is final solution, but it require new version of virtio-net working in guest.
If you have to work with suse10,11, redhat 5.x as guest, and want to improve the forward throughput,
using vhost thread per queue seems to be the only solution.

I did the test with kernel 3.0.27 and qemu-1.4.0, guest is suse11-sp2, and then two vhost thread provide 
double tx/rx forwarding performance than signal vhost thread. 
The virtqueue of vhost_blk is 1, so it still use one vhost thread without change.

Is there something wrong in this solution? If not, I would list patch later.

Best regards
King

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: provide vhost thread per virtqueue for forwarding scenario
  2013-05-20  2:11 provide vhost thread per virtqueue for forwarding scenario Qinchuanyu
@ 2013-05-20  7:43 ` Michael S. Tsirkin
  2013-05-20  8:16   ` Abel Gordon
  2013-05-22  9:59   ` Zang Hongyong
  0 siblings, 2 replies; 7+ messages in thread
From: Michael S. Tsirkin @ 2013-05-20  7:43 UTC (permalink / raw)
  To: Qinchuanyu
  Cc: rusty@rustcorp.com.au, nab@linux-iscsi.org,
	 (netdev@vger.kernel.org),  (kvm@vger.kernel.org), Zanghongyong,
	Zhangjie (HZ)

On Mon, May 20, 2013 at 02:11:19AM +0000, Qinchuanyu wrote:
> Vhost thread provide both tx and rx ability for virtio-net. 
> In the forwarding scenarios, tx and rx share the vhost thread, and throughput is limited by single thread.
> 
> So I did a patch for provide vhost thread per virtqueue, not per vhost_net.
> 
> Of course, multi-queue virtio-net is final solution, but it require new version of virtio-net working in guest.
> If you have to work with suse10,11, redhat 5.x as guest, and want to improve the forward throughput,
> using vhost thread per queue seems to be the only solution.

Why is it? If multi-queue works well for you, just update the drivers in
the guests that you care about. Guest driver backport is not so hard.

In my testing, performance of thread per vq varies: some workloads might
gain throughput but you get more IPIs and more scheduling overhead, so
you waste more host CPU per byte. As you create more VMs, this stops
being a win.

> 
> I did the test with kernel 3.0.27 and qemu-1.4.0, guest is suse11-sp2, and then two vhost thread provide 
> double tx/rx forwarding performance than signal vhost thread. 
> The virtqueue of vhost_blk is 1, so it still use one vhost thread without change.
> 
> Is there something wrong in this solution? If not, I would list patch later.
> 
> Best regards
> King

Yes, I don't think we want to create threads even more aggressively
in all cases. I'm worried about scalability as it is.
I think we should explore a flexible approach, use a thread pool
(for example, a wq) to share threads between virtqueues,
switch to a separate thread only if there's free CPU and existing
threads are busy. Hopefully share threads between vhost instances too.

-- 
MST

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: provide vhost thread per virtqueue for forwarding scenario
  2013-05-20  7:43 ` Michael S. Tsirkin
@ 2013-05-20  8:16   ` Abel Gordon
  2013-05-22  9:59   ` Zang Hongyong
  1 sibling, 0 replies; 7+ messages in thread
From: Abel Gordon @ 2013-05-20  8:16 UTC (permalink / raw)
  To: Qinchuanyu
  Cc: kvm-owner, nab@linux-iscsi.org,  (netdev@vger.kernel.org),
	rusty@rustcorp.com.au, Zanghongyong, Zhangjie (HZ), nyharel,
	Muli Ben-Yehuda, orit.was, Michael S. Tsirkin



"Michael S. Tsirkin" <mst@redhat.com> wrote on 20/05/2013 10:43:00 AM:

> >
> > I did the test with kernel 3.0.27 and qemu-1.4.0, guest is suse11-
> sp2, and then two vhost thread provide
> > double tx/rx forwarding performance than signal vhost thread.
> > The virtqueue of vhost_blk is 1, so it still use one vhost thread
> without change.
> >
> > Is there something wrong in this solution? If not, I would list patch
later.
> >
> > Best regards
> > King
>
> Yes, I don't think we want to create threads even more aggressively
> in all cases. I'm worried about scalability as it is.
> I think we should explore a flexible approach, use a thread pool
> (for example, a wq) to share threads between virtqueues,
> switch to a separate thread only if there's free CPU and existing
> threads are busy. Hopefully share threads between vhost instances too.
>

Qinchuanyu, you can take a look at the following tech. report
http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument


which actually shows the scalability problem Michael mentioned when
you run multiple vhost-threads.

Regards,
Abel.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: provide vhost thread per virtqueue for forwarding scenario
  2013-05-20  7:43 ` Michael S. Tsirkin
  2013-05-20  8:16   ` Abel Gordon
@ 2013-05-22  9:59   ` Zang Hongyong
  2013-05-22 10:07     ` Jason Wang
  2013-05-22 10:10     ` Michael S. Tsirkin
  1 sibling, 2 replies; 7+ messages in thread
From: Zang Hongyong @ 2013-05-22  9:59 UTC (permalink / raw)
  To: Michael S. Tsirkin, jasowang
  Cc: Qinchuanyu, rusty@rustcorp.com.au, nab@linux-iscsi.org,
	(netdev@vger.kernel.org), (kvm@vger.kernel.org), Zhangjie (HZ)

On 2013/5/20 15:43, Michael S. Tsirkin wrote:
> On Mon, May 20, 2013 at 02:11:19AM +0000, Qinchuanyu wrote:
>> Vhost thread provide both tx and rx ability for virtio-net.
>> In the forwarding scenarios, tx and rx share the vhost thread, and throughput is limited by single thread.
>>
>> So I did a patch for provide vhost thread per virtqueue, not per vhost_net.
>>
>> Of course, multi-queue virtio-net is final solution, but it require new version of virtio-net working in guest.
>> If you have to work with suse10,11, redhat 5.x as guest, and want to improve the forward throughput,
>> using vhost thread per queue seems to be the only solution.
> Why is it? If multi-queue works well for you, just update the drivers in
> the guests that you care about. Guest driver backport is not so hard.
>
> In my testing, performance of thread per vq varies: some workloads might
> gain throughput but you get more IPIs and more scheduling overhead, so
> you waste more host CPU per byte. As you create more VMs, this stops
> being a win.
>
>> I did the test with kernel 3.0.27 and qemu-1.4.0, guest is suse11-sp2, and then two vhost thread provide
>> double tx/rx forwarding performance than signal vhost thread.
>> The virtqueue of vhost_blk is 1, so it still use one vhost thread without change.
>>
>> Is there something wrong in this solution? If not, I would list patch later.
>>
>> Best regards
>> King
> Yes, I don't think we want to create threads even more aggressively
> in all cases. I'm worried about scalability as it is.
> I think we should explore a flexible approach, use a thread pool
> (for example, a wq) to share threads between virtqueues,
> switch to a separate thread only if there's free CPU and existing
> threads are busy. Hopefully share threads between vhost instances too.
On Xen platform, network backend pv driver model has evolved to this 
way. Netbacks from all DomUs share a thread pool,
and thread number eaqual to cpu core number.
Is there any plan for kvm paltform?
>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: provide vhost thread per virtqueue for forwarding scenario
  2013-05-22  9:59   ` Zang Hongyong
@ 2013-05-22 10:07     ` Jason Wang
  2013-05-23  4:13       ` Rusty Russell
  2013-05-22 10:10     ` Michael S. Tsirkin
  1 sibling, 1 reply; 7+ messages in thread
From: Jason Wang @ 2013-05-22 10:07 UTC (permalink / raw)
  To: Zang Hongyong
  Cc: Michael S. Tsirkin, Qinchuanyu, rusty@rustcorp.com.au,
	nab@linux-iscsi.org, (netdev@vger.kernel.org),
	(kvm@vger.kernel.org), Zhangjie (HZ)

On 05/22/2013 05:59 PM, Zang Hongyong wrote:
> On 2013/5/20 15:43, Michael S. Tsirkin wrote:
>> On Mon, May 20, 2013 at 02:11:19AM +0000, Qinchuanyu wrote:
>>> Vhost thread provide both tx and rx ability for virtio-net.
>>> In the forwarding scenarios, tx and rx share the vhost thread, and
>>> throughput is limited by single thread.
>>>
>>> So I did a patch for provide vhost thread per virtqueue, not per
>>> vhost_net.
>>>
>>> Of course, multi-queue virtio-net is final solution, but it require
>>> new version of virtio-net working in guest.
>>> If you have to work with suse10,11, redhat 5.x as guest, and want to
>>> improve the forward throughput,
>>> using vhost thread per queue seems to be the only solution.
>> Why is it? If multi-queue works well for you, just update the drivers in
>> the guests that you care about. Guest driver backport is not so hard.
>>
>> In my testing, performance of thread per vq varies: some workloads might
>> gain throughput but you get more IPIs and more scheduling overhead, so
>> you waste more host CPU per byte. As you create more VMs, this stops
>> being a win.
>>
>>> I did the test with kernel 3.0.27 and qemu-1.4.0, guest is
>>> suse11-sp2, and then two vhost thread provide
>>> double tx/rx forwarding performance than signal vhost thread.
>>> The virtqueue of vhost_blk is 1, so it still use one vhost thread
>>> without change.
>>>
>>> Is there something wrong in this solution? If not, I would list
>>> patch later.
>>>
>>> Best regards
>>> King
>> Yes, I don't think we want to create threads even more aggressively
>> in all cases. I'm worried about scalability as it is.
>> I think we should explore a flexible approach, use a thread pool
>> (for example, a wq) to share threads between virtqueues,
>> switch to a separate thread only if there's free CPU and existing
>> threads are busy. Hopefully share threads between vhost instances too.
> On Xen platform, network backend pv driver model has evolved to this
> way. Netbacks from all DomUs share a thread pool,
> and thread number eaqual to cpu core number.
> Is there any plan for kvm paltform?

There used to be two related RFCs for this, one is the multiple vhost
workers from Anthony another is percpu vhost thread from Shirley. You
can search the archives on netdev or kvm for the patches.
>>
>
>
> -- 
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: provide vhost thread per virtqueue for forwarding scenario
  2013-05-22  9:59   ` Zang Hongyong
  2013-05-22 10:07     ` Jason Wang
@ 2013-05-22 10:10     ` Michael S. Tsirkin
  1 sibling, 0 replies; 7+ messages in thread
From: Michael S. Tsirkin @ 2013-05-22 10:10 UTC (permalink / raw)
  To: Zang Hongyong
  Cc: jasowang, Qinchuanyu, rusty@rustcorp.com.au, nab@linux-iscsi.org,
	(netdev@vger.kernel.org), (kvm@vger.kernel.org), Zhangjie (HZ)

On Wed, May 22, 2013 at 05:59:03PM +0800, Zang Hongyong wrote:
> On 2013/5/20 15:43, Michael S. Tsirkin wrote:
> >On Mon, May 20, 2013 at 02:11:19AM +0000, Qinchuanyu wrote:
> >>Vhost thread provide both tx and rx ability for virtio-net.
> >>In the forwarding scenarios, tx and rx share the vhost thread, and throughput is limited by single thread.
> >>
> >>So I did a patch for provide vhost thread per virtqueue, not per vhost_net.
> >>
> >>Of course, multi-queue virtio-net is final solution, but it require new version of virtio-net working in guest.
> >>If you have to work with suse10,11, redhat 5.x as guest, and want to improve the forward throughput,
> >>using vhost thread per queue seems to be the only solution.
> >Why is it? If multi-queue works well for you, just update the drivers in
> >the guests that you care about. Guest driver backport is not so hard.
> >
> >In my testing, performance of thread per vq varies: some workloads might
> >gain throughput but you get more IPIs and more scheduling overhead, so
> >you waste more host CPU per byte. As you create more VMs, this stops
> >being a win.
> >
> >>I did the test with kernel 3.0.27 and qemu-1.4.0, guest is suse11-sp2, and then two vhost thread provide
> >>double tx/rx forwarding performance than signal vhost thread.
> >>The virtqueue of vhost_blk is 1, so it still use one vhost thread without change.
> >>
> >>Is there something wrong in this solution? If not, I would list patch later.
> >>
> >>Best regards
> >>King
> >Yes, I don't think we want to create threads even more aggressively
> >in all cases. I'm worried about scalability as it is.
> >I think we should explore a flexible approach, use a thread pool
> >(for example, a wq) to share threads between virtqueues,
> >switch to a separate thread only if there's free CPU and existing
> >threads are busy. Hopefully share threads between vhost instances too.
> On Xen platform, network backend pv driver model has evolved to this
> way. Netbacks from all DomUs share a thread pool,
> and thread number eaqual to cpu core number.
> Is there any plan for kvm paltform?

Shirley Ma had a patchset like this. Look it up:
'NUMA aware scheduling per vhost thread patch'

Unfortunately I don't think we can fix the thread number: if a thread
gets blocked because its accessing a swapped out memory for guest 1, we
must allow guest 2 to make progress.

But it shouldn't be too hard to fix: detect that
a thread is blocked and spawn a new one,
or pre-create a per-guest thread and bounce the work there.

> >
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: provide vhost thread per virtqueue for forwarding scenario
  2013-05-22 10:07     ` Jason Wang
@ 2013-05-23  4:13       ` Rusty Russell
  0 siblings, 0 replies; 7+ messages in thread
From: Rusty Russell @ 2013-05-23  4:13 UTC (permalink / raw)
  To: Jason Wang, Zang Hongyong
  Cc: Michael S. Tsirkin, Qinchuanyu, nab@linux-iscsi.org,
	(netdev@vger.kernel.org), (kvm@vger.kernel.org), Zhangjie (HZ)

Jason Wang <jasowang@redhat.com> writes:
> On 05/22/2013 05:59 PM, Zang Hongyong wrote:
>> On 2013/5/20 15:43, Michael S. Tsirkin wrote:
>>> On Mon, May 20, 2013 at 02:11:19AM +0000, Qinchuanyu wrote:
>>> Yes, I don't think we want to create threads even more aggressively
>>> in all cases. I'm worried about scalability as it is.
>>> I think we should explore a flexible approach, use a thread pool
>>> (for example, a wq) to share threads between virtqueues,
>>> switch to a separate thread only if there's free CPU and existing
>>> threads are busy. Hopefully share threads between vhost instances too.
>> On Xen platform, network backend pv driver model has evolved to this
>> way. Netbacks from all DomUs share a thread pool,
>> and thread number eaqual to cpu core number.
>> Is there any plan for kvm paltform?
>
> There used to be two related RFCs for this, one is the multiple vhost
> workers from Anthony another is percpu vhost thread from Shirley. You
> can search the archives on netdev or kvm for the patches.

As I've said to MST before, I think our entire model is wrong.
Userspace should create the threads and call in.  If you're doing kernel
acceleration, two extra threads per NIC is a tiny overhead.

Of course, such radical changes to vhost doesn't help existing users as
Qinchuanyu asked...

Cheers,
Rusty,

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2013-05-23  4:56 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2013-05-20  2:11 provide vhost thread per virtqueue for forwarding scenario Qinchuanyu
2013-05-20  7:43 ` Michael S. Tsirkin
2013-05-20  8:16   ` Abel Gordon
2013-05-22  9:59   ` Zang Hongyong
2013-05-22 10:07     ` Jason Wang
2013-05-23  4:13       ` Rusty Russell
2013-05-22 10:10     ` Michael S. Tsirkin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).