virtualization.lists.linux-foundation.org archive mirror
* Re: vhost + multiqueue + RSS question.
       [not found] <20141116161818.GD7589@cloudius-systems.com>
@ 2014-11-16 18:56 ` Michael S. Tsirkin
  2014-11-17  4:54   ` Venkateswara Rao Nandigam
                     ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2014-11-16 18:56 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: kvm, virtualization

On Sun, Nov 16, 2014 at 06:18:18PM +0200, Gleb Natapov wrote:
> Hi Michael,
> 
>  I am playing with the vhost multiqueue capability and have a question about
> vhost multiqueue and RSS (receive side steering). My setup has Mellanox
> ConnectX-3 NIC which supports multiqueue and RSS. Network related
> parameters for qemu are:
> 
>    -netdev tap,id=hn0,script=qemu-ifup.sh,vhost=on,queues=4
>    -device virtio-net-pci,netdev=hn0,id=nic1,mq=on,vectors=10
> 
> In a guest I ran "ethtool -L eth0 combined 4" to enable multiqueue.
> 
> I am running one tcp stream into the guest using iperf. Since there is
> only one tcp stream I expect it to be handled by one queue only, but
> this seems not to be the case. ethtool -S on the host shows that the
> stream is handled by one queue in the NIC, just as I would expect,
> but in the guest all 4 virtio-input interrupts are incremented. Am I
> missing any configuration?

I don't see anything obviously wrong with what you describe.
Maybe, somehow, same irqfd got bound to multiple MSI vectors?
To see, can you try dumping struct kvm_irqfd that's passed to kvm?
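
Something like this would show it (just a sketch, not a proper patch: a
printk at the top of kvm_irqfd_assign() in virt/kvm/eventfd.c, using the
field names from the uapi struct kvm_irqfd):

    /* sketch: log each irqfd binding as userspace sets it up */
    pr_info("kvm_irqfd: fd=%u gsi=%u flags=0x%x resamplefd=%u\n",
            args->fd, args->gsi, args->flags, args->resamplefd);

If the same eventfd really did get bound to several vectors, the same fd
would show up against several gsi values.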


> --
> 			Gleb.


* RE: vhost + multiqueue + RSS question.
  2014-11-16 18:56 ` vhost + multiqueue + RSS question Michael S. Tsirkin
@ 2014-11-17  4:54   ` Venkateswara Rao Nandigam
  2014-11-17  5:30   ` Jason Wang
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 15+ messages in thread
From: Venkateswara Rao Nandigam @ 2014-11-17  4:54 UTC (permalink / raw)
  To: Michael S. Tsirkin, Gleb Natapov
  Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org

I have a question related to this topic. How do you set the RSS key on the Mellanox NIC? I mean, from your guest?

If it is being set as part of the host driver, is there a way to set it from the guest? I mean, my guest would choose an RSS key and try to set it on the physical NIC.

Thanks,
Venkatesh

-----Original Message-----
From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org] On Behalf Of Michael S. Tsirkin
Sent: Monday, November 17, 2014 12:26 AM
To: Gleb Natapov
Cc: kvm@vger.kernel.org; Jason Wang; virtualization@lists.linux-foundation.org
Subject: Re: vhost + multiqueue + RSS question.

[snip]


* Re: vhost + multiqueue + RSS question.
  2014-11-16 18:56 ` vhost + multiqueue + RSS question Michael S. Tsirkin
  2014-11-17  4:54   ` Venkateswara Rao Nandigam
@ 2014-11-17  5:30   ` Jason Wang
  2014-11-17  7:26     ` Gleb Natapov
       [not found]   ` <5CC583AB71FC0C44A5CAB54823E83A8422B8ABD4@SINPEX01CL01.citrite.net>
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Jason Wang @ 2014-11-17  5:30 UTC (permalink / raw)
  To: Michael S. Tsirkin, Gleb Natapov; +Cc: kvm, virtualization

On 11/17/2014 02:56 AM, Michael S. Tsirkin wrote:
> On Sun, Nov 16, 2014 at 06:18:18PM +0200, Gleb Natapov wrote:
>> [snip]
> I don't see anything obviously wrong with what you describe.
> Maybe, somehow, same irqfd got bound to multiple MSI vectors?
> To see, can you try dumping struct kvm_irqfd that's passed to kvm?
>
>
>> --
>> 			Gleb.

This sounds like a regression; which kernel/qemu versions did you use?


* Re: vhost + multiqueue + RSS question.
       [not found]   ` <5CC583AB71FC0C44A5CAB54823E83A8422B8ABD4@SINPEX01CL01.citrite.net>
@ 2014-11-17  5:39     ` Jason Wang
  0 siblings, 0 replies; 15+ messages in thread
From: Jason Wang @ 2014-11-17  5:39 UTC (permalink / raw)
  To: Venkateswara Rao Nandigam, Michael S. Tsirkin, Gleb Natapov
  Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org

On 11/17/2014 12:54 PM, Venkateswara Rao Nandigam wrote:
> I have a question related to this topic. How do you set the RSS key on the Mellanox NIC? I mean, from your guest?

I believe it's possible, but it is not implemented currently. The issue
is that the implementation should not be vendor-specific.

TUN/TAP has its own automatic flow steering implementation (flow caches).
>
> If it is being set as part of the host driver, is there a way to set it from the guest? I mean, my guest would choose an RSS key and try to set it on the physical NIC.

Flow caches can co-operate with RFS/aRFS now, so there is indeed some
kind of co-operation between the host card and the guest, I believe.
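
For what it's worth, the host-side settings can at least be inspected and
tuned with ethtool, driver permitting ("eth2" below is just a placeholder):

    ethtool -x eth2          # show the RX flow hash indirection table
    ethtool -X eth2 equal 4  # spread flows evenly across 4 queues

But there is no interface for a guest to push its own key down to the
physical NIC.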
>
> Thanks,
> Venkatesh
>


* Re: vhost + multiqueue + RSS question.
  2014-11-17  5:30   ` Jason Wang
@ 2014-11-17  7:26     ` Gleb Natapov
  0 siblings, 0 replies; 15+ messages in thread
From: Gleb Natapov @ 2014-11-17  7:26 UTC (permalink / raw)
  To: Jason Wang; +Cc: virtualization, kvm, Michael S. Tsirkin

On Mon, Nov 17, 2014 at 01:30:06PM +0800, Jason Wang wrote:
> On 11/17/2014 02:56 AM, Michael S. Tsirkin wrote:
> > [snip]
> 
> This sounds like a regression; which kernel/qemu versions did you use?
Sorry, I should have mentioned it from the start. The host is Fedora 20 with
kernel 3.16.6-200.fc20.x86_64 and qemu-system-x86-1.6.2-9.fc20.x86_64.
The guest is also Fedora 20, but with an older kernel, 3.11.10-301.

--
			Gleb.


* Re: vhost + multiqueue + RSS question.
  2014-11-16 18:56 ` vhost + multiqueue + RSS question Michael S. Tsirkin
                     ` (2 preceding siblings ...)
       [not found]   ` <5CC583AB71FC0C44A5CAB54823E83A8422B8ABD4@SINPEX01CL01.citrite.net>
@ 2014-11-17  7:44   ` Gleb Natapov
       [not found]   ` <20141117074423.GG7589@cloudius-systems.com>
  4 siblings, 0 replies; 15+ messages in thread
From: Gleb Natapov @ 2014-11-17  7:44 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization

On Sun, Nov 16, 2014 at 08:56:04PM +0200, Michael S. Tsirkin wrote:
> On Sun, Nov 16, 2014 at 06:18:18PM +0200, Gleb Natapov wrote:
> > [snip]
> 
> I don't see anything obviously wrong with what you describe.
> Maybe, somehow, same irqfd got bound to multiple MSI vectors?
It does not look like that is what is happening, judging by the way
interrupts are distributed between the queues. They are not distributed
uniformly; often I see one queue get most of the interrupts and the
others get far fewer, and then it changes.

--
			Gleb.


* Re: vhost + multiqueue + RSS question.
       [not found]   ` <20141117074423.GG7589@cloudius-systems.com>
@ 2014-11-17 10:38     ` Michael S. Tsirkin
  2014-11-17 11:22       ` Gleb Natapov
       [not found]       ` <20141117112207.GJ7589@cloudius-systems.com>
  0 siblings, 2 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2014-11-17 10:38 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: kvm, virtualization

On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote:
> On Sun, Nov 16, 2014 at 08:56:04PM +0200, Michael S. Tsirkin wrote:
> > [snip]
> > 
> > I don't see anything obviously wrong with what you describe.
> > Maybe, somehow, same irqfd got bound to multiple MSI vectors?
> It does not look like that is what is happening, judging by the way
> interrupts are distributed between the queues. They are not distributed
> uniformly; often I see one queue get most of the interrupts and the
> others get far fewer, and then it changes.

Weird. It would happen if you transmitted from multiple CPUs.
You did pin iperf to a single CPU within the guest, did you not?
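
E.g. something along these lines in the guest (iperf arguments are
illustrative; the guest is the receiving side here):

    taskset -c 0 iperf -s

Otherwise the flow can migrate between queues whenever the scheduler
moves the process.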



> --
> 			Gleb.


* Re: vhost + multiqueue + RSS question.
  2014-11-17 10:38     ` Michael S. Tsirkin
@ 2014-11-17 11:22       ` Gleb Natapov
       [not found]       ` <20141117112207.GJ7589@cloudius-systems.com>
  1 sibling, 0 replies; 15+ messages in thread
From: Gleb Natapov @ 2014-11-17 11:22 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization

On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote:
> On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote:
> > [snip]
> > It does not look like that is what is happening, judging by the way
> > interrupts are distributed between the queues. They are not distributed
> > uniformly; often I see one queue get most of the interrupts and the
> > others get far fewer, and then it changes.
> 
> Weird. It would happen if you transmitted from multiple CPUs.
> You did pin iperf to a single CPU within the guest, did you not?
> 
No, I didn't, because I didn't expect it to matter for input interrupts.
When I run iperf on the host, the rx queue that receives all the packets
depends only on the connection itself, not on the cpu iperf is running on
(I tested that). When I pin iperf in the guest I do indeed see that all
interrupts arrive at the same irq vector. Is the number after virtio-input
in /proc/interrupts any indication of the queue a packet arrived on? (On
the host I can use ethtool -S to check which queue receives packets, but
unfortunately this does not work for a virtio nic in the guest.) Because
if it is, the way RSS works in virtio is not how it works on the host and
not what I would expect after reading about RSS. The queue a packet
arrives on should be calculated by hashing fields from the packet header
only.
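
(Concretely, what I am watching in the guest is something like

    watch -n1 "grep virtio /proc/interrupts"

and reading off the per-queue virtio-input counters.)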

--
			Gleb.


* Re: vhost + multiqueue + RSS question.
       [not found]       ` <20141117112207.GJ7589@cloudius-systems.com>
@ 2014-11-17 11:58         ` Michael S. Tsirkin
       [not found]         ` <20141117115820.GA10709@redhat.com>
  1 sibling, 0 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2014-11-17 11:58 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: kvm, virtualization

On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote:
> On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote:
> > > [snip]
> > > It does not look like that is what is happening, judging by the way
> > > interrupts are distributed between the queues. They are not distributed
> > > uniformly; often I see one queue get most of the interrupts and the
> > > others get far fewer, and then it changes.
> > 
> > Weird. It would happen if you transmitted from multiple CPUs.
> > You did pin iperf to a single CPU within the guest, did you not?
> > 
> No, I didn't, because I didn't expect it to matter for input interrupts.
> When I run iperf on the host, the rx queue that receives all the packets
> depends only on the connection itself, not on the cpu iperf is running
> on (I tested that).

This really depends on the type of networking card you have
on the host, and how it's configured.

I think you will get something more closely resembling this
behaviour if you enable RFS on the host.

> When I pin iperf in the guest I do indeed see that all interrupts
> arrive at the same irq vector. Is the number after virtio-input in
> /proc/interrupts any indication of the queue a packet arrived on? (On
> the host I can use ethtool -S to check which queue receives packets,
> but unfortunately this does not work for a virtio nic in the guest.)

I think it is.

> Because if it is, the way RSS works in virtio is not how it works on
> the host and not what I would expect after reading about RSS. The queue
> a packet arrives on should be calculated by hashing fields from the
> packet header only.

Yes, what virtio has is not RSS - it's an accelerated RFS really.

The point is to try and take application locality into account.


> --
> 			Gleb.


* Re: vhost + multiqueue + RSS question.
       [not found]         ` <20141117115820.GA10709@redhat.com>
@ 2014-11-17 12:22           ` Gleb Natapov
  2014-11-18  3:37           ` Jason Wang
       [not found]           ` <20141117122214.GK7589@cloudius-systems.com>
  2 siblings, 0 replies; 15+ messages in thread
From: Gleb Natapov @ 2014-11-17 12:22 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization

On Mon, Nov 17, 2014 at 01:58:20PM +0200, Michael S. Tsirkin wrote:
> On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote:
> > [snip]
> > Because if it is, the way RSS works in virtio is not how it works on
> > the host and not what I would expect after reading about RSS. The queue
> > a packet arrives on should be calculated by hashing fields from the
> > packet header only.
> 
> Yes, what virtio has is not RSS - it's an accelerated RFS really.
> 
OK, if what virtio has is RFS and not RSS, my test results make sense.
Thanks!

--
			Gleb.


* Re: vhost + multiqueue + RSS question.
       [not found]         ` <20141117115820.GA10709@redhat.com>
  2014-11-17 12:22           ` Gleb Natapov
@ 2014-11-18  3:37           ` Jason Wang
  2014-11-18 11:05             ` Michael S. Tsirkin
       [not found]           ` <20141117122214.GK7589@cloudius-systems.com>
  2 siblings, 1 reply; 15+ messages in thread
From: Jason Wang @ 2014-11-18  3:37 UTC (permalink / raw)
  To: Michael S. Tsirkin, Gleb Natapov; +Cc: kvm, virtualization

On 11/17/2014 07:58 PM, Michael S. Tsirkin wrote:
> [snip]
> Yes, what virtio has is not RSS - it's an accelerated RFS really.

Strictly speaking, it's not aRFS either. aRFS requires a programmable
filter and needs the driver to fill the filter on demand. For virtio-net,
this is done automatically on the host side (tun/tap). There's no guest
involvement.

>
> The point is to try and take application locality into account.
>

Yes, the locality is achieved as follows (consider an N-vcpu guest with
N queues):

- the virtio-net driver provides a default 1:1 mapping between vcpu and
txq through XPS
- the virtio-net driver also suggests a default irq affinity hint for a
1:1 mapping between vcpu and txq/rxq

With all of this, each vcpu gets its own private txq/rxq pair. And the
host side implementation (tun/tap) makes sure that if the packets of a
flow were received on queue N, it will also use queue N to transmit the
packets of this flow to the guest.
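
Roughly, in simplified form (a paraphrase of the drivers/net/tun.c flow
cache of that era, not the literal kernel code):

    /* guest->host path (tun_get_user): remember which queue this
     * flow's packets are being transmitted on */
    tun_flow_update(tun, skb_get_hash(skb), tfile);

    /* host->guest path (tun_select_queue): deliver the flow back on
     * the queue it last used; fall back to spreading by hash when
     * there is no cache entry (the real code scales the hash rather
     * than taking a modulo) */
    u32 hash = skb_get_hash(skb);
    struct tun_flow_entry *e =
        tun_flow_find(&tun->flows[tun_hashfn(hash)], hash);
    u16 txq = e ? e->queue_index : hash % tun->numqueues;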


* Re: vhost + multiqueue + RSS question.
       [not found]             ` <201411180937499909334@sangfor.com>
@ 2014-11-18  3:41               ` Jason Wang
  2014-11-18  7:56                 ` Gleb Natapov
  0 siblings, 1 reply; 15+ messages in thread
From: Jason Wang @ 2014-11-18  3:41 UTC (permalink / raw)
  To: Zhang Haoyu, Gleb Natapov, Michael S. Tsirkin; +Cc: kvm, virtualization

On 11/18/2014 09:37 AM, Zhang Haoyu wrote:
>>> [snip]
>>> Yes, what virtio has is not RSS - it's an accelerated RFS really.
>>>
>> OK, if what virtio has is RFS and not RSS, my test results make sense.
>> Thanks!
> I think the RSS emulation for the virtio-mq NIC is implemented in
> tun_select_queue(); am I missing something?
>
> Thanks,
> Zhang Haoyu
>

Yes, if RSS is short for Receive Side Steering, which is a generic
technique. But RSS is usually short for Receive Side Scaling, a
technology commonly used by Windows; it is implemented through an
indirection table in the card, which is obviously not supported in tun
currently.
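
For comparison, that style of RSS is a pure function of the packet header:
a Toeplitz hash over the flow tuple, then an indirection table lookup,
with no per-flow state at all. A minimal sketch (assuming a power-of-two
table size):

    /* classic RSS queue selection: header hash + indirection table */
    u16 rss_select_queue(u32 hash, const u16 *indir, u32 indir_size)
    {
        return indir[hash & (indir_size - 1)];
    }

The tun flow caches instead key off what the guest transmits, which is
why pinning the sender changes what is observed.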


* Re: vhost + multiqueue + RSS question.
  2014-11-18  3:41               ` Jason Wang
@ 2014-11-18  7:56                 ` Gleb Natapov
  0 siblings, 0 replies; 15+ messages in thread
From: Gleb Natapov @ 2014-11-18  7:56 UTC (permalink / raw)
  To: Jason Wang; +Cc: Zhang Haoyu, virtualization, kvm, Michael S. Tsirkin

On Tue, Nov 18, 2014 at 11:41:11AM +0800, Jason Wang wrote:
> On 11/18/2014 09:37 AM, Zhang Haoyu wrote:
> >>> [snip]
> >>> Yes, what virtio has is not RSS - it's an accelerated RFS really.
> >>>
> >> OK, if what virtio has is RFS and not RSS, my test results make sense.
> >> Thanks!
> > I think the RSS emulation for the virtio-mq NIC is implemented in
> > tun_select_queue(); am I missing something?
> >
> > Thanks,
> > Zhang Haoyu
> >
> 
> Yes, if RSS is short for Receive Side Steering, which is a generic
> technique. But RSS is usually short for Receive Side Scaling, a
> technology commonly used by Windows; it is implemented through an
> indirection table in the card, which is obviously not supported in tun
> currently.
Hmm, I had the impression that "Receive Side Steering" and "Receive Side
Scaling" were interchangeable. The software implementation of RSS is
called "Receive Packet Steering" according to
Documentation/networking/scaling.txt, not "Receive Packet Scaling".
Those damn TLAs are confusing.
--
			Gleb.


* Re: vhost + multiqueue + RSS question.
  2014-11-18  3:37           ` Jason Wang
@ 2014-11-18 11:05             ` Michael S. Tsirkin
  2014-11-19  3:01               ` Jason Wang
  0 siblings, 1 reply; 15+ messages in thread
From: Michael S. Tsirkin @ 2014-11-18 11:05 UTC (permalink / raw)
  To: Jason Wang; +Cc: Gleb Natapov, kvm, virtualization

On Tue, Nov 18, 2014 at 11:37:03AM +0800, Jason Wang wrote:
> On 11/17/2014 07:58 PM, Michael S. Tsirkin wrote:
> > [snip]
> > Yes, what virtio has is not RSS - it's an accelerated RFS really.
> 
> Strictly speaking, it's not aRFS either. aRFS requires a programmable
> filter and needs the driver to fill the filter on demand. For virtio-net,
> this is done automatically on the host side (tun/tap). There's no guest
> involvement.

Well, the guest affects the filter by sending tx packets.



* Re: vhost + multiqueue + RSS question.
  2014-11-18 11:05             ` Michael S. Tsirkin
@ 2014-11-19  3:01               ` Jason Wang
  0 siblings, 0 replies; 15+ messages in thread
From: Jason Wang @ 2014-11-19  3:01 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Gleb Natapov, kvm, virtualization

On 11/18/2014 07:05 PM, Michael S. Tsirkin wrote:
> On Tue, Nov 18, 2014 at 11:37:03AM +0800, Jason Wang wrote:
> > > [snip]
> > > Yes, what virtio has is not RSS - it's an accelerated RFS really.
> >
> > Strictly speaking, it's not aRFS either. aRFS requires a programmable
> > filter and needs the driver to fill the filter on demand. For
> > virtio-net, this is done automatically on the host side (tun/tap).
> > There's no guest involvement.
> Well, the guest affects the filter by sending tx packets.
>


Yes, it is.


end of thread

Thread overview: 15+ messages
     [not found] <20141116161818.GD7589@cloudius-systems.com>
2014-11-16 18:56 ` vhost + multiqueue + RSS question Michael S. Tsirkin
2014-11-17  4:54   ` Venkateswara Rao Nandigam
2014-11-17  5:30   ` Jason Wang
2014-11-17  7:26     ` Gleb Natapov
     [not found]   ` <5CC583AB71FC0C44A5CAB54823E83A8422B8ABD4@SINPEX01CL01.citrite.net>
2014-11-17  5:39     ` Jason Wang
2014-11-17  7:44   ` Gleb Natapov
     [not found]   ` <20141117074423.GG7589@cloudius-systems.com>
2014-11-17 10:38     ` Michael S. Tsirkin
2014-11-17 11:22       ` Gleb Natapov
     [not found]       ` <20141117112207.GJ7589@cloudius-systems.com>
2014-11-17 11:58         ` Michael S. Tsirkin
     [not found]         ` <20141117115820.GA10709@redhat.com>
2014-11-17 12:22           ` Gleb Natapov
2014-11-18  3:37           ` Jason Wang
2014-11-18 11:05             ` Michael S. Tsirkin
2014-11-19  3:01               ` Jason Wang
     [not found]           ` <20141117122214.GK7589@cloudius-systems.com>
     [not found]             ` <201411180937499909334@sangfor.com>
2014-11-18  3:41               ` Jason Wang
2014-11-18  7:56                 ` Gleb Natapov
