From: jasowang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Krishna Kumar <krkumar2@in.ibm.com>,
arnd@arndb.de, netdev@vger.kernel.org,
virtualization@lists.linux-foundation.org,
levinsasha928@gmail.com, davem@davemloft.net
Subject: Re: [PATCH] macvtap: Fix macvtap_get_queue to use rxhash first
Date: Thu, 24 Nov 2011 20:56:45 +0800 [thread overview]
Message-ID: <4ECE3F0D.3090908@redhat.com> (raw)
In-Reply-To: <20111124103449.GA16031@redhat.com>
On 11/24/2011 06:34 PM, Michael S. Tsirkin wrote:
> On Thu, Nov 24, 2011 at 06:13:41PM +0800, jasowang wrote:
>> On 11/24/2011 05:59 PM, Michael S. Tsirkin wrote:
>>> On Thu, Nov 24, 2011 at 01:47:14PM +0530, Krishna Kumar wrote:
>>>> It was reported that the macvtap device selects a
>>>> different vhost (when used with multiqueue feature)
>>>> for incoming packets of a single connection. Use
>>>> packet hash first. Patch tested on MQ virtio_net.
>>> So this is sure to address the problem, why exactly does this happen?
>> Ixgbe has a flow director and binds queues to host cpus, so it can
>> make sure the packets of a flow are handled by the same queue/cpu. So
>> when the vhost thread moves from one host cpu to another, ixgbe would
>> therefore send the packets to the new cpu/queue.
> Confused. How does ixgbe know about vhost thread moving?
As far as I can see, ixgbe binds queues to physical cpus, so consider
this sequence:

vhost thread transmits packets of flow A on processor M
during packet transmission, the ixgbe driver programs the card through
flow director to deliver packets of flow A to queue/cpu M (see ixgbe_atr())
vhost thread then receives packets of flow A from queue M
...
vhost thread transmits packets of flow A on processor N
the ixgbe driver reprograms the flow director to change the delivery of
flow A to queue N (cpu N)
vhost thread then receives packets of flow A from queue N
...

So for a single flow A we may get different queue mappings over time.
Using rxhash instead should solve this issue.
>
>>> Does your device spread a single flow across multiple RX queues? Would
>>> not that cause trouble in the TCP layer?
>>> It would seem that using the recorded queue should be faster with
>>> less cache misses. Before we give up on that, I'd
>>> like to understand why it's wrong. Do you know?
>>>
>>>> Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
>>>> ---
>>>> drivers/net/macvtap.c | 16 ++++++++--------
>>>> 1 file changed, 8 insertions(+), 8 deletions(-)
>>>>
>>>> diff -ruNp org/drivers/net/macvtap.c new/drivers/net/macvtap.c
>>>> --- org/drivers/net/macvtap.c 2011-10-22 08:38:01.000000000 +0530
>>>> +++ new/drivers/net/macvtap.c 2011-11-16 18:34:51.000000000 +0530
>>>> @@ -175,6 +175,14 @@ static struct macvtap_queue *macvtap_get
>>>> if (!numvtaps)
>>>> goto out;
>>>>
>>>> + /* Check if we can use flow to select a queue */
>>>> + rxq = skb_get_rxhash(skb);
>>>> + if (rxq) {
>>>> + tap = rcu_dereference(vlan->taps[rxq % numvtaps]);
>>>> + if (tap)
>>>> + goto out;
>>>> + }
>>>> +
>>>> if (likely(skb_rx_queue_recorded(skb))) {
>>>> rxq = skb_get_rx_queue(skb);
>>>>
>>>> @@ -186,14 +194,6 @@ static struct macvtap_queue *macvtap_get
>>>> goto out;
>>>> }
>>>>
>>>> - /* Check if we can use flow to select a queue */
>>>> - rxq = skb_get_rxhash(skb);
>>>> - if (rxq) {
>>>> - tap = rcu_dereference(vlan->taps[rxq % numvtaps]);
>>>> - if (tap)
>>>> - goto out;
>>>> - }
>>>> -
>>>> /* Everything failed - find first available queue */
>>>> for (rxq = 0; rxq < MAX_MACVTAP_QUEUES; rxq++) {
>>>> tap = rcu_dereference(vlan->taps[rxq]);
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe netdev" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
Thread overview: 27+ messages
2011-11-24 8:17 [PATCH] macvtap: Fix macvtap_get_queue to use rxhash first Krishna Kumar
2011-11-24 9:36 ` jasowang
2011-11-24 9:59 ` Michael S. Tsirkin
2011-11-24 10:13 ` jasowang
2011-11-24 10:34 ` Michael S. Tsirkin
2011-11-24 12:56 ` jasowang [this message]
2011-11-24 16:14 ` Michael S. Tsirkin
2011-11-25 3:07 ` Krishna Kumar2
2011-11-25 3:21 ` Jason Wang
2011-11-25 4:09 ` Krishna Kumar2
2011-11-25 6:35 ` David Miller
2011-11-27 17:23 ` Michael S. Tsirkin
2011-11-28 4:40 ` Jason Wang
2011-12-07 16:10 ` Michael S. Tsirkin
2011-12-07 18:52 ` David Miller
2011-12-20 11:15 ` Michael S. Tsirkin
2011-12-20 18:46 ` David Miller
2011-12-08 9:46 ` Jason Wang
2011-11-27 17:14 ` Michael S. Tsirkin
2011-11-28 4:25 ` Jason Wang
2011-11-28 17:42 ` Stephen Hemminger
2011-11-25 3:09 ` Jason Wang
2011-11-24 11:14 ` Krishna Kumar2
2011-11-24 13:00 ` jasowang
2011-11-24 16:12 ` Michael S. Tsirkin
2011-11-25 2:58 ` Krishna Kumar2
2011-11-25 3:18 ` Jason Wang