From: Sridhar Samudrala <sri@us.ibm.com>
To: Jason Wang <jasowang@redhat.com>
Cc: krkumar2@in.ibm.com, kvm@vger.kernel.org, mst@redhat.com,
virtualization@lists.linux-foundation.org,
levinsasha928@gmail.com, netdev@vger.kernel.org,
bhutchings@solarflare.com
Subject: Re: [net-next RFC PATCH 5/5] virtio-net: flow director support
Date: Thu, 08 Dec 2011 18:00:39 -0800 [thread overview]
Message-ID: <4EE16BC7.1030808@us.ibm.com> (raw)
In-Reply-To: <4EDF47D9.1090501@redhat.com>
On 12/7/2011 3:02 AM, Jason Wang wrote:
> On 12/06/2011 11:42 PM, Sridhar Samudrala wrote:
>> On 12/6/2011 5:15 AM, Stefan Hajnoczi wrote:
>>>> On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang <jasowang@redhat.com>
>>>> wrote:
>>>>> On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
>>>>>> On Tue, Dec 6, 2011 at 6:33 AM, Jason Wang <jasowang@redhat.com>
>>>>>> wrote:
>>>>>>> On 12/05/2011 06:55 PM, Stefan Hajnoczi wrote:
>>>>>>>> On Mon, Dec 5, 2011 at 8:59 AM, Jason Wang <jasowang@redhat.com>
>>>>>>>> wrote:
>>>>> The vcpus are just threads and may not be bound to physical CPUs, so
>>>>> what is the big picture here? Is the guest even in a position to
>>>>> set the best queue mappings today?
>>>>
>>>> Not sure it could publish the best mapping, but the idea is to make
>>>> sure the packets of a flow are handled by the same guest vcpu, and
>>>> maybe the same vhost thread, in order to eliminate packet reordering
>>>> and lock contention. But this assumption does not take into account
>>>> the bouncing of vhost or vcpu threads, which would also affect the
>>>> result.
>>> Okay, this is why I'd like to know what the big picture here is. What
>>> solution are you proposing? How are we going to have everything from
>>> the guest application, guest kernel, host threads, and host NIC driver
>>> play along so we get the right steering up the entire stack? I think
>>> there needs to be an answer to that before changing virtio-net to add
>>> any steering mechanism.
>>>
>>>
>> Yes. Also, the current model of one vhost thread per VM interface
>> doesn't help with packet steering all the way from the guest to the
>> host physical NIC.
>>
>> I think we need per-CPU vhost thread(s) that can handle packets
>> to/from the physical NIC's TX/RX queues. Currently we have a single
>> vhost thread per VM interface that handles all the packets from the
>> various flows coming from a multi-queue physical NIC.
>
> Even if we have per-CPU worker threads, only one socket is used to
> queue the packets, so a multi-queue (multi-socket) tap/macvtap is
> still needed.
I think so. We need per-CPU tap/macvtap sockets along with per-CPU
vhost threads. This will parallelize the path all the way from the
physical NIC to vhost.
Thanks
Sridhar
Thread overview: 36+ messages
2011-12-05 8:58 [net-next RFC PATCH 0/5] Series short description Jason Wang
2011-12-05 8:58 ` [net-next RFC PATCH 1/5] virtio_net: passing rxhash through vnet_hdr Jason Wang
2011-12-05 8:58 ` [net-next RFC PATCH 2/5] tuntap: simple flow director support Jason Wang
2011-12-05 10:38 ` Stefan Hajnoczi
2011-12-05 20:09 ` Ben Hutchings
2011-12-06 7:21 ` Jason Wang
2011-12-06 17:31 ` Ben Hutchings
2011-12-05 8:59 ` [net-next RFC PATCH 3/5] macvtap: " Jason Wang
2011-12-05 20:11 ` Ben Hutchings
2011-12-05 8:59 ` [net-next RFC PATCH 4/5] virtio: introduce a method to get the irq of a specific virtqueue Jason Wang
2011-12-05 8:59 ` [net-next RFC PATCH 5/5] virtio-net: flow director support Jason Wang
2011-12-05 10:55 ` Stefan Hajnoczi
2011-12-06 6:33 ` Jason Wang
2011-12-06 9:18 ` Stefan Hajnoczi
2011-12-06 10:21 ` Jason Wang
2011-12-06 13:15 ` Stefan Hajnoczi
2011-12-06 15:42 ` Sridhar Samudrala
2011-12-06 16:14 ` Michael S. Tsirkin
2011-12-06 23:10 ` Sridhar Samudrala
2011-12-07 11:05 ` Jason Wang
2011-12-07 11:02 ` Jason Wang
2011-12-09 2:00 ` Sridhar Samudrala [this message]
2011-12-07 3:03 ` Jason Wang
2011-12-07 9:08 ` Stefan Hajnoczi
2011-12-07 12:10 ` Jason Wang
2011-12-07 15:04 ` Stefan Hajnoczi
2011-12-05 20:42 ` Ben Hutchings
2011-12-06 7:25 ` Jason Wang
2011-12-06 17:36 ` Ben Hutchings
2011-12-07 7:30 ` [net-next RFC PATCH 0/5] Series short description Rusty Russell
2011-12-07 11:31 ` Jason Wang
2011-12-07 17:02 ` Ben Hutchings
2011-12-08 10:06 ` Jason Wang
2011-12-09 5:31 ` Rusty Russell
2011-12-15 1:36 ` Ben Hutchings
2011-12-15 23:12 ` Rusty Russell