From: Jason Wang <jasowang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
linux-kernel@vger.kernel.org, kvm list <kvm@vger.kernel.org>,
virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org
Subject: Re: [PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
Date: Wed, 1 Jul 2020 22:09:53 +0800
Message-ID: <0a83aa03-8e3c-1271-82f5-4c07931edea3@redhat.com>
In-Reply-To: <CAJaqyWedEg9TBkH1MxGP1AecYHD-e-=ugJ6XUN+CWb=rQGf49g@mail.gmail.com>
On 2020/7/1 9:04 PM, Eugenio Perez Martin wrote:
> On Wed, Jul 1, 2020 at 2:40 PM Jason Wang <jasowang@redhat.com> wrote:
>>
>> On 2020/7/1 6:43 PM, Eugenio Perez Martin wrote:
>>> On Tue, Jun 23, 2020 at 6:15 PM Eugenio Perez Martin
>>> <eperezma@redhat.com> wrote:
>>>> On Mon, Jun 22, 2020 at 6:29 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>>>>> On Mon, Jun 22, 2020 at 06:11:21PM +0200, Eugenio Perez Martin wrote:
>>>>>> On Mon, Jun 22, 2020 at 5:55 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>>>>>>> On Fri, Jun 19, 2020 at 08:07:57PM +0200, Eugenio Perez Martin wrote:
>>>>>>>> On Mon, Jun 15, 2020 at 2:28 PM Eugenio Perez Martin
>>>>>>>> <eperezma@redhat.com> wrote:
>>>>>>>>> On Thu, Jun 11, 2020 at 5:22 PM Konrad Rzeszutek Wilk
>>>>>>>>> <konrad.wilk@oracle.com> wrote:
>>>>>>>>>> On Thu, Jun 11, 2020 at 07:34:19AM -0400, Michael S. Tsirkin wrote:
>>>>>>>>>>> As testing shows no performance change, switch to that now.
>>>>>>>>>> What kind of testing? 100GiB? Low latency?
>>>>>>>>>>
>>>>>>>>> Hi Konrad.
>>>>>>>>>
>>>>>>>>> I tested this version of the patch:
>>>>>>>>> https://lkml.org/lkml/2019/10/13/42
>>>>>>>>>
>>>>>>>>> It was tested for throughput with DPDK's testpmd (as described in
>>>>>>>>> http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html)
>>>>>>>>> and kernel pktgen. No latency tests were performed by me. Maybe it is
>>>>>>>>> interesting to perform a latency test or just a different set of tests
>>>>>>>>> over a recent version.
>>>>>>>>>
>>>>>>>>> Thanks!
>>>>>>>> I have repeated the tests with v9, and results are a little bit different:
>>>>>>>> * If I test opening it with testpmd, I see no change between versions
>>>>>>> OK that is testpmd on guest, right? And vhost-net on the host?
>>>>>>>
>>>>>> Hi Michael.
>>>>>>
>>>>>> No, sorry, as described in
>>>>>> http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html.
>>>>>> But I could also test it in the guest.
>>>>>>
>>>>>> These kinds of raw packet "bursts" do not show performance
>>>>>> differences, but I could test deeper if you think it would be worth
>>>>>> it.
>>>>> Oh ok, so this is without a guest, with virtio-user.
>>>>> It might be worth checking DPDK within the guest too, just
>>>>> as another data point.
>>>>>
>>>> Ok, I will do it!
>>>>
>>>>>>>> * If I forward packets between two vhost-net interfaces in the guest
>>>>>>>> using a linux bridge in the host:
>>>>>>> And here I guess you mean virtio-net in the guest kernel?
>>>>>> Yes, sorry: Two virtio-net interfaces connected with a linux bridge in
>>>>>> the host. More precisely:
>>>>>> * Adding one of the interfaces to another namespace, assigning it an
>>>>>> IP, and starting netserver there.
>>>>>> * Assign another IP in the range manually to the other virtual net
>>>>>> interface, and start the desired test there.
>>>>>>
>>>>>> If you think it would be better to perform them differently, please let me know.
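The namespace setup described above could be sketched like this. The interface names (vnet0/vnet1) and the 192.168.100.0/24 range are placeholders; the real names depend on how the virtio-net devices show up in the guest.

```shell
# Move one virtio-net interface into its own namespace and start the
# netperf server there (requires root).
ip netns add ns-server
ip link set vnet1 netns ns-server
ip netns exec ns-server ip addr add 192.168.100.2/24 dev vnet1
ip netns exec ns-server ip link set vnet1 up
ip netns exec ns-server netserver

# Address the other interface in the default namespace and run the
# client; the namespace split forces the traffic out through the
# external host bridge instead of the loopback path.
ip addr add 192.168.100.1/24 dev vnet0
ip link set vnet0 up
netperf -H 192.168.100.2 -t TCP_RR
```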
>>>>> Not sure why you bother with namespaces since you said you are
>>>>> using L2 bridging. I guess it's unimportant.
>>>>>
>>>> Sorry, I think I should have provided more context about that.
>>>>
>>>> The only reason to use namespaces is to force the traffic of these
>>>> netperf tests to go through the external bridge, so that netperf
>>>> exercises different paths than testpmd (or pktgen, or other "blast
>>>> frames unconditionally" tests).
>>>>
>>>> This way, I make sure that it is the same version of everything in the
>>>> guest, and it is a little bit easier to manage CPU affinity, and to
>>>> start and stop testing...
>>>>
>>>> I could use a different VM for sending and receiving, but I find this
>>>> way faster and it should not introduce a lot of noise. I can
>>>> test with two VMs if you think that this use of network namespaces
>>>> introduces too much noise.
>>>>
>>>> Thanks!
>>>>
>>>>>>>> - netperf UDP_STREAM shows a performance increase of 1.8x, almost
>>>>>>>> doubling performance. This gets lower as frame size increases.
>>> Regarding UDP_STREAM:
>>> * with event_idx=on: The performance difference is reduced a lot if
>>> affinity is applied properly (manually assigning CPUs on host/guest and
>>> setting IRQ affinity in the guest), making them perform equally with
>>> and without the patch again. Maybe the batching makes the scheduler
>>> perform better.
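A sketch of the manual pinning described here. The vhost thread lookup, the IRQ number (25) and the CPU ids are placeholders; irqbalance may need to be stopped in the guest so the IRQ setting sticks.

```shell
# Host: pin the vhost worker kthread (named "vhost-<qemu pid>") to one CPU.
VHOST_PID=$(pgrep -o vhost)
taskset -cp 2 "$VHOST_PID"

# Guest: find the virtio-net queue IRQs...
grep virtio /proc/interrupts
# ...and steer each one to a chosen CPU (25 is a placeholder IRQ number).
echo 1 > /proc/irq/25/smp_affinity_list
```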
>>
>> Note that for UDP_STREAM, the result is pretty tricky to analyze. E.g.
>> setting a sndbuf for TAP may help performance (reduce drops).
>>
> Ok, will add that to the test. Thanks!
Actually, it's better to skip the UDP_STREAM test since:
- My understanding is that very few applications use raw UDP streams
- It's hard to analyze (usually you need to count the drop ratio, etc.)
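For reference, the drop accounting that makes UDP_STREAM hard to interpret can be read from standard counters; which rows matter depends on the setup, and the tap device name below is a placeholder.

```shell
# UDP receive errors (InErrors / RcvbufErrors columns) on the receiver:
awk '/^Udp:/' /proc/net/snmp
# Per-device TX/RX drop counters; look at the tap backend's row:
awk 'NR < 3 || /tap/' /proc/net/dev
```

The ratio of packets sent to packets actually delivered has to be factored into any UDP_STREAM throughput comparison.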
>
>>>>>>>> - the rest of the tests go noticeably worse: UDP_RR goes from ~6347
>>>>>>>> transactions/sec to 5830
>>> * Regarding UDP_RR, TCP_STREAM, and TCP_RR, proper CPU pinning makes
>>> them perform similarly again, only a very small performance drop
>>> observed. It could be just noise.
>>> ** All of them perform better than vanilla if event_idx=off, not sure
>>> why. I can try to repeat them if you suspect that it could be a test
>>> failure.
>>>
>>> * With testpmd and event_idx=off, if I send from the VM to the host, I
>>> see a performance increase, especially with small packets. The buf api
>>> also increases performance compared with batching alone: sending the
>>> minimum packet size in testpmd makes pps go from 356kpps to 473kpps.
>>
>> What's your setup for this? The number looks rather low. I'd expect
>> 1-2 Mpps at least.
>>
> Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 2 NUMA nodes of 16G memory
> each, and no device assigned to the NUMA node I'm testing in. Is that
> too low for the testpmd AF_PACKET driver too?
I haven't tested AF_PACKET; I guess it should use TPACKET_V3, the
mmap-based zero-copy interface.
It might also be worth checking the CPU utilization of the vhost thread.
It needs to be stressed to 100%, otherwise the bottleneck could be
somewhere else.
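A quick way to check this during a run (a sketch; it assumes a single vhost worker and, for the first command, that the sysstat tools are installed):

```shell
# The vhost worker should sit at ~100% CPU while the test runs; if it
# does not, the bottleneck is elsewhere (e.g. the AF_PACKET layer).
pidstat -p "$(pgrep -o vhost)" 1 5

# Alternatively, watch the kthread in top's thread view:
top -H -b -n 1 | grep vhost
```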
>
>>> Sending
>>> 1024-byte UDP PDUs makes it go from 570kpps to 64 kpps.
>>>
>>> Something strange I observe in these tests: I get more pps the bigger
>>> the transmitted buffer size is. Not sure why.
>>>
>>> ** Sending from the host to the VM does not make a big change with the
>>> patches in the small-packet scenario (minimum, 64 bytes: about 645
>>> without the patch, ~625 with batch and batch+buf api). If the packets
>>> are bigger, I can see a performance increase: with 256 bits,
>>
>> I think you meant bytes?
>>
> Yes, sorry.
>
>>> it goes
>>> from 590kpps to about 600kpps, and in case of 1500 bytes payload it
>>> gets from 348kpps to 528kpps, so it is clearly an improvement.
>>>
>>> * with testpmd and event_idx=on, batching+buf api perform similarly in
>>> both directions.
>>>
>>> All of the testpmd tests were performed with no Linux bridge, just a
>>> host TAP interface (<interface type='ethernet'> in the XML),
>>
>> What DPDK driver did you use in the test (AF_PACKET?).
>>
> Yes, both testpmd are using AF_PACKET driver.
I see. Using AF_PACKET means extra layers of issues need to be analyzed,
which is probably not ideal.
>
>>> with one
>>> testpmd in txonly and another in rxonly forward mode, and using the
>>> receiving side's packets/bytes data. The guest's RPS, XPS and
>>> interrupt affinities, and the host's vhost thread affinity, were also
>>> tuned in each test to schedule testpmd and vhost on different processors.
>>
>> My feeling is that if we start from a simple setup, it would be
>> easier. E.g. start without a VM:
>>
>> 1) TX: testpmd(txonly) -> virtio-user -> vhost_net -> XDP_DROP on TAP
>> 2) RX: pktgen -> TAP -> vhost_net -> testpmd(rxonly)
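Those two setups could be sketched as follows. The XDP object file, the TAP device name and the testpmd flags are assumptions; the real TAP name is whatever vhost-net creates for the virtio_user port.

```shell
# 1) TX: testpmd(txonly) -> virtio-user -> vhost_net -> XDP_DROP on TAP.
dpdk-testpmd --vdev=virtio_user0,path=/dev/vhost-net \
    -- --forward-mode=txonly &
# Attach a minimal XDP program that just returns XDP_DROP, so frames are
# discarded as early as possible on the TAP side:
ip link set dev tap0 xdp obj xdp_drop.o sec xdp

# 2) RX: pktgen -> TAP -> vhost_net -> testpmd(rxonly).
dpdk-testpmd --vdev=virtio_user0,path=/dev/vhost-net \
    -- --forward-mode=rxonly
# ...while the kernel pktgen drives tap0 from the host side.
```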
>>
> Got it. Is there a reason to prefer pktgen over testpmd?
I think the reason is that with testpmd you must go through a
userspace-to-kernel interface (AF_PACKET), and it may not be as fast as
pktgen, since pktgen:
- talks directly to the xmit routine of the TAP device
- can clone skbs
Thanks
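A minimal kernel pktgen recipe for that RX direction might look like this; the device name, packet count, size and destination address are placeholders.

```shell
# Kernel pktgen drives tap0 directly from a kernel thread, bypassing
# any userspace socket interface (requires root).
modprobe pktgen
echo "rem_device_all"    > /proc/net/pktgen/kpktgend_0
echo "add_device tap0"   > /proc/net/pktgen/kpktgend_0
echo "count 10000000"    > /proc/net/pktgen/tap0
echo "pkt_size 64"       > /proc/net/pktgen/tap0
echo "dst 192.168.100.2" > /proc/net/pktgen/tap0
echo "start"             > /proc/net/pktgen/pgctrl  # blocks until done
cat /proc/net/pktgen/tap0                           # reports achieved pps
```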
>
>> Thanks
>>
>>
>>> I will send the v10 RFC with the small changes requested by Stefan and Jason.
>>>
>>> Thanks!
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>>>>> OK so it seems plausible that we still have a bug where an interrupt
>>>>>>> is delayed. That is the main difference between pmd and virtio.
>>>>>>> Let's try disabling event index, and see what happens - that's
>>>>>>> the trickiest part of interrupts.
>>>>>>>
>>>>>> Got it, will get back with the results.
>>>>>>
>>>>>> Thank you very much!
>>>>>>
>>>>>>>> - TCP_STREAM goes from ~10.7 gbps to ~7Gbps
>>>>>>>> - TCP_RR from 6223.64 transactions/sec to 5739.44