From: "Alex Bennée" <alex.bennee@linaro.org>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: qemu devel list <qemu-devel@nongnu.org>, netdev@vger.kernel.org
Subject: Re: VSOCK benchmark and optimizations
Date: Tue, 02 Apr 2019 04:19:25 +0000
Message-ID: <87zhp9599u.fsf@zen.linaroharston>
In-Reply-To: <20190401163240.xw24ezsloy5ds2hz@steredhat>
Stefano Garzarella <sgarzare@redhat.com> writes:
> Hi Alex,
> I'm sending you some benchmarks and information about VSOCK, CCing qemu-devel
> and linux-netdev (this info might be useful for others :))
>
> One of VSOCK's advantages is its simple configuration: you don't need to
> set up IP addresses for guest/host, and it can be used with the standard
> POSIX socket API. [1]
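>
> To give an idea of how little setup that means, here is a minimal sketch of
> a guest-side client using the plain POSIX socket calls (illustrative only,
> not taken from the patches; the port number 1234 is just an example):
>
>   /* Minimal AF_VSOCK client sketch: connect from the guest to the host. */
>   #include <stdio.h>
>   #include <unistd.h>
>   #include <sys/socket.h>
>   #include <linux/vm_sockets.h>
>
>   int main(void)
>   {
>       struct sockaddr_vm addr = {
>           .svm_family = AF_VSOCK,
>           .svm_cid    = VMADDR_CID_HOST,  /* CID 2 is always the host */
>           .svm_port   = 1234,             /* example port */
>       };
>       int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
>
>       if (fd < 0 ||
>           connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
>           perror("vsock connect");
>           return 1;
>       }
>       write(fd, "ping", 4);   /* same read()/write() API as TCP sockets */
>       close(fd);
>       return 0;
>   }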
>
> I'm still working on this, so the "optimized" values are a work in progress;
> I'll send the patches upstream (Linux) as soon as possible (hopefully in
> 1 or 2 weeks).
>
> Optimizations:
> + reduce the number of credit update packets (see the sketch below)
>   - previously, the RX side sent an empty packet for every packet received,
>     just to inform the TX side about the free space in the RX buffer
> + increase the RX buffer size to 64 KB (from 4 KB)
> + merge packets to fill RX buffers
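>
> As a rough illustration of the credit-update point, this is the general
> idea (a simplified user-space sketch, not the actual patch; all struct and
> function names below are made up):
>
>   #include <stdio.h>
>   #include <stdint.h>
>
>   /* Hypothetical receive-side credit state. */
>   struct rx_credit {
>       uint32_t buf_size;      /* RX buffer space advertised to the TX side */
>       uint32_t fwd_cnt;       /* bytes freed by the RX side so far         */
>       uint32_t last_fwd_cnt;  /* fwd_cnt value at the last update we sent  */
>   };
>
>   static void send_credit_update(struct rx_credit *cr)
>   {
>       /* Stand-in for queueing the real (empty) credit-update packet. */
>       printf("credit update: fwd_cnt=%u\n", cr->fwd_cnt);
>   }
>
>   /* Called whenever the receiver frees 'bytes' of RX buffer space. */
>   static void rx_buffer_freed(struct rx_credit *cr, uint32_t bytes)
>   {
>       cr->fwd_cnt += bytes;
>
>       /* Before: an empty credit-update packet was sent for every packet
>        * received.  Idea: only notify the TX side once the un-advertised
>        * credit crosses a threshold, e.g. half of the buffer size. */
>       if (cr->fwd_cnt - cr->last_fwd_cnt >= cr->buf_size / 2) {
>           send_credit_update(cr);
>           cr->last_fwd_cnt = cr->fwd_cnt;
>       }
>   }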
>
> As a benchmark tool I used iperf3 [2], modified with VSOCK support:
>
>              host -> guest [Gbps]        guest -> host [Gbps]
> pkt_size    before opt.   optimized     before opt.   optimized
>     1K          0.5          1.6            1.4          1.4
>     2K          1.1          3.1            2.3          2.5
>     4K          2.0          5.6            4.2          4.4
>     8K          3.2         10.2            7.2          7.5
>    16K          6.4         14.2            9.4         11.3
>    32K          9.8         18.9            9.2         17.8
>    64K         13.8         22.9            8.8         25.0
>   128K         17.6         24.5            7.7         25.7
>   256K         19.0         24.8            8.1         25.6
>   512K         20.8         25.1            8.1         25.4
>
>
> How to reproduce:
>
> host$ modprobe vhost_vsock
> host$ qemu-system-x86_64 ... -device vhost-vsock-pci,guest-cid=3
> # Note: Guest CID should be >= 3
> #       (0 and 1 are reserved, and 2 identifies the host)
>
> guest$ iperf3 --vsock -s
>
> host$ iperf3 --vsock -c 3 -l ${pkt_size} # host -> guest
> host$ iperf3 --vsock -c 3 -l ${pkt_size} -R # guest -> host
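>
> For reference, the guest-side server in that setup boils down to roughly
> the following (a sketch, not iperf3's actual code; the port is again just
> an example):
>
>   /* Minimal AF_VSOCK listener sketch: accept one connection and drain it. */
>   #include <stdio.h>
>   #include <unistd.h>
>   #include <sys/socket.h>
>   #include <linux/vm_sockets.h>
>
>   int main(void)
>   {
>       struct sockaddr_vm addr = {
>           .svm_family = AF_VSOCK,
>           .svm_cid    = VMADDR_CID_ANY,   /* listen on our own CID (>= 3) */
>           .svm_port   = 1234,             /* example port */
>       };
>       char buf[64 * 1024];
>       int cfd, lfd = socket(AF_VSOCK, SOCK_STREAM, 0);
>
>       if (lfd < 0 ||
>           bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
>           listen(lfd, 1) < 0) {
>           perror("vsock listen");
>           return 1;
>       }
>       cfd = accept(lfd, NULL, NULL);
>       while (cfd >= 0 && read(cfd, buf, sizeof(buf)) > 0)
>           ;                               /* just drain incoming data */
>       close(cfd);
>       close(lfd);
>       return 0;
>   }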
>
>
> If you want, I can run a similar benchmark (with iperf3) using a network
> card (do you have a specific configuration in mind?).
My main interest is how it stacks up against --device virtio-net-pci (and
I guess the vhost equivalent).

AIUI one of the motivators was being able to run something like NFS for a
guest FS over vsock, avoiding both the overhead of UDP and the additional
complication of needing a working network setup.
>
> Let me know if you need more details!
>
> Thanks,
> Stefano
>
> [1] https://wiki.qemu.org/Features/VirtioVsock
> [2] https://github.com/stefano-garzarella/iperf/
--
Alex Bennée