From: "Michael S. Tsirkin" <mst@redhat.com>
To: Markus Armbruster <armbru@redhat.com>
Cc: Eli Britstein <eli.britstein@toganetworks.com>,
qemu-devel@nongnu.org,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: IVSHMEM device performance
Date: Mon, 11 Apr 2016 15:27:34 +0300
Message-ID: <20160411151918-mutt-send-email-mst@redhat.com>
In-Reply-To: <87fuuso755.fsf@dusky.pond.sub.org>
On Mon, Apr 11, 2016 at 10:56:54AM +0200, Markus Armbruster wrote:
> Cc: qemu-devel
>
> Eli Britstein <eli.britstein@toganetworks.com> writes:
>
> > Hi
> >
> > In a VM, I add an IVSHMEM device, on which the mbuf mempool and the rings I create reside (I run a DPDK application in the VM).
> > I see a performance penalty when I use such a device instead of hugepages (the VM's hugepages). My VM's memory is *NOT* backed by host hugepages.
> > The memory behind the IVSHMEM device is a host hugepage (I use a patched version of QEMU, as provided by Intel).
> > I thought maybe the reason is that this memory is seen by the VM as a mapped PCI memory region, so it is not cached, but I am not sure.
> > So my direction was to change the guest kernel so that it treats this memory as regular (and thus cached) memory instead of a PCI memory region.
> > However, I am not sure this direction is correct, and even if it is, I am not sure how or where to change the kernel (my starting point was mm/mmap.c, but I'm not sure that is the correct place to start).
> >
> > Any suggestion is welcomed.
> > Thanks,
> > Eli.
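The uncached-mapping theory is plausible: a PCI BAR mapped into the
guest through the kernel's PCI resource files is normally mapped
uncached. Before patching the guest kernel, one thing to try from
userspace is a write-combining mapping via the sysfs resourceN_wc
file, which the kernel exposes for prefetchable BARs. Below is a
minimal sketch (untested; the device address 0000:00:05.0 and BAR
number 2 are assumptions, adjust them for your topology):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* resource2 maps the BAR uncached; resource2_wc (present only
     * for prefetchable BARs) maps it write-combining, which is
     * usually much faster for bulk access, though still weaker
     * than the write-back caching that normal RAM gets. */
    const char *path = "/sys/bus/pci/devices/0000:00:05.0/resource2_wc";
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* The sysfs resource file's size is the BAR size. */
    struct stat st;
    if (fstat(fd, &st) < 0) {
        perror("fstat");
        return EXIT_FAILURE;
    }

    void *shm = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    printf("mapped %lld bytes at %p\n", (long long)st.st_size, shm);
    munmap(shm, st.st_size);
    close(fd);
    return EXIT_SUCCESS;
}

Even with write-combining you will not match regular guest memory,
which is part of why ivshmem tends to lose this comparison.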
A cleaner way is just to use virtio, keeping everything in the VM's
memory, with access either by data copies in the hypervisor or
directly using vhost-user.
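For vhost-user the guest memory must be shareable, so you back guest
RAM with host hugepages through a memory-backend-file with share=on.
A sketch of the relevant QEMU options (the socket path, sizes and IDs
are placeholders, and a vhost-user backend such as a DPDK-based
switch must already be listening on the socket):

qemu-system-x86_64 -enable-kvm -m 2G \
    -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=chr0,path=/tmp/vhost-user0.sock \
    -netdev type=vhost-user,id=net0,chardev=chr0 \
    -device virtio-net-pci,netdev=net0

With this, the mbuf pool lives in ordinary cached guest RAM that is
host-hugepage backed and visible to the backend, so no ivshmem
region is needed at all.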
There has also been recent work on vhost-pci:
https://wiki.opnfv.org/vm2vm_mst
See slides 12-14 in
http://schd.ws/hosted_files/ons2016/36/Nakajima_and_Ergin_PreSwitch_final.pdf
This is very much work in progress, but if you are interested
you should probably get in touch with Nakajima et al.
--
MST
Thread overview: 13+ messages
2016-04-11 6:21 IVSHMEM device performance Eli Britstein
2016-04-11 8:56 ` Markus Armbruster
2016-04-11 12:27 ` Michael S. Tsirkin [this message]
2016-04-11 13:18 ` Eli Britstein
2016-04-11 16:07 ` [Qemu-devel] " Eric Blake
2016-04-14 2:16 ` Wang, Wei W
2016-04-14 12:45 ` Paolo Bonzini
2016-04-17 7:18 ` Eli Britstein
2016-04-17 12:03 ` Paolo Bonzini
2016-04-17 15:57 ` Eli Britstein
2016-05-05 8:57 ` Eli Britstein
2016-05-06 0:50 ` Wang, Wei W
2016-05-08 6:12 ` Eli Britstein