qemu-devel.nongnu.org archive mirror
* Re: [Qemu-devel] IVSHMEM device performance
       [not found] <VI1PR02MB17275FD1274A0D7C24956E8283940@VI1PR02MB1727.eurprd02.prod.outlook.com>
@ 2016-04-11  8:56 ` Markus Armbruster
  2016-04-11 12:27   ` Michael S. Tsirkin
  0 siblings, 1 reply; 4+ messages in thread
From: Markus Armbruster @ 2016-04-11  8:56 UTC (permalink / raw)
  To: Eli Britstein; +Cc: kvm@vger.kernel.org, qemu-devel

Cc: qemu-devel

Eli Britstein <eli.britstein@toganetworks.com> writes:

> Hi
>
> In a VM, I add an IVSHMEM device, on which the MBUF mempool resides, as well as rings I create (I run a DPDK application in the VM).
> I see a performance penalty when I use such a device instead of hugepages (the VM's hugepages). My VM's memory is *NOT* backed by the host's hugepages.
> The memory behind the IVSHMEM device is a host hugepage (I use a patched version of QEMU, as provided by Intel).
> I thought maybe the reason is that this memory is seen by the VM as a mapped PCI memory region, so it is not cached, but I am not sure.
> So, my direction was to change the kernel (in the VM) so that it treats this memory as regular (and thus cached) memory instead of a PCI memory region.
> However, I am not sure my direction is correct, and even if so, I am not sure how/where to change the kernel (my starting point was mm/mmap.c, but I'm not sure it's the correct place to start).
>
> Any suggestion is welcome.
> Thanks,
> Eli.
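
For reference, the guest-side mapping the question is about usually ends up looking like the sketch below: the shared memory is BAR2 of the ivshmem PCI device, and mapping it through sysfs gives an uncached (or, at best, write-combining) view rather than ordinary write-back cached memory. The PCI address and BAR index are placeholders, not taken from the setup described above.

/*
 * Minimal sketch: map the ivshmem shared-memory BAR (BAR2) from inside
 * the guest via sysfs.  The PCI address 0000:00:04.0 is a placeholder.
 * On most kernels, mmap() of "resource2" gives an uncached (UC) mapping,
 * while "resource2_wc" (present only for prefetchable BARs) gives
 * write-combining (WC).  Neither is normal write-back (WB) cached memory,
 * which is one plausible reason the IVSHMEM region is slower than the
 * guest's ordinary hugepage-backed memory.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:00:04.0/resource2";
    /* Try ".../resource2_wc" instead for a write-combining mapping. */

    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat st;
    if (fstat(fd, &st) < 0) {      /* file size equals the BAR length */
        perror("fstat");
        return 1;
    }

    void *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("mapped %lld bytes of BAR2 at %p\n", (long long)st.st_size, p);
    /* The shared memory is now usable, but with UC/WC semantics, not WB. */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}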


* Re: [Qemu-devel] IVSHMEM device performance
  2016-04-11  8:56 ` [Qemu-devel] IVSHMEM device performance Markus Armbruster
@ 2016-04-11 12:27   ` Michael S. Tsirkin
  2016-04-11 13:18     ` Eli Britstein
  0 siblings, 1 reply; 4+ messages in thread
From: Michael S. Tsirkin @ 2016-04-11 12:27 UTC (permalink / raw)
  To: Markus Armbruster; +Cc: Eli Britstein, qemu-devel, kvm@vger.kernel.org

On Mon, Apr 11, 2016 at 10:56:54AM +0200, Markus Armbruster wrote:
> Cc: qemu-devel
> 
> Eli Britstein <eli.britstein@toganetworks.com> writes:
> 
> > Hi
> >
> > In a VM, I add an IVSHMEM device, on which the MBUF mempool resides, as well as rings I create (I run a DPDK application in the VM).
> > I see a performance penalty when I use such a device instead of hugepages (the VM's hugepages). My VM's memory is *NOT* backed by the host's hugepages.
> > The memory behind the IVSHMEM device is a host hugepage (I use a patched version of QEMU, as provided by Intel).
> > I thought maybe the reason is that this memory is seen by the VM as a mapped PCI memory region, so it is not cached, but I am not sure.
> > So, my direction was to change the kernel (in the VM) so that it treats this memory as regular (and thus cached) memory instead of a PCI memory region.
> > However, I am not sure my direction is correct, and even if so, I am not sure how/where to change the kernel (my starting point was mm/mmap.c, but I'm not sure it's the correct place to start).
> >
> > Any suggestion is welcome.
> > Thanks,
> > Eli.

A cleaner way is just to use virtio, keeping everything in the VM's
memory, with access either by data copies in the hypervisor or
directly via vhost-user.
For example, with vhost-pci (https://wiki.opnfv.org/vm2vm_mst)
there has been recent work on this; see slides 12-14 in
http://schd.ws/hosted_files/ons2016/36/Nakajima_and_Ergin_PreSwitch_final.pdf

This is very much work in progress, but if you are interested
you should probably get in touch with Nakajima et al.
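
For illustration, a rough guest-side sketch of what "keeping everything in the VM's memory" looks like with DPDK: the mbuf pool lives in the guest's own hugepage-backed memory and the application simply polls a virtio-net port; the vhost-user backend on the host sees that memory because QEMU shares it, not because the guest maps a PCI BAR. The port id, queue and pool sizes below are placeholder values, and exact API details vary across DPDK versions.

/*
 * Guest-side sketch of the virtio/vhost-user path.  Error handling is
 * minimal and port 0 is assumed to be the virtio-net port.
 */
#include <stdint.h>
#include <stdlib.h>
#include <rte_debug.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF 8192
#define BURST   32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Mempool in the guest's own hugepage-backed memory. */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
            256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "mempool creation failed\n");

    uint16_t port = 0;                      /* assumed: the virtio-net port */
    struct rte_eth_conf conf = { 0 };       /* default device configuration */

    if (rte_eth_dev_configure(port, 1, 1, &conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, 512, rte_socket_id(), NULL, mp) < 0 ||
        rte_eth_tx_queue_setup(port, 0, 512, rte_socket_id(), NULL) < 0 ||
        rte_eth_dev_start(port) < 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    for (;;) {
        struct rte_mbuf *pkts[BURST];
        uint16_t nb = rte_eth_rx_burst(port, 0, pkts, BURST);
        if (nb == 0)
            continue;
        /* Real processing would go here; this just echoes packets back. */
        uint16_t sent = rte_eth_tx_burst(port, 0, pkts, nb);
        while (sent < nb)
            rte_pktmbuf_free(pkts[sent++]);   /* drop whatever was not sent */
    }
}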

-- 
MST


* Re: [Qemu-devel] IVSHMEM device performance
  2016-04-11 12:27   ` Michael S. Tsirkin
@ 2016-04-11 13:18     ` Eli Britstein
  2016-04-11 16:07       ` Eric Blake
  0 siblings, 1 reply; 4+ messages in thread
From: Eli Britstein @ 2016-04-11 13:18 UTC (permalink / raw)
  To: Michael S. Tsirkin, Markus Armbruster
  Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org



> -----Original Message-----
> From: Michael S. Tsirkin [mailto:mst@redhat.com]
> Sent: Monday, 11 April, 2016 3:28 PM
> To: Markus Armbruster
> Cc: Eli Britstein; qemu-devel@nongnu.org; kvm@vger.kernel.org
> Subject: Re: IVSHMEM device performance
>
> On Mon, Apr 11, 2016 at 10:56:54AM +0200, Markus Armbruster wrote:
> > Cc: qemu-devel
> >
> > Eli Britstein <eli.britstein@toganetworks.com> writes:
> >
> > > Hi
> > >
> > > In a VM, I add an IVSHMEM device, on which the MBUF mempool
> > > resides, as well as rings I create (I run a DPDK application in the VM).
> > > I see a performance penalty when I use such a device instead of
> > > hugepages (the VM's hugepages). My VM's memory is *NOT* backed by
> > > the host's hugepages.
> > > The memory behind the IVSHMEM device is a host hugepage (I use a
> > > patched version of QEMU, as provided by Intel).
> > > I thought maybe the reason is that this memory is seen by the VM as a
> > > mapped PCI memory region, so it is not cached, but I am not sure.
> > > So, my direction was to change the kernel (in the VM) so that it treats
> > > this memory as regular (and thus cached) memory instead of a PCI
> > > memory region.
> > > However, I am not sure my direction is correct, and even if so, I am not
> > > sure how/where to change the kernel (my starting point was mm/mmap.c,
> > > but I'm not sure it's the correct place to start).
> > >
> > > Any suggestion is welcome.
> > > Thanks,
> > > Eli.
>
> A cleaner way is just to use virtio, keeping everything in the VM's
> memory, with access either by data copies in the hypervisor or directly
> via vhost-user.
> For example, with vhost-pci (https://wiki.opnfv.org/vm2vm_mst) there
> has been recent work on this; see slides 12-14 in
> http://schd.ws/hosted_files/ons2016/36/Nakajima_and_Ergin_PreSwitch_final.pdf
>
> This is very much work in progress, but if you are interested you should
> probably get in touch with Nakajima et al.
[Eli Britstein] This is indeed very interesting and I will look into it further.
However, if I'm not mistaken, this requires some support from the host, which I would like to avoid.
My only requirement from the host is to provide an IVSHMEM device to several VMs; my applications run in the VMs only. So I think vhost-pci is not applicable in my case. Am I wrong?
Can you think of a reason why accessing that PCI-mapped memory (which is really a host hugepage) is more expensive than accessing the VM's hugepages (even though they are not really backed by host hugepages)?
Do you think my suspicion is correct that, as PCI-mapped memory, it is not cached? If so, do you think I can change that (either through some configuration or by changing the VM's kernel)?
Any other direction?
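
One quick way to check that suspicion from inside the VM is to time the same bulk write against the BAR mapping and against ordinary anonymous memory; a rough sketch follows (the sysfs path and sizes are placeholders, and the mapped length must not exceed the BAR size).

/*
 * Quick-and-dirty check of the "uncached PCI mapping" theory: time the
 * same memset() against the ivshmem BAR mapping and against an ordinary
 * anonymous buffer of the same size.  A large gap points at the UC/WC
 * mapping rather than at anything DPDK-specific.  The sysfs path is a
 * placeholder; run as root.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static double time_memset(void *buf, size_t len, int iters)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
        memset(buf, i, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    const size_t len = 16 * 1024 * 1024;   /* placeholder; <= BAR size */
    const int iters = 16;

    int fd = open("/sys/bus/pci/devices/0000:00:04.0/resource2", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }
    void *bar = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) { perror("mmap bar"); return 1; }

    void *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) { perror("mmap ram"); return 1; }

    printf("BAR mapping: %.3f s\n", time_memset(bar, len, iters));
    printf("RAM mapping: %.3f s\n", time_memset(ram, len, iters));
    return 0;
}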

Thanks, Eli
>
> --
> MST
-------------------------------------------------------------------------------------------------------------------------------------------------
This email and any files transmitted and/or attachments with it are confidential and proprietary information of
Toga Networks Ltd., and intended solely for the use of the individual or entity to whom they are addressed.
If you have received this email in error please notify the system manager. This message contains confidential
information of Toga Networks Ltd., and is intended only for the individual named. If you are not the named
addressee you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately
by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on
the contents of this information is strictly prohibited.
------------------------------------------------------------------------------------------------------------------------------------------------


* Re: [Qemu-devel] IVSHMEM device performance
  2016-04-11 13:18     ` Eli Britstein
@ 2016-04-11 16:07       ` Eric Blake
  0 siblings, 0 replies; 4+ messages in thread
From: Eric Blake @ 2016-04-11 16:07 UTC (permalink / raw)
  To: Eli Britstein, Michael S. Tsirkin, Markus Armbruster
  Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org


On 04/11/2016 07:18 AM, Eli Britstein wrote:
> 

[meta-comment]


> Thanks, Eli
>>
>> --
>> MST
> -------------------------------------------------------------------------------------------------------------------------------------------------
> This email and any files transmitted and/or attachments with it are confidential and proprietary information of

Disclaimers like this are inappropriate on publicly-archived lists, and
unenforceable.  If your employer insists on spamming us with legalese,
it might be better for you to send mail from a personal account.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



