From mboxrd@z Thu Jan 1 00:00:00 1970
From: Markus Armbruster
Subject: Re: IVSHMEM device performance
Date: Mon, 11 Apr 2016 10:56:54 +0200
Message-ID: <87fuuso755.fsf@dusky.pond.sub.org>
References:
Mime-Version: 1.0
Content-Type: text/plain
Cc: "kvm@vger.kernel.org", qemu-devel@nongnu.org
To: Eli Britstein
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:47691 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752838AbcDKI45 (ORCPT); Mon, 11 Apr 2016 04:56:57 -0400
In-Reply-To: (Eli Britstein's message of "Mon, 11 Apr 2016 06:21:24 +0000")
Sender: kvm-owner@vger.kernel.org
List-ID:
Cc: qemu-devel

Eli Britstein writes:

> Hi
>
> In a VM, I add an IVSHMEM device, on which the MBUFS mempool resides, along with rings I create (I run a DPDK application in the VM).
> I see a performance penalty when I use such a device instead of hugepages (the VM's hugepages). My VM's memory is *NOT* backed by host hugepages.
> The memory behind the IVSHMEM device is a host hugepage (I use a patched version of QEMU, as provided by Intel).
> I suspect the reason is that the VM sees this memory as a mapped PCI memory region, so it is not cached, but I am not sure.
> So my direction was to change the kernel (in the VM) so that it treats this memory as regular memory (and thus cached) instead of as a PCI memory region.
> However, I am not sure my direction is correct, and even if it is, I am not sure how or where to change the kernel (my starting point was mm/mmap.c, but I'm not sure it's the right place to start).
>
> Any suggestion is welcome.
> Thanks,
> Eli.