Date: Wed, 5 Dec 2012 13:22:26 +0200
From: "Michael S. Tsirkin"
Message-ID: <20121205112226.GA12619@redhat.com>
References: <1353597412-12232-1-git-send-email-stefanha@redhat.com>
 <1353597412-12232-4-git-send-email-stefanha@redhat.com>
 <20121129135425.GD9625@redhat.com>
 <20121129142656.GA14196@stefanha-thinkpad.redhat.com>
 <20121129143608.GB10665@redhat.com>
 <20121205083156.GB5892@stefanha-thinkpad.hitronhub.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20121205083156.GB5892@stefanha-thinkpad.hitronhub.home>
Subject: Re: [Qemu-devel] [PATCH v4 03/11] dataplane: add host memory mapping code
To: Stefan Hajnoczi
Cc: Kevin Wolf, Anthony Liguori, qemu-devel@nongnu.org, Blue Swirl,
 khoa@us.ibm.com, Stefan Hajnoczi, Paolo Bonzini, Asias He

On Wed, Dec 05, 2012 at 09:31:56AM +0100, Stefan Hajnoczi wrote:
> On Thu, Nov 29, 2012 at 04:36:08PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 29, 2012 at 03:26:56PM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Nov 29, 2012 at 03:54:25PM +0200, Michael S. Tsirkin wrote:
> > > > On Thu, Nov 22, 2012 at 04:16:44PM +0100, Stefan Hajnoczi wrote:
> > > > > The data plane thread needs to map guest physical addresses to host
> > > > > pointers. Normally this is done with cpu_physical_memory_map() but the
> > > > > function assumes the global mutex is held. The data plane thread does
> > > > > not touch the global mutex and therefore needs a thread-safe memory
> > > > > mapping mechanism.
> > > > >
> > > > > Hostmem registers a MemoryListener similar to how vhost collects and
> > > > > pushes memory region information into the kernel. There is a
> > > > > fine-grained lock on the regions list which is held during lookup and
> > > > > when installing a new regions list.
> > > > >
> > > > > When the physical memory map changes the MemoryListener callbacks are
> > > > > invoked. They build up a new list of memory regions which is finally
> > > > > installed when the list has been completed.
> > > > >
> > > > > Note that this approach is not safe across memory hotplug because mapped
> > > > > pointers may still be in use across memory unplug. However, this is
> > > > > currently a problem for QEMU in general and needs to be addressed in the
> > > > > future.
> > > > >
> > > > > Signed-off-by: Stefan Hajnoczi
> > > > Worth bothering with binary search?
> > > > vhost does a linear search over regions because
> > > > the number of ram regions is very small.
> > > memory.c does binary search. I did the same but in practice there are
> > > <20 regions for a simple VM. It's probably not worth it but without
> > > performance results this is speculation.
> > >
> > > I think there's no harm in using binary search to start with.
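(As an illustrative aside, not part of the patch: a binary search lookup over a
regions list kept sorted by guest physical address, done while holding the
fine-grained regions lock described above, might look roughly like the sketch
below. The HostmemRegion fields and the hostmem_lookup() name are assumptions
made for the example, not names from the patch.)

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t guest_addr;   /* guest physical start address */
    uint64_t size;         /* region length in bytes */
    void *host_addr;       /* host pointer backing guest_addr */
} HostmemRegion;

/* Caller is assumed to hold the regions lock; regions[] is sorted by
 * guest_addr and entries do not overlap.
 */
static void *hostmem_lookup(HostmemRegion *regions, size_t n_regions,
                            uint64_t phys, uint64_t len)
{
    size_t lo = 0, hi = n_regions;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        HostmemRegion *r = &regions[mid];

        if (phys < r->guest_addr) {
            hi = mid;                       /* search the lower half */
        } else if (phys >= r->guest_addr + r->size) {
            lo = mid + 1;                   /* search the upper half */
        } else if (phys + len > r->guest_addr + r->size) {
            return NULL;                    /* buffer straddles the region end */
        } else {
            return (char *)r->host_addr + (phys - r->guest_addr);
        }
    }
    return NULL;                            /* unmapped guest physical address */
}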
> > > >
> > > > > +static void hostmem_listener_append_region(MemoryListener *listener,
> > > > > +                                           MemoryRegionSection *section)
> > > > > +{
> > > > > +    Hostmem *hostmem = container_of(listener, Hostmem, listener);
> > > > > +
> > > > > +    if (memory_region_is_ram(section->mr)) {
> > > > > +        hostmem_append_new_region(hostmem, section);
> > > > > +    }
> > > > I think you also need to remove the VGA region since you
> > > > don't mark these pages as dirty, so access there won't work.
> > > I don't understand. If memory in the VGA region returns true from
> > > memory_region_is_ram(), why would there be a problem?
> > If you change this memory but you don't update the display.
> > Never happens with non-buggy guests but we should catch and fail if it does.
> Okay, I took a look at the VGA code and I think it makes sense now. We
> have VRAM as a regular RAM region so that writes to it are cheap. To
> avoid scanning or redrawing VRAM on every update we use dirty logging.
>
> Since virtio-blk-data-plane does not mark pages dirty, an I/O buffer in
> VRAM would fail to update the display correctly.
>
> I will try to put in a check to omit the VGA region.

There are many ways to do this, but I guess the simplest is to detect
dirty logging and skip that region.

> It can be dropped
> in the future when we use the memory API with dirty logging from the
> data plane thread.
>
> Stefan

Or not - there's also the issue of e.g. cirrus doing tricks with memory
mapping on the data path. So skipping that region might help performance.

--
MST
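(As an illustrative aside, not part of the thread: the "detect dirty logging
and skip that region" idea could look roughly like the sketch below in the
listener callback. hostmem_append_new_region() is the helper from the patch
under discussion; treating memory_region_is_logging() as a single-argument
predicate on the MemoryRegion is an assumption about the memory API of the
time, not something confirmed in this thread.)

static void hostmem_listener_append_region(MemoryListener *listener,
                                           MemoryRegionSection *section)
{
    Hostmem *hostmem = container_of(listener, Hostmem, listener);

    /* Only RAM regions can be mapped to host pointers. */
    if (!memory_region_is_ram(section->mr)) {
        return;
    }

    /* Skip regions with dirty logging enabled (e.g. VGA VRAM): the data
     * plane does not mark pages dirty, so the display would never see
     * writes made through such a mapping.
     */
    if (memory_region_is_logging(section->mr)) {
        return;
    }

    hostmem_append_new_region(hostmem, section);
}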