Date: Mon, 20 Jul 2015 15:30:06 +0200
From: Igor Mammedov
Message-ID: <20150720153006.5a66a70c@nial.brq.redhat.com>
In-Reply-To: <55ACDA41.2080201@suse.de>
Subject: Re: [Qemu-devel] [RFC] Virt machine memory map
To: Alexander Graf
Cc: Marcel Apfelbaum, Peter Maydell, Pavel Fedin, QEMU Developers

On Mon, 20 Jul 2015 13:23:45 +0200
Alexander Graf wrote:

> On 07/20/15 11:41, Peter Maydell wrote:
> > On 20 July 2015 at 09:55, Pavel Fedin wrote:
> >> Hello!
> >>
> >> In our project we are working on very fast paravirtualized network
> >> I/O drivers based on ivshmem. We successfully got ivshmem working on
> >> ARM, though with one hack.
> >> Currently we have:
> >> --- cut ---
> >>     [VIRT_PCIE_MMIO] = { 0x10000000, 0x2eff0000 },
> >>     [VIRT_PCIE_PIO]  = { 0x3eff0000, 0x00010000 },
> >>     [VIRT_PCIE_ECAM] = { 0x3f000000, 0x01000000 },
> >>     [VIRT_MEM]       = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
> >> --- cut ---
> >> The MMIO region is not big enough for us because we want a 1GB
> >> mapping for a PCI device. To make it work, we modify the map as
> >> follows:
> >> --- cut ---
> >>     [VIRT_PCIE_MMIO] = { 0x10000000, 0x7eff0000 },
> >>     [VIRT_PCIE_PIO]  = { 0x8eff0000, 0x00010000 },
> >>     [VIRT_PCIE_ECAM] = { 0x8f000000, 0x01000000 },
> >>     [VIRT_MEM]       = { 0x90000000, 30ULL * 1024 * 1024 * 1024 },
> >> --- cut ---
> >> The question is: how could we upstream this? I believe modifying the
> >> 32-bit virt memory map this way is not good. Would it be OK to have
> >> a different memory map for 64-bit virt?
> > I think the theory we discussed at the time of putting in the PCIe
> > device was that if we wanted this we'd add support for the other
> > PCIe memory window (which would then live somewhere above 4GB).
> > Alex, can you remember what the idea was?
>
> Yes, pretty much. It would give us an upper bound on the amount of RAM
> that we're able to support, but at least we would be able to support
> big MMIO regions like ivshmem's.
>
> I'm not really sure where to put it, though. Depending on your kernel
> config, Linux supports somewhere between 39 and 48 or so bits of phys
> address space. And I'd rather not crawl into the PCI hole rat hole
> that we have on x86 ;).
>
> We could of course also put it just above RAM - but then our device
> tree becomes really dynamic and heavily dependent on -m.

On x86 we've made everything that is not mapped to RAM/MMIO fall through
to the PCI address space; see pc_pci_as_mapping_init(). So we no longer
have explicitly mapped PCI regions there, but we still think in terms of
the PCI hole/PCI ranges when it comes to the ACPI PCI bus description,
where one needs to specify the ranges available to the bus in its _CRS.
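
For reference, that fall-through works by mapping the entire PCI address
space underneath system memory at a negative priority, so any address not
claimed by RAM or an explicit MMIO region is routed to PCI. A minimal
sketch of what pc_pci_as_mapping_init() in hw/i386/pc.c does (an
approximation, not the verbatim upstream code):
--- cut ---
static void pc_pci_as_mapping_init(Object *owner,
                                   MemoryRegion *system_memory,
                                   MemoryRegion *pci_address_space)
{
    /* Map the PCI address space under system memory at priority -1:
     * RAM and MMIO subregions at priority >= 0 take precedence, and
     * any unclaimed address falls through to PCI. */
    memory_region_add_subregion_overlap(system_memory, 0x0,
                                        pci_address_space, -1);
}
--- cut ---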
> >
> > But to be honest I think we weren't expecting anybody to need
> > 1GB of PCI MMIO space unless it was a video card...
>
> Ivshmem was actually the most likely target that I could've thought of
> to require big MMIO regions ;).
>
>
> Alex
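
For concreteness, the second PCIe memory window discussed above might be
added to the virt board's memory map along the following lines. This is
only a sketch: the VIRT_PCIE_MMIO_HIGH name and the 512GB base/size are
illustrative placeholders, not settled upstream values.
--- cut ---
/* Hypothetical second, 64-bit PCIe MMIO window above RAM for the ARM
 * virt board; the name and the addresses below are placeholders. */
static const MemMapEntry a15memmap[] = {
    /* ... existing entries ... */
    [VIRT_PCIE_MMIO]      = { 0x10000000, 0x2eff0000 },
    [VIRT_PCIE_PIO]       = { 0x3eff0000, 0x00010000 },
    [VIRT_PCIE_ECAM]      = { 0x3f000000, 0x01000000 },
    [VIRT_MEM]            = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
    /* Fixed base above the 30GB RAM limit, within a 40-bit PA space. */
    [VIRT_PCIE_MMIO_HIGH] = { 0x8000000000ULL, 0x8000000000ULL },
};
--- cut ---
Placing the window at a fixed high address keeps the device tree static
regardless of -m, at the cost of capping supportable RAM below the
window's base.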