From mboxrd@z Thu Jan  1 00:00:00 1970
From: George Dunlap
Subject: [PATCH v2 2/5] hvmloader: Load large devices into high MMIO space as needed
Date: Tue, 18 Jun 2013 17:46:21 +0100
Message-ID: <1371573984-28514-2-git-send-email-george.dunlap@eu.citrix.com>
References: <1371573984-28514-1-git-send-email-george.dunlap@eu.citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1371573984-28514-1-git-send-email-george.dunlap@eu.citrix.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: xen-devel@lists.xen.org
Cc: George Dunlap, Ian Jackson, Stefano Stabellini, Ian Campbell, Hanweidong
List-Id: xen-devel@lists.xenproject.org

Keep track of how much MMIO space is left in total, as well as the
amount of "low" MMIO space (<4GiB), and only load devices into high
memory if there is not enough low MMIO space left for the rest of the
devices to fit.

Because devices are processed by size in order from large to small,
this should preferentially relocate devices with large BARs to 64-bit
space.

Signed-off-by: George Dunlap
CC: Ian Jackson
CC: Ian Campbell
CC: Stefano Stabellini
CC: Hanweidong
---
 tools/firmware/hvmloader/pci.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 8691a19..7f306a1 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -38,7 +38,8 @@ void pci_setup(void)
 {
     uint8_t is_64bar, using_64bar, bar64_relocate = 0;
     uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
-    uint64_t base, bar_sz, bar_sz_upper, low_mmio_left, mmio_total = 0;
+    uint64_t base, bar_sz, bar_sz_upper, low_mmio_left, mmio_total = 0,
+        mmio_left;
     uint32_t vga_devfn = 256;
     uint16_t class, vendor_id, device_id;
     unsigned int bar, pin, link, isa_irq;
@@ -244,6 +245,7 @@ void pci_setup(void)
     io_resource.max = 0x10000;
 
     low_mmio_left = pci_mem_end - pci_mem_start;
+    mmio_left = mmio_total;
 
     /* Assign iomem and ioport resources in descending order of size. */
     for ( i = 0; i < nr_bars; i++ )
@@ -252,7 +254,12 @@ void pci_setup(void)
         bar_reg = bars[i].bar_reg;
         bar_sz  = bars[i].bar_sz;
 
-        using_64bar = bars[i].is_64bar && bar64_relocate && (low_mmio_left < bar_sz);
+        /* Relocate to high memory if the total amount of MMIO needed
+         * is more than the low MMIO available.  Because devices are
+         * processed in order of bar_sz, this will preferentially
+         * relocate larger devices to high memory first. */
+        using_64bar = bars[i].is_64bar && bar64_relocate
+            && (mmio_left > low_mmio_left);
         bar_data = pci_readl(devfn, bar_reg);
 
         if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
@@ -276,6 +283,7 @@ void pci_setup(void)
             low_mmio_left -= bar_sz;
             bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
         }
+        mmio_left -= bar_sz;
     }
     else
    {
-- 
1.7.9.5
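
P.S. For anyone who wants to play with the placement policy outside
of hvmloader: below is a minimal standalone sketch of the same idea,
assuming the BARs arrive already sorted by descending size. The names
(bar_t, place_bars) and the sizes in main() are made up for
illustration; this is not hvmloader code and omits details such as
the bar64_relocate gate and actual address assignment.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t sz;      /* BAR size in bytes */
    int is_64bar;     /* BAR is 64-bit capable, may be relocated high */
} bar_t;

static void place_bars(bar_t *bars, int n, uint64_t low_mmio_left)
{
    uint64_t mmio_left = 0;
    int i;

    /* Total MMIO demand across all BARs (mmio_total in the patch). */
    for ( i = 0; i < n; i++ )
        mmio_left += bars[i].sz;

    for ( i = 0; i < n; i++ )
    {
        /* While the remaining demand exceeds the remaining low space,
         * send 64-bit-capable BARs high.  Since BARs are sorted
         * large-to-small, the largest BARs are relocated first. */
        int use_high = bars[i].is_64bar && (mmio_left > low_mmio_left);

        if ( use_high )
            printf("BAR %d (0x%llx bytes) -> high (>4GiB)\n",
                   i, (unsigned long long)bars[i].sz);
        else
        {
            printf("BAR %d (0x%llx bytes) -> low\n",
                   i, (unsigned long long)bars[i].sz);
            low_mmio_left -= bars[i].sz;
        }
        mmio_left -= bars[i].sz;
    }
}

int main(void)
{
    /* Hypothetical layout: 256MiB, 64MiB, and 16MiB BARs competing
     * for 128MiB of low MMIO space.  The 256MiB BAR goes high; the
     * rest then fit below 4GiB. */
    bar_t bars[] = { { 256 << 20, 1 }, { 64 << 20, 1 }, { 16 << 20, 0 } };

    place_bars(bars, 3, 128 << 20);
    return 0;
}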