From: Alexander Graf
Date: Mon, 12 Jan 2015 17:35:25 +0100
Message-ID: <54B3F7CD.1070902@suse.de>
In-Reply-To: <54B3F4E6.20401@huawei.com>
Subject: Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
To: Claudio Fontana, qemu-devel@nongnu.org
Cc: Peter Maydell, ard.biesheuvel@linaro.org, mst@redhat.com, rob.herring@linaro.org, stuart.yoder@freescale.com, a.rigo@virtualopensystems.com

On 12.01.15 17:23, Claudio Fontana wrote:
> On 08.01.2015 16:01, Claudio Fontana wrote:
>> On 08.01.2015 14:26, Alexander Graf wrote:
>>> On 08.01.15 13:55, Claudio Fontana wrote:
>>>> (added cc: Alvise, whom I mistakenly assumed was in Cc: already)
>>>
>>> He was in CC :). Now he's there twice.
>>>>
>>>> On 07.01.2015 22:47, Alexander Graf wrote:
>>>>> On 07.01.15 16:52, Claudio Fontana wrote:
>>>>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>>>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>>>>>> it into ARM's virt machine to always have PCIe available to normal ARM VMs.
>>>>>>>
>>>>>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>>>>>> into an AArch64 VM with this, and they all lived happily ever after.
>>>>>>>
>>>>>>> Signed-off-by: Alexander Graf
>>>>>>>
>>>>>>> ---
>>>>>>>
>>>>>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>>>>>>> systems. If you want to use it with AArch64 guests, please apply the following
>>>>>>> patch or wait until upstream has cleaned up the code properly:
>>>>>>>
>>>>>>> http://csgraf.de/agraf/pci/pci-3.19.patch
>>>>>>> ---
>>>>>>>  default-configs/arm-softmmu.mak |  2 +
>>>>>>>  hw/arm/virt.c                   | 83 ++++++++++++++++++++++++++++++++++++++---
>>>>>>>  2 files changed, 80 insertions(+), 5 deletions(-)
>>>
>>> [...]
>>>>>>> +    dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>>>>>> +
>>>>>>> +    qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>>>>>> +    qdev_init_nofail(dev);
>>>>>>> +
>>>>>>> +    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>>>>>> +    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>>>>>> +
>>>>>>> +    /* Map the MMIO window at the same spot in bus and cpu layouts */
>>>>>>> +    mmio_alias = g_new0(MemoryRegion, 1);
>>>>>>> +    mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>>>>>> +    memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>>>>>> +                             mmio_reg, base_mmio, size_mmio);
>>>>>>> +    memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>>>>>> +
>>>>>>> +    sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>>>>>> +
>>>>>>> +    nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>>>>>>> +    qemu_fdt_add_subnode(vbi->fdt, nodename);
>>>>>>> +    qemu_fdt_setprop_string(vbi->fdt, nodename,
>>>>>>> +                            "compatible", "pci-host-ecam-generic");
>>>>>>
>>>>>> Is this the only compatible string we should set here? Is this not legacy PCI compatible?
>>>>>> In other device trees I see this given as compatible = "arm,versatile-pci-hostbridge", "pci", for example;
>>>>>> would it be sensible to make it a list and include "pci" as well?
>>>>>
>>>>> I couldn't find anything that defines what a "pci" compatible should
>>>>> look like. We definitely don't implement the legacy PCI config space
>>>>> accessor registers.
>>>>
>>>> I see. I assumed this patch would support both CAM and ECAM as configuration methods, while now I understand
>>>> that Alvise's patches support only CAM, while these support only ECAM.
>>>> So basically I should look at the compatible string and then choose the configuration method accordingly.
>>>> I wonder if I should deal with the case where the compatible string contains both ECAM and CAM.
>>>
>>> Well, original PCI didn't even do CAM.
>>> You only had two 32-bit ioport registers that you tunnel all config
>>> space accesses through.
>>>
>>>>>>> +    qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>>>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>>>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>>>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>>>>>> +
>>>>>>> +    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>>>>>> +                                 2, base_ecam, 2, size_ecam);
>>>>>>> +    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>>>>>> +                                 1, 0x01000000, 2, 0,
>>>>>>> +                                 2, base_ioport, 2, size_ioport,
>>>>>>> +
>>>>>>> +                                 1, 0x02000000, 2, base_mmio,
>>>>>>> +                                 2, base_mmio, 2, size_mmio);
>>>>>>> +
>>>>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>>>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>>>>>> +                           0, 0, 0, /* device */
>>>>>>> +                           0, /* PCI irq */
>>>>>>> +                           gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>>>>>> +                           GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>>>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>>>>>> +                           0, 0, 0, /* device */
>>>>>>> +                           0 /* PCI irq */);
>>>>>>
>>>>>> The interrupt map does not seem to work for me; incidentally, this ends up being the same kind of
>>>>>> undocumented blob that Alvise posted in his series.
>>>>>
>>>>> How exactly is this undocumented? The "mask" is a mask over the first
>>>>> fields of an interrupt-map row plus an IRQ offset. So the mask above
>>>>> means "any device with any function and any IRQ on it maps to device IRQ
>>>>> 0", which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
>>>>
>>>> (see my answer to Peter below in the thread)
>>>>
>>>> This is a bit different from what Alvise's series is doing, I think (see later).
>>>
>>> Yes, but it's easier :).
>>>>>> Can you add a good comment about what the ranges property contains (the 0x01000000 and 0x02000000,
>>>>>> which I suspect mean IO vs. MMIO IIRC, but there is no need to be cryptic about it)?
>>>>>
>>>>> You're saying you'd prefer a define?
>>>>
>>>> Yes, that would be helpful :)
>>>>
>>>>>> How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because
>>>>>> that one works for me (tm).
>>>>>
>>>>> His implementation explicitly listed every PCI slot, while mine actually
>>>>> makes use of the mask and simply routes everything to a single IRQ line.
>>>>>
>>>>> The benefit of masking devfn out is that you don't need to worry about
>>>>> the number of slots you support - and anything performance critical
>>>>> should go via MSI-X anyway ;).
>>>>
>>>> The benefit for me (but probably for me only..) is that with one IRQ per slot I didn't have to implement
>>>> shared IRQs and MSI/MSI-X in the guest yet. But that should be done eventually anyway.
>>>
>>> You most likely wouldn't get one IRQ per slot anyway. Most PHBs expose 4
>>> outgoing IRQ lines, so you'll need to deal with sharing IRQs regardless.
>>>
>>> Also, sharing IRQ lines isn't incredibly black magic, and it is quite a
>>> fundamental PCI IRQ concept, so I'm glad I'm pushing you in the
>>> direction of implementing it early on :).
>
> Ok, I have tentatively implemented this, and I tested both Alvise's series and yours; both work OK for my use case.
>
> Note that I did not test any MSI/MSI-X, only the INTx method.

There is no MSI yet ;)


Alex