* [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
@ 2015-01-06 16:03 Alexander Graf
2015-01-06 16:03 ` [Qemu-devel] [PATCH 1/4] pci: Split pcie_host_mmcfg_map() Alexander Graf
` (6 more replies)
0 siblings, 7 replies; 44+ messages in thread
From: Alexander Graf @ 2015-01-06 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, ard.biesheuvel, rob.herring, mst, claudio.fontana,
stuart.yoder, a.rigo
Linux implements a nice binding to describe a "generic" PCI Express host bridge
using only device tree.
This patch set adds enough emulation logic to expose the parts that are
"generic" as a simple sysbus device and maps it into ARM's virt machine.
With this patch set, we can finally spawn PCI devices on ARM VMs. I was able
to get a fully DRM-enabled virtual machine with VGA, e1000 and XHCI (for
keyboard and mouse) up and working.
It's only a small step for QEMU, but a big step for ARM VMs' usability.
Happy new year!
Alexander Graf (4):
pci: Split pcie_host_mmcfg_map()
pci: Add generic PCIe host bridge
arm: Add PCIe host bridge in virt machine
arm: enable Bochs PCI VGA
default-configs/arm-softmmu.mak | 3 +
hw/arm/virt.c | 83 +++++++++++++++++++--
hw/pci-host/Makefile.objs | 1 +
hw/pci-host/gpex.c | 156 ++++++++++++++++++++++++++++++++++++++++
hw/pci/pcie_host.c | 9 ++-
include/hw/pci-host/gpex.h | 56 +++++++++++++++
include/hw/pci/pcie_host.h | 1 +
7 files changed, 302 insertions(+), 7 deletions(-)
create mode 100644 hw/pci-host/gpex.c
create mode 100644 include/hw/pci-host/gpex.h
--
1.7.12.4
^ permalink raw reply [flat|nested] 44+ messages in thread
* [Qemu-devel] [PATCH 1/4] pci: Split pcie_host_mmcfg_map()
2015-01-06 16:03 [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Alexander Graf
@ 2015-01-06 16:03 ` Alexander Graf
2015-01-12 16:28 ` Claudio Fontana
2015-01-06 16:03 ` [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge Alexander Graf
` (5 subsequent siblings)
6 siblings, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-06 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, ard.biesheuvel, rob.herring, mst, claudio.fontana,
stuart.yoder, a.rigo
The mmcfg space is a memory region that allows access to PCI config space
in the PCIe world. To maintain abstraction layers, I would like to expose
the mmcfg space as a sysbus mmio region rather than have it mapped straight
into the system's memory address space.
So this patch splits the initialization of the mmcfg space from the actual
mapping, allowing us to have an mmcfg memory region without the map.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
hw/pci/pcie_host.c | 9 +++++++--
include/hw/pci/pcie_host.h | 1 +
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/hw/pci/pcie_host.c b/hw/pci/pcie_host.c
index 3db038f..dfb4a2b 100644
--- a/hw/pci/pcie_host.c
+++ b/hw/pci/pcie_host.c
@@ -98,8 +98,7 @@ void pcie_host_mmcfg_unmap(PCIExpressHost *e)
}
}
-void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr,
- uint32_t size)
+void pcie_host_mmcfg_init(PCIExpressHost *e, uint32_t size)
{
assert(!(size & (size - 1))); /* power of 2 */
assert(size >= PCIE_MMCFG_SIZE_MIN);
@@ -107,6 +106,12 @@ void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr,
e->size = size;
memory_region_init_io(&e->mmio, OBJECT(e), &pcie_mmcfg_ops, e,
"pcie-mmcfg", e->size);
+}
+
+void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr,
+ uint32_t size)
+{
+ pcie_host_mmcfg_init(e, size);
e->base_addr = addr;
memory_region_add_subregion(get_system_memory(), e->base_addr, &e->mmio);
}
diff --git a/include/hw/pci/pcie_host.h b/include/hw/pci/pcie_host.h
index ff44ef6..4d23c80 100644
--- a/include/hw/pci/pcie_host.h
+++ b/include/hw/pci/pcie_host.h
@@ -50,6 +50,7 @@ struct PCIExpressHost {
};
void pcie_host_mmcfg_unmap(PCIExpressHost *e);
+void pcie_host_mmcfg_init(PCIExpressHost *e, uint32_t size);
void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr, uint32_t size);
void pcie_host_mmcfg_update(PCIExpressHost *e,
int enable,
--
1.7.12.4
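[Editor's note] The init/map split above follows a common pattern: one function creates a resource, another places it. A simplified sketch of that pattern, using hypothetical stand-in types rather than QEMU's real MemoryRegion API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the PCIExpressHost mmcfg state (not QEMU's real struct). */
typedef struct {
    uint64_t base_addr;
    uint32_t size;
    bool initialized;  /* region has been created */
    bool mapped;       /* region has been placed in system memory */
} MmcfgState;

/* Create the mmcfg region without placing it anywhere. This is what a
 * sysbus device wants: the machine code decides where to map it later. */
static void mmcfg_init(MmcfgState *s, uint32_t size)
{
    assert(!(size & (size - 1)));   /* power of 2, as in the patch */
    s->size = size;
    s->initialized = true;
}

/* Old-style behaviour: create the region and map it straight into the
 * system address space, now expressed in terms of mmcfg_init(). */
static void mmcfg_map(MmcfgState *s, uint64_t addr, uint32_t size)
{
    mmcfg_init(s, size);
    s->base_addr = addr;
    s->mapped = true;
}
```

After the split, existing callers of the map function keep working unchanged, while new sysbus users call only the init half.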
* [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-06 16:03 [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Alexander Graf
2015-01-06 16:03 ` [Qemu-devel] [PATCH 1/4] pci: Split pcie_host_mmcfg_map() Alexander Graf
@ 2015-01-06 16:03 ` Alexander Graf
2015-01-12 16:29 ` Claudio Fontana
2015-01-12 17:36 ` alvise rigo
2015-01-06 16:03 ` [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine Alexander Graf
` (4 subsequent siblings)
6 siblings, 2 replies; 44+ messages in thread
From: Alexander Graf @ 2015-01-06 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, ard.biesheuvel, rob.herring, mst, claudio.fontana,
stuart.yoder, a.rigo
By simply exposing the mmcfg region, an ioport window, an mmio window and an
IRQ line, we can create a workable PCIe host bridge that can be mapped anywhere
and only needs to be described to the OS using whatever means it likes.
This patch implements such a "generic" host bridge. It only supports a single
legacy IRQ line so far; MSIs need to be handled outside the host bridge.
This device is particularly useful for the "pci-host-ecam-generic" driver in
Linux.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
hw/pci-host/Makefile.objs | 1 +
hw/pci-host/gpex.c | 156 +++++++++++++++++++++++++++++++++++++++++++++
include/hw/pci-host/gpex.h | 56 ++++++++++++++++
3 files changed, 213 insertions(+)
create mode 100644 hw/pci-host/gpex.c
create mode 100644 include/hw/pci-host/gpex.h
diff --git a/hw/pci-host/Makefile.objs b/hw/pci-host/Makefile.objs
index bb65f9c..45f1f0e 100644
--- a/hw/pci-host/Makefile.objs
+++ b/hw/pci-host/Makefile.objs
@@ -15,3 +15,4 @@ common-obj-$(CONFIG_PCI_APB) += apb.o
common-obj-$(CONFIG_FULONG) += bonito.o
common-obj-$(CONFIG_PCI_PIIX) += piix.o
common-obj-$(CONFIG_PCI_Q35) += q35.o
+common-obj-$(CONFIG_PCI_GENERIC) += gpex.o
diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
new file mode 100644
index 0000000..bd62a3c
--- /dev/null
+++ b/hw/pci-host/gpex.c
@@ -0,0 +1,156 @@
+/*
+ * QEMU Generic PCI Express Bridge Emulation
+ *
+ * Copyright (C) 2015 Alexander Graf <agraf@suse.de>
+ *
+ * Code loosely based on q35.c.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+#include "hw/hw.h"
+#include "hw/pci-host/gpex.h"
+
+/****************************************************************************
+ * GPEX host
+ */
+
+static void gpex_set_irq(void *opaque, int irq_num, int level)
+{
+ GPEXHost *s = opaque;
+
+ qemu_set_irq(s->irq, level);
+}
+
+static int gpex_map_irq(PCIDevice *pci_dev, int irq_num)
+{
+ /* We only support one IRQ line so far */
+ return 0;
+}
+
+static void gpex_host_realize(DeviceState *dev, Error **errp)
+{
+ PCIHostState *pci = PCI_HOST_BRIDGE(dev);
+ GPEXHost *s = GPEX_HOST(dev);
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
+ PCIExpressHost *pex = PCIE_HOST_BRIDGE(dev);
+
+ pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MIN);
+ memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", s->mmio_window_size);
+ memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
+
+ sysbus_init_mmio(sbd, &pex->mmio);
+ sysbus_init_mmio(sbd, &s->io_mmio);
+ sysbus_init_mmio(sbd, &s->io_ioport);
+ sysbus_init_irq(sbd, &s->irq);
+
+ pci->bus = pci_register_bus(dev, "pcie.0", gpex_set_irq, gpex_map_irq, s,
+ &s->io_mmio, &s->io_ioport, 0, 1, TYPE_PCIE_BUS);
+
+ qdev_set_parent_bus(DEVICE(&s->gpex_root), BUS(pci->bus));
+ qdev_init_nofail(DEVICE(&s->gpex_root));
+}
+
+static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
+ PCIBus *rootbus)
+{
+ return "0000:00";
+}
+
+static Property gpex_root_props[] = {
+ DEFINE_PROP_UINT64("mmio_window_size", GPEXHost, mmio_window_size, 1ULL << 32),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void gpex_host_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ PCIHostBridgeClass *hc = PCI_HOST_BRIDGE_CLASS(klass);
+
+ hc->root_bus_path = gpex_host_root_bus_path;
+ dc->realize = gpex_host_realize;
+ dc->props = gpex_root_props;
+ set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
+ dc->fw_name = "pci";
+}
+
+static void gpex_host_initfn(Object *obj)
+{
+ GPEXHost *s = GPEX_HOST(obj);
+
+ object_initialize(&s->gpex_root, sizeof(s->gpex_root), TYPE_GPEX_ROOT_DEVICE);
+ object_property_add_child(OBJECT(s), "gpex_root", OBJECT(&s->gpex_root), NULL);
+ qdev_prop_set_uint32(DEVICE(&s->gpex_root), "addr", PCI_DEVFN(0, 0));
+ qdev_prop_set_bit(DEVICE(&s->gpex_root), "multifunction", false);
+}
+
+static const TypeInfo gpex_host_info = {
+ .name = TYPE_GPEX_HOST,
+ .parent = TYPE_PCIE_HOST_BRIDGE,
+ .instance_size = sizeof(GPEXHost),
+ .instance_init = gpex_host_initfn,
+ .class_init = gpex_host_class_init,
+};
+
+/****************************************************************************
+ * GPEX Root D0:F0
+ */
+
+static const VMStateDescription vmstate_gpex_root = {
+ .name = "gpex_root",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_PCI_DEVICE(parent_obj, GPEXRootState),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+static void gpex_root_class_init(ObjectClass *klass, void *data)
+{
+ PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+ DeviceClass *dc = DEVICE_CLASS(klass);
+
+ set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
+ dc->desc = "Host bridge";
+ dc->vmsd = &vmstate_gpex_root;
+ k->vendor_id = PCI_VENDOR_ID_REDHAT;
+ k->device_id = PCI_DEVICE_ID_REDHAT_BRIDGE;
+ k->revision = 0;
+ k->class_id = PCI_CLASS_BRIDGE_HOST;
+ /*
+ * PCI-facing part of the host bridge, not usable without the
+ * host-facing part, which can't be device_add'ed, yet.
+ */
+ dc->cannot_instantiate_with_device_add_yet = true;
+}
+
+static const TypeInfo gpex_root_info = {
+ .name = TYPE_GPEX_ROOT_DEVICE,
+ .parent = TYPE_PCI_DEVICE,
+ .instance_size = sizeof(GPEXRootState),
+ .class_init = gpex_root_class_init,
+};
+
+static void gpex_register(void)
+{
+ type_register_static(&gpex_root_info);
+ type_register_static(&gpex_host_info);
+}
+
+type_init(gpex_register);
diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
new file mode 100644
index 0000000..5cf2073
--- /dev/null
+++ b/include/hw/pci-host/gpex.h
@@ -0,0 +1,56 @@
+/*
+ * gpex.h
+ *
+ * Copyright (C) 2015 Alexander Graf <agraf@suse.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>
+ */
+
+#ifndef HW_GPEX_H
+#define HW_GPEX_H
+
+#include "hw/hw.h"
+#include "hw/sysbus.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/pcie_host.h"
+
+#define TYPE_GPEX_HOST "gpex-pcihost"
+#define GPEX_HOST(obj) \
+ OBJECT_CHECK(GPEXHost, (obj), TYPE_GPEX_HOST)
+
+#define TYPE_GPEX_ROOT_DEVICE "gpex-root"
+#define MCH_PCI_DEVICE(obj) \
+ OBJECT_CHECK(GPEXRootState, (obj), TYPE_GPEX_ROOT_DEVICE)
+
+typedef struct GPEXRootState {
+ /*< private >*/
+ PCIDevice parent_obj;
+ /*< public >*/
+} GPEXRootState;
+
+typedef struct GPEXHost {
+ /*< private >*/
+ PCIExpressHost parent_obj;
+ /*< public >*/
+
+ GPEXRootState gpex_root;
+
+ MemoryRegion io_ioport;
+ MemoryRegion io_mmio;
+ qemu_irq irq;
+
+ uint64_t mmio_window_size;
+} GPEXHost;
+
+#endif /* HW_GPEX_H */
--
1.7.12.4
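[Editor's note] As background for the "pci-host-ecam-generic" binding this bridge targets: ECAM (the mmcfg region above) gives every PCI function a 4 KiB config-space window, and the offset into the region directly encodes bus, device and function numbers. A minimal sketch of that decode (illustrative helper, not code from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* ECAM offset layout: bus[27:20], device[19:15], function[14:12],
 * register[11:0] -- i.e. 4 KiB of config space per function. */
static uint64_t ecam_offset(unsigned bus, unsigned dev,
                            unsigned fn, unsigned reg)
{
    return ((uint64_t)bus << 20) | ((uint64_t)dev << 15) |
           ((uint64_t)fn << 12) | reg;
}
```

This is also why the bridge's minimum mmcfg size covers exactly one bus: 32 devices x 8 functions x 4 KiB = 1 MiB.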
* [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-06 16:03 [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Alexander Graf
2015-01-06 16:03 ` [Qemu-devel] [PATCH 1/4] pci: Split pcie_host_mmcfg_map() Alexander Graf
2015-01-06 16:03 ` [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge Alexander Graf
@ 2015-01-06 16:03 ` Alexander Graf
2015-01-07 15:52 ` Claudio Fontana
` (2 more replies)
2015-01-06 16:03 ` [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA Alexander Graf
` (3 subsequent siblings)
6 siblings, 3 replies; 44+ messages in thread
From: Alexander Graf @ 2015-01-06 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, ard.biesheuvel, rob.herring, mst, claudio.fontana,
stuart.yoder, a.rigo
Now that we have a working "generic" PCIe host bridge driver, we can plug
it into ARM's virt machine so that PCIe is always available to normal ARM VMs.
With this, I've successfully exposed a Bochs VGA device, an XHCI controller
and an e1000 to an AArch64 VM, and they all lived happily ever after.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
Linux 3.19 supports the generic PCIe host bridge driver only on 32-bit ARM
systems. If you want to use it with AArch64 guests, please apply the following
patch or wait until upstream has cleaned up the code properly:
http://csgraf.de/agraf/pci/pci-3.19.patch
---
default-configs/arm-softmmu.mak | 2 +
hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
2 files changed, 80 insertions(+), 5 deletions(-)
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
index f3513fa..7671ee2 100644
--- a/default-configs/arm-softmmu.mak
+++ b/default-configs/arm-softmmu.mak
@@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
CONFIG_VERSATILE_PCI=y
CONFIG_VERSATILE_I2C=y
+CONFIG_PCI_GENERIC=y
+
CONFIG_SDHCI=y
CONFIG_INTEGRATOR_DEBUG=y
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 2353440..b7635ac 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -42,6 +42,7 @@
#include "exec/address-spaces.h"
#include "qemu/bitops.h"
#include "qemu/error-report.h"
+#include "hw/pci-host/gpex.h"
#define NUM_VIRTIO_TRANSPORTS 32
@@ -69,6 +70,7 @@ enum {
VIRT_MMIO,
VIRT_RTC,
VIRT_FW_CFG,
+ VIRT_PCIE,
};
typedef struct MemMapEntry {
@@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
[VIRT_FW_CFG] = { 0x09020000, 0x0000000a },
[VIRT_MMIO] = { 0x0a000000, 0x00000200 },
/* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
- /* 0x10000000 .. 0x40000000 reserved for PCI */
+ [VIRT_PCIE] = { 0x10000000, 0x30000000 },
[VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
};
static const int a15irqmap[] = {
[VIRT_UART] = 1,
[VIRT_RTC] = 2,
+ [VIRT_PCIE] = 3,
[VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
};
@@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
}
}
-static void fdt_add_gic_node(const VirtBoardInfo *vbi)
+static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
{
uint32_t gic_phandle;
@@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
2, vbi->memmap[VIRT_GIC_CPU].base,
2, vbi->memmap[VIRT_GIC_CPU].size);
qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
+
+ return gic_phandle;
}
-static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
+static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
{
/* We create a standalone GIC v2 */
DeviceState *gicdev;
@@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
pic[i] = qdev_get_gpio_in(gicdev, i);
}
- fdt_add_gic_node(vbi);
+ return fdt_add_gic_node(vbi);
}
static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
@@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
g_free(nodename);
}
+static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
+ uint32_t gic_phandle)
+{
+ hwaddr base = vbi->memmap[VIRT_PCIE].base;
+ hwaddr size = vbi->memmap[VIRT_PCIE].size;
+ hwaddr size_ioport = 64 * 1024;
+ hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
+ hwaddr size_mmio = size - size_ecam - size_ioport;
+ hwaddr base_mmio = base;
+ hwaddr base_ioport = base_mmio + size_mmio;
+ hwaddr base_ecam = base_ioport + size_ioport;
+ int irq = vbi->irqmap[VIRT_PCIE];
+ MemoryRegion *mmio_alias;
+ MemoryRegion *mmio_reg;
+ DeviceState *dev;
+ char *nodename;
+
+ dev = qdev_create(NULL, TYPE_GPEX_HOST);
+
+ qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
+ qdev_init_nofail(dev);
+
+ sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
+ sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
+
+ /* Map the MMIO window at the same spot in bus and cpu layouts */
+ mmio_alias = g_new0(MemoryRegion, 1);
+ mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
+ memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
+ mmio_reg, base_mmio, size_mmio);
+ memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
+
+ sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
+
+ nodename = g_strdup_printf("/pcie@%" PRIx64, base);
+ qemu_fdt_add_subnode(vbi->fdt, nodename);
+ qemu_fdt_setprop_string(vbi->fdt, nodename,
+ "compatible", "pci-host-ecam-generic");
+ qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
+ qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
+ qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
+ qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
+
+ qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
+ 2, base_ecam, 2, size_ecam);
+ qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
+ 1, 0x01000000, 2, 0,
+ 2, base_ioport, 2, size_ioport,
+
+ 1, 0x02000000, 2, base_mmio,
+ 2, base_mmio, 2, size_mmio);
+
+ qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
+ qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
+ 0, 0, 0, /* device */
+ 0, /* PCI irq */
+ gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
+ qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
+ 0, 0, 0, /* device */
+ 0 /* PCI irq */);
+
+ g_free(nodename);
+}
+
static void *machvirt_dtb(const struct arm_boot_info *binfo, int *fdt_size)
{
const VirtBoardInfo *board = (const VirtBoardInfo *)binfo;
@@ -573,6 +643,7 @@ static void machvirt_init(MachineState *machine)
MemoryRegion *ram = g_new(MemoryRegion, 1);
const char *cpu_model = machine->cpu_model;
VirtBoardInfo *vbi;
+ uint32_t gic_phandle;
if (!cpu_model) {
cpu_model = "cortex-a15";
@@ -634,12 +705,14 @@ static void machvirt_init(MachineState *machine)
create_flash(vbi);
- create_gic(vbi, pic);
+ gic_phandle = create_gic(vbi, pic);
create_uart(vbi, pic);
create_rtc(vbi, pic);
+ create_pcie(vbi, pic, gic_phandle);
+
/* Create mmio transports, so the user can create virtio backends
* (which will be automatically plugged in to the transports). If
* no backend is created the transport will just sit harmlessly idle.
--
1.7.12.4
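[Editor's note] The address-space carving in create_pcie() above can be sanity-checked with a little arithmetic. Assuming PCIE_MMCFG_SIZE_MIN is 1 MiB (config space for a single bus), the 0x10000000..0x40000000 window splits into MMIO, ioport and ECAM regions that exactly fill it. A sketch mirroring that layout (hypothetical helper, not the patch's code):

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t base_mmio, size_mmio;
    uint64_t base_ioport, size_ioport;
    uint64_t base_ecam, size_ecam;
} PcieWindows;

/* Mirror create_pcie(): the MMIO window comes first, then the 64 KiB
 * ioport window, then the ECAM region at the top of the PCIe area. */
static PcieWindows carve_pcie(uint64_t base, uint64_t size,
                              uint64_t ecam_size)
{
    PcieWindows w;
    w.size_ecam = ecam_size;
    w.size_ioport = 64 * 1024;
    w.size_mmio = size - w.size_ecam - w.size_ioport;
    w.base_mmio = base;
    w.base_ioport = w.base_mmio + w.size_mmio;
    w.base_ecam = w.base_ioport + w.size_ioport;
    return w;
}
```

With the virt memmap values, the three regions abut each other and the ECAM region ends exactly at the 0x40000000 start of RAM.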
* [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA
2015-01-06 16:03 [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Alexander Graf
` (2 preceding siblings ...)
2015-01-06 16:03 ` [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine Alexander Graf
@ 2015-01-06 16:03 ` Alexander Graf
2015-01-06 16:16 ` Peter Maydell
2015-01-07 13:52 ` [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Claudio Fontana
` (2 subsequent siblings)
6 siblings, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-06 16:03 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Maydell, ard.biesheuvel, rob.herring, mst, claudio.fontana,
stuart.yoder, a.rigo
Some ARM platforms can successfully map PCI devices into the guest, so it only
makes sense to also add support for the Bochs virtual VGA adapter on those.
Signed-off-by: Alexander Graf <agraf@suse.de>
---
default-configs/arm-softmmu.mak | 1 +
1 file changed, 1 insertion(+)
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
index 7671ee2..1581c85 100644
--- a/default-configs/arm-softmmu.mak
+++ b/default-configs/arm-softmmu.mak
@@ -83,6 +83,7 @@ CONFIG_VERSATILE_PCI=y
CONFIG_VERSATILE_I2C=y
CONFIG_PCI_GENERIC=y
+CONFIG_VGA_PCI=y
CONFIG_SDHCI=y
CONFIG_INTEGRATOR_DEBUG=y
--
1.7.12.4
* Re: [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA
2015-01-06 16:03 ` [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA Alexander Graf
@ 2015-01-06 16:16 ` Peter Maydell
2015-01-06 21:08 ` Alexander Graf
0 siblings, 1 reply; 44+ messages in thread
From: Peter Maydell @ 2015-01-06 16:16 UTC (permalink / raw)
To: Alexander Graf
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, Alvise Rigo, Stuart Yoder
On 6 January 2015 at 16:03, Alexander Graf <agraf@suse.de> wrote:
> Some ARM platforms can successfully map PCI devices into the guest, so it only
> makes sense to also add support for the Bochs virtual VGA adapter on those.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
> default-configs/arm-softmmu.mak | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
> index 7671ee2..1581c85 100644
> --- a/default-configs/arm-softmmu.mak
> +++ b/default-configs/arm-softmmu.mak
> @@ -83,6 +83,7 @@ CONFIG_VERSATILE_PCI=y
> CONFIG_VERSATILE_I2C=y
>
> CONFIG_PCI_GENERIC=y
> +CONFIG_VGA_PCI=y
Why isn't this just in pci.mak like all the other PCI devices?
-- PMM
* Re: [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA
2015-01-06 16:16 ` Peter Maydell
@ 2015-01-06 21:08 ` Alexander Graf
2015-01-06 21:28 ` Peter Maydell
0 siblings, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-06 21:08 UTC (permalink / raw)
To: Peter Maydell
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, Alvise Rigo, Stuart Yoder
On 06.01.15 17:16, Peter Maydell wrote:
> On 6 January 2015 at 16:03, Alexander Graf <agraf@suse.de> wrote:
>> Some ARM platforms can successfully map PCI devices into the guest, so it only
>> makes sense to also add support for the Bochs virtual VGA adapter on those.
>>
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>> ---
>> default-configs/arm-softmmu.mak | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
>> index 7671ee2..1581c85 100644
>> --- a/default-configs/arm-softmmu.mak
>> +++ b/default-configs/arm-softmmu.mak
>> @@ -83,6 +83,7 @@ CONFIG_VERSATILE_PCI=y
>> CONFIG_VERSATILE_I2C=y
>>
>> CONFIG_PCI_GENERIC=y
>> +CONFIG_VGA_PCI=y
>
> Why isn't this just in pci.mak like all the other PCI devices?
Honestly, I have no idea. Maybe Michael knows? But if everyone agrees it
should be there, I'd be happy to move it.
Alex
* Re: [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA
2015-01-06 21:08 ` Alexander Graf
@ 2015-01-06 21:28 ` Peter Maydell
2015-01-06 21:42 ` Alexander Graf
0 siblings, 1 reply; 44+ messages in thread
From: Peter Maydell @ 2015-01-06 21:28 UTC (permalink / raw)
To: Alexander Graf
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, Alvise Rigo, Stuart Yoder
On 6 January 2015 at 21:08, Alexander Graf <agraf@suse.de> wrote:
> On 06.01.15 17:16, Peter Maydell wrote:
>> On 6 January 2015 at 16:03, Alexander Graf <agraf@suse.de> wrote:
>>> +CONFIG_VGA_PCI=y
>>
>> Why isn't this just in pci.mak like all the other PCI devices?
>
> Honestly, I have no idea. Maybe Michael knows? But if everyone agrees it
> should be there, I'd be happy to move it.
Well, currently the only configs which include pci.mak and don't
also define CONFIG_VGA_PCI are arm, m68k, sh4 and sh4eb, and with
your change arm would move into the other category. It seems more
likely to me that it's just oversight that it's not included in
pci.mak...
At any rate, given that both sh4 and m68k are pretty much orphan
currently, I don't think anybody's going to notice or complain
about the existence of another PCI device :-)
-- PMM
* Re: [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA
2015-01-06 21:28 ` Peter Maydell
@ 2015-01-06 21:42 ` Alexander Graf
2015-01-07 6:22 ` Paolo Bonzini
0 siblings, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-06 21:42 UTC (permalink / raw)
To: Peter Maydell
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, Alvise Rigo, Stuart Yoder
On 06.01.15 22:28, Peter Maydell wrote:
> On 6 January 2015 at 21:08, Alexander Graf <agraf@suse.de> wrote:
>> On 06.01.15 17:16, Peter Maydell wrote:
>>> On 6 January 2015 at 16:03, Alexander Graf <agraf@suse.de> wrote:
>>>> +CONFIG_VGA_PCI=y
>>>
>>> Why isn't this just in pci.mak like all the other PCI devices?
>>
>> Honestly, I have no idea. Maybe Michael knows? But if everyone agrees it
>> should be there, I'd be happy to move it.
>
> Well, currently the only configs which include pci.mak and don't
> also define CONFIG_VGA_PCI are arm, m68k, sh4 and sh4eb, and with
> your change arm would move into the other category. It seems more
> likely to me that it's just oversight that it's not included in
> pci.mak...
>
> At any rate, given that both sh4 and m68k are pretty much orphan
> currently, I don't think anybody's going to notice or complain
> about the existence of another PCI device :-)
Ok, works for me. I've changed the patch to move the PCI VGA and VGA
options to pci.mak.
I've not moved CIRRUS or QXL yet though. When I tried, cirrus didn't
work - it probably needs access to the legacy VGA regions that don't get
mapped with the gpex phb. And for QXL I'd rather have someone put a
stamp on it saying that it at least has a remote chance of working ;).
Alex
* Re: [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA
2015-01-06 21:42 ` Alexander Graf
@ 2015-01-07 6:22 ` Paolo Bonzini
0 siblings, 0 replies; 44+ messages in thread
From: Paolo Bonzini @ 2015-01-07 6:22 UTC (permalink / raw)
To: Alexander Graf, Peter Maydell
Cc: Rob Herring, Stuart Yoder, Ard Biesheuvel, Michael S. Tsirkin,
Claudio Fontana, Alvise Rigo, QEMU Developers
On 06/01/2015 22:42, Alexander Graf wrote:
> I've not moved CIRRUS or QXL yet though. When I tried, cirrus didn't
> work - it probably needs access to the legacy VGA regions that don't get
> mapped with the gpex phb.
Yes, Bochs VGA has the registers-in-BAR thing, so it always works.
Paolo
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-06 16:03 [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Alexander Graf
` (3 preceding siblings ...)
2015-01-06 16:03 ` [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA Alexander Graf
@ 2015-01-07 13:52 ` Claudio Fontana
2015-01-07 14:07 ` Alexander Graf
2015-01-12 16:24 ` Claudio Fontana
2015-01-21 12:59 ` Claudio Fontana
6 siblings, 1 reply; 44+ messages in thread
From: Claudio Fontana @ 2015-01-07 13:52 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
Hi Alexander, happy new year!
On 06.01.2015 17:03, Alexander Graf wrote:
> Linux implements a nice binding to describe a "generic" PCI Express host bridge
> using only device tree.
>
> This patch set adds enough emulation logic to expose the parts that are
> "generic" as a simple sysbus device and maps it into ARM's virt machine.
>
> With this patch set, we can finally spawn PCI devices on ARM VMs. I was able
> to have a fully DRM enabled virtual machine with VGA, e1000 and XHCI (for
> keyboard and mouse) up and working.
>
> It's only a small step for QEMU, but a big step for ARM VM's usability.
I tried to test your patches, but I ran into trouble quite early:
I usually run qemu-system-aarch64 for pci for OSv with the following command line (using the patches from Alvise):
./qemu-system-aarch64 -nographic -machine type=virt -enable-kvm -kernel ./loader.img -cpu host -m 1024M -drive file=usr.img,if=none,id=hd0,media=disk -device virtio-blk-pci,id=blk0,bootindex=0,drive=hd0,scsi=off,vectors=0 -device virtio-rng-pci -netdev user,id=un0,net=xxx.xxx.xxx.xxx/xx,host=xxx.xxx.xxx.xxx -redir tcp:2222::22 -device virtio-net-pci,netdev=un0,vectors=0
and with this series I get:
qemu-system-aarch64: Unknown device 'gpex-pcihost' for default sysbus
Is there maybe something I need to add to the command line to enable the gpex-pcihost?
Thank you,
Claudio
>
>
> Happy new year!
>
> Alexander Graf (4):
> pci: Split pcie_host_mmcfg_map()
> pci: Add generic PCIe host bridge
> arm: Add PCIe host bridge in virt machine
> arm: enable Bochs PCI VGA
>
> default-configs/arm-softmmu.mak | 3 +
> hw/arm/virt.c | 83 +++++++++++++++++++--
> hw/pci-host/Makefile.objs | 1 +
> hw/pci-host/gpex.c | 156 ++++++++++++++++++++++++++++++++++++++++
> hw/pci/pcie_host.c | 9 ++-
> include/hw/pci-host/gpex.h | 56 +++++++++++++++
> include/hw/pci/pcie_host.h | 1 +
> 7 files changed, 302 insertions(+), 7 deletions(-)
> create mode 100644 hw/pci-host/gpex.c
> create mode 100644 include/hw/pci-host/gpex.h
>
--
Claudio Fontana
Server Virtualization Architect
Huawei Technologies Duesseldorf GmbH
Riesstraße 25 - 80992 München
office: +49 89 158834 4135
mobile: +49 15253060158
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-07 13:52 ` [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Claudio Fontana
@ 2015-01-07 14:07 ` Alexander Graf
2015-01-07 14:26 ` Claudio Fontana
0 siblings, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-07 14:07 UTC (permalink / raw)
To: Claudio Fontana
Cc: Peter Maydell, <ard.biesheuvel@linaro.org>,
<rob.herring@linaro.org>, <qemu-devel@nongnu.org>,
<mst@redhat.com>, <a.rigo@virtualopensystems.com>,
<stuart.yoder@freescale.com>
> Am 07.01.2015 um 14:52 schrieb Claudio Fontana <claudio.fontana@huawei.com>:
>
> Hi Alexander, happy new year!
>
>> On 06.01.2015 17:03, Alexander Graf wrote:
>> Linux implements a nice binding to describe a "generic" PCI Express host bridge
>> using only device tree.
>>
>> This patch set adds enough emulation logic to expose the parts that are
>> "generic" as a simple sysbus device and maps it into ARM's virt machine.
>>
>> With this patch set, we can finally spawn PCI devices on ARM VMs. I was able
>> to have a fully DRM enabled virtual machine with VGA, e1000 and XHCI (for
>> keyboard and mouse) up and working.
>>
>> It's only a small step for QEMU, but a big step for ARM VM's usability.
>
> I tried to test your patches, but I get in trouble quite early:
>
> I usually run qemu-system-aarch64 for pci for OSv with the following command line (using the patches from Alvise):
>
> ./qemu-system-aarch64 -nographic -machine type=virt -enable-kvm -kernel ./loader.img -cpu host -m 1024M -drive file=usr.img,if=none,id=hd0,media=disk -device virtio-blk-pci,id=blk0,bootindex=0,drive=hd0,scsi=off,vectors=0 -device virtio-rng-pci -netdev user,id=un0,net=xxx.xxx.xxx.xxx/xx,host=xxx.xxx.xxx.xxx -redir tcp:2222::22 -device virtio-net-pci,netdev=un0,vectors=0
>
> and with this series I get:
>
> qemu-system-aarch64: Unknown device 'gpex-pcihost' for default sysbus
>
> Is there something I need to mention in the command line to enable the gpex-pcihost maybe?
If I had to guess I'd say you're missing the object file in your binary. Did you run configure again after applying the patches?
Alex
>
> Thank you,
>
> Claudio
>
>
>>
>>
>> Happy new year!
>>
>> Alexander Graf (4):
>> pci: Split pcie_host_mmcfg_map()
>> pci: Add generic PCIe host bridge
>> arm: Add PCIe host bridge in virt machine
>> arm: enable Bochs PCI VGA
>>
>> default-configs/arm-softmmu.mak | 3 +
>> hw/arm/virt.c | 83 +++++++++++++++++++--
>> hw/pci-host/Makefile.objs | 1 +
>> hw/pci-host/gpex.c | 156 ++++++++++++++++++++++++++++++++++++++++
>> hw/pci/pcie_host.c | 9 ++-
>> include/hw/pci-host/gpex.h | 56 +++++++++++++++
>> include/hw/pci/pcie_host.h | 1 +
>> 7 files changed, 302 insertions(+), 7 deletions(-)
>> create mode 100644 hw/pci-host/gpex.c
>> create mode 100644 include/hw/pci-host/gpex.h
>>
>
>
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-07 14:07 ` Alexander Graf
@ 2015-01-07 14:26 ` Claudio Fontana
2015-01-07 14:36 ` Alexander Graf
2015-01-07 16:31 ` Peter Maydell
0 siblings, 2 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-07 14:26 UTC (permalink / raw)
To: Alexander Graf
Cc: Peter Maydell, <ard.biesheuvel@linaro.org>,
<rob.herring@linaro.org>, <qemu-devel@nongnu.org>,
<mst@redhat.com>, <a.rigo@virtualopensystems.com>,
<stuart.yoder@freescale.com>
On 07.01.2015 15:07, Alexander Graf wrote:
>
>
>
>> Am 07.01.2015 um 14:52 schrieb Claudio Fontana <claudio.fontana@huawei.com>:
>>
>> Hi Alexander, happy new year!
>>
>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>> Linux implements a nice binding to describe a "generic" PCI Express host bridge
>>> using only device tree.
>>>
>>> This patch set adds enough emulation logic to expose the parts that are
>>> "generic" as a simple sysbus device and maps it into ARM's virt machine.
>>>
>>> With this patch set, we can finally spawn PCI devices on ARM VMs. I was able
>>> to have a fully DRM enabled virtual machine with VGA, e1000 and XHCI (for
>>> keyboard and mouse) up and working.
>>>
>>> It's only a small step for QEMU, but a big step for ARM VM's usability.
>>
>> I tried to test your patches, but I get in trouble quite early:
>>
>> I usually run qemu-system-aarch64 for pci for OSv with the following command line (using the patches from Alvise):
>>
>> ./qemu-system-aarch64 -nographic -machine type=virt -enable-kvm -kernel ./loader.img -cpu host -m 1024M -drive file=usr.img,if=none,id=hd0,media=disk -device virtio-blk-pci,id=blk0,bootindex=0,drive=hd0,scsi=off,vectors=0 -device virtio-rng-pci -netdev user,id=un0,net=xxx.xxx.xxx.xxx/xx,host=xxx.xxx.xxx.xxx -redir tcp:2222::22 -device virtio-net-pci,netdev=un0,vectors=0
>>
>> and with this series I get:
>>
>> qemu-system-aarch64: Unknown device 'gpex-pcihost' for default sysbus
>>
>> Is there something I need to mention in the command line to enable the gpex-pcihost maybe?
>
> If I had to guess I'd say you're missing the object file in your binary. Did you run configure again after applying the patches?
>
> Alex
Yes I did but it seems it's not picking up the CONFIG_PCI_GENERIC=y for some reason.
If I force hw/pci-host/Makefile.objs to build it by making it a common-obj-y then it builds.
I see the CONFIG_PCI_GENERIC in default-configs/arm-softmmu.mak,
and there is a default-configs/aarch64-softmmu.mak which says
include arm-softmmu.mak
but still it does not seem to pick it up over here..
while it picks up the CONFIG_PCI from the other mak files for example.
Weird.. does it build on AArch64 for you? Or did you test only 32bit?
Ciao,
Claudio
>
>>
>> Thank you,
>>
>> Claudio
>>
>>
>>>
>>>
>>> Happy new year!
>>>
>>> Alexander Graf (4):
>>> pci: Split pcie_host_mmcfg_map()
>>> pci: Add generic PCIe host bridge
>>> arm: Add PCIe host bridge in virt machine
>>> arm: enable Bochs PCI VGA
>>>
>>> default-configs/arm-softmmu.mak | 3 +
>>> hw/arm/virt.c | 83 +++++++++++++++++++--
>>> hw/pci-host/Makefile.objs | 1 +
>>> hw/pci-host/gpex.c | 156 ++++++++++++++++++++++++++++++++++++++++
>>> hw/pci/pcie_host.c | 9 ++-
>>> include/hw/pci-host/gpex.h | 56 +++++++++++++++
>>> include/hw/pci/pcie_host.h | 1 +
>>> 7 files changed, 302 insertions(+), 7 deletions(-)
>>> create mode 100644 hw/pci-host/gpex.c
>>> create mode 100644 include/hw/pci-host/gpex.h
>>>
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-07 14:26 ` Claudio Fontana
@ 2015-01-07 14:36 ` Alexander Graf
2015-01-07 15:16 ` Claudio Fontana
2015-01-07 16:31 ` Peter Maydell
1 sibling, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-07 14:36 UTC (permalink / raw)
To: Claudio Fontana
Cc: Peter Maydell, <ard.biesheuvel@linaro.org>,
<rob.herring@linaro.org>, <qemu-devel@nongnu.org>,
<mst@redhat.com>, <a.rigo@virtualopensystems.com>,
<stuart.yoder@freescale.com>
On 07.01.15 15:26, Claudio Fontana wrote:
> On 07.01.2015 15:07, Alexander Graf wrote:
>>
>>
>>
>>> Am 07.01.2015 um 14:52 schrieb Claudio Fontana <claudio.fontana@huawei.com>:
>>>
>>> Hi Alexander, happy new year!
>>>
>>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>> Linux implements a nice binding to describe a "generic" PCI Express host bridge
>>>> using only device tree.
>>>>
>>>> This patch set adds enough emulation logic to expose the parts that are
>>>> "generic" as a simple sysbus device and maps it into ARM's virt machine.
>>>>
>>>> With this patch set, we can finally spawn PCI devices on ARM VMs. I was able
>>>> to have a fully DRM enabled virtual machine with VGA, e1000 and XHCI (for
>>>> keyboard and mouse) up and working.
>>>>
>>>> It's only a small step for QEMU, but a big step for ARM VM's usability.
>>>
>>> I tried to test your patches, but I get in trouble quite early:
>>>
>>> I usually run qemu-system-aarch64 for pci for OSv with the following command line (using the patches from Alvise):
>>>
>>> ./qemu-system-aarch64 -nographic -machine type=virt -enable-kvm -kernel ./loader.img -cpu host -m 1024M -drive file=usr.img,if=none,id=hd0,media=disk -device virtio-blk-pci,id=blk0,bootindex=0,drive=hd0,scsi=off,vectors=0 -device virtio-rng-pci -netdev user,id=un0,net=xxx.xxx.xxx.xxx/xx,host=xxx.xxx.xxx.xxx -redir tcp:2222::22 -device virtio-net-pci,netdev=un0,vectors=0
>>>
>>> and with this series I get:
>>>
>>> qemu-system-aarch64: Unknown device 'gpex-pcihost' for default sysbus
>>>
>>> Is there something I need to mention in the command line to enable the gpex-pcihost maybe?
>>
>> If I had to guess I'd say you're missing the object file in your binary. Did you run configure again after applying the patches?
>>
>> Alex
>
> Yes I did but it seems it's not picking up the CONFIG_PCI_GENERIC=y for some reason.
> If I force hw/pci-host/Makefile.objs to build it by making it a common-obj-y then it builds.
>
> I see the CONFIG_PCI_GENERIC in default-configs/arm-softmmu.mak,
>
> and there is a default-configs/aarch64-softmmu.mak which says
>
> include arm-softmmu.mak
>
> but still it does not seem to pick it up over here..
>
> while it picks up the CONFIG_PCI from the other mak files for example.
>
>> Weird.. does it build on AArch64 for you? Or did you test only 32bit?
Well, in fact I only tested aarch64 :). It definitely worked for me.
Can you try to do a fresh checkout and a new configure run on that one?
You should have a line saying
CONFIG_PCI_GENERIC=y
in aarch64-softmmu/config-devices.mak.
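For what it's worth, that check can be sketched as a one-liner (the real file to inspect is aarch64-softmmu/config-devices.mak in your build tree; a stand-in file is created here only so the snippet runs on its own):

```shell
# Stand-in for aarch64-softmmu/config-devices.mak (assumption: in a real
# tree you would grep the generated file directly).
printf 'CONFIG_PCI=y\nCONFIG_PCI_GENERIC=y\n' > config-devices.mak

# The line we expect to see after a fresh configure run:
grep '^CONFIG_PCI_GENERIC=y' config-devices.mak

rm -f config-devices.mak
```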
Alex
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-07 14:36 ` Alexander Graf
@ 2015-01-07 15:16 ` Claudio Fontana
0 siblings, 0 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-07 15:16 UTC (permalink / raw)
To: Alexander Graf
Cc: Peter Maydell, <ard.biesheuvel@linaro.org>,
<rob.herring@linaro.org>, <qemu-devel@nongnu.org>,
<mst@redhat.com>, <a.rigo@virtualopensystems.com>,
<stuart.yoder@freescale.com>
On 07.01.2015 15:36, Alexander Graf wrote:
>
>
> On 07.01.15 15:26, Claudio Fontana wrote:
>> On 07.01.2015 15:07, Alexander Graf wrote:
>>>
>>>
>>>
>>>> Am 07.01.2015 um 14:52 schrieb Claudio Fontana <claudio.fontana@huawei.com>:
>>>>
>>>> Hi Alexander, happy new year!
>>>>
>>>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>>> Linux implements a nice binding to describe a "generic" PCI Express host bridge
>>>>> using only device tree.
>>>>>
>>>>> This patch set adds enough emulation logic to expose the parts that are
>>>>> "generic" as a simple sysbus device and maps it into ARM's virt machine.
>>>>>
>>>>> With this patch set, we can finally spawn PCI devices on ARM VMs. I was able
>>>>> to have a fully DRM enabled virtual machine with VGA, e1000 and XHCI (for
>>>>> keyboard and mouse) up and working.
>>>>>
>>>>> It's only a small step for QEMU, but a big step for ARM VM's usability.
>>>>
>>>> I tried to test your patches, but I get in trouble quite early:
>>>>
>>>> I usually run qemu-system-aarch64 for pci for OSv with the following command line (using the patches from Alvise):
>>>>
>>>> ./qemu-system-aarch64 -nographic -machine type=virt -enable-kvm -kernel ./loader.img -cpu host -m 1024M -drive file=usr.img,if=none,id=hd0,media=disk -device virtio-blk-pci,id=blk0,bootindex=0,drive=hd0,scsi=off,vectors=0 -device virtio-rng-pci -netdev user,id=un0,net=xxx.xxx.xxx.xxx/xx,host=xxx.xxx.xxx.xxx -redir tcp:2222::22 -device virtio-net-pci,netdev=un0,vectors=0
>>>>
>>>> and with this series I get:
>>>>
>>>> qemu-system-aarch64: Unknown device 'gpex-pcihost' for default sysbus
>>>>
>>>> Is there something I need to mention in the command line to enable the gpex-pcihost maybe?
>>>
>>> If I had to guess I'd say you're missing the object file in your binary. Did you run configure again after applying the patches?
>>>
>>> Alex
>>
>> Yes I did but it seems it's not picking up the CONFIG_PCI_GENERIC=y for some reason.
>> If I force hw/pci-host/Makefile.objs to build it by making it a common-obj-y then it builds.
>>
>> I see the CONFIG_PCI_GENERIC in default-configs/arm-softmmu.mak,
>>
>> and there is a default-configs/aarch64-softmmu.mak which says
>>
>> include arm-softmmu.mak
>>
>> but still it does not seem to pick it up over here..
>>
>> while it picks up the CONFIG_PCI from the other mak files for example.
>>
>> Weird.. does it build on AArch64 for you? Or did you test only 32bit?
>
> Well, in fact I only tested aarch64 :). It definitely worked for me.
>
> Can you try to do a fresh checkout and a new configure run on that one?
> You should have a line saying
>
> CONFIG_PCI_GENERIC=y
>
> in aarch64-softmmu/config-devices.mak.
>
>
> Alex
Yep, a fresh checkout fixed it. Weird, the working tree where it does not work appears clean..
nm thanks, I'll test it now.
Claudio
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-06 16:03 ` [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine Alexander Graf
@ 2015-01-07 15:52 ` Claudio Fontana
2015-01-07 21:47 ` Alexander Graf
2015-01-08 10:31 ` Peter Maydell
2015-01-12 16:20 ` Claudio Fontana
2015-01-12 16:49 ` alvise rigo
2 siblings, 2 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-07 15:52 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 06.01.2015 17:03, Alexander Graf wrote:
> Now that we have a working "generic" PCIe host bridge driver, we can plug
> it into ARM's virt machine to always have PCIe available to normal ARM VMs.
>
> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
> into an AArch64 VM with this and they all lived happily ever after.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
>
> ---
>
> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
> systems. If you want to use it with AArch64 guests, please apply the following
> patch or wait until upstream cleaned up the code properly:
>
> http://csgraf.de/agraf/pci/pci-3.19.patch
> ---
> default-configs/arm-softmmu.mak | 2 +
> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
> 2 files changed, 80 insertions(+), 5 deletions(-)
>
> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
> index f3513fa..7671ee2 100644
> --- a/default-configs/arm-softmmu.mak
> +++ b/default-configs/arm-softmmu.mak
> @@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
> CONFIG_VERSATILE_PCI=y
> CONFIG_VERSATILE_I2C=y
>
> +CONFIG_PCI_GENERIC=y
> +
> CONFIG_SDHCI=y
> CONFIG_INTEGRATOR_DEBUG=y
>
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 2353440..b7635ac 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -42,6 +42,7 @@
> #include "exec/address-spaces.h"
> #include "qemu/bitops.h"
> #include "qemu/error-report.h"
> +#include "hw/pci-host/gpex.h"
>
> #define NUM_VIRTIO_TRANSPORTS 32
>
> @@ -69,6 +70,7 @@ enum {
> VIRT_MMIO,
> VIRT_RTC,
> VIRT_FW_CFG,
> + VIRT_PCIE,
> };
>
> typedef struct MemMapEntry {
> @@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
> [VIRT_FW_CFG] = { 0x09020000, 0x0000000a },
> [VIRT_MMIO] = { 0x0a000000, 0x00000200 },
> /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
> - /* 0x10000000 .. 0x40000000 reserved for PCI */
> + [VIRT_PCIE] = { 0x10000000, 0x30000000 },
> [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
> };
>
> static const int a15irqmap[] = {
> [VIRT_UART] = 1,
> [VIRT_RTC] = 2,
> + [VIRT_PCIE] = 3,
> [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
> };
>
> @@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
> }
> }
>
> -static void fdt_add_gic_node(const VirtBoardInfo *vbi)
> +static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
> {
> uint32_t gic_phandle;
>
> @@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
> 2, vbi->memmap[VIRT_GIC_CPU].base,
> 2, vbi->memmap[VIRT_GIC_CPU].size);
> qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
> +
> + return gic_phandle;
> }
>
> -static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> +static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> {
> /* We create a standalone GIC v2 */
> DeviceState *gicdev;
> @@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> pic[i] = qdev_get_gpio_in(gicdev, i);
> }
>
> - fdt_add_gic_node(vbi);
> + return fdt_add_gic_node(vbi);
> }
>
> static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
> @@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
> g_free(nodename);
> }
>
> +static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
> + uint32_t gic_phandle)
> +{
> + hwaddr base = vbi->memmap[VIRT_PCIE].base;
> + hwaddr size = vbi->memmap[VIRT_PCIE].size;
> + hwaddr size_ioport = 64 * 1024;
> + hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
> + hwaddr size_mmio = size - size_ecam - size_ioport;
> + hwaddr base_mmio = base;
> + hwaddr base_ioport = base_mmio + size_mmio;
> + hwaddr base_ecam = base_ioport + size_ioport;
> + int irq = vbi->irqmap[VIRT_PCIE];
> + MemoryRegion *mmio_alias;
> + MemoryRegion *mmio_reg;
> + DeviceState *dev;
> + char *nodename;
> +
> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
> +
> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
> + qdev_init_nofail(dev);
> +
> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
> +
> + /* Map the MMIO window at the same spot in bus and cpu layouts */
> + mmio_alias = g_new0(MemoryRegion, 1);
> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
> + mmio_reg, base_mmio, size_mmio);
> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
> +
> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
> +
> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
> + qemu_fdt_add_subnode(vbi->fdt, nodename);
> + qemu_fdt_setprop_string(vbi->fdt, nodename,
> + "compatible", "pci-host-ecam-generic");
is this the only compatible string we should set here? Is this not legacy pci compatible?
In other device trees I see this mentioned as compatible = "arm,versatile-pci-hostbridge", "pci" for example,
would it be sensible to make it a list and include "pci" as well?
> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
> +
> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
> + 2, base_ecam, 2, size_ecam);
> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
> + 1, 0x01000000, 2, 0,
> + 2, base_ioport, 2, size_ioport,
> +
> + 1, 0x02000000, 2, base_mmio,
> + 2, base_mmio, 2, size_mmio);
> +
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
> + 0, 0, 0, /* device */
> + 0, /* PCI irq */
> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
> + 0, 0, 0, /* device */
> + 0 /* PCI irq */);
Interrupt map does not seem to work for me; incidentally this ends up being the same kind of undocumented blob that Alvise posted in his series. Can you add a good comment about what the ranges property contains (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is no need to be cryptic about it).
How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because that one works for me (tm).
Thanks,
Claudio
> +
> + g_free(nodename);
> +}
> +
> static void *machvirt_dtb(const struct arm_boot_info *binfo, int *fdt_size)
> {
> const VirtBoardInfo *board = (const VirtBoardInfo *)binfo;
> @@ -573,6 +643,7 @@ static void machvirt_init(MachineState *machine)
> MemoryRegion *ram = g_new(MemoryRegion, 1);
> const char *cpu_model = machine->cpu_model;
> VirtBoardInfo *vbi;
> + uint32_t gic_phandle;
>
> if (!cpu_model) {
> cpu_model = "cortex-a15";
> @@ -634,12 +705,14 @@ static void machvirt_init(MachineState *machine)
>
> create_flash(vbi);
>
> - create_gic(vbi, pic);
> + gic_phandle = create_gic(vbi, pic);
>
> create_uart(vbi, pic);
>
> create_rtc(vbi, pic);
>
> + create_pcie(vbi, pic, gic_phandle);
> +
> /* Create mmio transports, so the user can create virtio backends
> * (which will be automatically plugged in to the transports). If
> * no backend is created the transport will just sit harmlessly idle.
>
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-07 14:26 ` Claudio Fontana
2015-01-07 14:36 ` Alexander Graf
@ 2015-01-07 16:31 ` Peter Maydell
1 sibling, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2015-01-07 16:31 UTC (permalink / raw)
To: Claudio Fontana
Cc: <rob.herring@linaro.org>, <mst@redhat.com>,
<qemu-devel@nongnu.org>, <ard.biesheuvel@linaro.org>,
<a.rigo@virtualopensystems.com>,
<stuart.yoder@freescale.com>, Alexander Graf
On 7 January 2015 at 14:26, Claudio Fontana <claudio.fontana@huawei.com> wrote:
> Yes I did but it seems it's not picking up the CONFIG_PCI_GENERIC=y for some reason.
> If I force hw/pci-host/Makefile.objs to build it by making it a common-obj-y then it builds.
This is a long-standing bug in our makefiles:
http://lists.gnu.org/archive/html/qemu-devel/2014-06/msg04850.html
where there is a dependency on default-configs/$TARGET.mak but
not on any .mak files which are included by that file.
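In Make terms the missing edge looks roughly like this (a simplified sketch, not QEMU's actual rules):

```make
# config-devices.mak is regenerated when the target's default config changes:
aarch64-softmmu/config-devices.mak: default-configs/aarch64-softmmu.mak
	# (generation recipe elided)

# But aarch64-softmmu.mak does "include arm-softmmu.mak", and the included
# file is not listed as a prerequisite, so editing arm-softmmu.mak alone
# does not trigger regeneration.
```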
-- PMM
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-07 15:52 ` Claudio Fontana
@ 2015-01-07 21:47 ` Alexander Graf
2015-01-08 12:55 ` Claudio Fontana
2015-01-08 10:31 ` Peter Maydell
1 sibling, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-07 21:47 UTC (permalink / raw)
To: Claudio Fontana, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 07.01.15 16:52, Claudio Fontana wrote:
> On 06.01.2015 17:03, Alexander Graf wrote:
>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>> it into ARM's virt machine to always have PCIe available to normal ARM VMs.
>>
>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>> into an AArch64 VM with this and they all lived happily ever after.
>>
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>
>> ---
>>
>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>> systems. If you want to use it with AArch64 guests, please apply the following
>> patch or wait until upstream cleaned up the code properly:
>>
>> http://csgraf.de/agraf/pci/pci-3.19.patch
>> ---
>> default-configs/arm-softmmu.mak | 2 +
>> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
>> 2 files changed, 80 insertions(+), 5 deletions(-)
>>
>> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
>> index f3513fa..7671ee2 100644
>> --- a/default-configs/arm-softmmu.mak
>> +++ b/default-configs/arm-softmmu.mak
>> @@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
>> CONFIG_VERSATILE_PCI=y
>> CONFIG_VERSATILE_I2C=y
>>
>> +CONFIG_PCI_GENERIC=y
>> +
>> CONFIG_SDHCI=y
>> CONFIG_INTEGRATOR_DEBUG=y
>>
>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>> index 2353440..b7635ac 100644
>> --- a/hw/arm/virt.c
>> +++ b/hw/arm/virt.c
>> @@ -42,6 +42,7 @@
>> #include "exec/address-spaces.h"
>> #include "qemu/bitops.h"
>> #include "qemu/error-report.h"
>> +#include "hw/pci-host/gpex.h"
>>
>> #define NUM_VIRTIO_TRANSPORTS 32
>>
>> @@ -69,6 +70,7 @@ enum {
>> VIRT_MMIO,
>> VIRT_RTC,
>> VIRT_FW_CFG,
>> + VIRT_PCIE,
>> };
>>
>> typedef struct MemMapEntry {
>> @@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
>> [VIRT_FW_CFG] = { 0x09020000, 0x0000000a },
>> [VIRT_MMIO] = { 0x0a000000, 0x00000200 },
>> /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
>> - /* 0x10000000 .. 0x40000000 reserved for PCI */
>> + [VIRT_PCIE] = { 0x10000000, 0x30000000 },
>> [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
>> };
>>
>> static const int a15irqmap[] = {
>> [VIRT_UART] = 1,
>> [VIRT_RTC] = 2,
>> + [VIRT_PCIE] = 3,
>> [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
>> };
>>
>> @@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
>> }
>> }
>>
>> -static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>> +static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
>> {
>> uint32_t gic_phandle;
>>
>> @@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>> 2, vbi->memmap[VIRT_GIC_CPU].base,
>> 2, vbi->memmap[VIRT_GIC_CPU].size);
>> qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
>> +
>> + return gic_phandle;
>> }
>>
>> -static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>> +static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>> {
>> /* We create a standalone GIC v2 */
>> DeviceState *gicdev;
>> @@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>> pic[i] = qdev_get_gpio_in(gicdev, i);
>> }
>>
>> - fdt_add_gic_node(vbi);
>> + return fdt_add_gic_node(vbi);
>> }
>>
>> static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
>> @@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
>> g_free(nodename);
>> }
>>
>> +static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
>> + uint32_t gic_phandle)
>> +{
>> + hwaddr base = vbi->memmap[VIRT_PCIE].base;
>> + hwaddr size = vbi->memmap[VIRT_PCIE].size;
>> + hwaddr size_ioport = 64 * 1024;
>> + hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
>> + hwaddr size_mmio = size - size_ecam - size_ioport;
>> + hwaddr base_mmio = base;
>> + hwaddr base_ioport = base_mmio + size_mmio;
>> + hwaddr base_ecam = base_ioport + size_ioport;
>> + int irq = vbi->irqmap[VIRT_PCIE];
>> + MemoryRegion *mmio_alias;
>> + MemoryRegion *mmio_reg;
>> + DeviceState *dev;
>> + char *nodename;
>> +
>> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
>> +
>> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>> + qdev_init_nofail(dev);
>> +
>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>> +
>> + /* Map the MMIO window at the same spot in bus and cpu layouts */
>> + mmio_alias = g_new0(MemoryRegion, 1);
>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>> + mmio_reg, base_mmio, size_mmio);
>> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>> +
>> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>> +
>> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>> + qemu_fdt_add_subnode(vbi->fdt, nodename);
>> + qemu_fdt_setprop_string(vbi->fdt, nodename,
>> + "compatible", "pci-host-ecam-generic");
>
> is this the only compatible string we should set here? Is this not legacy pci compatible?
> In other device trees I see this mentioned as compatible = "arm,versatile-pci-hostbridge", "pci" for example,
> would it be sensible to make it a list and include "pci" as well?
I couldn't find anything that defines what a "pci" compatible should
look like. We definitely don't implement the legacy PCI config space
accessor registers.
>
>> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>> +
>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>> + 2, base_ecam, 2, size_ecam);
>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>> + 1, 0x01000000, 2, 0,
>> + 2, base_ioport, 2, size_ioport,
>> +
>> + 1, 0x02000000, 2, base_mmio,
>> + 2, base_mmio, 2, size_mmio);
>> +
>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>> + 0, 0, 0, /* device */
>> + 0, /* PCI irq */
>> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>> + 0, 0, 0, /* device */
>> + 0 /* PCI irq */);
>
> Interrupt map does not seem to work for me; incidentally this ends up being the same kind of undocumented blob that Alvise posted in his series.
How exactly is this undocumented? The "mask" is a mask over the first
fields of an interrupt-map row plus an IRQ offset. So the mask above
means "Any device with any function and any IRQ on it, map to device IRQ
0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
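Written out as a device tree fragment, the intent is roughly this (a sketch; the last three cells assume QEMU's GIC_FDT_IRQ_TYPE_SPI = 0 and GIC_FDT_IRQ_FLAGS_LEVEL_HI = 4):

```dts
interrupt-map = <0 0 0          /* child address: any bus/device/function */
                 0              /* child interrupt pin: any */
                 &gic 0 3 4>;   /* parent: GIC, SPI 3, level triggered high */
interrupt-map-mask = <0 0 0  0>;  /* all-zero mask: every entry matches */
```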
> Can you add a good comment about what the ranges property contains (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is no need to be cryptic about it).
You're saying you'd prefer a define?
> How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because that one works for me (tm).
His implementation explicitly listed every PCI slot, while mine actually
makes use of the mask and simply routes everything to a single IRQ line.
The benefit of masking devfn out is that you don't need to worry about
the number of slots you support - and anything performance critical
should go via MSI-X anyway ;).
So what exactly do you test it with? I've successfully received IRQs
with a Linux guest, so I'm slightly puzzled you're running into problems.
Alex
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-07 15:52 ` Claudio Fontana
2015-01-07 21:47 ` Alexander Graf
@ 2015-01-08 10:31 ` Peter Maydell
2015-01-08 12:30 ` Claudio Fontana
1 sibling, 1 reply; 44+ messages in thread
From: Peter Maydell @ 2015-01-08 10:31 UTC (permalink / raw)
To: Claudio Fontana
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Alvise Rigo, Stuart Yoder, Alexander Graf
On 7 January 2015 at 15:52, Claudio Fontana <claudio.fontana@huawei.com> wrote:
> Interrupt map does not seem to work for me; incidentally this ends up being
> the same kind of undocumented blob that Alvise posted in his series. Can
> you add a good comment about what the ranges property contains
> (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but
> there is no need to be cryptic about it).
The binding docs live in the kernel:
https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/host-generic-pci.txt
(which reference the upstream openfirmware specs):
http://www.firmware.org/1275/practice/imap/imap0_9d.pdf
so we can provide a brief summary comment here, but if the kernel
binding docs are confusing (which they kind of are) we should really
get them improved...
On the 'compatible' string:
https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/host-generic-pci.txt
doesn't say anything about using "pci" here in either the
text or the example binding.
-- PMM
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-08 10:31 ` Peter Maydell
@ 2015-01-08 12:30 ` Claudio Fontana
0 siblings, 0 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-08 12:30 UTC (permalink / raw)
To: Peter Maydell
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Alvise Rigo, Stuart Yoder, Alexander Graf
On 08.01.2015 11:31, Peter Maydell wrote:
> On 7 January 2015 at 15:52, Claudio Fontana <claudio.fontana@huawei.com> wrote:
>> Interrupt map does not seem to work for me; incidentally this ends up being
>> the same kind of undocumented blob that Alvise posted in his series. Can
>> you add a good comment about what the ranges property contains
>> (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but
>> there is no need to be cryptic about it).
>
> The binding docs live in the kernel:
> https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/host-generic-pci.txt
> (which reference the upstream openfirmware specs):
> http://www.firmware.org/1275/practice/imap/imap0_9d.pdf
>
> so we can provide a brief summary comment here, but if the kernel
> binding docs are confusing (which they kind of are) we should really
> get them improved...
>
> On the 'compatible' string:
> https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/host-generic-pci.txt
> doesn't say anything about using "pci" here in either the
> text or the example binding.
Thank you for these pointers.
I think putting a comment with this information would be helpful, even just a pointer along the lines of "look at host-generic-pci.txt in the Linux kernel documentation to understand what's going on here", or a direct reference to the "Open Firmware Recommended Practice: Interrupt Mapping", since QEMU should be guest OS agnostic (to some extent).
Also, I think Alex's proposal to use defines for I/O space vs memory space instead of hardcoding 0x01000000 / 0x02000000 is a good idea.
I think the kernel binding docs could be made more helpful, but since I am still trying to figure this stuff out myself, I am not in the best position to improve them.
Thanks,
Claudio
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-07 21:47 ` Alexander Graf
@ 2015-01-08 12:55 ` Claudio Fontana
2015-01-08 13:26 ` Alexander Graf
2015-01-08 13:36 ` alvise rigo
0 siblings, 2 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-08 12:55 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
Alvise Rigo
(added cc: Alvise which I mistakenly assumed was in Cc: already)
On 07.01.2015 22:47, Alexander Graf wrote:
>
>
> On 07.01.15 16:52, Claudio Fontana wrote:
>> On 06.01.2015 17:03, Alexander Graf wrote:
>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>> it into ARMs virt machine to always have PCIe available to normal ARM VMs.
>>>
>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>> into an AArch64 VM with this and they all lived happily ever after.
>>>
>>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>>
>>> ---
>>>
>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>>> systems. If you want to use it with AArch64 guests, please apply the following
>>> patch or wait until upstream cleaned up the code properly:
>>>
>>> http://csgraf.de/agraf/pci/pci-3.19.patch
>>> ---
>>> default-configs/arm-softmmu.mak | 2 +
>>> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
>>> 2 files changed, 80 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
>>> index f3513fa..7671ee2 100644
>>> --- a/default-configs/arm-softmmu.mak
>>> +++ b/default-configs/arm-softmmu.mak
>>> @@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
>>> CONFIG_VERSATILE_PCI=y
>>> CONFIG_VERSATILE_I2C=y
>>>
>>> +CONFIG_PCI_GENERIC=y
>>> +
>>> CONFIG_SDHCI=y
>>> CONFIG_INTEGRATOR_DEBUG=y
>>>
>>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>>> index 2353440..b7635ac 100644
>>> --- a/hw/arm/virt.c
>>> +++ b/hw/arm/virt.c
>>> @@ -42,6 +42,7 @@
>>> #include "exec/address-spaces.h"
>>> #include "qemu/bitops.h"
>>> #include "qemu/error-report.h"
>>> +#include "hw/pci-host/gpex.h"
>>>
>>> #define NUM_VIRTIO_TRANSPORTS 32
>>>
>>> @@ -69,6 +70,7 @@ enum {
>>> VIRT_MMIO,
>>> VIRT_RTC,
>>> VIRT_FW_CFG,
>>> + VIRT_PCIE,
>>> };
>>>
>>> typedef struct MemMapEntry {
>>> @@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
>>> [VIRT_FW_CFG] = { 0x09020000, 0x0000000a },
>>> [VIRT_MMIO] = { 0x0a000000, 0x00000200 },
>>> /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
>>> - /* 0x10000000 .. 0x40000000 reserved for PCI */
>>> + [VIRT_PCIE] = { 0x10000000, 0x30000000 },
>>> [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
>>> };
>>>
>>> static const int a15irqmap[] = {
>>> [VIRT_UART] = 1,
>>> [VIRT_RTC] = 2,
>>> + [VIRT_PCIE] = 3,
>>> [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
>>> };
>>>
>>> @@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
>>> }
>>> }
>>>
>>> -static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>>> +static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
>>> {
>>> uint32_t gic_phandle;
>>>
>>> @@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>>> 2, vbi->memmap[VIRT_GIC_CPU].base,
>>> 2, vbi->memmap[VIRT_GIC_CPU].size);
>>> qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
>>> +
>>> + return gic_phandle;
>>> }
>>>
>>> -static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>> +static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>> {
>>> /* We create a standalone GIC v2 */
>>> DeviceState *gicdev;
>>> @@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>> pic[i] = qdev_get_gpio_in(gicdev, i);
>>> }
>>>
>>> - fdt_add_gic_node(vbi);
>>> + return fdt_add_gic_node(vbi);
>>> }
>>>
>>> static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
>>> @@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
>>> g_free(nodename);
>>> }
>>>
>>> +static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
>>> + uint32_t gic_phandle)
>>> +{
>>> + hwaddr base = vbi->memmap[VIRT_PCIE].base;
>>> + hwaddr size = vbi->memmap[VIRT_PCIE].size;
>>> + hwaddr size_ioport = 64 * 1024;
>>> + hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
>>> + hwaddr size_mmio = size - size_ecam - size_ioport;
>>> + hwaddr base_mmio = base;
>>> + hwaddr base_ioport = base_mmio + size_mmio;
>>> + hwaddr base_ecam = base_ioport + size_ioport;
>>> + int irq = vbi->irqmap[VIRT_PCIE];
>>> + MemoryRegion *mmio_alias;
>>> + MemoryRegion *mmio_reg;
>>> + DeviceState *dev;
>>> + char *nodename;
>>> +
>>> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>> +
>>> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>> + qdev_init_nofail(dev);
>>> +
>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>> +
>>> + /* Map the MMIO window at the same spot in bus and cpu layouts */
>>> + mmio_alias = g_new0(MemoryRegion, 1);
>>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>> + mmio_reg, base_mmio, size_mmio);
>>> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>> +
>>> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>> +
>>> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>>> + qemu_fdt_add_subnode(vbi->fdt, nodename);
>>> + qemu_fdt_setprop_string(vbi->fdt, nodename,
>>> + "compatible", "pci-host-ecam-generic");
>>
>> is this the only compatible string we should set here? Is this not legacy pci compatible?
>> In other device trees I see this mentioned as compatible = "arm,versatile-pci-hostbridge", "pci" for example,
>> would it be sensible to make it a list and include "pci" as well?
>
> I couldn't find anything that defines what a "pci" compatible should
> look like. We definitely don't implement the legacy PCI config space
> accessor registers.
I see. I assumed this patch would support both CAM and ECAM as configuration methods, while now I understand that
Alvise's patches support only CAM, while these support only ECAM.
So basically I should look at the compatible string and then choose the configuration method accordingly.
I wonder if I should deal with the case where the compatible string contains both ECAM and CAM.
>>
>>> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>> +
>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>> + 2, base_ecam, 2, size_ecam);
>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>> + 1, 0x01000000, 2, 0,
>>> + 2, base_ioport, 2, size_ioport,
>>> +
>>> + 1, 0x02000000, 2, base_mmio,
>>> + 2, base_mmio, 2, size_mmio);
>>> +
>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>> + 0, 0, 0, /* device */
>>> + 0, /* PCI irq */
>>> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>> + 0, 0, 0, /* device */
>>> + 0 /* PCI irq */);
>>
>> Interrupt map does not seem to work for me; incidentally this ends up being the same kind of undocumented blob that Alvise posted in his series.
>
> How exactly is this undocumented? The "mask" is a mask over the first
> fields of an interrupt-map row plus an IRQ offset. So the mask above
> means "Any device with any function and any IRQ on it, map to device IRQ
> 0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
(see my answer to Peter below in thread)
This is a bit different from what Alvise's series is doing, I think (see later).
>
>> Can you add a good comment about what the ranges property contains (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is no need to be cryptic about it).
>
> You're saying you'd prefer a define?
Yes that would be helpful :)
>
>> How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because that one works for me (tm).
>
> His implementation explicitly listed every PCI slot, while mine actually
> makes use of the mask and simply routes everything to a single IRQ line.
>
> The benefit of masking devfn out is that you don't need to worry about
> the number of slots you support - and anything performance critical
> should go via MSI-X anyway ;).
The benefit for me (though probably for me only) is that with one IRQ per slot I didn't have to implement shared IRQs and MSI/MSI-X in the guest yet. But that should be done eventually anyway.
>
> So what exactly do you test it with? I've successfully received IRQs
> with a Linux guest, so I'm slightly puzzled you're running into problems.
Of course. I am just implementing PCI guest support for OSv/AArch64.
It works with the legacy PCI support, using Alvise's series, to do virtio-pci with block, net, rng etc.,
but I am now considering going for PCIe support directly, although that means more implementation work for me.
>
>
> Alex
>
Ciao & thanks,
Claudio
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-08 12:55 ` Claudio Fontana
@ 2015-01-08 13:26 ` Alexander Graf
2015-01-08 15:01 ` Claudio Fontana
2015-01-08 13:36 ` alvise rigo
1 sibling, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-08 13:26 UTC (permalink / raw)
To: Claudio Fontana, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 08.01.15 13:55, Claudio Fontana wrote:
> (added cc: Alvise which I mistakenly assumed was in Cc: already)
He was in CC :). Now he's there twice.
>
> On 07.01.2015 22:47, Alexander Graf wrote:
>>
>>
>> On 07.01.15 16:52, Claudio Fontana wrote:
>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>>> it into ARMs virt machine to always have PCIe available to normal ARM VMs.
>>>>
>>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>>> into an AArch64 VM with this and they all lived happily ever after.
>>>>
>>>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>>>
>>>> ---
>>>>
>>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>>>> systems. If you want to use it with AArch64 guests, please apply the following
>>>> patch or wait until upstream cleaned up the code properly:
>>>>
>>>> http://csgraf.de/agraf/pci/pci-3.19.patch
>>>> ---
>>>> default-configs/arm-softmmu.mak | 2 +
>>>> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
>>>> 2 files changed, 80 insertions(+), 5 deletions(-)
>>>>
[...]
>>>> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>>> +
>>>> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>>> + qdev_init_nofail(dev);
>>>> +
>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>>> +
>>>> + /* Map the MMIO window at the same spot in bus and cpu layouts */
>>>> + mmio_alias = g_new0(MemoryRegion, 1);
>>>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>>> + mmio_reg, base_mmio, size_mmio);
>>>> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>>> +
>>>> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>>> +
>>>> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>>>> + qemu_fdt_add_subnode(vbi->fdt, nodename);
>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename,
>>>> + "compatible", "pci-host-ecam-generic");
>>>
>>> is this the only compatible string we should set here? Is this not legacy pci compatible?
>>> In other device trees I see this mentioned as compatible = "arm,versatile-pci-hostbridge", "pci" for example,
>>> would it be sensible to make it a list and include "pci" as well?
>>
>> I couldn't find anything that defines what an "pci" compatible should
>> look like. We definitely don't implement the legacy PCI config space
>> accessor registers.
>
> I see, I assumed this patch would support both CAM and ECAM as configuration methods, while now I understand
> Alvise's patches support only CAM, while these support only ECAM..
> So basically I should look at the compatible string and then choose configuration method accordingly.
> I wonder if I should deal with the case where the compatible string contains both ECAM and CAM.
Well, original PCI didn't even do CAM. You only had two 32-bit ioport
registers (an address register and a data register) that all config space
accesses were tunneled through.
>
>>>
>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>>> +
>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>>> + 2, base_ecam, 2, size_ecam);
>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>>> + 1, 0x01000000, 2, 0,
>>>> + 2, base_ioport, 2, size_ioport,
>>>> +
>>>> + 1, 0x02000000, 2, base_mmio,
>>>> + 2, base_mmio, 2, size_mmio);
>>>> +
>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>>> + 0, 0, 0, /* device */
>>>> + 0, /* PCI irq */
>>>> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>>> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>>> + 0, 0, 0, /* device */
>>>> + 0 /* PCI irq */);
>>>
>>> Interrupt map does not seem to work for me; incidentally this ends up being the same kind of undocumented blob that Alvise posted in his series.
>>
>> How exactly is this undocumented? The "mask" is a mask over the first
>> fields of an interrupt-map row plus an IRQ offset. So the mask above
>> means "Any device with any function and any IRQ on it, map to device IRQ
>> 0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
>
> (see my answer to Peter below in thread)
>
> this is a bit different to what Alvise's series is doing I think (see later).
Yes, but it's easier :).
>
>>
>>> Can you add a good comment about what the ranges property contains (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is no need to be cryptic about it).
>>
>> You're saying you'd prefer a define?
>
> Yes that would be helpful :)
>
>>
>>> How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because that one works for me (tm).
>>
>> His implementation explicitly listed every PCI slot, while mine actually
>> makes use of the mask and simply routes everything to a single IRQ line.
>>
>> The benefit of masking devfn out is that you don't need to worry about
>> the number of slots you support - and anything performance critical
>> should go via MSI-X anyway ;).
>
> The benefit for me (but for me only probably..) is that with one IRQ per slot I didn't have to implement shared irqs and msi / msi-x in the guest yet. But that should be done eventually anyway..
You most likely wouldn't get one IRQ per slot anyway. Most PHBs expose 4
outgoing IRQ lines, so you'll need to deal with sharing IRQs regardless.
Also, sharing IRQ lines isn't black magic; it's quite a fundamental PCI
IRQ concept, so I'm glad I'm pushing you in the direction of
implementing it early on :).
>
>>
>> So what exactly do you test it with? I've successfully received IRQs
>> with a Linux guest, so I'm slightly puzzled you're running into problems.
>
> Of course, I am just implementing PCI guest support for OSv/AArch64.
> Works with the legacy pci support using Alvise's series to do virtio-pci with block, net, rng etc,
> but I am now considering going for PCIE support directly although that means more implementation work for me.
Both should look quite similar from your point of view. If you do the
level interrupt handling, IRQ masking and window handling correctly, the
only difference should be CAM vs ECAM.
I'm glad to hear that it breaks your implementation, though, and not
something already existing (like Linux, BSD, etc.). That means there's a
good chance my patches are correct and you just took some shortcuts
which can be easily fixed :).
Alex
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-08 12:55 ` Claudio Fontana
2015-01-08 13:26 ` Alexander Graf
@ 2015-01-08 13:36 ` alvise rigo
1 sibling, 0 replies; 44+ messages in thread
From: alvise rigo @ 2015-01-08 13:36 UTC (permalink / raw)
To: Claudio Fontana
Cc: Peter Maydell, stuart.yoder@freescale.com, ard.biesheuvel,
Michael S. Tsirkin, Alexander Graf, QEMU Developers, Rob Herring
On Thu, Jan 8, 2015 at 1:55 PM, Claudio Fontana
<claudio.fontana@huawei.com> wrote:
> (added cc: Alvise which I mistakenly assumed was in Cc: already)
>
> On 07.01.2015 22:47, Alexander Graf wrote:
>>
>>
>> On 07.01.15 16:52, Claudio Fontana wrote:
>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>>> it into ARMs virt machine to always have PCIe available to normal ARM VMs.
>>>>
>>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>>> into an AArch64 VM with this and they all lived happily ever after.
>>>>
>>>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>>>
>>>> ---
>>>>
>>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>>>> systems. If you want to use it with AArch64 guests, please apply the following
>>>> patch or wait until upstream cleaned up the code properly:
>>>>
>>>> http://csgraf.de/agraf/pci/pci-3.19.patch
>>>> ---
>>>> default-configs/arm-softmmu.mak | 2 +
>>>> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
>>>> 2 files changed, 80 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
>>>> index f3513fa..7671ee2 100644
>>>> --- a/default-configs/arm-softmmu.mak
>>>> +++ b/default-configs/arm-softmmu.mak
>>>> @@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
>>>> CONFIG_VERSATILE_PCI=y
>>>> CONFIG_VERSATILE_I2C=y
>>>>
>>>> +CONFIG_PCI_GENERIC=y
>>>> +
>>>> CONFIG_SDHCI=y
>>>> CONFIG_INTEGRATOR_DEBUG=y
>>>>
>>>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>>>> index 2353440..b7635ac 100644
>>>> --- a/hw/arm/virt.c
>>>> +++ b/hw/arm/virt.c
>>>> @@ -42,6 +42,7 @@
>>>> #include "exec/address-spaces.h"
>>>> #include "qemu/bitops.h"
>>>> #include "qemu/error-report.h"
>>>> +#include "hw/pci-host/gpex.h"
>>>>
>>>> #define NUM_VIRTIO_TRANSPORTS 32
>>>>
>>>> @@ -69,6 +70,7 @@ enum {
>>>> VIRT_MMIO,
>>>> VIRT_RTC,
>>>> VIRT_FW_CFG,
>>>> + VIRT_PCIE,
>>>> };
>>>>
>>>> typedef struct MemMapEntry {
>>>> @@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
>>>> [VIRT_FW_CFG] = { 0x09020000, 0x0000000a },
>>>> [VIRT_MMIO] = { 0x0a000000, 0x00000200 },
>>>> /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
>>>> - /* 0x10000000 .. 0x40000000 reserved for PCI */
>>>> + [VIRT_PCIE] = { 0x10000000, 0x30000000 },
>>>> [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
>>>> };
>>>>
>>>> static const int a15irqmap[] = {
>>>> [VIRT_UART] = 1,
>>>> [VIRT_RTC] = 2,
>>>> + [VIRT_PCIE] = 3,
>>>> [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
>>>> };
>>>>
>>>> @@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
>>>> }
>>>> }
>>>>
>>>> -static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>>>> +static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
>>>> {
>>>> uint32_t gic_phandle;
>>>>
>>>> @@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>>>> 2, vbi->memmap[VIRT_GIC_CPU].base,
>>>> 2, vbi->memmap[VIRT_GIC_CPU].size);
>>>> qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
>>>> +
>>>> + return gic_phandle;
>>>> }
>>>>
>>>> -static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>>> +static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>>> {
>>>> /* We create a standalone GIC v2 */
>>>> DeviceState *gicdev;
>>>> @@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>>>> pic[i] = qdev_get_gpio_in(gicdev, i);
>>>> }
>>>>
>>>> - fdt_add_gic_node(vbi);
>>>> + return fdt_add_gic_node(vbi);
>>>> }
>>>>
>>>> static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
>>>> @@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
>>>> g_free(nodename);
>>>> }
>>>>
>>>> +static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
>>>> + uint32_t gic_phandle)
>>>> +{
>>>> + hwaddr base = vbi->memmap[VIRT_PCIE].base;
>>>> + hwaddr size = vbi->memmap[VIRT_PCIE].size;
>>>> + hwaddr size_ioport = 64 * 1024;
>>>> + hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
>>>> + hwaddr size_mmio = size - size_ecam - size_ioport;
>>>> + hwaddr base_mmio = base;
>>>> + hwaddr base_ioport = base_mmio + size_mmio;
>>>> + hwaddr base_ecam = base_ioport + size_ioport;
>>>> + int irq = vbi->irqmap[VIRT_PCIE];
>>>> + MemoryRegion *mmio_alias;
>>>> + MemoryRegion *mmio_reg;
>>>> + DeviceState *dev;
>>>> + char *nodename;
>>>> +
>>>> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>>> +
>>>> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>>> + qdev_init_nofail(dev);
>>>> +
>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>>> +
>>>> + /* Map the MMIO window at the same spot in bus and cpu layouts */
>>>> + mmio_alias = g_new0(MemoryRegion, 1);
>>>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>>> + mmio_reg, base_mmio, size_mmio);
>>>> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>>> +
>>>> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>>> +
>>>> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>>>> + qemu_fdt_add_subnode(vbi->fdt, nodename);
>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename,
>>>> + "compatible", "pci-host-ecam-generic");
>>>
>>> is this the only compatible string we should set here? Is this not legacy pci compatible?
>>> In other device trees I see this mentioned as compatible = "arm,versatile-pci-hostbridge", "pci" for example,
>>> would it be sensible to make it a list and include "pci" as well?
>>
>> I couldn't find anything that defines what an "pci" compatible should
>> look like. We definitely don't implement the legacy PCI config space
>> accessor registers.
>
> I see, I assumed this patch would support both CAM and ECAM as configuration methods, while now I understand
> Alvise's patches support only CAM, while these support only ECAM..
Hi Claudio,
I've run some tests using these patches with an ARM guest and Linux.
Both virtio-net-pci and lsi (the devices I used to test my series)
were working.
So I would say that at least from a Linux perspective these patches
support both CAM and ECAM.
Regards,
alvise
> So basically I should look at the compatible string and then choose configuration method accordingly.
> I wonder if I should deal with the case where the compatible string contains both ECAM and CAM.
>
>>>
>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>>> +
>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>>> + 2, base_ecam, 2, size_ecam);
>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>>> + 1, 0x01000000, 2, 0,
>>>> + 2, base_ioport, 2, size_ioport,
>>>> +
>>>> + 1, 0x02000000, 2, base_mmio,
>>>> + 2, base_mmio, 2, size_mmio);
>>>> +
>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>>> + 0, 0, 0, /* device */
>>>> + 0, /* PCI irq */
>>>> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>>> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>>> + 0, 0, 0, /* device */
>>>> + 0 /* PCI irq */);
>>>
>>> Interrupt map does not seem to work for me; incidentally this ends up being the same kind of undocumented blob that Alvise posted in his series.
>>
>> How exactly is this undocumented? The "mask" is a mask over the first
>> fields of an interrupt-map row plus an IRQ offset. So the mask above
>> means "Any device with any function and any IRQ on it, map to device IRQ
>> 0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
>
> (see my answer to Peter below in thread)
>
> this is a bit different to what Alvise's series is doing I think (see later).
>
>>
>>> Can you add a good comment about what the ranges property contains (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is no need to be cryptic about it).
>>
>> You're saying you'd prefer a define?
>
> Yes that would be helpful :)
>
>>
>>> How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because that one works for me (tm).
>>
>> His implementation explicitly listed every PCI slot, while mine actually
>> makes use of the mask and simply routes everything to a single IRQ line.
>>
>> The benefit of masking devfn out is that you don't need to worry about
>> the number of slots you support - and anything performance critical
>> should go via MSI-X anyway ;).
>
> The benefit for me (but for me only probably..) is that with one IRQ per slot I didn't have to implement shared irqs and msi / msi-x in the guest yet. But that should be done eventually anyway..
>
>>
>> So what exactly do you test it with? I've successfully received IRQs
>> with a Linux guest, so I'm slightly puzzled you're running into problems.
>
> Of course, I am just implementing PCI guest support for OSv/AArch64.
> Works with the legacy pci support using Alvise's series to do virtio-pci with block, net, rng etc,
> but I am now considering going for PCIE support directly although that means more implementation work for me.
>
>>
>>
>> Alex
>>
>
> Ciao & thanks,
>
> Claudio
>
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-08 13:26 ` Alexander Graf
@ 2015-01-08 15:01 ` Claudio Fontana
2015-01-12 16:23 ` Claudio Fontana
0 siblings, 1 reply; 44+ messages in thread
From: Claudio Fontana @ 2015-01-08 15:01 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 08.01.2015 14:26, Alexander Graf wrote:
>
>
> On 08.01.15 13:55, Claudio Fontana wrote:
>> (added cc: Alvise which I mistakenly assumed was in Cc: already)
>
> He was in CC :). Now he's there twice.
>
>>
>> On 07.01.2015 22:47, Alexander Graf wrote:
>>>
>>>
>>> On 07.01.15 16:52, Claudio Fontana wrote:
>>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>>>> it into ARMs virt machine to always have PCIe available to normal ARM VMs.
>>>>>
>>>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>>>> into an AArch64 VM with this and they all lived happily ever after.
>>>>>
>>>>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>>>>
>>>>> ---
>>>>>
>>>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>>>>> systems. If you want to use it with AArch64 guests, please apply the following
>>>>> patch or wait until upstream cleaned up the code properly:
>>>>>
>>>>> http://csgraf.de/agraf/pci/pci-3.19.patch
>>>>> ---
>>>>> default-configs/arm-softmmu.mak | 2 +
>>>>> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
>>>>> 2 files changed, 80 insertions(+), 5 deletions(-)
>>>>>
>
> [...]
>
>>>>> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>>>> +
>>>>> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>>>> + qdev_init_nofail(dev);
>>>>> +
>>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>>>> +
>>>>> + /* Map the MMIO window at the same spot in bus and cpu layouts */
>>>>> + mmio_alias = g_new0(MemoryRegion, 1);
>>>>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>>>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>>>> + mmio_reg, base_mmio, size_mmio);
>>>>> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>>>> +
>>>>> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>>>> +
>>>>> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>>>>> + qemu_fdt_add_subnode(vbi->fdt, nodename);
>>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename,
>>>>> + "compatible", "pci-host-ecam-generic");
>>>>
>>>> is this the only compatible string we should set here? Is this not legacy pci compatible?
>>>> In other device trees I see this mentioned as compatible = "arm,versatile-pci-hostbridge", "pci" for example,
>>>> would it be sensible to make it a list and include "pci" as well?
>>>
>>> I couldn't find anything that defines what an "pci" compatible should
>>> look like. We definitely don't implement the legacy PCI config space
>>> accessor registers.
>>
>> I see, I assumed this patch would support both CAM and ECAM as configuration methods, while now I understand
>> Alvise's patches support only CAM, while these support only ECAM..
>> So basically I should look at the compatible string and then choose configuration method accordingly.
>> I wonder if I should deal with the case where the compatible string contains both ECAM and CAM.
>
> Well, original PCI didn't even do CAM. You only had 2 32bit ioport
> registers that you tunnel all config space access through.
>
>>
>>>>
>>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>>>> +
>>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>>>> + 2, base_ecam, 2, size_ecam);
>>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>>>> + 1, 0x01000000, 2, 0,
>>>>> + 2, base_ioport, 2, size_ioport,
>>>>> +
>>>>> + 1, 0x02000000, 2, base_mmio,
>>>>> + 2, base_mmio, 2, size_mmio);
>>>>> +
>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>>>> + 0, 0, 0, /* device */
>>>>> + 0, /* PCI irq */
>>>>> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>>>> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>>>> + 0, 0, 0, /* device */
>>>>> + 0 /* PCI irq */);
>>>>
>>>> Interrupt map does not seem to work for me; incidentally this ends up being the same kind of undocumented blob that Alvise posted in his series.
>>>
>>> How exactly is this undocumented? The "mask" is a mask over the first
>>> fields of an interrupt-map row plus an IRQ offset. So the mask above
>>> means "Any device with any function and any IRQ on it, map to device IRQ
>>> 0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
>>
>> (see my answer to Peter below in thread)
>>
>> this is a bit different to what Alvise's series is doing I think (see later).
>
> Yes, but it's easier :).
>
>>
>>>
>>>> Can you add a good comment about what the ranges property contains (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is no need to be cryptic about it).
>>>
>>> You're saying you'd prefer a define?
>>
>> Yes that would be helpful :)
>>
>>>
>>>> How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because that one works for me (tm).
>>>
>>> His implementation explicitly listed every PCI slot, while mine actually
>>> makes use of the mask and simply routes everything to a single IRQ line.
>>>
>>> The benefit of masking devfn out is that you don't need to worry about
>>> the number of slots you support - and anything performance critical
>>> should go via MSI-X anyway ;).
>>
>> The benefit for me (though probably for me only...) is that with one IRQ per slot I didn't have to implement shared IRQs and MSI/MSI-X in the guest yet. But that should be done eventually anyway.
>
> You most likely wouldn't get one IRQ per slot anyway. Most PHBs expose 4
> outgoing IRQ lines, so you'll need to deal with sharing IRQs regardless.
>
> Also, sharing IRQ lines isn't black magic; it's quite a fundamental
> PCI IRQ concept, so I'm glad I'm pushing you in the direction of
> implementing it early on :).
>
damn open source making me work lol :)
It's actually good to have these two implementations, so I can validate my guest support against both series and ensure that I don't hardcode too many shortcuts.
Thanks,
Claudio
>>
>>>
>>> So what exactly do you test it with? I've successfully received IRQs
>>> with a Linux guest, so I'm slightly puzzled you're running into problems.
>>
>> Of course; I am just implementing PCI guest support for OSv/AArch64.
>> It works with the legacy PCI support, using Alvise's series to do virtio-pci with block, net, rng, etc.,
>> but I am now considering going for PCIe support directly, although that means more implementation work for me.
>
> Both should look quite similar from your point of view. If you do the
> level interrupt handling, irq masking and window handling correctly the
> only difference should be CAM vs ECAM.
>
> I'm glad to hear that it breaks your implementation though and not
> something already existing (like Linux, BSD, etc). That means there's a
> good chance my patches are correct and you just took some shortcuts
> which can be easily fixed :).
>
>
> Alex
>
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-06 16:03 ` [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine Alexander Graf
2015-01-07 15:52 ` Claudio Fontana
@ 2015-01-12 16:20 ` Claudio Fontana
2015-01-12 16:36 ` Alexander Graf
2015-01-12 16:49 ` alvise rigo
2 siblings, 1 reply; 44+ messages in thread
From: Claudio Fontana @ 2015-01-12 16:20 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
Just adding a nit here below:
On 06.01.2015 17:03, Alexander Graf wrote:
> Now that we have a working "generic" PCIe host bridge driver, we can plug
> it into ARM's virt machine to always have PCIe available to normal ARM VMs.
>
> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
> into an AArch64 VM with this and they all lived happily ever after.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
>
> ---
>
> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
> systems. If you want to use it with AArch64 guests, please apply the following
> patch or wait until upstream has cleaned up the code properly:
>
> http://csgraf.de/agraf/pci/pci-3.19.patch
> ---
> default-configs/arm-softmmu.mak | 2 +
> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
> 2 files changed, 80 insertions(+), 5 deletions(-)
>
> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
> index f3513fa..7671ee2 100644
> --- a/default-configs/arm-softmmu.mak
> +++ b/default-configs/arm-softmmu.mak
> @@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
> CONFIG_VERSATILE_PCI=y
> CONFIG_VERSATILE_I2C=y
>
> +CONFIG_PCI_GENERIC=y
> +
> CONFIG_SDHCI=y
> CONFIG_INTEGRATOR_DEBUG=y
>
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 2353440..b7635ac 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -42,6 +42,7 @@
> #include "exec/address-spaces.h"
> #include "qemu/bitops.h"
> #include "qemu/error-report.h"
> +#include "hw/pci-host/gpex.h"
>
> #define NUM_VIRTIO_TRANSPORTS 32
>
> @@ -69,6 +70,7 @@ enum {
> VIRT_MMIO,
> VIRT_RTC,
> VIRT_FW_CFG,
> + VIRT_PCIE,
> };
>
> typedef struct MemMapEntry {
> @@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
> [VIRT_FW_CFG] = { 0x09020000, 0x0000000a },
> [VIRT_MMIO] = { 0x0a000000, 0x00000200 },
> /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
> - /* 0x10000000 .. 0x40000000 reserved for PCI */
> + [VIRT_PCIE] = { 0x10000000, 0x30000000 },
> [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
> };
>
> static const int a15irqmap[] = {
> [VIRT_UART] = 1,
> [VIRT_RTC] = 2,
> + [VIRT_PCIE] = 3,
> [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
> };
>
> @@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
> }
> }
>
> -static void fdt_add_gic_node(const VirtBoardInfo *vbi)
> +static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
> {
> uint32_t gic_phandle;
>
> @@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
> 2, vbi->memmap[VIRT_GIC_CPU].base,
> 2, vbi->memmap[VIRT_GIC_CPU].size);
> qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
> +
> + return gic_phandle;
> }
>
> -static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> +static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> {
> /* We create a standalone GIC v2 */
> DeviceState *gicdev;
> @@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> pic[i] = qdev_get_gpio_in(gicdev, i);
> }
>
> - fdt_add_gic_node(vbi);
> + return fdt_add_gic_node(vbi);
> }
>
> static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
> @@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
> g_free(nodename);
> }
>
> +static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
> + uint32_t gic_phandle)
> +{
> + hwaddr base = vbi->memmap[VIRT_PCIE].base;
> + hwaddr size = vbi->memmap[VIRT_PCIE].size;
> + hwaddr size_ioport = 64 * 1024;
> + hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
> + hwaddr size_mmio = size - size_ecam - size_ioport;
> + hwaddr base_mmio = base;
> + hwaddr base_ioport = base_mmio + size_mmio;
> + hwaddr base_ecam = base_ioport + size_ioport;
> + int irq = vbi->irqmap[VIRT_PCIE];
> + MemoryRegion *mmio_alias;
> + MemoryRegion *mmio_reg;
> + DeviceState *dev;
> + char *nodename;
> +
> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
> +
> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
> + qdev_init_nofail(dev);
> +
> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
> +
> + /* Map the MMIO window at the same spot in bus and cpu layouts */
> + mmio_alias = g_new0(MemoryRegion, 1);
> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
> + mmio_reg, base_mmio, size_mmio);
> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
> +
> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
> +
> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
> + qemu_fdt_add_subnode(vbi->fdt, nodename);
> + qemu_fdt_setprop_string(vbi->fdt, nodename,
> + "compatible", "pci-host-ecam-generic");
> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
> +
> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
> + 2, base_ecam, 2, size_ecam);
> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
> + 1, 0x01000000, 2, 0,
> + 2, base_ioport, 2, size_ioport,
> +
> + 1, 0x02000000, 2, base_mmio,
> + 2, base_mmio, 2, size_mmio);
> +
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
> + 0, 0, 0, /* device */
> + 0, /* PCI irq */
> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
nit: are there two extra spaces here? (alignment)
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
> + 0, 0, 0, /* device */
> + 0 /* PCI irq */);
> +
> + g_free(nodename);
> +}
> +
> static void *machvirt_dtb(const struct arm_boot_info *binfo, int *fdt_size)
> {
> const VirtBoardInfo *board = (const VirtBoardInfo *)binfo;
> @@ -573,6 +643,7 @@ static void machvirt_init(MachineState *machine)
> MemoryRegion *ram = g_new(MemoryRegion, 1);
> const char *cpu_model = machine->cpu_model;
> VirtBoardInfo *vbi;
> + uint32_t gic_phandle;
>
> if (!cpu_model) {
> cpu_model = "cortex-a15";
> @@ -634,12 +705,14 @@ static void machvirt_init(MachineState *machine)
>
> create_flash(vbi);
>
> - create_gic(vbi, pic);
> + gic_phandle = create_gic(vbi, pic);
>
> create_uart(vbi, pic);
>
> create_rtc(vbi, pic);
>
> + create_pcie(vbi, pic, gic_phandle);
> +
> /* Create mmio transports, so the user can create virtio backends
> * (which will be automatically plugged in to the transports). If
> * no backend is created the transport will just sit harmlessly idle.
>
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-08 15:01 ` Claudio Fontana
@ 2015-01-12 16:23 ` Claudio Fontana
2015-01-12 16:35 ` Alexander Graf
0 siblings, 1 reply; 44+ messages in thread
From: Claudio Fontana @ 2015-01-12 16:23 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 08.01.2015 16:01, Claudio Fontana wrote:
> On 08.01.2015 14:26, Alexander Graf wrote:
>>
>>
>> On 08.01.15 13:55, Claudio Fontana wrote:
>>> (added cc: Alvise which I mistakenly assumed was in Cc: already)
>>
>> He was in CC :). Now he's there twice.
>>
>>>
>>> On 07.01.2015 22:47, Alexander Graf wrote:
>>>>
>>>>
>>>> On 07.01.15 16:52, Claudio Fontana wrote:
>>>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>>>>> it into ARM's virt machine to always have PCIe available to normal ARM VMs.
>>>>>>
>>>>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>>>>> into an AArch64 VM with this and they all lived happily ever after.
>>>>>>
>>>>>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>>>>>> systems. If you want to use it with AArch64 guests, please apply the following
>>>>>> patch or wait until upstream has cleaned up the code properly:
>>>>>>
>>>>>> http://csgraf.de/agraf/pci/pci-3.19.patch
>>>>>> ---
>>>>>> default-configs/arm-softmmu.mak | 2 +
>>>>>> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
>>>>>> 2 files changed, 80 insertions(+), 5 deletions(-)
>>>>>>
>>
>> [...]
>>
>>>>>> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>>>>> +
>>>>>> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>>>>> + qdev_init_nofail(dev);
>>>>>> +
>>>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>>>>> +
>>>>>> + /* Map the MMIO window at the same spot in bus and cpu layouts */
>>>>>> + mmio_alias = g_new0(MemoryRegion, 1);
>>>>>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>>>>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>>>>> + mmio_reg, base_mmio, size_mmio);
>>>>>> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>>>>> +
>>>>>> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>>>>> +
>>>>>> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>>>>>> + qemu_fdt_add_subnode(vbi->fdt, nodename);
>>>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename,
>>>>>> + "compatible", "pci-host-ecam-generic");
>>>>>
>>>>> is this the only compatible string we should set here? Is this not legacy pci compatible?
>>>>> In other device trees I see this mentioned as compatible = "arm,versatile-pci-hostbridge", "pci" for example,
>>>>> would it be sensible to make it a list and include "pci" as well?
>>>>
>>>> I couldn't find anything that defines what an "pci" compatible should
>>>> look like. We definitely don't implement the legacy PCI config space
>>>> accessor registers.
>>>
>>> I see; I assumed this patch would support both CAM and ECAM as configuration methods, but now I understand that
>>> Alvise's patches support only CAM, while these support only ECAM.
>>> So basically I should look at the compatible string and then choose configuration method accordingly.
>>> I wonder if I should deal with the case where the compatible string contains both ECAM and CAM.
>>
>> Well, original PCI didn't even do CAM. You only had two 32-bit ioport
>> registers through which you tunnel all config space access.
>>
>>>
>>>>>
>>>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>>>>> +
>>>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>>>>> + 2, base_ecam, 2, size_ecam);
>>>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>>>>> + 1, 0x01000000, 2, 0,
>>>>>> + 2, base_ioport, 2, size_ioport,
>>>>>> +
>>>>>> + 1, 0x02000000, 2, base_mmio,
>>>>>> + 2, base_mmio, 2, size_mmio);
>>>>>> +
>>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>>>>> + 0, 0, 0, /* device */
>>>>>> + 0, /* PCI irq */
>>>>>> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>>>>> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>>>>> + 0, 0, 0, /* device */
>>>>>> + 0 /* PCI irq */);
>>>>>
>>>>> Interrupt map does not seem to work for me; incidentally this ends up being the same kind of undocumented blob that Alvise posted in his series.
>>>>
>>>> How exactly is this undocumented? The "mask" is a mask over the first
>>>> fields of an interrupt-map row plus an IRQ offset. So the mask above
>>>> means "Any device with any function and any IRQ on it, map to device IRQ
>>>> 0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
>>>
>>> (see my answer to Peter below in thread)
>>>
>>> this is a bit different to what Alvise's series is doing I think (see later).
>>
>> Yes, but it's easier :).
>>
>>>
>>>>
>>>>> Can you add a good comment about what the ranges property contains (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is no need to be cryptic about it).
>>>>
>>>> You're saying you'd prefer a define?
>>>
>>> Yes that would be helpful :)
>>>
>>>>
>>>>> How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because that one works for me (tm).
>>>>
>>>> His implementation explicitly listed every PCI slot, while mine actually
>>>> makes use of the mask and simply routes everything to a single IRQ line.
>>>>
>>>> The benefit of masking devfn out is that you don't need to worry about
>>>> the number of slots you support - and anything performance critical
>>>> should go via MSI-X anyway ;).
>>>
>>> The benefit for me (though probably for me only...) is that with one IRQ per slot I didn't have to implement shared IRQs and MSI/MSI-X in the guest yet. But that should be done eventually anyway.
>>
>> You most likely wouldn't get one IRQ per slot anyway. Most PHBs expose 4
>> outgoing IRQ lines, so you'll need to deal with sharing IRQs regardless.
>>
>> Also, sharing IRQ lines isn't black magic; it's quite a fundamental
>> PCI IRQ concept, so I'm glad I'm pushing you in the direction of
>> implementing it early on :).
OK, I have tentatively implemented this, and I tested both Alvise's series and yours; both work for my use case.
Note that I did not test any MSI/MSI-X, only the INTx method.
>>
>
> damn open source making me work lol :)
>
> It's actually good to have these two implementations, so I can validate my guest support against both series and ensure that I don't hardcode too many shortcuts.
>
> Thanks,
>
> Claudio
>
>>>
>>>>
>>>> So what exactly do you test it with? I've successfully received IRQs
>>>> with a Linux guest, so I'm slightly puzzled you're running into problems.
>>>
>>> Of course; I am just implementing PCI guest support for OSv/AArch64.
>>> It works with the legacy PCI support, using Alvise's series to do virtio-pci with block, net, rng, etc.,
>>> but I am now considering going for PCIe support directly, although that means more implementation work for me.
>>
>> Both should look quite similar from your point of view. If you do the
>> level interrupt handling, irq masking and window handling correctly the
>> only difference should be CAM vs ECAM.
>>
>> I'm glad to hear that it breaks your implementation though and not
>> something already existing (like Linux, BSD, etc). That means there's a
>> good chance my patches are correct and you just took some shortcuts
>> which can be easily fixed :).
>>
>>
>> Alex
>>
>
>
--
Claudio Fontana
Server Virtualization Architect
Huawei Technologies Duesseldorf GmbH
Riesstraße 25 - 80992 München
office: +49 89 158834 4135
mobile: +49 15253060158
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-06 16:03 [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Alexander Graf
` (4 preceding siblings ...)
2015-01-07 13:52 ` [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Claudio Fontana
@ 2015-01-12 16:24 ` Claudio Fontana
2015-01-21 12:59 ` Claudio Fontana
6 siblings, 0 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-12 16:24 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 06.01.2015 17:03, Alexander Graf wrote:
> Linux implements a nice binding to describe a "generic" PCI Express host bridge
> using only device tree.
>
> This patch set adds enough emulation logic to expose the parts that are
> "generic" as a simple sysbus device and maps it into ARM's virt machine.
>
> With this patch set, we can finally spawn PCI devices on ARM VMs. I was able
> to have a fully DRM enabled virtual machine with VGA, e1000 and XHCI (for
> keyboard and mouse) up and working.
>
> It's only a small step for QEMU, but a big step for ARM VMs' usability.
>
>
> Happy new year!
>
> Alexander Graf (4):
> pci: Split pcie_host_mmcfg_map()
> pci: Add generic PCIe host bridge
> arm: Add PCIe host bridge in virt machine
> arm: enable Bochs PCI VGA
>
> default-configs/arm-softmmu.mak | 3 +
> hw/arm/virt.c | 83 +++++++++++++++++++--
> hw/pci-host/Makefile.objs | 1 +
> hw/pci-host/gpex.c | 156 ++++++++++++++++++++++++++++++++++++++++
> hw/pci/pcie_host.c | 9 ++-
> include/hw/pci-host/gpex.h | 56 +++++++++++++++
> include/hw/pci/pcie_host.h | 1 +
> 7 files changed, 302 insertions(+), 7 deletions(-)
> create mode 100644 hw/pci-host/gpex.c
> create mode 100644 include/hw/pci-host/gpex.h
>
Tested with a modified OSv guest on AArch64 using INTx interrupt method.
Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* Re: [Qemu-devel] [PATCH 1/4] pci: Split pcie_host_mmcfg_map()
2015-01-06 16:03 ` [Qemu-devel] [PATCH 1/4] pci: Split pcie_host_mmcfg_map() Alexander Graf
@ 2015-01-12 16:28 ` Claudio Fontana
0 siblings, 0 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-12 16:28 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 06.01.2015 17:03, Alexander Graf wrote:
> The mmcfg space is a memory region that allows access to PCI config space
> in the PCIe world. To maintain abstraction layers, I would like to expose
> the mmcfg space as a sysbus mmio region rather than have it mapped straight
> into the system's memory address space though.
>
> So this patch splits the initialization of the mmcfg space from the actual
> mapping, allowing us to only have an mmfg memory region without the map.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
> hw/pci/pcie_host.c | 9 +++++++--
> include/hw/pci/pcie_host.h | 1 +
> 2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/hw/pci/pcie_host.c b/hw/pci/pcie_host.c
> index 3db038f..dfb4a2b 100644
> --- a/hw/pci/pcie_host.c
> +++ b/hw/pci/pcie_host.c
> @@ -98,8 +98,7 @@ void pcie_host_mmcfg_unmap(PCIExpressHost *e)
> }
> }
>
> -void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr,
> - uint32_t size)
> +void pcie_host_mmcfg_init(PCIExpressHost *e, uint32_t size)
> {
> assert(!(size & (size - 1))); /* power of 2 */
> assert(size >= PCIE_MMCFG_SIZE_MIN);
> @@ -107,6 +106,12 @@ void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr,
> e->size = size;
> memory_region_init_io(&e->mmio, OBJECT(e), &pcie_mmcfg_ops, e,
> "pcie-mmcfg", e->size);
> +}
> +
> +void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr,
> + uint32_t size)
> +{
> + pcie_host_mmcfg_init(e, size);
> e->base_addr = addr;
> memory_region_add_subregion(get_system_memory(), e->base_addr, &e->mmio);
> }
> diff --git a/include/hw/pci/pcie_host.h b/include/hw/pci/pcie_host.h
> index ff44ef6..4d23c80 100644
> --- a/include/hw/pci/pcie_host.h
> +++ b/include/hw/pci/pcie_host.h
> @@ -50,6 +50,7 @@ struct PCIExpressHost {
> };
>
> void pcie_host_mmcfg_unmap(PCIExpressHost *e);
> +void pcie_host_mmcfg_init(PCIExpressHost *e, uint32_t size);
> void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr, uint32_t size);
> void pcie_host_mmcfg_update(PCIExpressHost *e,
> int enable,
>
Fine for me.
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
--
Claudio Fontana
Server Virtualization Architect
Huawei Technologies Duesseldorf GmbH
Riesstraße 25 - 80992 München
office: +49 89 158834 4135
mobile: +49 15253060158
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-06 16:03 ` [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge Alexander Graf
@ 2015-01-12 16:29 ` Claudio Fontana
2015-01-12 17:36 ` alvise rigo
1 sibling, 0 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-12 16:29 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 06.01.2015 17:03, Alexander Graf wrote:
> With simple exposure of MMFG, ioport window, mmio window and an IRQ line we
> can successfully create a workable PCIe host bridge that can be mapped anywhere
> and only needs to get described to the OS using whatever means it likes.
>
> This patch implements such a "generic" host bridge. It only supports a single
> legacy IRQ line so far. MSIs need to be handled external to the host bridge.
>
> This device is particularly useful for the "pci-host-ecam-generic" driver in
> Linux.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
> hw/pci-host/Makefile.objs | 1 +
> hw/pci-host/gpex.c | 156 +++++++++++++++++++++++++++++++++++++++++++++
> include/hw/pci-host/gpex.h | 56 ++++++++++++++++
> 3 files changed, 213 insertions(+)
> create mode 100644 hw/pci-host/gpex.c
> create mode 100644 include/hw/pci-host/gpex.h
>
> diff --git a/hw/pci-host/Makefile.objs b/hw/pci-host/Makefile.objs
> index bb65f9c..45f1f0e 100644
> --- a/hw/pci-host/Makefile.objs
> +++ b/hw/pci-host/Makefile.objs
> @@ -15,3 +15,4 @@ common-obj-$(CONFIG_PCI_APB) += apb.o
> common-obj-$(CONFIG_FULONG) += bonito.o
> common-obj-$(CONFIG_PCI_PIIX) += piix.o
> common-obj-$(CONFIG_PCI_Q35) += q35.o
> +common-obj-$(CONFIG_PCI_GENERIC) += gpex.o
> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
> new file mode 100644
> index 0000000..bd62a3c
> --- /dev/null
> +++ b/hw/pci-host/gpex.c
> @@ -0,0 +1,156 @@
> +/*
> + * QEMU Generic PCI Express Bridge Emulation
> + *
> + * Copyright (C) 2015 Alexander Graf <agraf@suse.de>
> + *
> + * Code loosely based on q35.c.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to deal
> + * in the Software without restriction, including without limitation the rights
> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> + * copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> + * THE SOFTWARE.
> + */
> +#include "hw/hw.h"
> +#include "hw/pci-host/gpex.h"
> +
> +/****************************************************************************
> + * GPEX host
> + */
> +
> +static void gpex_set_irq(void *opaque, int irq_num, int level)
> +{
> + GPEXHost *s = opaque;
> +
> + qemu_set_irq(s->irq, level);
> +}
> +
> +static int gpex_map_irq(PCIDevice *pci_dev, int irq_num)
> +{
> + /* We only support one IRQ line so far */
> + return 0;
> +}
> +
> +static void gpex_host_realize(DeviceState *dev, Error **errp)
> +{
> + PCIHostState *pci = PCI_HOST_BRIDGE(dev);
> + GPEXHost *s = GPEX_HOST(dev);
> + SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
> + PCIExpressHost *pex = PCIE_HOST_BRIDGE(dev);
> +
> + pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MIN);
> + memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", s->mmio_window_size);
> + memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
> +
> + sysbus_init_mmio(sbd, &pex->mmio);
> + sysbus_init_mmio(sbd, &s->io_mmio);
> + sysbus_init_mmio(sbd, &s->io_ioport);
> + sysbus_init_irq(sbd, &s->irq);
> +
> + pci->bus = pci_register_bus(dev, "pcie.0", gpex_set_irq, gpex_map_irq, s,
> + &s->io_mmio, &s->io_ioport, 0, 1, TYPE_PCIE_BUS);
> +
> + qdev_set_parent_bus(DEVICE(&s->gpex_root), BUS(pci->bus));
> + qdev_init_nofail(DEVICE(&s->gpex_root));
> +}
> +
> +static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
> + PCIBus *rootbus)
> +{
> + return "0000:00";
> +}
> +
> +static Property gpex_root_props[] = {
> + DEFINE_PROP_UINT64("mmio_window_size", GPEXHost, mmio_window_size, 1ULL << 32),
> + DEFINE_PROP_END_OF_LIST(),
> +};
> +
> +static void gpex_host_class_init(ObjectClass *klass, void *data)
> +{
> + DeviceClass *dc = DEVICE_CLASS(klass);
> + PCIHostBridgeClass *hc = PCI_HOST_BRIDGE_CLASS(klass);
> +
> + hc->root_bus_path = gpex_host_root_bus_path;
> + dc->realize = gpex_host_realize;
> + dc->props = gpex_root_props;
> + set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
> + dc->fw_name = "pci";
> +}
> +
> +static void gpex_host_initfn(Object *obj)
> +{
> + GPEXHost *s = GPEX_HOST(obj);
> +
> + object_initialize(&s->gpex_root, sizeof(s->gpex_root), TYPE_GPEX_ROOT_DEVICE);
> + object_property_add_child(OBJECT(s), "gpex_root", OBJECT(&s->gpex_root), NULL);
> + qdev_prop_set_uint32(DEVICE(&s->gpex_root), "addr", PCI_DEVFN(0, 0));
> + qdev_prop_set_bit(DEVICE(&s->gpex_root), "multifunction", false);
> +}
> +
> +static const TypeInfo gpex_host_info = {
> + .name = TYPE_GPEX_HOST,
> + .parent = TYPE_PCIE_HOST_BRIDGE,
> + .instance_size = sizeof(GPEXHost),
> + .instance_init = gpex_host_initfn,
> + .class_init = gpex_host_class_init,
> +};
> +
> +/****************************************************************************
> + * GPEX Root D0:F0
> + */
> +
> +static const VMStateDescription vmstate_gpex_root = {
> + .name = "gpex_root",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .fields = (VMStateField[]) {
> + VMSTATE_PCI_DEVICE(parent_obj, GPEXRootState),
> + VMSTATE_END_OF_LIST()
> + }
> +};
> +
> +static void gpex_root_class_init(ObjectClass *klass, void *data)
> +{
> + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> + DeviceClass *dc = DEVICE_CLASS(klass);
> +
> + set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
> + dc->desc = "Host bridge";
> + dc->vmsd = &vmstate_gpex_root;
> + k->vendor_id = PCI_VENDOR_ID_REDHAT;
> + k->device_id = PCI_DEVICE_ID_REDHAT_BRIDGE;
> + k->revision = 0;
> + k->class_id = PCI_CLASS_BRIDGE_HOST;
> + /*
> + * PCI-facing part of the host bridge, not usable without the
> + * host-facing part, which can't be device_add'ed, yet.
> + */
> + dc->cannot_instantiate_with_device_add_yet = true;
> +}
> +
> +static const TypeInfo gpex_root_info = {
> + .name = TYPE_GPEX_ROOT_DEVICE,
> + .parent = TYPE_PCI_DEVICE,
> + .instance_size = sizeof(GPEXRootState),
> + .class_init = gpex_root_class_init,
> +};
> +
> +static void gpex_register(void)
> +{
> + type_register_static(&gpex_root_info);
> + type_register_static(&gpex_host_info);
> +}
> +
> +type_init(gpex_register);
> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
> new file mode 100644
> index 0000000..5cf2073
> --- /dev/null
> +++ b/include/hw/pci-host/gpex.h
> @@ -0,0 +1,56 @@
> +/*
> + * gpex.h
> + *
> + * Copyright (C) 2015 Alexander Graf <agraf@suse.de>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see <http://www.gnu.org/licenses/>
> + */
> +
> +#ifndef HW_GPEX_H
> +#define HW_GPEX_H
> +
> +#include "hw/hw.h"
> +#include "hw/sysbus.h"
> +#include "hw/pci/pci.h"
> +#include "hw/pci/pcie_host.h"
> +
> +#define TYPE_GPEX_HOST "gpex-pcihost"
> +#define GPEX_HOST(obj) \
> + OBJECT_CHECK(GPEXHost, (obj), TYPE_GPEX_HOST)
> +
> +#define TYPE_GPEX_ROOT_DEVICE "gpex-root"
> +#define MCH_PCI_DEVICE(obj) \
> + OBJECT_CHECK(GPEXRootState, (obj), TYPE_GPEX_ROOT_DEVICE)
> +
> +typedef struct GPEXRootState {
> + /*< private >*/
> + PCIDevice parent_obj;
> + /*< public >*/
> +} GPEXRootState;
> +
> +typedef struct GPEXHost {
> + /*< private >*/
> + PCIExpressHost parent_obj;
> + /*< public >*/
> +
> + GPEXRootState gpex_root;
> +
> + MemoryRegion io_ioport;
> + MemoryRegion io_mmio;
> + qemu_irq irq;
> +
> + uint64_t mmio_window_size;
> +} GPEXHost;
> +
> +#endif /* HW_GPEX_H */
>
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
--
Claudio Fontana
Server Virtualization Architect
Huawei Technologies Duesseldorf GmbH
Riesstraße 25 - 80992 München
office: +49 89 158834 4135
mobile: +49 15253060158
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-12 16:23 ` Claudio Fontana
@ 2015-01-12 16:35 ` Alexander Graf
0 siblings, 0 replies; 44+ messages in thread
From: Alexander Graf @ 2015-01-12 16:35 UTC (permalink / raw)
To: Claudio Fontana, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 12.01.15 17:23, Claudio Fontana wrote:
> On 08.01.2015 16:01, Claudio Fontana wrote:
>> On 08.01.2015 14:26, Alexander Graf wrote:
>>>
>>>
>>> On 08.01.15 13:55, Claudio Fontana wrote:
>>>> (added cc: Alvise which I mistakenly assumed was in Cc: already)
>>>
>>> He was in CC :). Now he's there twice.
>>>
>>>>
>>>> On 07.01.2015 22:47, Alexander Graf wrote:
>>>>>
>>>>>
>>>>> On 07.01.15 16:52, Claudio Fontana wrote:
>>>>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>>>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>>>>>> it into ARMs virt machine to always have PCIe available to normal ARM VMs.
>>>>>>>
>>>>>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>>>>>> into an AArch64 VM with this and they all lived happily ever after.
>>>>>>>
>>>>>>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>>>>>>
>>>>>>> ---
>>>>>>>
>>>>>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>>>>>>> systems. If you want to use it with AArch64 guests, please apply the following
>>>>>>> patch or wait until upstream cleaned up the code properly:
>>>>>>>
>>>>>>> http://csgraf.de/agraf/pci/pci-3.19.patch
>>>>>>> ---
>>>>>>> default-configs/arm-softmmu.mak | 2 +
>>>>>>> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
>>>>>>> 2 files changed, 80 insertions(+), 5 deletions(-)
>>>>>>>
>>>
>>> [...]
>>>
>>>>>>> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>>>>>> +
>>>>>>> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>>>>>> + qdev_init_nofail(dev);
>>>>>>> +
>>>>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>>>>>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>>>>>> +
>>>>>>> + /* Map the MMIO window at the same spot in bus and cpu layouts */
>>>>>>> + mmio_alias = g_new0(MemoryRegion, 1);
>>>>>>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>>>>>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>>>>>> + mmio_reg, base_mmio, size_mmio);
>>>>>>> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>>>>>> +
>>>>>>> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>>>>>> +
>>>>>>> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>>>>>>> + qemu_fdt_add_subnode(vbi->fdt, nodename);
>>>>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename,
>>>>>>> + "compatible", "pci-host-ecam-generic");
>>>>>>
>>>>>> is this the only compatible string we should set here? Is this not legacy pci compatible?
>>>>>> In other device trees I see this mentioned as compatible = "arm,versatile-pci-hostbridge", "pci" for example,
>>>>>> would it be sensible to make it a list and include "pci" as well?
>>>>>
>>>>> I couldn't find anything that defines what an "pci" compatible should
>>>>> look like. We definitely don't implement the legacy PCI config space
>>>>> accessor registers.
>>>>
>>>> I see, I assumed this patch would support both CAM and ECAM as configuration methods, while now I understand
>>>> Alvise's patches support only CAM, while these support only ECAM..
>>>> So basically I should look at the compatible string and then choose configuration method accordingly.
>>>> I wonder if I should deal with the case where the compatible string contains both ECAM and CAM.
>>>
>>> Well, original PCI didn't even do CAM. You only had 2 32bit ioport
>>> registers that you tunnel all config space access through.
>>>
>>>>
>>>>>>
>>>>>>> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>>>>>> +
>>>>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>>>>>> + 2, base_ecam, 2, size_ecam);
>>>>>>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>>>>>> + 1, 0x01000000, 2, 0,
>>>>>>> + 2, base_ioport, 2, size_ioport,
>>>>>>> +
>>>>>>> + 1, 0x02000000, 2, base_mmio,
>>>>>>> + 2, base_mmio, 2, size_mmio);
>>>>>>> +
>>>>>>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>>>>>> + 0, 0, 0, /* device */
>>>>>>> + 0, /* PCI irq */
>>>>>>> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>>>>>> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>>>>>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>>>>>> + 0, 0, 0, /* device */
>>>>>>> + 0 /* PCI irq */);
>>>>>>
>>>>>> Interrupt map does not seem to work for me; incidentally this ends up being the same kind of undocumented blob that Alvise posted in his series.
>>>>>
>>>>> How exactly is this undocumented? The "mask" is a mask over the first
>>>>> fields of an interrupt-map row plus an IRQ offset. So the mask above
>>>>> means "Any device with any function and any IRQ on it, map to device IRQ
>>>>> 0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
>>>>
>>>> (see my answer to Peter below in thread)
>>>>
>>>> this is a bit different to what Alvise's series is doing I think (see later).
>>>
>>> Yes, but it's easier :).
>>>
>>>>
>>>>>
>>>>>> Can you add a good comment about what the ranges property contains (the 0x01000000, 0x02000000 which I suspect means IO vs MMIO IIRC, but there is no need to be cryptic about it).
>>>>>
>>>>> You're saying you'd prefer a define?
>>>>
>>>> Yes that would be helpful :)
>>>>
>>>>>
>>>>>> How does your interrupt map implementation differ from the patchset posted by Alvise? I ask because that one works for me (tm).
>>>>>
>>>>> His implementation explicitly listed every PCI slot, while mine actually
>>>>> makes use of the mask and simply routes everything to a single IRQ line.
>>>>>
>>>>> The benefit of masking devfn out is that you don't need to worry about
>>>>> the number of slots you support - and anything performance critical
>>>>> should go via MSI-X anyway ;).
>>>>
>>>> The benefit for me (but for me only probably..) is that with one IRQ per slot I didn't have to implement shared irqs and msi / msi-x in the guest yet. But that should be done eventually anyway..
>>>
>>> You most likely wouldn't get one IRQ per slot anyway. Most PHBs expose 4
>>> outgoing IRQ lines, so you'll need to deal with sharing IRQs regardless.
>>>
>>> Also, sharing IRQ lines isn't incredibly black magic and quite a
>>> fundamental PCI IRQ concept, so I'm glad I'm pushing you into the
>>> direction of implementing it early on :).
>
>
> Ok I have tentatively implemented this and I tested both Alvise's series and yours and both work ok for my use case.
>
> Note that I did not test any MSI/MSI-X, only the INTx method.
There is no MSI yet ;)
Alex
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-12 16:20 ` Claudio Fontana
@ 2015-01-12 16:36 ` Alexander Graf
0 siblings, 0 replies; 44+ messages in thread
From: Alexander Graf @ 2015-01-12 16:36 UTC (permalink / raw)
To: Claudio Fontana, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 12.01.15 17:20, Claudio Fontana wrote:
> Just adding a nit here below:
>
> On 06.01.2015 17:03, Alexander Graf wrote:
>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>> it into ARMs virt machine to always have PCIe available to normal ARM VMs.
>>
>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>> into an AArch64 VM with this and they all lived happily ever after.
>>
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>
>> ---
>>
>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>> systems. If you want to use it with AArch64 guests, please apply the following
>> patch or wait until upstream cleaned up the code properly:
>>
>> http://csgraf.de/agraf/pci/pci-3.19.patch
>> ---
[...]
>> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
>> + qemu_fdt_add_subnode(vbi->fdt, nodename);
>> + qemu_fdt_setprop_string(vbi->fdt, nodename,
>> + "compatible", "pci-host-ecam-generic");
>> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>> +
>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>> + 2, base_ecam, 2, size_ecam);
>> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>> + 1, 0x01000000, 2, 0,
>> + 2, base_ioport, 2, size_ioport,
>> +
>> + 1, 0x02000000, 2, base_mmio,
>> + 2, base_mmio, 2, size_mmio);
>> +
>> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>> + 0, 0, 0, /* device */
>> + 0, /* PCI irq */
>> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>
>
> nit: are there two extra spaces here? (alignment)
Yes, because the attribute spans 2 lines ;)
Alex
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-06 16:03 ` [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine Alexander Graf
2015-01-07 15:52 ` Claudio Fontana
2015-01-12 16:20 ` Claudio Fontana
@ 2015-01-12 16:49 ` alvise rigo
2015-01-12 16:57 ` Alexander Graf
2 siblings, 1 reply; 44+ messages in thread
From: alvise rigo @ 2015-01-12 16:49 UTC (permalink / raw)
To: Alexander Graf
Cc: Peter Maydell, Ard Biesheuvel, Rob Herring, Michael S. Tsirkin,
Claudio Fontana, stuart.yoder@freescale.com, QEMU Developers
Hi Alexander,
Just a comment below.
On Tue, Jan 6, 2015 at 5:03 PM, Alexander Graf <agraf@suse.de> wrote:
> Now that we have a working "generic" PCIe host bridge driver, we can plug
> it into ARMs virt machine to always have PCIe available to normal ARM VMs.
>
> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
> into an AArch64 VM with this and they all lived happily ever after.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
>
> ---
>
> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
> systems. If you want to use it with AArch64 guests, please apply the following
> patch or wait until upstream cleaned up the code properly:
>
> http://csgraf.de/agraf/pci/pci-3.19.patch
> ---
> default-configs/arm-softmmu.mak | 2 +
> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
> 2 files changed, 80 insertions(+), 5 deletions(-)
>
> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
> index f3513fa..7671ee2 100644
> --- a/default-configs/arm-softmmu.mak
> +++ b/default-configs/arm-softmmu.mak
> @@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
> CONFIG_VERSATILE_PCI=y
> CONFIG_VERSATILE_I2C=y
>
> +CONFIG_PCI_GENERIC=y
> +
> CONFIG_SDHCI=y
> CONFIG_INTEGRATOR_DEBUG=y
>
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 2353440..b7635ac 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -42,6 +42,7 @@
> #include "exec/address-spaces.h"
> #include "qemu/bitops.h"
> #include "qemu/error-report.h"
> +#include "hw/pci-host/gpex.h"
>
> #define NUM_VIRTIO_TRANSPORTS 32
>
> @@ -69,6 +70,7 @@ enum {
> VIRT_MMIO,
> VIRT_RTC,
> VIRT_FW_CFG,
> + VIRT_PCIE,
> };
>
> typedef struct MemMapEntry {
> @@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
> [VIRT_FW_CFG] = { 0x09020000, 0x0000000a },
> [VIRT_MMIO] = { 0x0a000000, 0x00000200 },
> /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
> - /* 0x10000000 .. 0x40000000 reserved for PCI */
> + [VIRT_PCIE] = { 0x10000000, 0x30000000 },
> [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
> };
>
> static const int a15irqmap[] = {
> [VIRT_UART] = 1,
> [VIRT_RTC] = 2,
> + [VIRT_PCIE] = 3,
> [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
> };
>
> @@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
> }
> }
>
> -static void fdt_add_gic_node(const VirtBoardInfo *vbi)
> +static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
> {
> uint32_t gic_phandle;
>
> @@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
> 2, vbi->memmap[VIRT_GIC_CPU].base,
> 2, vbi->memmap[VIRT_GIC_CPU].size);
> qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
> +
> + return gic_phandle;
> }
>
> -static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> +static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> {
> /* We create a standalone GIC v2 */
> DeviceState *gicdev;
> @@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
> pic[i] = qdev_get_gpio_in(gicdev, i);
> }
>
> - fdt_add_gic_node(vbi);
> + return fdt_add_gic_node(vbi);
> }
>
> static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
> @@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
> g_free(nodename);
> }
>
> +static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
> + uint32_t gic_phandle)
> +{
> + hwaddr base = vbi->memmap[VIRT_PCIE].base;
> + hwaddr size = vbi->memmap[VIRT_PCIE].size;
> + hwaddr size_ioport = 64 * 1024;
> + hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
> + hwaddr size_mmio = size - size_ecam - size_ioport;
> + hwaddr base_mmio = base;
> + hwaddr base_ioport = base_mmio + size_mmio;
> + hwaddr base_ecam = base_ioport + size_ioport;
> + int irq = vbi->irqmap[VIRT_PCIE];
> + MemoryRegion *mmio_alias;
> + MemoryRegion *mmio_reg;
> + DeviceState *dev;
> + char *nodename;
> +
> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
> +
> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
> + qdev_init_nofail(dev);
> +
> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
> +
> + /* Map the MMIO window at the same spot in bus and cpu layouts */
> + mmio_alias = g_new0(MemoryRegion, 1);
> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
> + mmio_reg, base_mmio, size_mmio);
Is it safe to have both mmio_alias and mmio_reg sized size_mmio bytes?
Shouldn't the container region be at least (offset + size - 1) bytes big?
Thank you,
alvise
> + memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
> +
> + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
> +
> + nodename = g_strdup_printf("/pcie@%" PRIx64, base);
> + qemu_fdt_add_subnode(vbi->fdt, nodename);
> + qemu_fdt_setprop_string(vbi->fdt, nodename,
> + "compatible", "pci-host-ecam-generic");
> + qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
> +
> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
> + 2, base_ecam, 2, size_ecam);
> + qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
> + 1, 0x01000000, 2, 0,
> + 2, base_ioport, 2, size_ioport,
> +
> + 1, 0x02000000, 2, base_mmio,
> + 2, base_mmio, 2, size_mmio);
> +
> + qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
> + 0, 0, 0, /* device */
> + 0, /* PCI irq */
> + gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
> + GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
> + qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
> + 0, 0, 0, /* device */
> + 0 /* PCI irq */);
> +
> + g_free(nodename);
> +}
> +
> static void *machvirt_dtb(const struct arm_boot_info *binfo, int *fdt_size)
> {
> const VirtBoardInfo *board = (const VirtBoardInfo *)binfo;
> @@ -573,6 +643,7 @@ static void machvirt_init(MachineState *machine)
> MemoryRegion *ram = g_new(MemoryRegion, 1);
> const char *cpu_model = machine->cpu_model;
> VirtBoardInfo *vbi;
> + uint32_t gic_phandle;
>
> if (!cpu_model) {
> cpu_model = "cortex-a15";
> @@ -634,12 +705,14 @@ static void machvirt_init(MachineState *machine)
>
> create_flash(vbi);
>
> - create_gic(vbi, pic);
> + gic_phandle = create_gic(vbi, pic);
>
> create_uart(vbi, pic);
>
> create_rtc(vbi, pic);
>
> + create_pcie(vbi, pic, gic_phandle);
> +
> /* Create mmio transports, so the user can create virtio backends
> * (which will be automatically plugged in to the transports). If
> * no backend is created the transport will just sit harmlessly idle.
> --
> 1.7.12.4
>
* Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
2015-01-12 16:49 ` alvise rigo
@ 2015-01-12 16:57 ` Alexander Graf
0 siblings, 0 replies; 44+ messages in thread
From: Alexander Graf @ 2015-01-12 16:57 UTC (permalink / raw)
To: alvise rigo
Cc: Peter Maydell, Ard Biesheuvel, Rob Herring, Michael S. Tsirkin,
Claudio Fontana, stuart.yoder@freescale.com, QEMU Developers
On 12.01.15 17:49, alvise rigo wrote:
> Hi Alexander,
>
> Just a comment below.
>
> On Tue, Jan 6, 2015 at 5:03 PM, Alexander Graf <agraf@suse.de> wrote:
>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>> it into ARMs virt machine to always have PCIe available to normal ARM VMs.
>>
>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>> into an AArch64 VM with this and they all lived happily ever after.
>>
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>>
>> ---
>>
>> Linux 3.19 only supports the generic PCIe host bridge driver for 32bit ARM
>> systems. If you want to use it with AArch64 guests, please apply the following
>> patch or wait until upstream cleaned up the code properly:
>>
>> http://csgraf.de/agraf/pci/pci-3.19.patch
>> ---
>> default-configs/arm-softmmu.mak | 2 +
>> hw/arm/virt.c | 83 ++++++++++++++++++++++++++++++++++++++---
>> 2 files changed, 80 insertions(+), 5 deletions(-)
>>
>> diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
>> index f3513fa..7671ee2 100644
>> --- a/default-configs/arm-softmmu.mak
>> +++ b/default-configs/arm-softmmu.mak
>> @@ -82,6 +82,8 @@ CONFIG_ZYNQ=y
>> CONFIG_VERSATILE_PCI=y
>> CONFIG_VERSATILE_I2C=y
>>
>> +CONFIG_PCI_GENERIC=y
>> +
>> CONFIG_SDHCI=y
>> CONFIG_INTEGRATOR_DEBUG=y
>>
>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>> index 2353440..b7635ac 100644
>> --- a/hw/arm/virt.c
>> +++ b/hw/arm/virt.c
>> @@ -42,6 +42,7 @@
>> #include "exec/address-spaces.h"
>> #include "qemu/bitops.h"
>> #include "qemu/error-report.h"
>> +#include "hw/pci-host/gpex.h"
>>
>> #define NUM_VIRTIO_TRANSPORTS 32
>>
>> @@ -69,6 +70,7 @@ enum {
>> VIRT_MMIO,
>> VIRT_RTC,
>> VIRT_FW_CFG,
>> + VIRT_PCIE,
>> };
>>
>> typedef struct MemMapEntry {
>> @@ -129,13 +131,14 @@ static const MemMapEntry a15memmap[] = {
>> [VIRT_FW_CFG] = { 0x09020000, 0x0000000a },
>> [VIRT_MMIO] = { 0x0a000000, 0x00000200 },
>> /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
>> - /* 0x10000000 .. 0x40000000 reserved for PCI */
>> + [VIRT_PCIE] = { 0x10000000, 0x30000000 },
>> [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
>> };
>>
>> static const int a15irqmap[] = {
>> [VIRT_UART] = 1,
>> [VIRT_RTC] = 2,
>> + [VIRT_PCIE] = 3,
>> [VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
>> };
>>
>> @@ -312,7 +315,7 @@ static void fdt_add_cpu_nodes(const VirtBoardInfo *vbi)
>> }
>> }
>>
>> -static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>> +static uint32_t fdt_add_gic_node(const VirtBoardInfo *vbi)
>> {
>> uint32_t gic_phandle;
>>
>> @@ -331,9 +334,11 @@ static void fdt_add_gic_node(const VirtBoardInfo *vbi)
>> 2, vbi->memmap[VIRT_GIC_CPU].base,
>> 2, vbi->memmap[VIRT_GIC_CPU].size);
>> qemu_fdt_setprop_cell(vbi->fdt, "/intc", "phandle", gic_phandle);
>> +
>> + return gic_phandle;
>> }
>>
>> -static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>> +static uint32_t create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>> {
>> /* We create a standalone GIC v2 */
>> DeviceState *gicdev;
>> @@ -380,7 +385,7 @@ static void create_gic(const VirtBoardInfo *vbi, qemu_irq *pic)
>> pic[i] = qdev_get_gpio_in(gicdev, i);
>> }
>>
>> - fdt_add_gic_node(vbi);
>> + return fdt_add_gic_node(vbi);
>> }
>>
>> static void create_uart(const VirtBoardInfo *vbi, qemu_irq *pic)
>> @@ -556,6 +561,71 @@ static void create_fw_cfg(const VirtBoardInfo *vbi)
>> g_free(nodename);
>> }
>>
>> +static void create_pcie(const VirtBoardInfo *vbi, qemu_irq *pic,
>> + uint32_t gic_phandle)
>> +{
>> + hwaddr base = vbi->memmap[VIRT_PCIE].base;
>> + hwaddr size = vbi->memmap[VIRT_PCIE].size;
>> + hwaddr size_ioport = 64 * 1024;
>> + hwaddr size_ecam = PCIE_MMCFG_SIZE_MIN;
>> + hwaddr size_mmio = size - size_ecam - size_ioport;
>> + hwaddr base_mmio = base;
>> + hwaddr base_ioport = base_mmio + size_mmio;
>> + hwaddr base_ecam = base_ioport + size_ioport;
>> + int irq = vbi->irqmap[VIRT_PCIE];
>> + MemoryRegion *mmio_alias;
>> + MemoryRegion *mmio_reg;
>> + DeviceState *dev;
>> + char *nodename;
>> +
>> + dev = qdev_create(NULL, TYPE_GPEX_HOST);
>> +
>> + qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>> + qdev_init_nofail(dev);
>> +
>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>> + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>> +
>> + /* Map the MMIO window at the same spot in bus and cpu layouts */
>> + mmio_alias = g_new0(MemoryRegion, 1);
>> + mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>> + memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>> + mmio_reg, base_mmio, size_mmio);
>
> Is it safe to have both mmio_alias and mmio_reg sized size_mmio bytes?
> Shouldn't the container region be at least (offset + size - 1) bytes big?
You're right. The bridge's memory region shouldn't have any size
limitation; it should just be a flat 64-bit memory region from which the
device creator then maps aliases into its own address space.
Alex
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-06 16:03 ` [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge Alexander Graf
2015-01-12 16:29 ` Claudio Fontana
@ 2015-01-12 17:36 ` alvise rigo
2015-01-12 17:38 ` Alexander Graf
1 sibling, 1 reply; 44+ messages in thread
From: alvise rigo @ 2015-01-12 17:36 UTC (permalink / raw)
To: Alexander Graf
Cc: Peter Maydell, Ard Biesheuvel, Rob Herring, Michael S. Tsirkin,
Claudio Fontana, stuart.yoder@freescale.com, QEMU Developers
Hi Alexander,
Just a comment below.
On Tue, Jan 6, 2015 at 5:03 PM, Alexander Graf <agraf@suse.de> wrote:
> With simple exposure of MMFG, ioport window, mmio window and an IRQ line we
> can successfully create a workable PCIe host bridge that can be mapped anywhere
> and only needs to get described to the OS using whatever means it likes.
>
> This patch implements such a "generic" host bridge. It only supports a single
> legacy IRQ line so far. MSIs need to be handled external to the host bridge.
>
> This device is particularly useful for the "pci-host-ecam-generic" driver in
> Linux.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
> hw/pci-host/Makefile.objs | 1 +
> hw/pci-host/gpex.c | 156 +++++++++++++++++++++++++++++++++++++++++++++
> include/hw/pci-host/gpex.h | 56 ++++++++++++++++
> 3 files changed, 213 insertions(+)
> create mode 100644 hw/pci-host/gpex.c
> create mode 100644 include/hw/pci-host/gpex.h
>
> diff --git a/hw/pci-host/Makefile.objs b/hw/pci-host/Makefile.objs
> index bb65f9c..45f1f0e 100644
> --- a/hw/pci-host/Makefile.objs
> +++ b/hw/pci-host/Makefile.objs
> @@ -15,3 +15,4 @@ common-obj-$(CONFIG_PCI_APB) += apb.o
> common-obj-$(CONFIG_FULONG) += bonito.o
> common-obj-$(CONFIG_PCI_PIIX) += piix.o
> common-obj-$(CONFIG_PCI_Q35) += q35.o
> +common-obj-$(CONFIG_PCI_GENERIC) += gpex.o
> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
> new file mode 100644
> index 0000000..bd62a3c
> --- /dev/null
> +++ b/hw/pci-host/gpex.c
> @@ -0,0 +1,156 @@
> +/*
> + * QEMU Generic PCI Express Bridge Emulation
> + *
> + * Copyright (C) 2015 Alexander Graf <agraf@suse.de>
> + *
> + * Code loosely based on q35.c.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to deal
> + * in the Software without restriction, including without limitation the rights
> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> + * copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> + * THE SOFTWARE.
> + */
> +#include "hw/hw.h"
> +#include "hw/pci-host/gpex.h"
> +
> +/****************************************************************************
> + * GPEX host
> + */
> +
> +static void gpex_set_irq(void *opaque, int irq_num, int level)
> +{
> + GPEXHost *s = opaque;
> +
> + qemu_set_irq(s->irq, level);
> +}
> +
> +static int gpex_map_irq(PCIDevice *pci_dev, int irq_num)
> +{
> + /* We only support one IRQ line so far */
> + return 0;
> +}
Regarding the request from Claudio to have one system interrupt for
each PCI device, we could address this by swizzling four (or more)
system interrupts across all the PCI devices. In that case a slightly
different interrupt-map and interrupt-map-mask are required, as well as
a new map_irq callback (the legacy pci_swizzle_map_irq_fn is fine for
4 IRQs).
This of course only works as long as we have fewer PCI devices than the
number of swizzled IRQs.
Regards,
alvise
> +
> +static void gpex_host_realize(DeviceState *dev, Error **errp)
> +{
> + PCIHostState *pci = PCI_HOST_BRIDGE(dev);
> + GPEXHost *s = GPEX_HOST(dev);
> + SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
> + PCIExpressHost *pex = PCIE_HOST_BRIDGE(dev);
> +
> + pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MIN);
> + memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", s->mmio_window_size);
> + memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
> +
> + sysbus_init_mmio(sbd, &pex->mmio);
> + sysbus_init_mmio(sbd, &s->io_mmio);
> + sysbus_init_mmio(sbd, &s->io_ioport);
> + sysbus_init_irq(sbd, &s->irq);
> +
> + pci->bus = pci_register_bus(dev, "pcie.0", gpex_set_irq, gpex_map_irq, s,
> + &s->io_mmio, &s->io_ioport, 0, 1, TYPE_PCIE_BUS);
> +
> + qdev_set_parent_bus(DEVICE(&s->gpex_root), BUS(pci->bus));
> + qdev_init_nofail(DEVICE(&s->gpex_root));
> +}
> +
> +static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
> + PCIBus *rootbus)
> +{
> + return "0000:00";
> +}
> +
> +static Property gpex_root_props[] = {
> + DEFINE_PROP_UINT64("mmio_window_size", GPEXHost, mmio_window_size, 1ULL << 32),
> + DEFINE_PROP_END_OF_LIST(),
> +};
> +
> +static void gpex_host_class_init(ObjectClass *klass, void *data)
> +{
> + DeviceClass *dc = DEVICE_CLASS(klass);
> + PCIHostBridgeClass *hc = PCI_HOST_BRIDGE_CLASS(klass);
> +
> + hc->root_bus_path = gpex_host_root_bus_path;
> + dc->realize = gpex_host_realize;
> + dc->props = gpex_root_props;
> + set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
> + dc->fw_name = "pci";
> +}
> +
> +static void gpex_host_initfn(Object *obj)
> +{
> + GPEXHost *s = GPEX_HOST(obj);
> +
> + object_initialize(&s->gpex_root, sizeof(s->gpex_root), TYPE_GPEX_ROOT_DEVICE);
> + object_property_add_child(OBJECT(s), "gpex_root", OBJECT(&s->gpex_root), NULL);
> + qdev_prop_set_uint32(DEVICE(&s->gpex_root), "addr", PCI_DEVFN(0, 0));
> + qdev_prop_set_bit(DEVICE(&s->gpex_root), "multifunction", false);
> +}
> +
> +static const TypeInfo gpex_host_info = {
> + .name = TYPE_GPEX_HOST,
> + .parent = TYPE_PCIE_HOST_BRIDGE,
> + .instance_size = sizeof(GPEXHost),
> + .instance_init = gpex_host_initfn,
> + .class_init = gpex_host_class_init,
> +};
> +
> +/****************************************************************************
> + * GPEX Root D0:F0
> + */
> +
> +static const VMStateDescription vmstate_gpex_root = {
> + .name = "gpex_root",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .fields = (VMStateField[]) {
> + VMSTATE_PCI_DEVICE(parent_obj, GPEXRootState),
> + VMSTATE_END_OF_LIST()
> + }
> +};
> +
> +static void gpex_root_class_init(ObjectClass *klass, void *data)
> +{
> + PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
> + DeviceClass *dc = DEVICE_CLASS(klass);
> +
> + set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
> + dc->desc = "Host bridge";
> + dc->vmsd = &vmstate_gpex_root;
> + k->vendor_id = PCI_VENDOR_ID_REDHAT;
> + k->device_id = PCI_DEVICE_ID_REDHAT_BRIDGE;
> + k->revision = 0;
> + k->class_id = PCI_CLASS_BRIDGE_HOST;
> + /*
> + * PCI-facing part of the host bridge, not usable without the
> + * host-facing part, which can't be device_add'ed, yet.
> + */
> + dc->cannot_instantiate_with_device_add_yet = true;
> +}
> +
> +static const TypeInfo gpex_root_info = {
> + .name = TYPE_GPEX_ROOT_DEVICE,
> + .parent = TYPE_PCI_DEVICE,
> + .instance_size = sizeof(GPEXRootState),
> + .class_init = gpex_root_class_init,
> +};
> +
> +static void gpex_register(void)
> +{
> + type_register_static(&gpex_root_info);
> + type_register_static(&gpex_host_info);
> +}
> +
> +type_init(gpex_register);
> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
> new file mode 100644
> index 0000000..5cf2073
> --- /dev/null
> +++ b/include/hw/pci-host/gpex.h
> @@ -0,0 +1,56 @@
> +/*
> + * gpex.h
> + *
> + * Copyright (C) 2015 Alexander Graf <agraf@suse.de>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see <http://www.gnu.org/licenses/>
> + */
> +
> +#ifndef HW_GPEX_H
> +#define HW_GPEX_H
> +
> +#include "hw/hw.h"
> +#include "hw/sysbus.h"
> +#include "hw/pci/pci.h"
> +#include "hw/pci/pcie_host.h"
> +
> +#define TYPE_GPEX_HOST "gpex-pcihost"
> +#define GPEX_HOST(obj) \
> + OBJECT_CHECK(GPEXHost, (obj), TYPE_GPEX_HOST)
> +
> +#define TYPE_GPEX_ROOT_DEVICE "gpex-root"
> +#define MCH_PCI_DEVICE(obj) \
> + OBJECT_CHECK(GPEXRootState, (obj), TYPE_GPEX_ROOT_DEVICE)
> +
> +typedef struct GPEXRootState {
> + /*< private >*/
> + PCIDevice parent_obj;
> + /*< public >*/
> +} GPEXRootState;
> +
> +typedef struct GPEXHost {
> + /*< private >*/
> + PCIExpressHost parent_obj;
> + /*< public >*/
> +
> + GPEXRootState gpex_root;
> +
> + MemoryRegion io_ioport;
> + MemoryRegion io_mmio;
> + qemu_irq irq;
> +
> + uint64_t mmio_window_size;
> +} GPEXHost;
> +
> +#endif /* HW_GPEX_H */
> --
> 1.7.12.4
>
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-12 17:36 ` alvise rigo
@ 2015-01-12 17:38 ` Alexander Graf
2015-01-12 20:08 ` Peter Maydell
0 siblings, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-12 17:38 UTC (permalink / raw)
To: alvise rigo
Cc: Peter Maydell, Ard Biesheuvel, Rob Herring, Michael S. Tsirkin,
Claudio Fontana, stuart.yoder@freescale.com, QEMU Developers
On 12.01.15 18:36, alvise rigo wrote:
> Hi Alexander,
>
> Just a comment below.
>
> On Tue, Jan 6, 2015 at 5:03 PM, Alexander Graf <agraf@suse.de> wrote:
>> With simple exposure of MMFG, ioport window, mmio window and an IRQ line we
>> can successfully create a workable PCIe host bridge that can be mapped anywhere
>> and only needs to get described to the OS using whatever means it likes.
>>
>> This patch implements such a "generic" host bridge. It only supports a single
>> legacy IRQ line so far. MSIs need to be handled external to the host bridge.
>>
>> This device is particularly useful for the "pci-host-ecam-generic" driver in
>> Linux.
>>
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>> ---
>> hw/pci-host/Makefile.objs | 1 +
>> hw/pci-host/gpex.c | 156 +++++++++++++++++++++++++++++++++++++++++++++
>> include/hw/pci-host/gpex.h | 56 ++++++++++++++++
>> 3 files changed, 213 insertions(+)
>> create mode 100644 hw/pci-host/gpex.c
>> create mode 100644 include/hw/pci-host/gpex.h
>>
>> diff --git a/hw/pci-host/Makefile.objs b/hw/pci-host/Makefile.objs
>> index bb65f9c..45f1f0e 100644
>> --- a/hw/pci-host/Makefile.objs
>> +++ b/hw/pci-host/Makefile.objs
>> @@ -15,3 +15,4 @@ common-obj-$(CONFIG_PCI_APB) += apb.o
>> common-obj-$(CONFIG_FULONG) += bonito.o
>> common-obj-$(CONFIG_PCI_PIIX) += piix.o
>> common-obj-$(CONFIG_PCI_Q35) += q35.o
>> +common-obj-$(CONFIG_PCI_GENERIC) += gpex.o
>> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
>> new file mode 100644
>> index 0000000..bd62a3c
>> --- /dev/null
>> +++ b/hw/pci-host/gpex.c
>> @@ -0,0 +1,156 @@
>> +/*
>> + * QEMU Generic PCI Express Bridge Emulation
>> + *
>> + * Copyright (C) 2015 Alexander Graf <agraf@suse.de>
>> + *
>> + * Code loosely based on q35.c.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a copy
>> + * of this software and associated documentation files (the "Software"), to deal
>> + * in the Software without restriction, including without limitation the rights
>> + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
>> + * copies of the Software, and to permit persons to whom the Software is
>> + * furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
>> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
>> + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
>> + * THE SOFTWARE.
>> + */
>> +#include "hw/hw.h"
>> +#include "hw/pci-host/gpex.h"
>> +
>> +/****************************************************************************
>> + * GPEX host
>> + */
>> +
>> +static void gpex_set_irq(void *opaque, int irq_num, int level)
>> +{
>> + GPEXHost *s = opaque;
>> +
>> + qemu_set_irq(s->irq, level);
>> +}
>> +
>> +static int gpex_map_irq(PCIDevice *pci_dev, int irq_num)
>> +{
>> + /* We only support one IRQ line so far */
>> + return 0;
>> +}
>
> Regarding the request from Claudio to have one system interrupt for
> each PCI device, we could address this by swizzling four (or more)
> system interrupts for all the PCI devices. In this case a slightly
> different interrupt-map and interrupt-map-mask is required as well as
> a new map_irq callback (the legacy pci_swizzle_map_irq_fn is fine for
> 4 IRQs).
> This of course would work as far as we have less PCI devices than the
> number of swizzled IRQs.
I'd prefer to keep things as easy as we humanly can for now. Then add
MSI. And if we then realize that we still need 4 rather than 1 shared
interrupt lines we can still change it :).
Alex
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-12 17:38 ` Alexander Graf
@ 2015-01-12 20:08 ` Peter Maydell
2015-01-12 21:06 ` Alexander Graf
0 siblings, 1 reply; 44+ messages in thread
From: Peter Maydell @ 2015-01-12 20:08 UTC (permalink / raw)
To: Alexander Graf
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, stuart.yoder@freescale.com, alvise rigo
On 12 January 2015 at 17:38, Alexander Graf <agraf@suse.de> wrote:
> I'd prefer to keep things as easy as we humanly can for now. Then add
> MSI. And if we then realize that we still need 4 rather than 1 shared
> interrupt lines we can still change it :)
Except that that would be a breaking change, so I would prefer
to think ahead where possible; at some point there will come
a time when we really can't make breaking changes to this
board any more...
-- PMM
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-12 20:08 ` Peter Maydell
@ 2015-01-12 21:06 ` Alexander Graf
2015-01-12 21:20 ` Peter Maydell
0 siblings, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-12 21:06 UTC (permalink / raw)
To: Peter Maydell
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, stuart.yoder@freescale.com, alvise rigo
On 12.01.15 21:08, Peter Maydell wrote:
> On 12 January 2015 at 17:38, Alexander Graf <agraf@suse.de> wrote:
>> I'd prefer to keep things as easy as we humanly can for now. Then add
>> MSI. And if we then realize that we still need 4 rather than 1 shared
>> interrupt lines we can still change it :)
>
> Except that that would be a breaking change, so I would prefer
> to think ahead where possible; at some point there will come
> a time when we really can't make breaking changes to this
> board any more...
Works for me, then we stay at a single interrupt line. The only reason
we have 4 in PCI is that back in the day you could have non-sharing PCI
devices that were essentially ISA ones.
If you actually care about interrupt latency and performance you will
want more than 4 IRQ lines, so you'll want MSI.
Alex
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-12 21:06 ` Alexander Graf
@ 2015-01-12 21:20 ` Peter Maydell
2015-01-13 0:13 ` Alexander Graf
2015-01-13 9:09 ` Claudio Fontana
0 siblings, 2 replies; 44+ messages in thread
From: Peter Maydell @ 2015-01-12 21:20 UTC (permalink / raw)
To: Alexander Graf
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, stuart.yoder@freescale.com, alvise rigo
On 12 January 2015 at 21:06, Alexander Graf <agraf@suse.de> wrote:
>
>
> On 12.01.15 21:08, Peter Maydell wrote:
>> On 12 January 2015 at 17:38, Alexander Graf <agraf@suse.de> wrote:
>>> I'd prefer to keep things as easy as we humanly can for now. Then add
>>> MSI. And if we then realize that we still need 4 rather than 1 shared
>>> interrupt lines we can still change it :)
>>
>> Except that that would be a breaking change, so I would prefer
>> to think ahead where possible; at some point there will come
>> a time when we really can't make breaking changes to this
>> board any more...
>
> Works for me, then we stay at a single interrupt line. The only reason
> we have 4 in PCI is that back in the day you could have non-sharing PCI
> devices that were essentially ISA ones.
Well, also your typical small system probably doesn't have more
than 4 PCI slots and so 4 IRQs is enough to give them each one.
Most small VMs probably won't have more than four PCI devices
either...
-- PMM
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-12 21:20 ` Peter Maydell
@ 2015-01-13 0:13 ` Alexander Graf
2015-01-13 10:07 ` Peter Maydell
2015-01-13 9:09 ` Claudio Fontana
1 sibling, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-13 0:13 UTC (permalink / raw)
To: Peter Maydell
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, stuart.yoder@freescale.com, alvise rigo
On 12.01.15 22:20, Peter Maydell wrote:
> On 12 January 2015 at 21:06, Alexander Graf <agraf@suse.de> wrote:
>>
>>
>> On 12.01.15 21:08, Peter Maydell wrote:
>>> On 12 January 2015 at 17:38, Alexander Graf <agraf@suse.de> wrote:
>>>> I'd prefer to keep things as easy as we humanly can for now. Then add
>>>> MSI. And if we then realize that we still need 4 rather than 1 shared
>>>> interrupt lines we can still change it :)
>>>
>>> Except that that would be a breaking change, so I would prefer
>>> to think ahead where possible; at some point there will come
>>> a time when we really can't make breaking changes to this
>>> board any more...
>>
>> Works for me, then we stay at a single interrupt line. The only reason
>> we have 4 in PCI is that back in the day you could have non-sharing PCI
>> devices that were essentially ISA ones.
>
> Well, also your typical small system probably doesn't have more
> than 4 PCI slots and so 4 IRQs is enough to give them each one.
> Most small VMs probably won't have more than four PCI devices
> either...
My main problem with multiple IRQs is that we'd have to describe the
mapping. I'd rather not have a fixed number of PCI slots hardcoded
anywhere, especially not in the map. So the only chance we have to keep
it dynamic would be to mask some field of the devfn to PCI IRQ lines.
How about we map the slots with a simple, pretty generic mask on the
lower bits to 4 host IRQ lines? Would that make everyone happy?
I still don't think it's worth the hassle, but I'd be happy to do it if
people insist.
Alex
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-12 21:20 ` Peter Maydell
2015-01-13 0:13 ` Alexander Graf
@ 2015-01-13 9:09 ` Claudio Fontana
1 sibling, 0 replies; 44+ messages in thread
From: Claudio Fontana @ 2015-01-13 9:09 UTC (permalink / raw)
To: Peter Maydell, Alexander Graf
Cc: Rob Herring, Ard Biesheuvel, QEMU Developers, Michael S. Tsirkin,
stuart.yoder@freescale.com, alvise rigo
On 12.01.2015 22:20, Peter Maydell wrote:
> On 12 January 2015 at 21:06, Alexander Graf <agraf@suse.de> wrote:
>>
>>
>> On 12.01.15 21:08, Peter Maydell wrote:
>>> On 12 January 2015 at 17:38, Alexander Graf <agraf@suse.de> wrote:
>>>> I'd prefer to keep things as easy as we humanly can for now. Then add
>>>> MSI. And if we then realize that we still need 4 rather than 1 shared
>>>> interrupt lines we can still change it :)
>>>
>>> Except that that would be a breaking change, so I would prefer
>>> to think ahead where possible; at some point there will come
>>> a time when we really can't make breaking changes to this
>>> board any more...
>>
>> Works for me, then we stay at a single interrupt line. The only reason
>> we have 4 in PCI is that back in the day you could have non-sharing PCI
>> devices that were essentially ISA ones.
>
> Well, also your typical small system probably doesn't have more
> than 4 PCI slots and so 4 IRQs is enough to give them each one.
> Most small VMs probably won't have more than four PCI devices
> either...
>
> -- PMM
>
This solution with 4 slots, 4 irqs would be preferable to me..
Ciao,
Claudio
* Re: [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge
2015-01-13 0:13 ` Alexander Graf
@ 2015-01-13 10:07 ` Peter Maydell
0 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2015-01-13 10:07 UTC (permalink / raw)
To: Alexander Graf
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, stuart.yoder@freescale.com, alvise rigo
On 13 January 2015 at 00:13, Alexander Graf <agraf@suse.de> wrote:
> My main problem with multiple IRQs is that we'd have to describe the
> mapping.
This is true...
> I'd rather not have a fixed number of PCI slots hardcoded
> anywhere, especially not in the map.
...but this doesn't follow. What you do is the standard thing
of swizzling the IRQ lines across PCI slots, which is describable
with a device-tree mapping without having to hardcode the
number of slots. See my other email on the subject.
-- PMM
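[Editorial note: the slot-independent device-tree description Peter refers to relies on `interrupt-map-mask` masking the devfn cell so only the slot's low bits and the pin select a map entry. A hedged sketch of such a binding (the `&gic` phandle, SPI numbers, and cell values are illustrative placeholders, not the actual virt machine values):]

```
pcie@10000000 {
        #interrupt-cells = <1>;
        /* 0x1800 keeps two slot bits of devfn; 0x7 keeps the INTx pin,
         * so 16 map entries cover any number of slots via wraparound. */
        interrupt-map-mask = <0x1800 0x0 0x0 0x7>;
        interrupt-map = <0x0000 0x0 0x0 0x1  &gic 0x0 0x3 0x4>,
                        <0x0000 0x0 0x0 0x2  &gic 0x0 0x4 0x4>,
                        /* ... remaining slot/pin combinations ... */
                        <0x1800 0x0 0x0 0x4  &gic 0x0 0x5 0x4>;
};
```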
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-06 16:03 [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Alexander Graf
` (5 preceding siblings ...)
2015-01-12 16:24 ` Claudio Fontana
@ 2015-01-21 12:59 ` Claudio Fontana
2015-01-21 13:01 ` Alexander Graf
6 siblings, 1 reply; 44+ messages in thread
From: Claudio Fontana @ 2015-01-21 12:59 UTC (permalink / raw)
To: Alexander Graf, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
Hi Alex,
are you planning a respin of this one?
Between your series and Alvise's I would just need one of the two to get merged, they are both fine for me, pending some small things that have been raised in the comments..
Ciao & thanks,
Claudio
On 06.01.2015 17:03, Alexander Graf wrote:
> Linux implements a nice binding to describe a "generic" PCI Express host bridge
> using only device tree.
>
> This patch set adds enough emulation logic to expose the parts that are
> "generic" as a simple sysbus device and maps it into ARM's virt machine.
>
> With this patch set, we can finally spawn PCI devices on ARM VMs. I was able
> to have a fully DRM enabled virtual machine with VGA, e1000 and XHCI (for
> keyboard and mouse) up and working.
>
> It's only a small step for QEMU, but a big step for ARM VM's usability.
>
>
> Happy new year!
>
> Alexander Graf (4):
> pci: Split pcie_host_mmcfg_map()
> pci: Add generic PCIe host bridge
> arm: Add PCIe host bridge in virt machine
> arm: enable Bochs PCI VGA
>
> default-configs/arm-softmmu.mak | 3 +
> hw/arm/virt.c | 83 +++++++++++++++++++--
> hw/pci-host/Makefile.objs | 1 +
> hw/pci-host/gpex.c | 156 ++++++++++++++++++++++++++++++++++++++++
> hw/pci/pcie_host.c | 9 ++-
> include/hw/pci-host/gpex.h | 56 +++++++++++++++
> include/hw/pci/pcie_host.h | 1 +
> 7 files changed, 302 insertions(+), 7 deletions(-)
> create mode 100644 hw/pci-host/gpex.c
> create mode 100644 include/hw/pci-host/gpex.h
>
--
Claudio Fontana
Server Virtualization Architect
Huawei Technologies Duesseldorf GmbH
Riesstraße 25 - 80992 München
office: +49 89 158834 4135
mobile: +49 15253060158
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-21 12:59 ` Claudio Fontana
@ 2015-01-21 13:01 ` Alexander Graf
2015-01-21 13:02 ` Peter Maydell
0 siblings, 1 reply; 44+ messages in thread
From: Alexander Graf @ 2015-01-21 13:01 UTC (permalink / raw)
To: Claudio Fontana, qemu-devel
Cc: Peter Maydell, ard.biesheuvel, mst, rob.herring, stuart.yoder,
a.rigo
On 21.01.15 13:59, Claudio Fontana wrote:
> Hi Alex,
>
> are you planning a respin of this one?
Yup, will send a respin with 4 IRQs this week.
Alex
* Re: [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge
2015-01-21 13:01 ` Alexander Graf
@ 2015-01-21 13:02 ` Peter Maydell
0 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2015-01-21 13:02 UTC (permalink / raw)
To: Alexander Graf
Cc: Rob Herring, Michael S. Tsirkin, QEMU Developers, Ard Biesheuvel,
Claudio Fontana, Alvise Rigo, Stuart Yoder
On 21 January 2015 at 13:01, Alexander Graf <agraf@suse.de> wrote:
>
>
> On 21.01.15 13:59, Claudio Fontana wrote:
>> Hi Alex,
>>
>> are you planning a respin of this one?
>
> Yup, will send a respin with 4 IRQs this week.
I've finished reading my thousand-page book on PCIe,
so hopefully will be able to review the respin :-)
-- PMM
end of thread, other threads:[~2015-01-21 13:02 UTC | newest]
Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-06 16:03 [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Alexander Graf
2015-01-06 16:03 ` [Qemu-devel] [PATCH 1/4] pci: Split pcie_host_mmcfg_map() Alexander Graf
2015-01-12 16:28 ` Claudio Fontana
2015-01-06 16:03 ` [Qemu-devel] [PATCH 2/4] pci: Add generic PCIe host bridge Alexander Graf
2015-01-12 16:29 ` Claudio Fontana
2015-01-12 17:36 ` alvise rigo
2015-01-12 17:38 ` Alexander Graf
2015-01-12 20:08 ` Peter Maydell
2015-01-12 21:06 ` Alexander Graf
2015-01-12 21:20 ` Peter Maydell
2015-01-13 0:13 ` Alexander Graf
2015-01-13 10:07 ` Peter Maydell
2015-01-13 9:09 ` Claudio Fontana
2015-01-06 16:03 ` [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine Alexander Graf
2015-01-07 15:52 ` Claudio Fontana
2015-01-07 21:47 ` Alexander Graf
2015-01-08 12:55 ` Claudio Fontana
2015-01-08 13:26 ` Alexander Graf
2015-01-08 15:01 ` Claudio Fontana
2015-01-12 16:23 ` Claudio Fontana
2015-01-12 16:35 ` Alexander Graf
2015-01-08 13:36 ` alvise rigo
2015-01-08 10:31 ` Peter Maydell
2015-01-08 12:30 ` Claudio Fontana
2015-01-12 16:20 ` Claudio Fontana
2015-01-12 16:36 ` Alexander Graf
2015-01-12 16:49 ` alvise rigo
2015-01-12 16:57 ` Alexander Graf
2015-01-06 16:03 ` [Qemu-devel] [PATCH 4/4] arm: enable Bochs PCI VGA Alexander Graf
2015-01-06 16:16 ` Peter Maydell
2015-01-06 21:08 ` Alexander Graf
2015-01-06 21:28 ` Peter Maydell
2015-01-06 21:42 ` Alexander Graf
2015-01-07 6:22 ` Paolo Bonzini
2015-01-07 13:52 ` [Qemu-devel] [PATCH 0/4] ARM: Add support for a generic PCI Express host bridge Claudio Fontana
2015-01-07 14:07 ` Alexander Graf
2015-01-07 14:26 ` Claudio Fontana
2015-01-07 14:36 ` Alexander Graf
2015-01-07 15:16 ` Claudio Fontana
2015-01-07 16:31 ` Peter Maydell
2015-01-12 16:24 ` Claudio Fontana
2015-01-21 12:59 ` Claudio Fontana
2015-01-21 13:01 ` Alexander Graf
2015-01-21 13:02 ` Peter Maydell