* [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
@ 2014-09-19 11:54 frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 1/6] KVM: s390: Enable PCI instructions frank.blaschka
` (6 more replies)
0 siblings, 7 replies; 20+ messages in thread
From: frank.blaschka @ 2014-09-19 11:54 UTC (permalink / raw)
To: qemu-devel, linux-s390, kvm; +Cc: pbonzini, alex.williamson, agraf
This set of patches implements a vfio-based solution for PCI
pass-through on the s390 platform. The kernel part is pretty
much straightforward, but qemu needs more work.
The most interesting patch is:
vfio: make vfio run on s390 platform
I hope Alex & Alex can give me some guidance on how to do the changes
in an appropriate way. After creating a separate iommu address space
for each attached PCI device, I can successfully run the vfio type1
iommu. So if we could extend type1 to skip registering all guest memory
(see patch), I think we do not need a special vfio iommu for s390
for the moment.
The patches implement the base pass-through support. s390-specific
virtualization functions are currently not included; they would
be a second step after the base support is done.
kernel patches apply to linux-kvm-next
KVM: s390: Enable PCI instructions
iommu: add iommu for s390 platform
vfio: make vfio build on s390
qemu patches apply to qemu-master
s390: Add PCI bus support
s390: implement pci instruction
vfio: make vfio run on s390 platform
Thanks for feedback and review comments
Frank
^ permalink raw reply [flat|nested] 20+ messages in thread
* [Qemu-devel] [RFC patch 1/6] KVM: s390: Enable PCI instructions
2014-09-19 11:54 [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 frank.blaschka
@ 2014-09-19 11:54 ` frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 2/6] iommu: add iommu for s390 platform frank.blaschka
From: frank.blaschka @ 2014-09-19 11:54 UTC (permalink / raw)
To: qemu-devel, linux-s390, kvm; +Cc: pbonzini, alex.williamson, agraf
[-- Attachment #1: 005-s390_kvm_enable_pci.patch --]
[-- Type: text/plain, Size: 510 bytes --]
Enable PCI instructions for s390 KVM.
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
---
arch/s390/kvm/kvm-s390.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1787,7 +1787,7 @@ static int __init kvm_s390_init(void)
}
memcpy(vfacilities, S390_lowcore.stfle_fac_list, 16);
vfacilities[0] &= 0xff82fff3f4fc2000UL;
- vfacilities[1] &= 0x005c000000000000UL;
+ vfacilities[1] &= 0x07dc000000000000UL;
return 0;
}
* [Qemu-devel] [RFC patch 2/6] iommu: add iommu for s390 platform
2014-09-19 11:54 [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 1/6] KVM: s390: Enable PCI instructions frank.blaschka
@ 2014-09-19 11:54 ` frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 3/6] vfio: make vfio build on s390 frank.blaschka
From: frank.blaschka @ 2014-09-19 11:54 UTC (permalink / raw)
To: qemu-devel, linux-s390, kvm; +Cc: pbonzini, alex.williamson, agraf
[-- Attachment #1: 006-s390_iommu-3.16.patch --]
[-- Type: text/plain, Size: 7284 bytes --]
From: Frank Blaschka <frank.blaschka@de.ibm.com>
Add a basic iommu for the s390 platform. The code is pretty
simple since on s390 each PCI device has its own virtual I/O address
space, starting at the same virtual address. Because of this, a domain
can hold only one PCI device. Also, there is no relation between PCI
devices, so each device belongs to a separate iommu group.
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
---
arch/s390/include/asm/pci.h | 3
arch/s390/pci/pci_dma.c | 21 ++++-
drivers/iommu/Kconfig | 9 ++
drivers/iommu/Makefile | 1
drivers/iommu/s390-iommu.c | 181 ++++++++++++++++++++++++++++++++++++++++++++
5 files changed, 213 insertions(+), 2 deletions(-)
--- a/arch/s390/include/asm/pci.h
+++ b/arch/s390/include/asm/pci.h
@@ -177,6 +177,9 @@ struct zpci_dev *get_zdev_by_fid(u32);
/* DMA */
int zpci_dma_init(void);
void zpci_dma_exit(void);
+int dma_update_trans(struct zpci_dev *zdev, unsigned long pa,
+ dma_addr_t dma_addr, size_t size, int flags);
+void dma_purge_rto_entries(struct zpci_dev *zdev);
/* FMB */
int zpci_fmb_enable_device(struct zpci_dev *);
--- a/arch/s390/pci/pci_dma.c
+++ b/arch/s390/pci/pci_dma.c
@@ -139,8 +139,8 @@ static void dma_update_cpu_trans(struct
entry_clr_protected(entry);
}
-static int dma_update_trans(struct zpci_dev *zdev, unsigned long pa,
- dma_addr_t dma_addr, size_t size, int flags)
+int dma_update_trans(struct zpci_dev *zdev, unsigned long pa,
+ dma_addr_t dma_addr, size_t size, int flags)
{
unsigned int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
u8 *page_addr = (u8 *) (pa & PAGE_MASK);
@@ -180,6 +180,7 @@ no_refresh:
spin_unlock_irqrestore(&zdev->dma_table_lock, irq_flags);
return rc;
}
+EXPORT_SYMBOL_GPL(dma_update_trans);
static void dma_free_seg_table(unsigned long entry)
{
@@ -210,6 +211,22 @@ static void dma_cleanup_tables(struct zp
zdev->dma_table = NULL;
}
+void dma_purge_rto_entries(struct zpci_dev *zdev)
+{
+ unsigned long *table;
+ int rtx;
+
+ if (!zdev || !zdev->dma_table)
+ return;
+ table = zdev->dma_table;
+ for (rtx = 0; rtx < ZPCI_TABLE_ENTRIES; rtx++)
+ if (reg_entry_isvalid(table[rtx])) {
+ dma_free_seg_table(table[rtx]);
+ invalidate_table_entry(&table[rtx]);
+ }
+}
+EXPORT_SYMBOL_GPL(dma_purge_rto_entries);
+
static unsigned long __dma_alloc_iommu(struct zpci_dev *zdev,
unsigned long start, int size)
{
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -302,4 +302,13 @@ config ARM_SMMU
Say Y here if your SoC includes an IOMMU device implementing
the ARM SMMU architecture.
+config S390_IOMMU
+ bool "s390 IOMMU Support"
+ depends on S390
+ select IOMMU_API
+ help
+ Support for the IBM s/390 IOMMU
+
+ If unsure, say N here.
+
endif # IOMMU_SUPPORT
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -19,3 +19,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iom
obj-$(CONFIG_SHMOBILE_IOMMU) += shmobile-iommu.o
obj-$(CONFIG_SHMOBILE_IPMMU) += shmobile-ipmmu.o
obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
+obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
--- /dev/null
+++ b/drivers/iommu/s390-iommu.c
@@ -0,0 +1,181 @@
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/pm_runtime.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/iommu.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/memblock.h>
+#include <linux/export.h>
+#include <linux/pci.h>
+#include <linux/sizes.h>
+#include <asm/pci_dma.h>
+
+#define S390_IOMMU_PGSIZES SZ_4K
+
+struct s390_domain {
+ struct zpci_dev *zdev;
+};
+
+static int s390_iommu_domain_init(struct iommu_domain *domain)
+{
+ struct s390_domain *priv;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ domain->priv = priv;
+ return 0;
+}
+
+static void s390_iommu_domain_destroy(struct iommu_domain *domain)
+{
+ kfree(domain->priv);
+ domain->priv = NULL;
+}
+
+static int s390_iommu_attach_device(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct s390_domain *priv = domain->priv;
+
+ if (priv->zdev)
+ return -EEXIST;
+
+ priv->zdev = (struct zpci_dev *)to_pci_dev(dev)->sysdata;
+ return 0;
+}
+
+static void s390_iommu_detach_device(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct s390_domain *priv = domain->priv;
+
+ dma_purge_rto_entries(priv->zdev);
+ priv->zdev = NULL;
+}
+
+static int s390_iommu_map(struct iommu_domain *domain, unsigned long iova,
+ phys_addr_t paddr, size_t size, int prot)
+{
+ struct s390_domain *priv = domain->priv;
+ int flags = 0;
+ int rc;
+
+ if (!priv->zdev)
+ return -ENODEV;
+
+ /* if (read only) flags |= ZPCI_TABLE_PROTECTED; */
+ rc = dma_update_trans(priv->zdev, (unsigned long)paddr, iova, size,
+ flags);
+
+ return rc;
+}
+
+static phys_addr_t s390_iommu_iova_to_phys(struct iommu_domain *domain,
+ dma_addr_t iova)
+{
+ struct s390_domain *priv = domain->priv;
+ phys_addr_t phys = 0;
+ unsigned long *sto, *pto, *rto;
+ unsigned int rtx, sx, px;
+
+ if (!priv->zdev)
+ return -ENODEV;
+
+ rtx = calc_rtx(iova);
+ sx = calc_sx(iova);
+ px = calc_px(iova);
+ rto = priv->zdev->dma_table;
+
+ if (reg_entry_isvalid(rto[rtx])) {
+ sto = get_rt_sto(rto[rtx]);
+ if (reg_entry_isvalid(sto[sx])) {
+ pto = get_st_pto(sto[sx]);
+ if ((pto[px] & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID)
+ phys = pto[px] & ZPCI_PTE_ADDR_MASK;
+ }
+ }
+
+ return phys;
+}
+
+static size_t s390_iommu_unmap(struct iommu_domain *domain,
+ unsigned long iova, size_t size)
+{
+ struct s390_domain *priv = domain->priv;
+ int flags = ZPCI_PTE_INVALID;
+ phys_addr_t paddr;
+ int rc;
+
+ if (!priv->zdev)
+ goto out;
+
+ paddr = s390_iommu_iova_to_phys(domain, iova);
+ if (!paddr)
+ goto out;
+
+ rc = dma_update_trans(priv->zdev, (unsigned long)paddr, iova, size,
+ flags);
+out:
+ return size;
+}
+
+static int s390_iommu_domain_has_cap(struct iommu_domain *domain,
+ unsigned long cap)
+{
+ switch (cap) {
+ case IOMMU_CAP_CACHE_COHERENCY:
+ return 1;
+ case IOMMU_CAP_INTR_REMAP:
+ return 1;
+ }
+
+ return 0;
+}
+
+static int s390_iommu_add_device(struct device *dev)
+{
+ struct iommu_group *group;
+ int ret;
+
+ group = iommu_group_alloc();
+ if (IS_ERR(group)) {
+ dev_err(dev, "Failed to allocate IOMMU group\n");
+ return PTR_ERR(group);
+ }
+
+ ret = iommu_group_add_device(group, dev);
+ return ret;
+}
+
+static void s390_iommu_remove_device(struct device *dev)
+{
+ iommu_group_remove_device(dev);
+}
+
+static struct iommu_ops s390_iommu_ops = {
+ .domain_init = s390_iommu_domain_init,
+ .domain_destroy = s390_iommu_domain_destroy,
+ .attach_dev = s390_iommu_attach_device,
+ .detach_dev = s390_iommu_detach_device,
+ .map = s390_iommu_map,
+ .unmap = s390_iommu_unmap,
+ .iova_to_phys = s390_iommu_iova_to_phys,
+ .domain_has_cap = s390_iommu_domain_has_cap,
+ .add_device = s390_iommu_add_device,
+ .remove_device = s390_iommu_remove_device,
+ .pgsize_bitmap = S390_IOMMU_PGSIZES,
+};
+
+static int __init s390_iommu_init(void)
+{
+ bus_set_iommu(&pci_bus_type, &s390_iommu_ops);
+ return 0;
+}
+subsys_initcall(s390_iommu_init);
* [Qemu-devel] [RFC patch 3/6] vfio: make vfio build on s390
2014-09-19 11:54 [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 1/6] KVM: s390: Enable PCI instructions frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 2/6] iommu: add iommu for s390 platform frank.blaschka
@ 2014-09-19 11:54 ` frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 4/6] s390: Add PCI bus support frank.blaschka
From: frank.blaschka @ 2014-09-19 11:54 UTC (permalink / raw)
To: qemu-devel, linux-s390, kvm; +Cc: pbonzini, alex.williamson, agraf
[-- Attachment #1: 007-s390_vfio-3.16.patch --]
[-- Type: text/plain, Size: 1286 bytes --]
From: Frank Blaschka <frank.blaschka@de.ibm.com>
This patch adds some small changes to make vfio build on s390.
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
---
drivers/vfio/Kconfig | 2 +-
drivers/vfio/pci/vfio_pci_rdwr.c | 8 ++++++++
2 files changed, 9 insertions(+), 1 deletion(-)
--- a/drivers/vfio/Kconfig
+++ b/drivers/vfio/Kconfig
@@ -16,7 +16,7 @@ config VFIO_SPAPR_EEH
menuconfig VFIO
tristate "VFIO Non-Privileged userspace driver framework"
depends on IOMMU_API
- select VFIO_IOMMU_TYPE1 if X86
+ select VFIO_IOMMU_TYPE1 if (X86 || S390)
select VFIO_IOMMU_SPAPR_TCE if (PPC_POWERNV || PPC_PSERIES)
select VFIO_SPAPR_EEH if (PPC_POWERNV || PPC_PSERIES)
select ANON_INODES
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -177,6 +177,13 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_
return done;
}
+#ifdef CONFIG_NO_IOPORT_MAP
+ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite)
+{
+ return -EINVAL;
+}
+#else
ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
size_t count, loff_t *ppos, bool iswrite)
{
@@ -236,3 +243,4 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_
return done;
}
+#endif
* [Qemu-devel] [RFC patch 4/6] s390: Add PCI bus support
2014-09-19 11:54 [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 3/6] vfio: make vfio build on s390 frank.blaschka
@ 2014-09-19 11:54 ` frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 5/6] s390: implement pci instruction frank.blaschka
From: frank.blaschka @ 2014-09-19 11:54 UTC (permalink / raw)
To: qemu-devel, linux-s390, kvm; +Cc: pbonzini, alex.williamson, agraf
[-- Attachment #1: 101-qemu_bus.patch --]
[-- Type: text/plain, Size: 22433 bytes --]
From: Frank Blaschka <frank.blaschka@de.ibm.com>
This patch implements a PCI bus for s390x together with some infrastructure
to generate and handle hotplug events. It also provides device
configuration/deconfiguration via SCLP instruction interception.
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
---
default-configs/s390x-softmmu.mak | 1
hw/s390x/Makefile.objs | 1
hw/s390x/css.c | 5
hw/s390x/css.h | 1
hw/s390x/s390-pci-bus.c | 404 ++++++++++++++++++++++++++++++++++++++
hw/s390x/s390-pci-bus.h | 166 +++++++++++++++
hw/s390x/s390-virtio-ccw.c | 2
hw/s390x/sclp.c | 10
include/hw/s390x/sclp.h | 8
target-s390x/ioinst.c | 52 ++++
target-s390x/ioinst.h | 1
11 files changed, 650 insertions(+), 1 deletion(-)
--- a/default-configs/s390x-softmmu.mak
+++ b/default-configs/s390x-softmmu.mak
@@ -1,3 +1,4 @@
+include pci.mak
CONFIG_VIRTIO=y
CONFIG_SCLPCONSOLE=y
CONFIG_S390_FLIC=y
--- a/hw/s390x/Makefile.objs
+++ b/hw/s390x/Makefile.objs
@@ -8,3 +8,4 @@ obj-y += ipl.o
obj-y += css.o
obj-y += s390-virtio-ccw.o
obj-y += virtio-ccw.o
+obj-$(CONFIG_KVM) += s390-pci-bus.o
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -1281,6 +1281,11 @@ void css_generate_chp_crws(uint8_t cssid
/* TODO */
}
+void css_generate_css_crws(uint8_t cssid)
+{
+ css_queue_crw(CRW_RSC_CSS, 0, 0, 0);
+}
+
int css_enable_mcsse(void)
{
trace_css_enable_facility("mcsse");
--- a/hw/s390x/css.h
+++ b/hw/s390x/css.h
@@ -99,6 +99,7 @@ void css_queue_crw(uint8_t rsc, uint8_t
void css_generate_sch_crws(uint8_t cssid, uint8_t ssid, uint16_t schid,
int hotplugged, int add);
void css_generate_chp_crws(uint8_t cssid, uint8_t chpid);
+void css_generate_css_crws(uint8_t cssid);
void css_adapter_interrupt(uint8_t isc);
#define CSS_IO_ADAPTER_VIRTIO 1
--- /dev/null
+++ b/hw/s390x/s390-pci-bus.c
@@ -0,0 +1,404 @@
+/*
+ * s390 PCI BUS
+ *
+ * Copyright 2014 IBM Corp.
+ * Author(s): Frank Blaschka <frank.blaschka@de.ibm.com>
+ * Hong Bo Li <lihbbj@cn.ibm.com>
+ * Yi Min Zhao <zyimin@cn.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#include <hw/pci/pci.h>
+#include <hw/s390x/css.h>
+#include <hw/s390x/sclp.h>
+#include <hw/pci/msi.h>
+#include "qemu/error-report.h"
+#include "s390-pci-bus.h"
+
+/* #define DEBUG_S390PCI_BUS */
+#ifdef DEBUG_S390PCI_BUS
+#define DPRINTF(fmt, ...) \
+ do { fprintf(stderr, "S390pci-bus: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+ do { } while (0)
+#endif
+
+static const unsigned long be_to_le = BITS_PER_LONG - 1;
+static QTAILQ_HEAD(, SeiContainer) pending_sei =
+ QTAILQ_HEAD_INITIALIZER(pending_sei);
+static QTAILQ_HEAD(, S390PCIBusDevice) device_list =
+ QTAILQ_HEAD_INITIALIZER(device_list);
+
+int chsc_sei_nt2_get_event(void *res)
+{
+ ChscSeiNt2Res *nt2_res = (ChscSeiNt2Res *)res;
+ PciCcdfAvail *accdf;
+ PciCcdfErr *eccdf;
+ int rc = 1;
+ SeiContainer *sei_cont;
+
+ sei_cont = QTAILQ_FIRST(&pending_sei);
+ if (sei_cont) {
+ QTAILQ_REMOVE(&pending_sei, sei_cont, link);
+ nt2_res->nt = 2;
+ nt2_res->cc = sei_cont->cc;
+ switch (sei_cont->cc) {
+ case 1: /* error event */
+ eccdf = (PciCcdfErr *)nt2_res->ccdf;
+ eccdf->fid = cpu_to_be32(sei_cont->fid);
+ eccdf->fh = cpu_to_be32(sei_cont->fh);
+ break;
+ case 2: /* availability event */
+ accdf = (PciCcdfAvail *)nt2_res->ccdf;
+ accdf->fid = cpu_to_be32(sei_cont->fid);
+ accdf->fh = cpu_to_be32(sei_cont->fh);
+ accdf->pec = cpu_to_be16(sei_cont->pec);
+ break;
+ default:
+ abort();
+ }
+ g_free(sei_cont);
+ rc = 0;
+ }
+
+ return rc;
+}
+
+int chsc_sei_nt2_have_event(void)
+{
+ return !QTAILQ_EMPTY(&pending_sei);
+}
+
+static S390PCIBusDevice *s390_pci_find_dev_by_fid(uint32_t fid)
+{
+ S390PCIBusDevice *pbdev;
+
+ QTAILQ_FOREACH(pbdev, &device_list, next) {
+ if (pbdev->fid == fid) {
+ return pbdev;
+ }
+ }
+ return NULL;
+}
+
+void s390_pci_sclp_configure(int configure, SCCB *sccb)
+{
+ PciCfgSccb *psccb = (PciCfgSccb *)sccb;
+ S390PCIBusDevice *pbdev = s390_pci_find_dev_by_fid(be32_to_cpu(psccb->aid));
+ uint16_t rc;
+
+ if (pbdev) {
+ if ((configure == 1 && pbdev->configured == true) ||
+ (configure == 0 && pbdev->configured == false)) {
+ rc = SCLP_RC_NO_ACTION_REQUIRED;
+ } else {
+ pbdev->configured = !pbdev->configured;
+ rc = SCLP_RC_NORMAL_COMPLETION;
+ }
+ } else {
+ DPRINTF("sclp config %d no dev found\n", configure);
+ rc = SCLP_RC_ADAPTER_ID_NOT_RECOGNIZED;
+ }
+
+ psccb->header.response_code = cpu_to_be16(rc);
+ return;
+}
+
+static uint32_t s390_pci_get_pfid(PCIDevice *pdev)
+{
+ return PCI_SLOT(pdev->devfn);
+}
+
+static uint32_t s390_pci_get_pfh(PCIDevice *pdev)
+{
+ return PCI_SLOT(pdev->devfn) | FH_VIRT;
+}
+
+S390PCIBusDevice *s390_pci_find_dev_by_idx(uint32_t idx)
+{
+ S390PCIBusDevice *dev;
+ int i = 0;
+
+ QTAILQ_FOREACH(dev, &device_list, next) {
+ if (i == idx) {
+ return dev;
+ }
+ i++;
+ }
+ return NULL;
+}
+
+S390PCIBusDevice *s390_pci_find_dev_by_fh(uint32_t fh)
+{
+ S390PCIBusDevice *pbdev;
+
+ QTAILQ_FOREACH(pbdev, &device_list, next) {
+ if (pbdev->fh == fh) {
+ return pbdev;
+ }
+ }
+ return NULL;
+}
+
+static S390PCIBusDevice *s390_pci_find_dev_by_pdev(PCIDevice *pdev)
+{
+ S390PCIBusDevice *pbdev;
+
+ QTAILQ_FOREACH(pbdev, &device_list, next) {
+ if (pbdev->pdev == pdev) {
+ return pbdev;
+ }
+ }
+ return NULL;
+}
+
+void s390_msix_notify(PCIDevice *dev, unsigned vector)
+{
+ S390PCIBusDevice *pbdev;
+ unsigned long *aibv, *aisb;
+ int summary_set;
+ hwaddr aibv_len, aisb_len;
+ uint32_t io_int_word;
+
+ pbdev = s390_pci_find_dev_by_pdev(dev);
+ if (!pbdev) {
+ DPRINTF("msix_notify no dev\n");
+ return;
+ }
+ aibv_len = aisb_len = 8;
+ aibv = cpu_physical_memory_map(pbdev->routes.adapter.ind_addr,
+ &aibv_len, 1);
+ aisb = cpu_physical_memory_map(pbdev->routes.adapter.summary_addr,
+ &aisb_len, 1);
+
+ set_bit(vector ^ be_to_le, aibv);
+ summary_set = test_and_set_bit(pbdev->routes.adapter.summary_offset
+ ^ be_to_le, aisb);
+
+ if (!summary_set) {
+ io_int_word = (pbdev->isc << 27) | IO_INT_WORD_AI;
+ s390_io_interrupt(0, 0, 0, io_int_word);
+ }
+
+ cpu_physical_memory_unmap(aibv, aibv_len, 1, aibv_len);
+ cpu_physical_memory_unmap(aisb, aisb_len, 1, aisb_len);
+}
+
+int s390_irqchip_add_msi_route(PCIDevice *pdev, KVMState *s, MSIMessage msg)
+{
+ S390PCIBusDevice *pbdev;
+ int virq;
+
+ pbdev = s390_pci_find_dev_by_pdev(pdev);
+ if (!pbdev) {
+ DPRINTF("s390_add_msi_virq no dev\n");
+ return -ENODEV;
+ }
+
+ pbdev->routes.adapter.ind_offset = msg.data;
+
+ virq = kvm_irqchip_add_adapter_route(s, &pbdev->routes.adapter);
+
+ return virq;
+}
+
+static void s390_pci_generate_plug_event(uint16_t pec, uint32_t fh,
+ uint32_t fid)
+{
+ SeiContainer *sei_cont = g_malloc0(sizeof(SeiContainer));
+
+ sei_cont->fh = fh;
+ sei_cont->fid = fid;
+ sei_cont->cc = 2;
+ sei_cont->pec = pec;
+
+ QTAILQ_INSERT_TAIL(&pending_sei, sei_cont, link);
+ css_generate_css_crws(0);
+}
+
+static void s390_pci_set_irq(void *opaque, int irq, int level)
+{
+ /* nothing to do */
+}
+
+static int s390_pci_map_irq(PCIDevice *pci_dev, int irq_num)
+{
+ /* nothing to do */
+ return 0;
+}
+
+void s390_pci_bus_init(void)
+{
+ DeviceState *dev;
+
+ dev = qdev_create(NULL, TYPE_S390_PCI_HOST_BRIDGE);
+ qdev_init_nofail(dev);
+}
+
+static IOMMUTLBEntry s390_translate_iommu(MemoryRegion *iommu, hwaddr addr,
+ bool is_write)
+{
+ IOMMUTLBEntry ret;
+
+ /* implement this when we need it */
+ assert(0);
+ return ret;
+}
+
+static const MemoryRegionIOMMUOps s390_iommu_ops = {
+ .translate = s390_translate_iommu,
+};
+
+static AddressSpace *s390_pci_dma_iommu(PCIBus *bus, void *opaque, int devfn)
+{
+ S390pciState *s = opaque;
+
+ return &s->iommu[PCI_SLOT(devfn)].as;
+}
+
+static void s390_pcihost_init_iommu(S390pciState *s)
+{
+ int i;
+
+ for (i = 0; i < PCI_SLOT_MAX; i++) {
+ memory_region_init_iommu(&s->iommu[i].mr, OBJECT(s),
+ &s390_iommu_ops, "iommu-s390", UINT64_MAX);
+ address_space_init(&s->iommu[i].as, &s->iommu[i].mr, "iommu-pci");
+ }
+}
+
+static int s390_pcihost_init(SysBusDevice *dev)
+{
+ PCIBus *b;
+ BusState *bus;
+ PCIHostState *phb = PCI_HOST_BRIDGE(dev);
+ S390pciState *s = S390_PCI_HOST_BRIDGE(dev);
+
+ DPRINTF("host_init\n");
+
+ b = pci_register_bus(DEVICE(dev), NULL,
+ s390_pci_set_irq, s390_pci_map_irq, NULL,
+ get_system_memory(), get_system_io(), 0, 64,
+ TYPE_PCI_BUS);
+ s390_pcihost_init_iommu(s);
+ pci_setup_iommu(b, s390_pci_dma_iommu, s);
+
+ bus = BUS(b);
+ qbus_set_hotplug_handler(bus, DEVICE(dev), NULL);
+ phb->bus = b;
+
+ return 0;
+}
+
+static int s390_pcihost_setup_msix(S390PCIBusDevice *pbdev)
+{
+ uint8_t pos;
+ uint16_t ctrl;
+ uint32_t table, pba;
+
+ pos = pci_find_capability(pbdev->pdev, PCI_CAP_ID_MSIX);
+ if (!pos) {
+ return 0;
+ }
+
+ ctrl = pci_host_config_read_common(pbdev->pdev, pos + PCI_CAP_FLAGS,
+ pci_config_size(pbdev->pdev), sizeof(ctrl));
+ table = pci_host_config_read_common(pbdev->pdev, pos + PCI_MSIX_TABLE,
+ pci_config_size(pbdev->pdev), sizeof(table));
+ pba = pci_host_config_read_common(pbdev->pdev, pos + PCI_MSIX_PBA,
+ pci_config_size(pbdev->pdev), sizeof(pba));
+
+ pbdev->msix_table_bar = table & PCI_MSIX_FLAGS_BIRMASK;
+ pbdev->msix_table_offset = table & ~PCI_MSIX_FLAGS_BIRMASK;
+ pbdev->msix_pba_bar = pba & PCI_MSIX_FLAGS_BIRMASK;
+ pbdev->msix_pba_offset = pba & ~PCI_MSIX_FLAGS_BIRMASK;
+ pbdev->msix_entries = (ctrl & PCI_MSIX_FLAGS_QSIZE) + 1;
+
+ return 0;
+}
+
+static void s390_pcihost_hot_plug(HotplugHandler *hotplug_dev,
+ DeviceState *dev, Error **errp)
+{
+ PCIDevice *pci_dev = PCI_DEVICE(dev);
+ S390PCIBusDevice *pbdev;
+
+ pbdev = g_malloc0(sizeof(*pbdev));
+
+ pbdev->fid = s390_pci_get_pfid(pci_dev);
+ pbdev->pdev = pci_dev;
+ pbdev->configured = true;
+
+ pbdev->fh = s390_pci_get_pfh(pci_dev);
+ pbdev->is_virt = 1;
+
+ s390_pcihost_setup_msix(pbdev);
+
+ QTAILQ_INSERT_TAIL(&device_list, pbdev, next);
+ if (dev->hotplugged) {
+ s390_pci_generate_plug_event(HP_EVENT_RESERVED_TO_STANDBY,
+ pbdev->fh, pbdev->fid);
+ s390_pci_generate_plug_event(HP_EVENT_TO_CONFIGURED,
+ pbdev->fh, pbdev->fid);
+ }
+ return;
+}
+
+static void s390_pcihost_hot_unplug(HotplugHandler *hotplug_dev,
+ DeviceState *dev, Error **errp)
+{
+ PCIDevice *pci_dev = PCI_DEVICE(dev);
+ S390PCIBusDevice *pbdev;
+
+ pbdev = s390_pci_find_dev_by_pdev(pci_dev);
+ if (!pbdev) {
+ DPRINTF("Error, can't find hot-unplug device in list\n");
+ return;
+ }
+
+ if (pbdev->configured) {
+ pbdev->configured = false;
+ s390_pci_generate_plug_event(HP_EVENT_CONFIGURED_TO_STBRES,
+ pbdev->fh, pbdev->fid);
+ }
+
+ QTAILQ_REMOVE(&device_list, pbdev, next);
+ s390_pci_generate_plug_event(HP_EVENT_STANDBY_TO_RESERVED, 0, 0);
+ object_unparent(OBJECT(pci_dev));
+ g_free(pbdev);
+}
+
+static void s390_pcihost_class_init(ObjectClass *klass, void *data)
+{
+ SysBusDeviceClass *k = SYS_BUS_DEVICE_CLASS(klass);
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ HotplugHandlerClass *hc = HOTPLUG_HANDLER_CLASS(klass);
+
+ dc->cannot_instantiate_with_device_add_yet = true;
+ k->init = s390_pcihost_init;
+ hc->plug = s390_pcihost_hot_plug;
+ hc->unplug = s390_pcihost_hot_unplug;
+ msi_supported = true;
+}
+
+static const TypeInfo s390_pcihost_info = {
+ .name = TYPE_S390_PCI_HOST_BRIDGE,
+ .parent = TYPE_PCI_HOST_BRIDGE,
+ .instance_size = sizeof(S390pciState),
+ .class_init = s390_pcihost_class_init,
+ .interfaces = (InterfaceInfo[]) {
+ { TYPE_HOTPLUG_HANDLER },
+ { }
+ }
+};
+
+static void s390_pci_register_types(void)
+{
+ type_register_static(&s390_pcihost_info);
+}
+
+type_init(s390_pci_register_types)
--- /dev/null
+++ b/hw/s390x/s390-pci-bus.h
@@ -0,0 +1,166 @@
+/*
+ * s390 PCI BUS definitions
+ *
+ * Copyright 2014 IBM Corp.
+ * Author(s): Frank Blaschka <frank.blaschka@de.ibm.com>
+ * Hong Bo Li <lihbbj@cn.ibm.com>
+ * Yi Min Zhao <zyimin@cn.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#ifndef HW_S390_PCI_BUS_H
+#define HW_S390_PCI_BUS_H
+
+#include <hw/pci/pci.h>
+#include <hw/pci/pci_host.h>
+#include "hw/s390x/sclp.h"
+#include "hw/s390x/s390_flic.h"
+#include "hw/s390x/css.h"
+
+#define TYPE_S390_PCI_HOST_BRIDGE "s390-pcihost"
+#define FH_VIRT 0x00ff0000
+#define ENABLE_BIT_OFFSET 31
+#define S390_PCIPT_ADAPTER 2
+
+#define S390_PCI_HOST_BRIDGE(obj) \
+ OBJECT_CHECK(S390pciState, (obj), TYPE_S390_PCI_HOST_BRIDGE)
+
+#define HP_EVENT_TO_CONFIGURED 0x0301
+#define HP_EVENT_RESERVED_TO_STANDBY 0x0302
+#define HP_EVENT_CONFIGURED_TO_STBRES 0x0304
+#define HP_EVENT_STANDBY_TO_RESERVED 0x0308
+
+typedef struct SeiContainer {
+ QTAILQ_ENTRY(SeiContainer) link;
+ uint32_t fid;
+ uint32_t fh;
+ uint8_t cc;
+ uint16_t pec;
+} SeiContainer;
+
+typedef struct PciCcdfErr {
+ uint32_t reserved1;
+ uint32_t fh;
+ uint32_t fid;
+ uint32_t reserved2;
+ uint64_t faddr;
+ uint32_t reserved3;
+ uint16_t reserved4;
+ uint16_t pec;
+} QEMU_PACKED PciCcdfErr;
+
+typedef struct PciCcdfAvail {
+ uint32_t reserved1;
+ uint32_t fh;
+ uint32_t fid;
+ uint32_t reserved2;
+ uint32_t reserved3;
+ uint32_t reserved4;
+ uint32_t reserved5;
+ uint16_t reserved6;
+ uint16_t pec;
+} QEMU_PACKED PciCcdfAvail;
+
+typedef struct ChscSeiNt2Res {
+ uint16_t length;
+ uint16_t code;
+ uint16_t reserved1;
+ uint8_t reserved2;
+ uint8_t nt;
+ uint8_t flags;
+ uint8_t reserved3;
+ uint8_t reserved4;
+ uint8_t cc;
+ uint32_t reserved5[13];
+ uint8_t ccdf[4016];
+} QEMU_PACKED ChscSeiNt2Res;
+
+typedef struct PciCfgSccb {
+ SCCBHeader header;
+ uint8_t atype;
+ uint8_t reserved1;
+ uint16_t reserved2;
+ uint32_t aid;
+} QEMU_PACKED PciCfgSccb;
+
+typedef struct S390pciState {
+ PCIHostState parent_obj;
+ struct {
+ AddressSpace as;
+ MemoryRegion mr;
+ } iommu[PCI_SLOT_MAX];
+} S390pciState;
+
+typedef struct S390PCIBusDevice {
+ PCIDevice *pdev;
+ bool is_virt;
+ bool configured;
+ uint32_t fh;
+ uint32_t fid;
+ uint64_t g_iota;
+ uint8_t isc;
+ uint8_t msix_table_bar;
+ uint8_t msix_pba_bar;
+ uint16_t msix_entries;
+ uint32_t msix_table_offset;
+ uint32_t msix_pba_offset;
+ AdapterRoutes routes;
+ QTAILQ_ENTRY(S390PCIBusDevice) next;
+} S390PCIBusDevice;
+
+#ifdef CONFIG_KVM
+void s390_msix_notify(PCIDevice *dev, unsigned vector);
+int s390_irqchip_add_msi_route(PCIDevice *pdev, KVMState *s, MSIMessage msg);
+int chsc_sei_nt2_get_event(void *res);
+int chsc_sei_nt2_have_event(void);
+void s390_pci_sclp_configure(int configure, SCCB *sccb);
+S390PCIBusDevice *s390_pci_find_dev_by_idx(uint32_t idx);
+S390PCIBusDevice *s390_pci_find_dev_by_fh(uint32_t fh);
+void s390_pci_bus_init(void);
+#else
+static inline void s390_msix_notify(PCIDevice *dev, unsigned vector)
+{
+ return;
+}
+
+static inline int s390_irqchip_add_msi_route(PCIDevice *pdev, KVMState *s,
+ MSIMessage msg)
+{
+ return 0;
+}
+
+static inline int chsc_sei_nt2_get_event(void *res)
+{
+ return 1;
+}
+
+static inline int chsc_sei_nt2_have_event(void)
+{
+ return 0;
+}
+
+static inline void s390_pci_sclp_configure(int configure, SCCB *sccb)
+{
+ return;
+}
+
+static inline S390PCIBusDevice *s390_pci_find_dev_by_idx(uint32_t idx)
+{
+ return NULL;
+}
+
+static inline S390PCIBusDevice *s390_pci_find_dev_by_fh(uint32_t fh)
+{
+ return NULL;
+}
+
+static inline void s390_pci_bus_init(void)
+{
+ return;
+}
+#endif
+
+#endif
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -18,6 +18,7 @@
#include "css.h"
#include "virtio-ccw.h"
#include "qemu/config-file.h"
+#include "s390-pci-bus.h"
#define TYPE_S390_CCW_MACHINE "s390-ccw-machine"
@@ -126,6 +127,7 @@ static void ccw_init(MachineState *machi
s390_init_ipl_dev(machine->kernel_filename, machine->kernel_cmdline,
machine->initrd_filename, "s390-ccw.img");
s390_flic_init();
+ s390_pci_bus_init();
/* register hypercalls */
virtio_ccw_register_hcalls();
--- a/hw/s390x/sclp.c
+++ b/hw/s390x/sclp.c
@@ -20,6 +20,7 @@
#include "qemu/config-file.h"
#include "hw/s390x/sclp.h"
#include "hw/s390x/event-facility.h"
+#include "hw/s390x/s390-pci-bus.h"
static inline SCLPEventFacility *get_event_facility(void)
{
@@ -62,7 +63,8 @@ static void read_SCP_info(SCCB *sccb)
read_info->entries[i].type = 0;
}
- read_info->facilities = cpu_to_be64(SCLP_HAS_CPU_INFO);
+ read_info->facilities = cpu_to_be64(SCLP_HAS_CPU_INFO |
+ SCLP_HAS_PCI_RECONFIG);
/*
* The storage increment size is a multiple of 1M and is a power of 2.
@@ -350,6 +352,12 @@ static void sclp_execute(SCCB *sccb, uin
case SCLP_UNASSIGN_STORAGE:
unassign_storage(sccb);
break;
+ case SCLP_CMDW_CONFIGURE_PCI:
+ s390_pci_sclp_configure(1, sccb);
+ break;
+ case SCLP_CMDW_DECONFIGURE_PCI:
+ s390_pci_sclp_configure(0, sccb);
+ break;
default:
efc->command_handler(ef, sccb, code);
break;
--- a/include/hw/s390x/sclp.h
+++ b/include/hw/s390x/sclp.h
@@ -45,14 +45,22 @@
#define SCLP_CMDW_CONFIGURE_CPU 0x00110001
#define SCLP_CMDW_DECONFIGURE_CPU 0x00100001
+/* SCLP PCI codes */
+#define SCLP_HAS_PCI_RECONFIG 0x0000000040000000ULL
+#define SCLP_CMDW_CONFIGURE_PCI 0x001a0001
+#define SCLP_CMDW_DECONFIGURE_PCI 0x001b0001
+#define SCLP_RECONFIG_PCI_ATPYE 2
+
/* SCLP response codes */
#define SCLP_RC_NORMAL_READ_COMPLETION 0x0010
#define SCLP_RC_NORMAL_COMPLETION 0x0020
#define SCLP_RC_SCCB_BOUNDARY_VIOLATION 0x0100
+#define SCLP_RC_NO_ACTION_REQUIRED 0x0120
#define SCLP_RC_INVALID_SCLP_COMMAND 0x01f0
#define SCLP_RC_CONTAINED_EQUIPMENT_CHECK 0x0340
#define SCLP_RC_INSUFFICIENT_SCCB_LENGTH 0x0300
#define SCLP_RC_STANDBY_READ_COMPLETION 0x0410
+#define SCLP_RC_ADAPTER_ID_NOT_RECOGNIZED 0x09f0
#define SCLP_RC_INVALID_FUNCTION 0x40f0
#define SCLP_RC_NO_EVENT_BUFFERS_STORED 0x60f0
#define SCLP_RC_INVALID_SELECTION_MASK 0x70f0
--- a/target-s390x/ioinst.c
+++ b/target-s390x/ioinst.c
@@ -14,6 +14,7 @@
#include "cpu.h"
#include "ioinst.h"
#include "trace.h"
+#include "hw/s390x/s390-pci-bus.h"
int ioinst_disassemble_sch_ident(uint32_t value, int *m, int *cssid, int *ssid,
int *schid)
@@ -398,6 +399,7 @@ typedef struct ChscResp {
#define CHSC_SCPD 0x0002
#define CHSC_SCSC 0x0010
#define CHSC_SDA 0x0031
+#define CHSC_SEI 0x000e
#define CHSC_SCPD_0_M 0x20000000
#define CHSC_SCPD_0_C 0x10000000
@@ -566,6 +568,53 @@ out:
res->param = 0;
}
+static int chsc_sei_nt0_get_event(void *res)
+{
+ /* no events yet */
+ return 1;
+}
+
+static int chsc_sei_nt0_have_event(void)
+{
+ /* no events yet */
+ return 0;
+}
+
+#define CHSC_SEI_NT0 (1ULL << 63)
+#define CHSC_SEI_NT2 (1ULL << 61)
+static void ioinst_handle_chsc_sei(ChscReq *req, ChscResp *res)
+{
+ uint64_t selection_mask = be64_to_cpu(*(uint64_t *)&req->param1);
+ uint8_t *res_flags = (uint8_t *)res->data;
+ int have_event = 0;
+ int have_more = 0;
+
+ /* according to the architecture, nt0 cannot be masked */
+ have_event = !chsc_sei_nt0_get_event(res);
+ have_more = chsc_sei_nt0_have_event();
+
+ if (selection_mask & CHSC_SEI_NT2) {
+ if (!have_event) {
+ have_event = !chsc_sei_nt2_get_event(res);
+ }
+
+ if (!have_more) {
+ have_more = chsc_sei_nt2_have_event();
+ }
+ }
+
+ if (have_event) {
+ res->code = cpu_to_be16(0x0001);
+ if (have_more) {
+ (*res_flags) |= 0x80;
+ } else {
+ (*res_flags) &= ~0x80;
+ }
+ } else {
+ res->code = cpu_to_be16(0x0004);
+ }
+}
+
static void ioinst_handle_chsc_unimplemented(ChscResp *res)
{
res->len = cpu_to_be16(CHSC_MIN_RESP_LEN);
@@ -617,6 +666,9 @@ void ioinst_handle_chsc(S390CPU *cpu, ui
case CHSC_SDA:
ioinst_handle_chsc_sda(req, res);
break;
+ case CHSC_SEI:
+ ioinst_handle_chsc_sei(req, res);
+ break;
default:
ioinst_handle_chsc_unimplemented(res);
break;
--- a/target-s390x/ioinst.h
+++ b/target-s390x/ioinst.h
@@ -194,6 +194,7 @@ typedef struct CRW {
#define CRW_RSC_SUBCH 0x3
#define CRW_RSC_CHP 0x4
+#define CRW_RSC_CSS 0xb
/* I/O interruption code */
typedef struct IOIntCode {
^ permalink raw reply [flat|nested] 20+ messages in thread
* [Qemu-devel] [RFC patch 5/6] s390: implement pci instruction
2014-09-19 11:54 [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 frank.blaschka
` (3 preceding siblings ...)
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 4/6] s390: Add PCI bus support frank.blaschka
@ 2014-09-19 11:54 ` frank.blaschka
2014-09-19 15:12 ` Thomas Huth
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 6/6] vfio: make vfio run on s390 platform frank.blaschka
2014-09-22 20:47 ` [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 Alex Williamson
6 siblings, 1 reply; 20+ messages in thread
From: frank.blaschka @ 2014-09-19 11:54 UTC (permalink / raw)
To: qemu-devel, linux-s390, kvm; +Cc: pbonzini, alex.williamson, agraf
[-- Attachment #1: 102-qemu_pci_ic.patch --]
[-- Type: text/plain, Size: 35996 bytes --]
From: Frank Blaschka <frank.blaschka@de.ibm.com>
This patch implements the s390 pci instructions in qemu. This allows
attaching qemu pci devices, including vfio devices. This does not mean
the devices are fully functional yet, but at least detection and
config/memory space access are working.
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
---
target-s390x/Makefile.objs | 2
target-s390x/kvm.c | 52 +++
target-s390x/pci_ic.c | 621 +++++++++++++++++++++++++++++++++++++++++++++
target-s390x/pci_ic.h | 425 ++++++++++++++++++++++++++++++
4 files changed, 1099 insertions(+), 1 deletion(-)
--- a/target-s390x/Makefile.objs
+++ b/target-s390x/Makefile.objs
@@ -2,4 +2,4 @@ obj-y += translate.o helper.o cpu.o inte
obj-y += int_helper.o fpu_helper.o cc_helper.o mem_helper.o misc_helper.o
obj-y += gdbstub.o
obj-$(CONFIG_SOFTMMU) += ioinst.o arch_dump.o
-obj-$(CONFIG_KVM) += kvm.o
+obj-$(CONFIG_KVM) += kvm.o pci_ic.o
--- a/target-s390x/kvm.c
+++ b/target-s390x/kvm.c
@@ -40,6 +40,7 @@
#include "exec/gdbstub.h"
#include "trace.h"
#include "qapi-event.h"
+#include "pci_ic.h"
/* #define DEBUG_KVM */
@@ -56,6 +57,7 @@
#define IPA0_B2 0xb200
#define IPA0_B9 0xb900
#define IPA0_EB 0xeb00
+#define IPA0_E3 0xe300
#define PRIV_B2_SCLP_CALL 0x20
#define PRIV_B2_CSCH 0x30
@@ -76,8 +78,17 @@
#define PRIV_B2_XSCH 0x76
#define PRIV_EB_SQBS 0x8a
+#define PRIV_EB_PCISTB 0xd0
+#define PRIV_EB_SIC 0xd1
#define PRIV_B9_EQBS 0x9c
+#define PRIV_B9_CLP 0xa0
+#define PRIV_B9_PCISTG 0xd0
+#define PRIV_B9_PCILG 0xd2
+#define PRIV_B9_RPCIT 0xd3
+
+#define PRIV_E3_MPCIFC 0xd0
+#define PRIV_E3_STPCIFC 0xd4
#define DIAG_IPL 0x308
#define DIAG_KVM_HYPERCALL 0x500
@@ -813,6 +824,18 @@ static int handle_b9(S390CPU *cpu, struc
int r = 0;
switch (ipa1) {
+ case PRIV_B9_CLP:
+ r = kvm_clp_service_call(cpu, run);
+ break;
+ case PRIV_B9_PCISTG:
+ r = kvm_pcistg_service_call(cpu, run);
+ break;
+ case PRIV_B9_PCILG:
+ r = kvm_pcilg_service_call(cpu, run);
+ break;
+ case PRIV_B9_RPCIT:
+ r = kvm_rpcit_service_call(cpu, run);
+ break;
case PRIV_B9_EQBS:
/* just inject exception */
r = -1;
@@ -831,6 +854,12 @@ static int handle_eb(S390CPU *cpu, struc
int r = 0;
switch (ipa1) {
+ case PRIV_EB_PCISTB:
+ r = kvm_pcistb_service_call(cpu, run);
+ break;
+ case PRIV_EB_SIC:
+ r = kvm_sic_service_call(cpu, run);
+ break;
case PRIV_EB_SQBS:
/* just inject exception */
r = -1;
@@ -844,6 +873,26 @@ static int handle_eb(S390CPU *cpu, struc
return r;
}
+static int handle_e3(S390CPU *cpu, struct kvm_run *run, uint8_t ipa1)
+{
+ int r = 0;
+
+ switch (ipa1) {
+ case PRIV_E3_MPCIFC:
+ r = kvm_mpcifc_service_call(cpu, run);
+ break;
+ case PRIV_E3_STPCIFC:
+ r = kvm_stpcifc_service_call(cpu, run);
+ break;
+ default:
+ r = -1;
+ DPRINTF("KVM: unhandled PRIV: 0xe3%x\n", ipa1);
+ break;
+ }
+
+ return r;
+}
+
static int handle_hypercall(S390CPU *cpu, struct kvm_run *run)
{
CPUS390XState *env = &cpu->env;
@@ -1038,6 +1087,9 @@ static int handle_instruction(S390CPU *c
case IPA0_EB:
r = handle_eb(cpu, run, ipa1);
break;
+ case IPA0_E3:
+ r = handle_e3(cpu, run, run->s390_sieic.ipb & 0xff);
+ break;
case IPA0_DIAG:
r = handle_diag(cpu, run, run->s390_sieic.ipb);
break;
--- /dev/null
+++ b/target-s390x/pci_ic.c
@@ -0,0 +1,621 @@
+/*
+ * s390 PCI intercepts
+ *
+ * Copyright 2014 IBM Corp.
+ * Author(s): Frank Blaschka <frank.blaschka@de.ibm.com>
+ * Hong Bo Li <lihbbj@cn.ibm.com>
+ * Yi Min Zhao <zyimin@cn.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+
+#include <linux/kvm.h>
+#include <asm/ptrace.h>
+#include <hw/pci/pci.h>
+#include <hw/pci/pci_host.h>
+#include <net/net.h>
+
+#include "qemu-common.h"
+#include "qemu/timer.h"
+#include "migration/qemu-file.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/kvm.h"
+#include "cpu.h"
+#include "sysemu/device_tree.h"
+#include "monitor/monitor.h"
+#include "pci_ic.h"
+
+#include "hw/hw.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/pci_bridge.h"
+#include "hw/pci/pci_bus.h"
+#include "hw/pci/pci_host.h"
+#include "hw/s390x/s390-pci-bus.h"
+#include "exec/exec-all.h"
+
+/* #define DEBUG_S390PCI_IC */
+#ifdef DEBUG_S390PCI_IC
+#define DPRINTF(fmt, ...) \
+ do { fprintf(stderr, "s390pci_ic: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+ do { } while (0)
+#endif
+
+static uint64_t resume_token;
+
+static uint8_t barsize(uint64_t size)
+{
+ uint64_t mask = 1;
+ int i;
+
+ if (!size) {
+ return 0;
+ }
+
+ for (i = 0; i < 64; i++) {
+ if (size & mask) {
+ break;
+ }
+ mask = (mask << 1);
+ }
+
+ return i;
+}
+
+static int list_pci(ClpReqRspListPci *rrb, uint8_t *cc)
+{
+ S390PCIBusDevice *pbdev;
+ uint32_t res_code, initial_l2, g_l2, finish;
+ int rc, idx;
+
+ rc = 0;
+ if (be16_to_cpu(rrb->request.hdr.len) != 32) {
+ res_code = CLP_RC_LEN;
+ rc = -EINVAL;
+ goto out;
+ }
+
+ if ((be32_to_cpu(rrb->request.fmt) & CLP_MASK_FMT) != 0) {
+ res_code = CLP_RC_FMT;
+ rc = -EINVAL;
+ goto out;
+ }
+
+ if ((be32_to_cpu(rrb->request.fmt) & ~CLP_MASK_FMT) != 0 ||
+ rrb->request.reserved1 != 0 ||
+ rrb->request.reserved2 != 0) {
+ res_code = CLP_RC_RESNOT0;
+ rc = -EINVAL;
+ goto out;
+ }
+
+ if (be64_to_cpu(rrb->request.resume_token) == 0) {
+ resume_token = 0;
+ } else if (be64_to_cpu(rrb->request.resume_token) != resume_token) {
+ res_code = CLP_RC_LISTPCI_BADRT;
+ rc = -EINVAL;
+ goto out;
+ }
+
+ if (be16_to_cpu(rrb->response.hdr.len) < 48) {
+ res_code = CLP_RC_8K;
+ rc = -EINVAL;
+ goto out;
+ }
+
+ initial_l2 = be16_to_cpu(rrb->response.hdr.len);
+ if ((initial_l2 - LIST_PCI_HDR_LEN) % sizeof(ClpFhListEntry)
+ != 0) {
+ res_code = CLP_RC_LEN;
+ rc = -EINVAL;
+ goto out;
+ }
+
+ rrb->response.fmt = 0;
+ rrb->response.reserved1 = rrb->response.reserved2 = 0;
+ rrb->response.mdd = cpu_to_be32(FH_VIRT);
+ rrb->response.max_fn = cpu_to_be16(PCI_MAX_FUNCTIONS);
+ rrb->response.entry_size = sizeof(ClpFhListEntry);
+ finish = 0;
+ idx = resume_token;
+ g_l2 = LIST_PCI_HDR_LEN;
+ do {
+ pbdev = s390_pci_find_dev_by_idx(idx);
+ if (!pbdev) {
+ finish = 1;
+ break;
+ }
+ rrb->response.fh_list[idx - resume_token].device_id =
+ pci_get_word(pbdev->pdev->config + PCI_DEVICE_ID);
+ rrb->response.fh_list[idx - resume_token].vendor_id =
+ pci_get_word(pbdev->pdev->config + PCI_VENDOR_ID);
+ rrb->response.fh_list[idx - resume_token].config =
+ cpu_to_be32(0x80000000);
+ rrb->response.fh_list[idx - resume_token].fid = cpu_to_be32(pbdev->fid);
+ rrb->response.fh_list[idx - resume_token].fh = cpu_to_be32(pbdev->fh);
+
+ g_l2 += sizeof(ClpFhListEntry);
+ DPRINTF("g_l2 %d vendor id 0x%x device id 0x%x fid 0x%x fh 0x%x\n",
+ g_l2,
+ rrb->response.fh_list[idx - resume_token].vendor_id,
+ rrb->response.fh_list[idx - resume_token].device_id,
+ rrb->response.fh_list[idx - resume_token].fid,
+ rrb->response.fh_list[idx - resume_token].fh);
+ idx++;
+ } while (g_l2 < initial_l2);
+
+ if (finish == 1) {
+ resume_token = 0;
+ } else {
+ resume_token = idx;
+ }
+ rrb->response.resume_token = cpu_to_be64(resume_token);
+ rrb->response.hdr.len = cpu_to_be16(g_l2);
+ rrb->response.hdr.rsp = cpu_to_be16(CLP_RC_OK);
+out:
+ if (rc) {
+ DPRINTF("list pci failed rc 0x%x\n", rc);
+ rrb->response.hdr.rsp = cpu_to_be16(res_code);
+ *cc = 3;
+ }
+ return rc;
+}
+
+int kvm_clp_service_call(S390CPU *cpu, struct kvm_run *run)
+{
+ ClpReqHdr *reqh;
+ ClpRspHdr *resh;
+ S390PCIBusDevice *pbdev;
+ uint32_t req_len;
+ uint32_t res_len;
+ uint8_t *buffer;
+ uint8_t cc;
+ CPUS390XState *env = &cpu->env;
+ uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
+ int rc = 0;
+ int i;
+
+ buffer = g_malloc0(4096 * 2);
+ cpu_synchronize_state(CPU(cpu));
+
+ cpu_physical_memory_rw(env->regs[r2], buffer, sizeof(*reqh), 0);
+ reqh = (ClpReqHdr *)buffer;
+ req_len = be16_to_cpu(reqh->len);
+
+ cpu_physical_memory_rw(env->regs[r2], buffer, req_len + sizeof(*resh), 0);
+ resh = (ClpRspHdr *)(buffer + req_len);
+ res_len = be16_to_cpu(resh->len);
+
+ cpu_physical_memory_rw(env->regs[r2], buffer, req_len + res_len, 0);
+
+ switch (reqh->cmd) {
+ case CLP_LIST_PCI: {
+ ClpReqRspListPci *rrb = (ClpReqRspListPci *)buffer;
+ rc = list_pci(rrb, &cc);
+ break;
+ }
+ case CLP_SET_PCI_FN: {
+ ClpReqSetPci *reqsetpci = (ClpReqSetPci *)reqh;
+ ClpRspSetPci *ressetpci = (ClpRspSetPci *)resh;
+
+ pbdev = s390_pci_find_dev_by_fh(be32_to_cpu(reqsetpci->fh));
+ if (!pbdev) {
+ ressetpci->hdr.rsp = cpu_to_be16(CLP_RC_SETPCIFN_FH);
+ goto out;
+ }
+
+ switch (reqsetpci->oc) {
+ case CLP_SET_ENABLE_PCI_FN:
+ if (pbdev->is_virt) {
+ pbdev->fh = pbdev->fh | 1 << ENABLE_BIT_OFFSET;
+ ressetpci->fh = cpu_to_be32(pbdev->fh);
+ ressetpci->hdr.rsp = cpu_to_be16(CLP_RC_OK);
+ } else {
+ pbdev->fh = be32_to_cpu(ressetpci->fh);
+ }
+ break;
+ case CLP_SET_DISABLE_PCI_FN:
+ if (pbdev->is_virt) {
+ pbdev->fh = pbdev->fh & ~(1 << ENABLE_BIT_OFFSET);
+ ressetpci->fh = cpu_to_be32(pbdev->fh);
+ ressetpci->hdr.rsp = cpu_to_be16(CLP_RC_OK);
+ } else {
+ pbdev->fh = be32_to_cpu(ressetpci->fh);
+ }
+ break;
+ default:
+ DPRINTF("unknown set pci command\n");
+ ressetpci->hdr.rsp = cpu_to_be16(CLP_RC_SETPCIFN_FHOP);
+ break;
+ }
+ break;
+ }
+ case CLP_QUERY_PCI_FN: {
+ ClpReqQueryPci *reqquery = (ClpReqQueryPci *)reqh;
+ ClpRspQueryPci *resquery = (ClpRspQueryPci *)resh;
+
+ pbdev = s390_pci_find_dev_by_fh(reqquery->fh);
+ if (!pbdev) {
+ DPRINTF("query pci no pci dev\n");
+ return -EIO;
+ }
+
+ for (i = 0; i < PCI_BAR_COUNT; i++) {
+ uint64_t data = pci_host_config_read_common(pbdev->pdev,
+ 0x10 + (i * 4), pci_config_size(pbdev->pdev), 4);
+
+ resquery->bar[i] = bswap32(data);
+ resquery->bar_size[i] = barsize(pbdev->pdev->io_regions[i].size);
+ DPRINTF("bar %d addr 0x%x size 0x%lx barsize 0x%x\n", i,
+ resquery->bar[i], pbdev->pdev->io_regions[i].size,
+ resquery->bar_size[i]);
+ }
+
+ /* traced values; find out how to obtain these properly !!! */
+ resquery->sdma = 0x100000000;
+ resquery->edma = 0x1ffffffffffffff;
+ resquery->pchid = 0;
+ resquery->ug = 1;
+
+ resquery->hdr.rsp = CLP_RC_OK;
+ break;
+ }
+ case CLP_QUERY_PCI_FNGRP: {
+ ClpRspQueryPciGrp *resgrp = (ClpRspQueryPciGrp *)resh;
+ /* function group: for now we go with fixed values */
+ resgrp->fr = 1;
+ resgrp->dasm = 0;
+ /* traced, but we may not need this since the msix table is maintained
+ * by the host */
+ resgrp->msia = 0xfe00000000000000;
+ resgrp->mui = 0;
+ resgrp->i = 128;
+ resgrp->version = 0;
+
+ resgrp->hdr.rsp = CLP_RC_OK;
+ break;
+ }
+ default:
+ DPRINTF("unknown clp command\n");
+ resh->rsp = cpu_to_be16(CLP_RC_CMD);
+ break;
+ }
+
+out:
+ cpu_physical_memory_rw(env->regs[r2], buffer, req_len + res_len, 1);
+ g_free(buffer);
+ setcc(cpu, 0);
+ return rc;
+}
+
+int kvm_pcilg_service_call(S390CPU *cpu, struct kvm_run *run)
+{
+ CPUS390XState *env = &cpu->env;
+ S390PCIBusDevice *pbdev;
+ uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
+ uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
+ PciLgStg *rp;
+ uint64_t offset;
+ uint64_t data = 0;
+
+ cpu_synchronize_state(CPU(cpu));
+ rp = (PciLgStg *)&env->regs[r2];
+ offset = env->regs[r2 + 1];
+
+ pbdev = s390_pci_find_dev_by_fh(rp->fh);
+ if (!pbdev) {
+ DPRINTF("pcilg no pci dev\n");
+ return -EIO;
+ }
+
+ if (rp->pcias < 6) {
+ MemoryRegion *mr = pbdev->pdev->io_regions[rp->pcias].memory;
+ io_mem_read(mr, offset, &data, rp->len);
+ } else if (rp->pcias == 15) {
+ data = pci_host_config_read_common(
+ pbdev->pdev, offset, pci_config_size(pbdev->pdev), rp->len);
+
+ switch (rp->len) {
+ case 1:
+ break;
+ case 2:
+ data = cpu_to_le16(data);
+ break;
+ case 4:
+ data = cpu_to_le32(data);
+ break;
+ case 8:
+ data = cpu_to_le64(data);
+ break;
+ default:
+ abort();
+ }
+ } else {
+ DPRINTF("invalid space\n");
+ }
+
+ env->regs[r1] = data;
+ setcc(cpu, 0);
+ return 0;
+}
+
+int kvm_pcistg_service_call(S390CPU *cpu, struct kvm_run *run)
+{
+ CPUS390XState *env = &cpu->env;
+ uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
+ uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
+ PciLgStg *rp;
+ uint64_t offset, data;
+ S390PCIBusDevice *pbdev;
+
+ cpu_synchronize_state(CPU(cpu));
+ rp = (PciLgStg *)&env->regs[r2];
+ offset = env->regs[r2 + 1];
+
+ pbdev = s390_pci_find_dev_by_fh(rp->fh);
+ if (!pbdev) {
+ DPRINTF("pcistg no pci dev\n");
+ return -EIO;
+ }
+
+ data = env->regs[r1];
+
+ if (rp->pcias < 6) {
+ MemoryRegion *mr;
+ if (pbdev->msix_table_bar == rp->pcias &&
+ offset >= pbdev->msix_table_offset &&
+ offset <= pbdev->msix_table_offset +
+ (pbdev->msix_entries - 1) * PCI_MSIX_ENTRY_SIZE) {
+ offset = offset - pbdev->msix_table_offset;
+ mr = &pbdev->pdev->msix_table_mmio;
+ } else {
+ mr = pbdev->pdev->io_regions[rp->pcias].memory;
+ }
+
+ io_mem_write(mr, offset, data, rp->len);
+ } else if (rp->pcias == 15) {
+ switch (rp->len) {
+ case 1:
+ break;
+ case 2:
+ data = le16_to_cpu(data);
+ break;
+ case 4:
+ data = le32_to_cpu(data);
+ break;
+ case 8:
+ data = le64_to_cpu(data);
+ break;
+ default:
+ abort();
+ }
+
+ pci_host_config_write_common(pbdev->pdev, offset,
+ pci_config_size(pbdev->pdev),
+ data, rp->len);
+ } else {
+ DPRINTF("pcistg invalid space\n");
+ }
+
+ setcc(cpu, 0);
+ return 0;
+}
+
+static uint64_t guest_io_table_walk(uint64_t guest_iota,
+ uint64_t guest_dma_address)
+{
+ uint64_t sto_a, pto_a, px_a;
+ uint64_t sto, pto, pte;
+ uint32_t rtx, sx, px;
+
+ rtx = calc_rtx(guest_dma_address);
+ sx = calc_sx(guest_dma_address);
+ px = calc_px(guest_dma_address);
+
+ sto_a = guest_iota + rtx * sizeof(uint64_t);
+ cpu_physical_memory_rw(sto_a, (uint8_t *)&sto, sizeof(uint64_t), 0);
+ sto = (uint64_t)get_rt_sto(sto);
+
+ pto_a = sto + sx * sizeof(uint64_t);
+ cpu_physical_memory_rw(pto_a, (uint8_t *)&pto, sizeof(uint64_t), 0);
+ pto = (uint64_t)get_st_pto(pto);
+
+ px_a = pto + px * sizeof(uint64_t);
+ cpu_physical_memory_rw(px_a, (uint8_t *)&pte, sizeof(uint64_t), 0);
+
+ return pte;
+}
+
+int kvm_rpcit_service_call(S390CPU *cpu, struct kvm_run *run)
+{
+ CPUS390XState *env = &cpu->env;
+ uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
+ uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
+ uint32_t fh;
+ uint64_t pte;
+ S390PCIBusDevice *pbdev;
+ ram_addr_t size;
+ int flags;
+ IOMMUTLBEntry entry;
+
+ cpu_synchronize_state(CPU(cpu));
+
+ fh = env->regs[r1] >> 32;
+ size = env->regs[r2 + 1];
+
+ pbdev = s390_pci_find_dev_by_fh(fh);
+
+ if (!pbdev) {
+ DPRINTF("rpcit no pci dev\n");
+ return -EIO;
+ }
+
+ pte = guest_io_table_walk(pbdev->g_iota, env->regs[r2]);
+ flags = pte & ZPCI_PTE_FLAG_MASK;
+ entry.target_as = &address_space_memory;
+ entry.iova = env->regs[r2];
+ entry.translated_addr = pte & ZPCI_PTE_ADDR_MASK;
+ entry.addr_mask = size - 1;
+
+ if (flags & ZPCI_PTE_INVALID) {
+ entry.perm = IOMMU_NONE;
+ } else {
+ entry.perm = IOMMU_RW;
+ }
+
+ memory_region_notify_iommu(pci_device_iommu_address_space(
+ pbdev->pdev)->root, entry);
+
+ setcc(cpu, 0);
+ return 0;
+}
+
+int kvm_sic_service_call(S390CPU *cpu, struct kvm_run *run)
+{
+ DPRINTF("sic\n");
+ return 0;
+}
+
+int kvm_pcistb_service_call(S390CPU *cpu, struct kvm_run *run)
+{
+ CPUS390XState *env = &cpu->env;
+ uint8_t r1 = (run->s390_sieic.ipa & 0x00f0) >> 4;
+ uint8_t r3 = run->s390_sieic.ipa & 0x000f;
+ PciStb *rp;
+ uint64_t gaddr;
+ uint64_t *uaddr, *pu;
+ hwaddr len;
+ S390PCIBusDevice *pbdev;
+ MemoryRegion *mr;
+ int i;
+
+ cpu_synchronize_state(CPU(cpu));
+
+ rp = (PciStb *)&env->regs[r1];
+ gaddr = get_base_disp_rsy(cpu, run);
+ len = rp->len;
+
+ pbdev = s390_pci_find_dev_by_fh(rp->fh);
+ if (!pbdev) {
+ DPRINTF("pcistb no pci dev fh 0x%x\n", rp->fh);
+ return -EIO;
+ }
+
+ uaddr = cpu_physical_memory_map(gaddr, &len, 0);
+ mr = pbdev->pdev->io_regions[rp->pcias].memory;
+
+ pu = uaddr;
+ for (i = 0; i < rp->len / 8; i++) {
+ io_mem_write(mr, env->regs[r3] + i * 8, *pu, 8);
+ pu++;
+ }
+
+ cpu_physical_memory_unmap(uaddr, len, 0, len);
+ setcc(cpu, 0);
+ return 0;
+}
+
+static int reg_irqs(CPUS390XState *env, S390PCIBusDevice *pbdev, ZpciFib fib)
+{
+ int ret;
+ S390FLICState *fs = s390_get_flic();
+ S390FLICStateClass *fsc = S390_FLIC_COMMON_GET_CLASS(fs);
+
+ ret = css_register_io_adapter(S390_PCIPT_ADAPTER,
+ FIB_DATA_ISC(fib.data), true, false,
+ &pbdev->routes.adapter.adapter_id);
+ assert(ret == 0);
+
+ fsc->io_adapter_map(fs, pbdev->routes.adapter.adapter_id, fib.aisb, true);
+ fsc->io_adapter_map(fs, pbdev->routes.adapter.adapter_id, fib.aibv, true);
+
+ pbdev->routes.adapter.summary_addr = fib.aisb;
+ pbdev->routes.adapter.summary_offset = FIB_DATA_AISBO(fib.data);
+ pbdev->routes.adapter.ind_addr = fib.aibv;
+ pbdev->routes.adapter.ind_offset = FIB_DATA_AIBVO(fib.data);
+
+ DPRINTF("reg_irqs adapter id %d\n", pbdev->routes.adapter.adapter_id);
+ return 0;
+}
+
+static int dereg_irqs(S390PCIBusDevice *pbdev)
+{
+ S390FLICState *fs = s390_get_flic();
+ S390FLICStateClass *fsc = S390_FLIC_COMMON_GET_CLASS(fs);
+
+ fsc->io_adapter_map(fs, pbdev->routes.adapter.adapter_id,
+ pbdev->routes.adapter.ind_addr, false);
+
+ pbdev->routes.adapter.summary_addr = 0;
+ pbdev->routes.adapter.summary_offset = 0;
+ pbdev->routes.adapter.ind_addr = 0;
+ pbdev->routes.adapter.ind_offset = 0;
+
+ DPRINTF("dereg_irqs adapter id %d\n", pbdev->routes.adapter.adapter_id);
+ return 0;
+}
+
+int kvm_mpcifc_service_call(S390CPU *cpu, struct kvm_run *run)
+{
+ CPUS390XState *env = &cpu->env;
+ uint8_t r1 = (run->s390_sieic.ipa & 0x00f0) >> 4;
+ uint8_t oc;
+ uint32_t fh;
+ uint64_t fiba;
+ ZpciFib fib;
+ S390PCIBusDevice *pbdev;
+
+ cpu_synchronize_state(CPU(cpu));
+
+ oc = env->regs[r1] & 0xff;
+ fh = env->regs[r1] >> 32;
+ fiba = get_base_disp_rxy(cpu, run);
+
+ pbdev = s390_pci_find_dev_by_fh(fh);
+ if (!pbdev) {
+ DPRINTF("mpcifc no pci dev fh 0x%x\n", fh);
+ return -EIO;
+ }
+
+ cpu_physical_memory_rw(fiba, (uint8_t *)&fib, sizeof(fib), 0);
+
+ switch (oc) {
+ case ZPCI_MOD_FC_REG_INT: {
+ pbdev->isc = FIB_DATA_ISC(fib.data);
+ reg_irqs(env, pbdev, fib);
+ break;
+ }
+ case ZPCI_MOD_FC_DEREG_INT:
+ dereg_irqs(pbdev);
+ break;
+ case ZPCI_MOD_FC_REG_IOAT:
+ pbdev->g_iota = fib.iota & ~ZPCI_IOTA_RTTO_FLAG;
+ break;
+ case ZPCI_MOD_FC_DEREG_IOAT:
+ break;
+ case ZPCI_MOD_FC_REREG_IOAT:
+ break;
+ case ZPCI_MOD_FC_RESET_ERROR:
+ break;
+ case ZPCI_MOD_FC_RESET_BLOCK:
+ break;
+ case ZPCI_MOD_FC_SET_MEASURE:
+ break;
+ default:
+ break;
+ }
+
+ setcc(cpu, 0);
+ return 0;
+}
+
+int kvm_stpcifc_service_call(S390CPU *cpu, struct kvm_run *run)
+{
+ DPRINTF("stpcifc\n");
+ return -EINVAL;
+}
--- /dev/null
+++ b/target-s390x/pci_ic.h
@@ -0,0 +1,425 @@
+/*
+ * s390 PCI intercept definitions
+ *
+ * Copyright 2014 IBM Corp.
+ * Author(s): Frank Blaschka <frank.blaschka@de.ibm.com>
+ * Hong Bo Li <lihbbj@cn.ibm.com>
+ * Yi Min Zhao <zyimin@cn.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at
+ * your option) any later version. See the COPYING file in the top-level
+ * directory.
+ */
+
+#ifndef PCI_IC_S390X_H
+#define PCI_IC_S390X_H
+
+#include "sysemu/dma.h"
+
+/* CLP common request & response block size */
+#define CLP_BLK_SIZE 4096
+#define PCI_BAR_COUNT 6
+#define PCI_MAX_FUNCTIONS 4096
+
+typedef struct ClpReqHdr {
+ __uint16_t len;
+ __uint16_t cmd;
+} QEMU_PACKED ClpReqHdr;
+
+typedef struct ClpRspHdr {
+ __uint16_t len;
+ __uint16_t rsp;
+} QEMU_PACKED ClpRspHdr;
+
+/* CLP Response Codes */
+#define CLP_RC_OK 0x0010 /* Command request successful */
+#define CLP_RC_CMD 0x0020 /* Command code not recognized */
+#define CLP_RC_PERM 0x0030 /* Command not authorized */
+#define CLP_RC_FMT 0x0040 /* Invalid command request format */
+#define CLP_RC_LEN 0x0050 /* Invalid command request length */
+#define CLP_RC_8K 0x0060 /* Command requires 8K LPCB */
+#define CLP_RC_RESNOT0 0x0070 /* Reserved field not zero */
+#define CLP_RC_NODATA 0x0080 /* No data available */
+#define CLP_RC_FC_UNKNOWN 0x0100 /* Function code not recognized */
+
+/*
+ * Call Logical Processor - Command Codes
+ */
+#define CLP_LIST_PCI 0x0002
+#define CLP_QUERY_PCI_FN 0x0003
+#define CLP_QUERY_PCI_FNGRP 0x0004
+#define CLP_SET_PCI_FN 0x0005
+
+/* PCI function handle list entry */
+typedef struct ClpFhListEntry {
+ __uint16_t device_id;
+ __uint16_t vendor_id;
+#define CLP_FHLIST_MASK_CONFIG 0x80000000
+ __uint32_t config;
+ __uint32_t fid;
+ __uint32_t fh;
+} QEMU_PACKED ClpFhListEntry;
+
+#define CLP_RC_SETPCIFN_FH 0x0101 /* Invalid PCI fn handle */
+#define CLP_RC_SETPCIFN_FHOP 0x0102 /* Fn handle not valid for op */
+#define CLP_RC_SETPCIFN_DMAAS 0x0103 /* Invalid DMA addr space */
+#define CLP_RC_SETPCIFN_RES 0x0104 /* Insufficient resources */
+#define CLP_RC_SETPCIFN_ALRDY 0x0105 /* Fn already in requested state */
+#define CLP_RC_SETPCIFN_ERR 0x0106 /* Fn in permanent error state */
+#define CLP_RC_SETPCIFN_RECPND 0x0107 /* Error recovery pending */
+#define CLP_RC_SETPCIFN_BUSY 0x0108 /* Fn busy */
+#define CLP_RC_LISTPCI_BADRT 0x010a /* Resume token not recognized */
+#define CLP_RC_QUERYPCIFG_PFGID 0x010b /* Unrecognized PFGID */
+
+/* request or response block header length */
+#define LIST_PCI_HDR_LEN 32
+
+/* Number of function handles fitting in response block */
+#define CLP_FH_LIST_NR_ENTRIES \
+ ((CLP_BLK_SIZE - 2 * LIST_PCI_HDR_LEN) \
+ / sizeof(ClpFhListEntry))
+
+#define CLP_SET_ENABLE_PCI_FN 0 /* Yes, 0 enables it */
+#define CLP_SET_DISABLE_PCI_FN 1 /* Yes, 1 disables it */
+
+#define CLP_UTIL_STR_LEN 64
+
+#define CLP_MASK_FMT 0xf0000000
+
+/* List PCI functions request */
+typedef struct ClpReqListPci {
+ ClpReqHdr hdr;
+ __uint32_t fmt;
+ __uint64_t reserved1;
+ __uint64_t resume_token;
+ __uint64_t reserved2;
+} QEMU_PACKED ClpReqListPci;
+
+/* List PCI functions response */
+typedef struct ClpRspListPci {
+ ClpRspHdr hdr;
+ __uint32_t fmt;
+ __uint64_t reserved1;
+ __uint64_t resume_token;
+ __uint32_t mdd;
+ __uint16_t max_fn;
+ __uint8_t reserved2;
+ __uint8_t entry_size;
+ ClpFhListEntry fh_list[CLP_FH_LIST_NR_ENTRIES];
+} QEMU_PACKED ClpRspListPci;
+
+/* Query PCI function request */
+typedef struct ClpReqQueryPci {
+ ClpReqHdr hdr;
+ __uint32_t fmt;
+ __uint64_t reserved1;
+ __uint32_t fh; /* function handle */
+ __uint32_t reserved2;
+ __uint64_t reserved3;
+} QEMU_PACKED ClpReqQueryPci;
+
+/* Query PCI function response */
+typedef struct ClpRspQueryPci {
+ ClpRspHdr hdr;
+ __uint32_t fmt;
+ __uint64_t reserved1;
+ __uint16_t vfn; /* virtual fn number */
+#define CLP_RSP_QPCI_MASK_UTIL 0x100
+#define CLP_RSP_QPCI_MASK_PFGID 0xff
+ __uint16_t ug;
+ __uint32_t fid; /* pci function id */
+ __uint8_t bar_size[PCI_BAR_COUNT];
+ __uint16_t pchid;
+ __uint32_t bar[PCI_BAR_COUNT];
+ __uint64_t reserved2;
+ __uint64_t sdma; /* start dma as */
+ __uint64_t edma; /* end dma as */
+ __uint64_t reserved3[6];
+ __uint8_t util_str[CLP_UTIL_STR_LEN]; /* utility string */
+} QEMU_PACKED ClpRspQueryPci;
+
+/* Query PCI function group request */
+typedef struct ClpReqQueryPciGrp {
+ ClpReqHdr hdr;
+ __uint32_t fmt;
+ __uint64_t reserved1;
+#define CLP_REQ_QPCIG_MASK_PFGID 0xff
+ __uint32_t g;
+ __uint32_t reserved2;
+ __uint64_t reserved3;
+} QEMU_PACKED ClpReqQueryPciGrp;
+
+/* Query PCI function group response */
+typedef struct ClpRspQueryPciGrp {
+ ClpRspHdr hdr;
+ __uint32_t fmt;
+ __uint64_t reserved1;
+#define CLP_RSP_QPCIG_MASK_NOI 0xfff
+ __uint16_t i;
+ __uint8_t version;
+#define CLP_RSP_QPCIG_MASK_FRAME 0x2
+#define CLP_RSP_QPCIG_MASK_REFRESH 0x1
+ __uint8_t fr;
+ __uint16_t reserved2;
+ __uint16_t mui;
+ __uint64_t reserved3;
+ __uint64_t dasm; /* dma address space mask */
+ __uint64_t msia; /* MSI address */
+ __uint64_t reserved4;
+ __uint64_t reserved5;
+} QEMU_PACKED ClpRspQueryPciGrp;
+
+/* Set PCI function request */
+typedef struct ClpReqSetPci {
+ ClpReqHdr hdr;
+ __uint32_t fmt;
+ __uint64_t reserved1;
+ __uint32_t fh; /* function handle */
+ __uint16_t reserved2;
+ __uint8_t oc; /* operation controls */
+ __uint8_t ndas; /* number of dma spaces */
+ __uint64_t reserved3;
+} QEMU_PACKED ClpReqSetPci;
+
+/* Set PCI function response */
+typedef struct ClpRspSetPci {
+ ClpRspHdr hdr;
+ __uint32_t fmt;
+ __uint64_t reserved1;
+ __uint32_t fh; /* function handle */
+ __uint32_t reserved3;
+ __uint64_t reserved4;
+} QEMU_PACKED ClpRspSetPci;
+
+typedef struct ClpReqRspListPci {
+ ClpReqListPci request;
+ ClpRspListPci response;
+} QEMU_PACKED ClpReqRspListPci;
+
+typedef struct ClpReqRspSetPci {
+ ClpReqSetPci request;
+ ClpRspSetPci response;
+} QEMU_PACKED ClpReqRspSetPci;
+
+typedef struct ClpReqRspQueryPci {
+ ClpReqQueryPci request;
+ ClpRspQueryPci response;
+} QEMU_PACKED ClpReqRspQueryPci;
+
+typedef struct ClpReqRspQueryPciGrp {
+ ClpReqQueryPciGrp request;
+ ClpRspQueryPciGrp response;
+} QEMU_PACKED ClpReqRspQueryPciGrp;
+
+typedef struct PciLgStg {
+ uint32_t fh;
+ uint8_t status;
+ uint8_t pcias;
+ uint8_t reserved;
+ uint8_t len;
+} QEMU_PACKED PciLgStg;
+
+typedef struct PciStb {
+ uint32_t fh;
+ uint8_t status;
+ uint8_t pcias;
+ uint8_t reserved;
+ uint8_t len;
+} QEMU_PACKED PciStb;
+
+/* Modify PCI Function Controls */
+#define ZPCI_MOD_FC_REG_INT 2
+#define ZPCI_MOD_FC_DEREG_INT 3
+#define ZPCI_MOD_FC_REG_IOAT 4
+#define ZPCI_MOD_FC_DEREG_IOAT 5
+#define ZPCI_MOD_FC_REREG_IOAT 6
+#define ZPCI_MOD_FC_RESET_ERROR 7
+#define ZPCI_MOD_FC_RESET_BLOCK 9
+#define ZPCI_MOD_FC_SET_MEASURE 10
+
+/* FIB function controls */
+#define ZPCI_FIB_FC_ENABLED 0x80
+#define ZPCI_FIB_FC_ERROR 0x40
+#define ZPCI_FIB_FC_LS_BLOCKED 0x20
+#define ZPCI_FIB_FC_DMAAS_REG 0x10
+
+/* Function Information Block */
+typedef struct ZpciFib {
+ __uint8_t fmt; /* format */
+ __uint8_t reserved1[7];
+ __uint8_t fc; /* function controls */
+ __uint8_t reserved2;
+ __uint16_t reserved3;
+ __uint32_t reserved4;
+ __uint64_t pba; /* PCI base address */
+ __uint64_t pal; /* PCI address limit */
+ __uint64_t iota; /* I/O Translation Anchor */
+#define FIB_DATA_ISC(x) (((x) >> 28) & 0x7)
+#define FIB_DATA_NOI(x) (((x) >> 16) & 0xfff)
+#define FIB_DATA_AIBVO(x) (((x) >> 8) & 0x3f)
+#define FIB_DATA_SUM(x) (((x) >> 7) & 0x1)
+#define FIB_DATA_AISBO(x) ((x) & 0x3f)
+ __uint32_t data;
+ __uint32_t reserved5;
+ __uint64_t aibv; /* Adapter int bit vector address */
+ __uint64_t aisb; /* Adapter int summary bit address */
+ __uint64_t fmb_addr; /* Function measurement address and key */
+ __uint32_t reserved6;
+ __uint32_t gd;
+} QEMU_PACKED ZpciFib;
+
+#define PAGE_SHIFT 12
+#define PAGE_MASK (~(PAGE_SIZE-1))
+#define PAGE_DEFAULT_ACC 0
+#define PAGE_DEFAULT_KEY (PAGE_DEFAULT_ACC << 4)
+
+/* I/O Translation Anchor (IOTA) */
+enum ZpciIoatDtype {
+ ZPCI_IOTA_STO = 0,
+ ZPCI_IOTA_RTTO = 1,
+ ZPCI_IOTA_RSTO = 2,
+ ZPCI_IOTA_RFTO = 3,
+ ZPCI_IOTA_PFAA = 4,
+ ZPCI_IOTA_IOPFAA = 5,
+ ZPCI_IOTA_IOPTO = 7
+};
+
+#define ZPCI_IOTA_IOT_ENABLED 0x800UL
+#define ZPCI_IOTA_DT_ST (ZPCI_IOTA_STO << 2)
+#define ZPCI_IOTA_DT_RT (ZPCI_IOTA_RTTO << 2)
+#define ZPCI_IOTA_DT_RS (ZPCI_IOTA_RSTO << 2)
+#define ZPCI_IOTA_DT_RF (ZPCI_IOTA_RFTO << 2)
+#define ZPCI_IOTA_DT_PF (ZPCI_IOTA_PFAA << 2)
+#define ZPCI_IOTA_FS_4K 0
+#define ZPCI_IOTA_FS_1M 1
+#define ZPCI_IOTA_FS_2G 2
+#define ZPCI_KEY (PAGE_DEFAULT_KEY << 5)
+
+#define ZPCI_IOTA_STO_FLAG (ZPCI_IOTA_IOT_ENABLED | ZPCI_KEY | ZPCI_IOTA_DT_ST)
+#define ZPCI_IOTA_RTTO_FLAG (ZPCI_IOTA_IOT_ENABLED | ZPCI_KEY | ZPCI_IOTA_DT_RT)
+#define ZPCI_IOTA_RSTO_FLAG (ZPCI_IOTA_IOT_ENABLED | ZPCI_KEY | ZPCI_IOTA_DT_RS)
+#define ZPCI_IOTA_RFTO_FLAG (ZPCI_IOTA_IOT_ENABLED | ZPCI_KEY | ZPCI_IOTA_DT_RF)
+#define ZPCI_IOTA_RFAA_FLAG (ZPCI_IOTA_IOT_ENABLED | ZPCI_KEY |\
+ ZPCI_IOTA_DT_PF | ZPCI_IOTA_FS_2G)
+
+/* I/O Region and segment tables */
+#define ZPCI_INDEX_MASK 0x7ffUL
+
+#define ZPCI_TABLE_TYPE_MASK 0xc
+#define ZPCI_TABLE_TYPE_RFX 0xc
+#define ZPCI_TABLE_TYPE_RSX 0x8
+#define ZPCI_TABLE_TYPE_RTX 0x4
+#define ZPCI_TABLE_TYPE_SX 0x0
+
+#define ZPCI_TABLE_LEN_RFX 0x3
+#define ZPCI_TABLE_LEN_RSX 0x3
+#define ZPCI_TABLE_LEN_RTX 0x3
+
+#define ZPCI_TABLE_OFFSET_MASK 0xc0
+#define ZPCI_TABLE_SIZE 0x4000
+#define ZPCI_TABLE_ALIGN ZPCI_TABLE_SIZE
+#define ZPCI_TABLE_ENTRY_SIZE (sizeof(unsigned long))
+#define ZPCI_TABLE_ENTRIES (ZPCI_TABLE_SIZE / ZPCI_TABLE_ENTRY_SIZE)
+
+#define ZPCI_TABLE_BITS 11
+#define ZPCI_PT_BITS 8
+#define ZPCI_ST_SHIFT (ZPCI_PT_BITS + PAGE_SHIFT)
+#define ZPCI_RT_SHIFT (ZPCI_ST_SHIFT + ZPCI_TABLE_BITS)
+
+#define ZPCI_RTE_FLAG_MASK 0x3fffUL
+#define ZPCI_RTE_ADDR_MASK (~ZPCI_RTE_FLAG_MASK)
+#define ZPCI_STE_FLAG_MASK 0x7ffUL
+#define ZPCI_STE_ADDR_MASK (~ZPCI_STE_FLAG_MASK)
+
+/* I/O Page tables */
+#define ZPCI_PTE_VALID_MASK 0x400
+#define ZPCI_PTE_INVALID 0x400
+#define ZPCI_PTE_VALID 0x000
+#define ZPCI_PT_SIZE 0x800
+#define ZPCI_PT_ALIGN ZPCI_PT_SIZE
+#define ZPCI_PT_ENTRIES (ZPCI_PT_SIZE / ZPCI_TABLE_ENTRY_SIZE)
+#define ZPCI_PT_MASK (ZPCI_PT_ENTRIES - 1)
+
+#define ZPCI_PTE_FLAG_MASK 0xfffUL
+#define ZPCI_PTE_ADDR_MASK (~ZPCI_PTE_FLAG_MASK)
+
+/* Shared bits */
+#define ZPCI_TABLE_VALID 0x00
+#define ZPCI_TABLE_INVALID 0x20
+#define ZPCI_TABLE_PROTECTED 0x200
+#define ZPCI_TABLE_UNPROTECTED 0x000
+
+#define ZPCI_TABLE_VALID_MASK 0x20
+#define ZPCI_TABLE_PROT_MASK 0x200
+
+static inline unsigned int calc_rtx(dma_addr_t ptr)
+{
+ return ((unsigned long) ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK;
+}
+
+static inline unsigned int calc_sx(dma_addr_t ptr)
+{
+ return ((unsigned long) ptr >> ZPCI_ST_SHIFT) & ZPCI_INDEX_MASK;
+}
+
+static inline unsigned int calc_px(dma_addr_t ptr)
+{
+ return ((unsigned long) ptr >> PAGE_SHIFT) & ZPCI_PT_MASK;
+}
+
+static inline unsigned long *get_rt_sto(unsigned long entry)
+{
+ return ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX)
+ ? (unsigned long *) (entry & ZPCI_RTE_ADDR_MASK)
+ : NULL;
+}
+
+static inline unsigned long *get_st_pto(unsigned long entry)
+{
+ return ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX)
+ ? (unsigned long *) (entry & ZPCI_STE_ADDR_MASK)
+ : NULL;
+}
+
+static inline uint64_t get_base_disp_rxy(S390CPU *cpu, struct kvm_run *run)
+{
+ CPUS390XState *env = &cpu->env;
+ uint32_t x2 = (run->s390_sieic.ipa & 0x000f);
+ uint32_t base2 = run->s390_sieic.ipb >> 28;
+ uint32_t disp2 = ((run->s390_sieic.ipb & 0x0fff0000) >> 16) +
+ ((run->s390_sieic.ipb & 0xff00) << 4);
+
+ return (base2 ? env->regs[base2] : 0) +
+ (x2 ? env->regs[x2] : 0) + (uint64_t)disp2;
+}
+
+static inline uint64_t get_base_disp_rsy(S390CPU *cpu, struct kvm_run *run)
+{
+ CPUS390XState *env = &cpu->env;
+ uint32_t base2 = run->s390_sieic.ipb >> 28;
+ uint32_t disp2 = ((run->s390_sieic.ipb & 0x0fff0000) >> 16) +
+ ((run->s390_sieic.ipb & 0xff00) << 4);
+
+ if (disp2 & 0x80000) {
+ disp2 += 0xfff00000;
+ }
+
+ return (base2 ? env->regs[base2] : 0) + (long)(int)disp2;
+}
+
+int kvm_clp_service_call(S390CPU *cpu, struct kvm_run *run);
+int kvm_rpcit_service_call(S390CPU *cpu, struct kvm_run *run);
+int kvm_sic_service_call(S390CPU *cpu, struct kvm_run *run);
+int kvm_pcistb_service_call(S390CPU *cpu, struct kvm_run *run);
+int kvm_mpcifc_service_call(S390CPU *cpu, struct kvm_run *run);
+int kvm_pcistg_service_call(S390CPU *cpu, struct kvm_run *run);
+int kvm_pcilg_service_call(S390CPU *cpu, struct kvm_run *run);
+int kvm_stpcifc_service_call(S390CPU *cpu, struct kvm_run *run);
+
+#endif
* [Qemu-devel] [RFC patch 6/6] vfio: make vfio run on s390 platform
2014-09-19 11:54 [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 frank.blaschka
` (4 preceding siblings ...)
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 5/6] s390: implement pci instruction frank.blaschka
@ 2014-09-19 11:54 ` frank.blaschka
2014-09-22 20:47 ` [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 Alex Williamson
6 siblings, 0 replies; 20+ messages in thread
From: frank.blaschka @ 2014-09-19 11:54 UTC (permalink / raw)
To: qemu-devel, linux-s390, kvm; +Cc: pbonzini, alex.williamson, agraf
[-- Attachment #1: 104-qemu_vfio.patch --]
[-- Type: text/plain, Size: 2688 bytes --]
From: Frank Blaschka <frank.blaschka@de.ibm.com>
The following changes are made because of platform differences:
1) s390 does not support mmap'ing of PCI BARs, so we have to go via the slow path
2) no INTx support
3) no classic MSI-X interrupts. The PCI hardware understands the concept
of requesting MSI-X IRQs, but the IRQs are delivered as s390 adapter IRQs.
Introduce s390-specific functions for MSI-X notification (slow path) and
MSI routes (KVM fast path).
4) use the type1 iommu, but register it only for the iommu address space
Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
---
hw/misc/vfio.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
--- a/hw/misc/vfio.c
+++ b/hw/misc/vfio.c
@@ -40,6 +40,9 @@
#include "sysemu/kvm.h"
#include "sysemu/sysemu.h"
#include "hw/misc/vfio.h"
+#ifdef TARGET_S390X
+#include "hw/s390x/s390-pci-bus.h"
+#endif
/* #define DEBUG_VFIO */
#ifdef DEBUG_VFIO
@@ -51,7 +54,11 @@
#endif
/* Extra debugging, trap acceleration paths for more logging */
+#ifdef TARGET_S390X
+#define VFIO_ALLOW_MMAP 0
+#else
#define VFIO_ALLOW_MMAP 1
+#endif
#define VFIO_ALLOW_KVM_INTX 1
#define VFIO_ALLOW_KVM_MSI 1
#define VFIO_ALLOW_KVM_MSIX 1
@@ -554,6 +561,10 @@ static int vfio_enable_intx(VFIODevice *
struct vfio_irq_set *irq_set;
int32_t *pfd;
+#ifdef TARGET_S390X
+ return 0;
+#endif
+
if (!pin) {
return 0;
}
@@ -664,7 +675,11 @@ static void vfio_msi_interrupt(void *opa
#endif
if (vdev->interrupt == VFIO_INT_MSIX) {
+#ifdef TARGET_S390X
+ s390_msix_notify(&vdev->pdev, nr);
+#else
msix_notify(&vdev->pdev, nr);
+#endif
} else if (vdev->interrupt == VFIO_INT_MSI) {
msi_notify(&vdev->pdev, nr);
} else {
@@ -730,7 +745,11 @@ static void vfio_add_kvm_msi_virq(VFIOMS
return;
}
+#ifdef TARGET_S390X
+ virq = s390_irqchip_add_msi_route(&vector->vdev->pdev, kvm_state, *msg);
+#else
virq = kvm_irqchip_add_msi_route(kvm_state, *msg);
+#endif
if (virq < 0) {
event_notifier_cleanup(&vector->kvm_interrupt);
return;
@@ -3702,8 +3721,13 @@ static int vfio_connect_container(VFIOGr
container->iommu_data.type1.listener = vfio_memory_listener;
container->iommu_data.release = vfio_listener_release;
+#ifdef TARGET_S390X
+ memory_listener_register(&container->iommu_data.type1.listener,
+ container->space->as);
+#else
memory_listener_register(&container->iommu_data.type1.listener,
&address_space_memory);
+#endif
if (container->iommu_data.type1.error) {
ret = container->iommu_data.type1.error;
* Re: [Qemu-devel] [RFC patch 5/6] s390: implement pci instruction
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 5/6] s390: implement pci instruction frank.blaschka
@ 2014-09-19 15:12 ` Thomas Huth
2014-09-22 7:40 ` Frank Blaschka
0 siblings, 1 reply; 20+ messages in thread
From: Thomas Huth @ 2014-09-19 15:12 UTC (permalink / raw)
To: frank.blaschka
Cc: linux-s390, kvm, agraf, qemu-devel, alex.williamson, pbonzini
Hi Frank,
On Fri, 19 Sep 2014 13:54:34 +0200
frank.blaschka@de.ibm.com wrote:
> From: Frank Blaschka <frank.blaschka@de.ibm.com>
>
> This patch implements the s390 pci instructions in qemu. This allows
> to attach qemu pci devices including vfio. This does not mean the
> devices are functional but at least detection and config/memory space
> access is working.
>
> Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
> ---
> target-s390x/Makefile.objs | 2
> target-s390x/kvm.c | 52 +++
> target-s390x/pci_ic.c | 621 +++++++++++++++++++++++++++++++++++++++++++++
> target-s390x/pci_ic.h | 425 ++++++++++++++++++++++++++++++
> 4 files changed, 1099 insertions(+), 1 deletion(-)
>
> --- a/target-s390x/Makefile.objs
> +++ b/target-s390x/Makefile.objs
> @@ -2,4 +2,4 @@ obj-y += translate.o helper.o cpu.o inte
> obj-y += int_helper.o fpu_helper.o cc_helper.o mem_helper.o misc_helper.o
> obj-y += gdbstub.o
> obj-$(CONFIG_SOFTMMU) += ioinst.o arch_dump.o
> -obj-$(CONFIG_KVM) += kvm.o
> +obj-$(CONFIG_KVM) += kvm.o pci_ic.o
> --- a/target-s390x/kvm.c
> +++ b/target-s390x/kvm.c
> @@ -40,6 +40,7 @@
> #include "exec/gdbstub.h"
> #include "trace.h"
> #include "qapi-event.h"
> +#include "pci_ic.h"
>
> /* #define DEBUG_KVM */
>
> @@ -56,6 +57,7 @@
> #define IPA0_B2 0xb200
> #define IPA0_B9 0xb900
> #define IPA0_EB 0xeb00
> +#define IPA0_E3 0xe300
>
> #define PRIV_B2_SCLP_CALL 0x20
> #define PRIV_B2_CSCH 0x30
> @@ -76,8 +78,17 @@
> #define PRIV_B2_XSCH 0x76
>
> #define PRIV_EB_SQBS 0x8a
> +#define PRIV_EB_PCISTB 0xd0
> +#define PRIV_EB_SIC 0xd1
>
> #define PRIV_B9_EQBS 0x9c
> +#define PRIV_B9_CLP 0xa0
> +#define PRIV_B9_PCISTG 0xd0
> +#define PRIV_B9_PCILG 0xd2
> +#define PRIV_B9_RPCIT 0xd3
> +
> +#define PRIV_E3_MPCIFC 0xd0
> +#define PRIV_E3_STPCIFC 0xd4
>
> #define DIAG_IPL 0x308
> #define DIAG_KVM_HYPERCALL 0x500
> @@ -813,6 +824,18 @@ static int handle_b9(S390CPU *cpu, struc
> int r = 0;
>
> switch (ipa1) {
> + case PRIV_B9_CLP:
> + r = kvm_clp_service_call(cpu, run);
> + break;
> + case PRIV_B9_PCISTG:
> + r = kvm_pcistg_service_call(cpu, run);
> + break;
> + case PRIV_B9_PCILG:
> + r = kvm_pcilg_service_call(cpu, run);
> + break;
> + case PRIV_B9_RPCIT:
> + r = kvm_rpcit_service_call(cpu, run);
> + break;
> case PRIV_B9_EQBS:
> /* just inject exception */
> r = -1;
> @@ -831,6 +854,12 @@ static int handle_eb(S390CPU *cpu, struc
> int r = 0;
>
> switch (ipa1) {
> + case PRIV_EB_PCISTB:
> + r = kvm_pcistb_service_call(cpu, run);
> + break;
> + case PRIV_EB_SIC:
> + r = kvm_sic_service_call(cpu, run);
> + break;
> case PRIV_EB_SQBS:
> /* just inject exception */
> r = -1;
I'm not sure, but I think the handler for the eb instructions is wrong:
the second byte of the opcode is encoded in the lowest byte of the ipb
field, not in the lowest byte of the ipa field (just like with the e3
handler). Did you verify that your handlers get called correctly?
> @@ -844,6 +873,26 @@ static int handle_eb(S390CPU *cpu, struc
> return r;
> }
>
> +static int handle_e3(S390CPU *cpu, struct kvm_run *run, uint8_t ipa1)
> +{
> + int r = 0;
> +
> + switch (ipa1) {
> + case PRIV_E3_MPCIFC:
> + r = kvm_mpcifc_service_call(cpu, run);
> + break;
> + case PRIV_E3_STPCIFC:
> + r = kvm_stpcifc_service_call(cpu, run);
> + break;
> + default:
> + r = -1;
> + DPRINTF("KVM: unhandled PRIV: 0xe3%x\n", ipa1);
> + break;
> + }
> +
> + return r;
> +}
Could you please replace "ipa1" with "ipb1" to avoid confusion here?
> static int handle_hypercall(S390CPU *cpu, struct kvm_run *run)
> {
> CPUS390XState *env = &cpu->env;
> @@ -1038,6 +1087,9 @@ static int handle_instruction(S390CPU *c
> case IPA0_EB:
> r = handle_eb(cpu, run, ipa1);
> break;
> + case IPA0_E3:
> + r = handle_e3(cpu, run, run->s390_sieic.ipb & 0xff);
> + break;
> case IPA0_DIAG:
> r = handle_diag(cpu, run, run->s390_sieic.ipb);
> break;
> --- /dev/null
> +++ b/target-s390x/pci_ic.c
> @@ -0,0 +1,621 @@
[...]
> +
> +int kvm_pcilg_service_call(S390CPU *cpu, struct kvm_run *run)
> +{
> + CPUS390XState *env = &cpu->env;
> + S390PCIBusDevice *pbdev;
> + uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
> + uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
> + PciLgStg *rp;
> + uint64_t offset;
> + uint64_t data;
> +
> + cpu_synchronize_state(CPU(cpu));
> + rp = (PciLgStg *)&env->regs[r2];
I think you have to check that r2 is even here (and inject a
specification exception otherwise).
You should also check rp->len for a valid value and inject an operand
exception otherwise.
> + offset = env->regs[r2 + 1];
You should also check for a valid offset value here (and inject an
operand exception otherwise).
... and while you're at it - it also seems like the PCI instructions
are privileged, so you should check for not being in the problem state
here, too, just to be sure ;-)
> + pbdev = s390_pci_find_dev_by_fh(rp->fh);
> + if (!pbdev) {
> + DPRINTF("pcilg no pci dev\n");
> + return -EIO;
> + }
> +
> + if (rp->pcias < 6) {
> + MemoryRegion *mr = pbdev->pdev->io_regions[rp->pcias].memory;
> + io_mem_read(mr, offset, &data, rp->len);
Does this also work if len != 8 ? I think the data should be put in the
rightmost byte positions, with the unused leftmost bytes set to zero...
so some more logic might be needed here.
> + } else if (rp->pcias == 15) {
> + data = pci_host_config_read_common(
> + pbdev->pdev, offset, pci_config_size(pbdev->pdev), rp->len);
> +
> + switch (rp->len) {
> + case 1:
> + break;
> + case 2:
> + data = cpu_to_le16(data);
> + break;
> + case 4:
> + data = cpu_to_le32(data);
> + break;
> + case 8:
> + data = cpu_to_le64(data);
> + break;
> + default:
> + abort();
> + }
> + } else {
> + DPRINTF("invalid space\n");
Set condition code 1 in this case?
> + }
> +
> + env->regs[r1] = data;
> + setcc(cpu, 0);
> + return 0;
> +}
> +
> +int kvm_pcistg_service_call(S390CPU *cpu, struct kvm_run *run)
> +{
> + CPUS390XState *env = &cpu->env;
> + uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
> + uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
> + PciLgStg *rp;
> + uint64_t offset, data;
> + S390PCIBusDevice *pbdev;
> +
> + cpu_synchronize_state(CPU(cpu));
> + rp = (PciLgStg *)&env->regs[r2];
> + offset = env->regs[r2 + 1];
You likely need the same sanity checks here as with the pcilg function.
> + pbdev = s390_pci_find_dev_by_fh(rp->fh);
> + if (!pbdev) {
> + DPRINTF("pcistg no pci dev\n");
> + return -EIO;
> + }
> +
> + data = env->regs[r1];
> +
> + if (rp->pcias < 6) {
> + MemoryRegion *mr;
> + if (pbdev->msix_table_bar == rp->pcias &&
> + offset >= pbdev->msix_table_offset &&
> + offset <= pbdev->msix_table_offset +
> + (pbdev->msix_entries - 1) * PCI_MSIX_ENTRY_SIZE) {
> + offset = offset - pbdev->msix_table_offset;
> + mr = &pbdev->pdev->msix_table_mmio;
> + } else {
> + mr = pbdev->pdev->io_regions[rp->pcias].memory;
> + }
> +
> + io_mem_write(mr, offset, data, rp->len);
Same concerns as with the pcilg - this likely only works for len == 8 ?
> + } else if (rp->pcias == 15) {
> + switch (rp->len) {
> + case 1:
> + break;
> + case 2:
> + data = le16_to_cpu(data);
> + break;
> + case 4:
> + data = le32_to_cpu(data);
> + break;
> + case 8:
> + data = le64_to_cpu(data);
> + break;
> + default:
> + abort();
> + }
> +
> + pci_host_config_write_common(pbdev->pdev, offset,
> + pci_config_size(pbdev->pdev),
> + data, rp->len);
> + } else {
> + DPRINTF("pcistg invalid space\n");
Set condition code 1 in this case?
> + }
> +
> + setcc(cpu, 0);
> + return 0;
> +}
> +
> +static uint64_t guest_io_table_walk(uint64_t guest_iota,
> + uint64_t guest_dma_address)
> +{
> + uint64_t sto_a, pto_a, px_a;
> + uint64_t sto, pto, pte;
> + uint32_t rtx, sx, px;
> +
> + rtx = calc_rtx(guest_dma_address);
> + sx = calc_sx(guest_dma_address);
> + px = calc_px(guest_dma_address);
> +
> + sto_a = guest_iota + rtx * sizeof(uint64_t);
> + cpu_physical_memory_rw(sto_a, (uint8_t *)&sto, sizeof(uint64_t), 0);
> + sto = (uint64_t)get_rt_sto(sto);
> +
> + pto_a = sto + sx * sizeof(uint64_t);
> + cpu_physical_memory_rw(pto_a, (uint8_t *)&pto, sizeof(uint64_t), 0);
> + pto = (uint64_t)get_st_pto(pto);
> +
> + px_a = pto + px * sizeof(uint64_t);
> + cpu_physical_memory_rw(px_a, (uint8_t *)&pte, sizeof(uint64_t), 0);
> +
> + return pte;
> +}
> +
> +int kvm_rpcit_service_call(S390CPU *cpu, struct kvm_run *run)
> +{
> + CPUS390XState *env = &cpu->env;
> + uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
> + uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
> + uint32_t fh;
> + uint64_t pte;
> + S390PCIBusDevice *pbdev;
> + ram_addr_t size;
> + int flags;
> + IOMMUTLBEntry entry;
> +
> + cpu_synchronize_state(CPU(cpu));
> +
> + fh = env->regs[r1] >> 32;
> + size = env->regs[r2 + 1];
Missing two checks again:
- r2 should be even
- CPU should not be in problem state
> + pbdev = s390_pci_find_dev_by_fh(fh);
> +
> + if (!pbdev) {
> + DPRINTF("rpcit no pci dev\n");
> + return -EIO;
I think it would be better to set condition code 3 instead.
> + }
> +
> + pte = guest_io_table_walk(pbdev->g_iota, env->regs[r2]);
> + flags = pte & ZPCI_PTE_FLAG_MASK;
> + entry.target_as = &address_space_memory;
> + entry.iova = env->regs[r2];
> + entry.translated_addr = pte & ZPCI_PTE_ADDR_MASK;
> + entry.addr_mask = size - 1;
> +
> + if (flags & ZPCI_PTE_INVALID) {
> + entry.perm = IOMMU_NONE;
> + } else {
> + entry.perm = IOMMU_RW;
> + }
> +
> + memory_region_notify_iommu(pci_device_iommu_address_space(
> + pbdev->pdev)->root, entry);
> +
> + setcc(cpu, 0);
> + return 0;
> +}
> +
> +int kvm_sic_service_call(S390CPU *cpu, struct kvm_run *run)
> +{
> + DPRINTF("sic\n");
> + return 0;
> +}
> +
> +int kvm_pcistb_service_call(S390CPU *cpu, struct kvm_run *run)
> +{
> + CPUS390XState *env = &cpu->env;
> + uint8_t r1 = (run->s390_sieic.ipa & 0x00f0) >> 4;
> + uint8_t r3 = run->s390_sieic.ipa & 0x000f;
> + PciStb *rp;
> + uint64_t gaddr;
> + uint64_t *uaddr, *pu;
> + hwaddr len;
> + S390PCIBusDevice *pbdev;
> + MemoryRegion *mr;
> + int i;
> +
> + cpu_synchronize_state(CPU(cpu));
> +
> + rp = (PciStb *)&env->regs[r1];
> + gaddr = get_base_disp_rsy(cpu, run);
> + len = rp->len;
Not sure, but don't you also have to check the length and offset here
for valid ranges?
At least you should check for problem state again.
> + pbdev = s390_pci_find_dev_by_fh(rp->fh);
> + if (!pbdev) {
> + DPRINTF("pcistb no pci dev fh 0x%x\n", rp->fh);
> + return -EIO;
Use cc3 instead?
> + }
> +
> + uaddr = cpu_physical_memory_map(gaddr, &len, 0);
> + mr = pbdev->pdev->io_regions[rp->pcias].memory;
> +
> + pu = uaddr;
> + for (i = 0; i < rp->len / 8; i++) {
> + io_mem_write(mr, env->regs[r3] + i * 8, *pu, 8);
> + pu++;
> + }
> +
> + cpu_physical_memory_unmap(uaddr, len, 0, len);
> + setcc(cpu, 0);
> + return 0;
> +}
[...]
Thomas
* Re: [Qemu-devel] [RFC patch 5/6] s390: implement pci instruction
2014-09-19 15:12 ` Thomas Huth
@ 2014-09-22 7:40 ` Frank Blaschka
0 siblings, 0 replies; 20+ messages in thread
From: Frank Blaschka @ 2014-09-22 7:40 UTC (permalink / raw)
To: Thomas Huth
Cc: linux-s390, frank.blaschka, kvm, qemu-devel, agraf,
alex.williamson, pbonzini
On Fri, Sep 19, 2014 at 05:12:15PM +0200, Thomas Huth wrote:
>
> Hi Frank,
>
> On Fri, 19 Sep 2014 13:54:34 +0200
> frank.blaschka@de.ibm.com wrote:
>
> > From: Frank Blaschka <frank.blaschka@de.ibm.com>
> >
> > This patch implements the s390 pci instructions in qemu. This allows
> > to attach qemu pci devices including vfio. This does not mean the
> > devices are functional but at least detection and config/memory space
> > access is working.
> >
> > Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
> > ---
> > target-s390x/Makefile.objs | 2
> > target-s390x/kvm.c | 52 +++
> > target-s390x/pci_ic.c | 621 +++++++++++++++++++++++++++++++++++++++++++++
> > target-s390x/pci_ic.h | 425 ++++++++++++++++++++++++++++++
> > 4 files changed, 1099 insertions(+), 1 deletion(-)
> >
> > --- a/target-s390x/Makefile.objs
> > +++ b/target-s390x/Makefile.objs
> > @@ -2,4 +2,4 @@ obj-y += translate.o helper.o cpu.o inte
> > obj-y += int_helper.o fpu_helper.o cc_helper.o mem_helper.o misc_helper.o
> > obj-y += gdbstub.o
> > obj-$(CONFIG_SOFTMMU) += ioinst.o arch_dump.o
> > -obj-$(CONFIG_KVM) += kvm.o
> > +obj-$(CONFIG_KVM) += kvm.o pci_ic.o
> > --- a/target-s390x/kvm.c
> > +++ b/target-s390x/kvm.c
> > @@ -40,6 +40,7 @@
> > #include "exec/gdbstub.h"
> > #include "trace.h"
> > #include "qapi-event.h"
> > +#include "pci_ic.h"
> >
> > /* #define DEBUG_KVM */
> >
> > @@ -56,6 +57,7 @@
> > #define IPA0_B2 0xb200
> > #define IPA0_B9 0xb900
> > #define IPA0_EB 0xeb00
> > +#define IPA0_E3 0xe300
> >
> > #define PRIV_B2_SCLP_CALL 0x20
> > #define PRIV_B2_CSCH 0x30
> > @@ -76,8 +78,17 @@
> > #define PRIV_B2_XSCH 0x76
> >
> > #define PRIV_EB_SQBS 0x8a
> > +#define PRIV_EB_PCISTB 0xd0
> > +#define PRIV_EB_SIC 0xd1
> >
> > #define PRIV_B9_EQBS 0x9c
> > +#define PRIV_B9_CLP 0xa0
> > +#define PRIV_B9_PCISTG 0xd0
> > +#define PRIV_B9_PCILG 0xd2
> > +#define PRIV_B9_RPCIT 0xd3
> > +
> > +#define PRIV_E3_MPCIFC 0xd0
> > +#define PRIV_E3_STPCIFC 0xd4
> >
> > #define DIAG_IPL 0x308
> > #define DIAG_KVM_HYPERCALL 0x500
> > @@ -813,6 +824,18 @@ static int handle_b9(S390CPU *cpu, struc
> > int r = 0;
> >
> > switch (ipa1) {
> > + case PRIV_B9_CLP:
> > + r = kvm_clp_service_call(cpu, run);
> > + break;
> > + case PRIV_B9_PCISTG:
> > + r = kvm_pcistg_service_call(cpu, run);
> > + break;
> > + case PRIV_B9_PCILG:
> > + r = kvm_pcilg_service_call(cpu, run);
> > + break;
> > + case PRIV_B9_RPCIT:
> > + r = kvm_rpcit_service_call(cpu, run);
> > + break;
> > case PRIV_B9_EQBS:
> > /* just inject exception */
> > r = -1;
> > @@ -831,6 +854,12 @@ static int handle_eb(S390CPU *cpu, struc
> > int r = 0;
> >
> > switch (ipa1) {
> > + case PRIV_EB_PCISTB:
> > + r = kvm_pcistb_service_call(cpu, run);
> > + break;
> > + case PRIV_EB_SIC:
> > + r = kvm_sic_service_call(cpu, run);
> > + break;
> > case PRIV_EB_SQBS:
> > /* just inject exception */
> > r = -1;
>
> I'm not sure, but I think the handler for the eb instructions is wrong:
> The second byte of the opcode is encoded in the lowest byte of the ipb
> field, not the lowest byte of the ipa field (just like with the e3
> handler). Did you verify that your handlers get called correctly?
>
Hi Thomas, you are absolutely right. I already have a patch available for
this issue but did not append it to this RFC post (since it is basically a bug
fix). I will add that patch to the next posting as well.
I will also fix the remaining issues. Thanks for your review.
> > @@ -844,6 +873,26 @@ static int handle_eb(S390CPU *cpu, struc
> > return r;
> > }
> >
> > +static int handle_e3(S390CPU *cpu, struct kvm_run *run, uint8_t ipa1)
> > +{
> > + int r = 0;
> > +
> > + switch (ipa1) {
> > + case PRIV_E3_MPCIFC:
> > + r = kvm_mpcifc_service_call(cpu, run);
> > + break;
> > + case PRIV_E3_STPCIFC:
> > + r = kvm_stpcifc_service_call(cpu, run);
> > + break;
> > + default:
> > + r = -1;
> > + DPRINTF("KVM: unhandled PRIV: 0xe3%x\n", ipa1);
> > + break;
> > + }
> > +
> > + return r;
> > +}
>
> Could you please replace "ipa1" with "ipb1" to avoid confusion here?
>
> > static int handle_hypercall(S390CPU *cpu, struct kvm_run *run)
> > {
> > CPUS390XState *env = &cpu->env;
> > @@ -1038,6 +1087,9 @@ static int handle_instruction(S390CPU *c
> > case IPA0_EB:
> > r = handle_eb(cpu, run, ipa1);
> > break;
> > + case IPA0_E3:
> > + r = handle_e3(cpu, run, run->s390_sieic.ipb & 0xff);
> > + break;
> > case IPA0_DIAG:
> > r = handle_diag(cpu, run, run->s390_sieic.ipb);
> > break;
> > --- /dev/null
> > +++ b/target-s390x/pci_ic.c
> > @@ -0,0 +1,621 @@
> [...]
> > +
> > +int kvm_pcilg_service_call(S390CPU *cpu, struct kvm_run *run)
> > +{
> > + CPUS390XState *env = &cpu->env;
> > + S390PCIBusDevice *pbdev;
> > + uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
> > + uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
> > + PciLgStg *rp;
> > + uint64_t offset;
> > + uint64_t data;
> > +
> > + cpu_synchronize_state(CPU(cpu));
> > + rp = (PciLgStg *)&env->regs[r2];
>
> I think you have to check for r2 to be even here (and inject a
> specification exception otherwise)
>
> You should also check rp->len for a valid value and inject an operand
> exception otherwise.
>
> > + offset = env->regs[r2 + 1];
>
> You should also check for a valid offset value here (and inject an
> operand exception otherwise).
>
> ... and while you're at it - it also seems like the PCI instructions
> are privileged, so you should check for not being in the problem state
> here, too, just to be sure ;-)
>
> > + pbdev = s390_pci_find_dev_by_fh(rp->fh);
> > + if (!pbdev) {
> > + DPRINTF("pcilg no pci dev\n");
> > + return -EIO;
> > + }
> > +
> > + if (rp->pcias < 6) {
> > + MemoryRegion *mr = pbdev->pdev->io_regions[rp->pcias].memory;
> > + io_mem_read(mr, offset, &data, rp->len);
>
> Does this also work if len != 8 ? I think the data should be put in the
> rightmost byte positions, with the unused leftmost bytes set to zero...
> so some more logic might be needed here.
>
> > + } else if (rp->pcias == 15) {
> > + data = pci_host_config_read_common(
> > + pbdev->pdev, offset, pci_config_size(pbdev->pdev), rp->len);
> > +
> > + switch (rp->len) {
> > + case 1:
> > + break;
> > + case 2:
> > + data = cpu_to_le16(data);
> > + break;
> > + case 4:
> > + data = cpu_to_le32(data);
> > + break;
> > + case 8:
> > + data = cpu_to_le64(data);
> > + break;
> > + default:
> > + abort();
> > + }
> > + } else {
> > + DPRINTF("invalid space\n");
>
> Set condition code 1 in this case?
>
> > + }
> > +
> > + env->regs[r1] = data;
> > + setcc(cpu, 0);
> > + return 0;
> > +}
> > +
> > +int kvm_pcistg_service_call(S390CPU *cpu, struct kvm_run *run)
> > +{
> > + CPUS390XState *env = &cpu->env;
> > + uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
> > + uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
> > + PciLgStg *rp;
> > + uint64_t offset, data;
> > + S390PCIBusDevice *pbdev;
> > +
> > + cpu_synchronize_state(CPU(cpu));
> > + rp = (PciLgStg *)&env->regs[r2];
> > + offset = env->regs[r2 + 1];
>
> You likely need the same sanity checks here as with the pcilg function.
>
> > + pbdev = s390_pci_find_dev_by_fh(rp->fh);
> > + if (!pbdev) {
> > + DPRINTF("pcistg no pci dev\n");
> > + return -EIO;
> > + }
> > +
> > + data = env->regs[r1];
> > +
> > + if (rp->pcias < 6) {
> > + MemoryRegion *mr;
> > + if (pbdev->msix_table_bar == rp->pcias &&
> > + offset >= pbdev->msix_table_offset &&
> > + offset <= pbdev->msix_table_offset +
> > + (pbdev->msix_entries - 1) * PCI_MSIX_ENTRY_SIZE) {
> > + offset = offset - pbdev->msix_table_offset;
> > + mr = &pbdev->pdev->msix_table_mmio;
> > + } else {
> > + mr = pbdev->pdev->io_regions[rp->pcias].memory;
> > + }
> > +
> > + io_mem_write(mr, offset, data, rp->len);
>
> Same concerns as with the pcilg - this likely only works for len == 8 ?
>
> > + } else if (rp->pcias == 15) {
> > + switch (rp->len) {
> > + case 1:
> > + break;
> > + case 2:
> > + data = le16_to_cpu(data);
> > + break;
> > + case 4:
> > + data = le32_to_cpu(data);
> > + break;
> > + case 8:
> > + data = le64_to_cpu(data);
> > + break;
> > + default:
> > + abort();
> > + }
> > +
> > + pci_host_config_write_common(pbdev->pdev, offset,
> > + pci_config_size(pbdev->pdev),
> > + data, rp->len);
> > + } else {
> > + DPRINTF("pcistg invalid space\n");
>
> Set condition code 1 in this case?
>
> > + }
> > +
> > + setcc(cpu, 0);
> > + return 0;
> > +}
> > +
> > +static uint64_t guest_io_table_walk(uint64_t guest_iota,
> > + uint64_t guest_dma_address)
> > +{
> > + uint64_t sto_a, pto_a, px_a;
> > + uint64_t sto, pto, pte;
> > + uint32_t rtx, sx, px;
> > +
> > + rtx = calc_rtx(guest_dma_address);
> > + sx = calc_sx(guest_dma_address);
> > + px = calc_px(guest_dma_address);
> > +
> > + sto_a = guest_iota + rtx * sizeof(uint64_t);
> > + cpu_physical_memory_rw(sto_a, (uint8_t *)&sto, sizeof(uint64_t), 0);
> > + sto = (uint64_t)get_rt_sto(sto);
> > +
> > + pto_a = sto + sx * sizeof(uint64_t);
> > + cpu_physical_memory_rw(pto_a, (uint8_t *)&pto, sizeof(uint64_t), 0);
> > + pto = (uint64_t)get_st_pto(pto);
> > +
> > + px_a = pto + px * sizeof(uint64_t);
> > + cpu_physical_memory_rw(px_a, (uint8_t *)&pte, sizeof(uint64_t), 0);
> > +
> > + return pte;
> > +}
> > +
> > +int kvm_rpcit_service_call(S390CPU *cpu, struct kvm_run *run)
> > +{
> > + CPUS390XState *env = &cpu->env;
> > + uint8_t r1 = (run->s390_sieic.ipb & 0x00f00000) >> 20;
> > + uint8_t r2 = (run->s390_sieic.ipb & 0x000f0000) >> 16;
> > + uint32_t fh;
> > + uint64_t pte;
> > + S390PCIBusDevice *pbdev;
> > + ram_addr_t size;
> > + int flags;
> > + IOMMUTLBEntry entry;
> > +
> > + cpu_synchronize_state(CPU(cpu));
> > +
> > + fh = env->regs[r1] >> 32;
> > + size = env->regs[r2 + 1];
>
> Missing two checks again:
> - r2 should be even
> - CPU should not be in problem state
>
> > + pbdev = s390_pci_find_dev_by_fh(fh);
> > +
> > + if (!pbdev) {
> > + DPRINTF("rpcit no pci dev\n");
> > + return -EIO;
>
> I think it would be better to set condition code 3 instead.
>
> > + }
> > +
> > + pte = guest_io_table_walk(pbdev->g_iota, env->regs[r2]);
> > + flags = pte & ZPCI_PTE_FLAG_MASK;
> > + entry.target_as = &address_space_memory;
> > + entry.iova = env->regs[r2];
> > + entry.translated_addr = pte & ZPCI_PTE_ADDR_MASK;
> > + entry.addr_mask = size - 1;
> > +
> > + if (flags & ZPCI_PTE_INVALID) {
> > + entry.perm = IOMMU_NONE;
> > + } else {
> > + entry.perm = IOMMU_RW;
> > + }
> > +
> > + memory_region_notify_iommu(pci_device_iommu_address_space(
> > + pbdev->pdev)->root, entry);
> > +
> > + setcc(cpu, 0);
> > + return 0;
> > +}
> > +
> > +int kvm_sic_service_call(S390CPU *cpu, struct kvm_run *run)
> > +{
> > + DPRINTF("sic\n");
> > + return 0;
> > +}
> > +
> > +int kvm_pcistb_service_call(S390CPU *cpu, struct kvm_run *run)
> > +{
> > + CPUS390XState *env = &cpu->env;
> > + uint8_t r1 = (run->s390_sieic.ipa & 0x00f0) >> 4;
> > + uint8_t r3 = run->s390_sieic.ipa & 0x000f;
> > + PciStb *rp;
> > + uint64_t gaddr;
> > + uint64_t *uaddr, *pu;
> > + hwaddr len;
> > + S390PCIBusDevice *pbdev;
> > + MemoryRegion *mr;
> > + int i;
> > +
> > + cpu_synchronize_state(CPU(cpu));
> > +
> > + rp = (PciStb *)&env->regs[r1];
> > + gaddr = get_base_disp_rsy(cpu, run);
> > + len = rp->len;
>
> Not sure, but don't you also have to check the length and offset here
> for valid ranges?
> At least you should check for problem state again.
>
> > + pbdev = s390_pci_find_dev_by_fh(rp->fh);
> > + if (!pbdev) {
> > + DPRINTF("pcistb no pci dev fh 0x%x\n", rp->fh);
> > + return -EIO;
>
> Use cc3 instead?
>
> > + }
> > +
> > + uaddr = cpu_physical_memory_map(gaddr, &len, 0);
> > + mr = pbdev->pdev->io_regions[rp->pcias].memory;
> > +
> > + pu = uaddr;
> > + for (i = 0; i < rp->len / 8; i++) {
> > + io_mem_write(mr, env->regs[r3] + i * 8, *pu, 8);
> > + pu++;
> > + }
> > +
> > + cpu_physical_memory_unmap(uaddr, len, 0, len);
> > + setcc(cpu, 0);
> > + return 0;
> > +}
> [...]
>
> Thomas
>
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-19 11:54 [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 frank.blaschka
` (5 preceding siblings ...)
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 6/6] vfio: make vfio run on s390 platform frank.blaschka
@ 2014-09-22 20:47 ` Alex Williamson
2014-09-22 22:08 ` Alexander Graf
2014-09-24 8:47 ` Frank Blaschka
6 siblings, 2 replies; 20+ messages in thread
From: Alex Williamson @ 2014-09-22 20:47 UTC (permalink / raw)
To: frank.blaschka; +Cc: linux-s390, agraf, qemu-devel, kvm, pbonzini
On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> This set of patches implements a vfio based solution for pci
> pass-through on the s390 platform. The kernel stuff is pretty
> much straight forward, but qemu needs more work.
>
> Most interesting patch is:
> vfio: make vfio run on s390 platform
>
> I hope Alex & Alex can give me some guidance on how to do the changes
> in an appropriate way. After creating a separate iommu address space
> for each attached PCI device I can successfully run the vfio type1
> iommu. So if we could extend type1 not registering all guest memory
> (see patch) I think we do not need a special vfio iommu for s390
> for the moment.
>
> The patches implement the base pass-through support. s390 specific
> virtualization functions are currently not included. This would
> be a second step after the base support is done.
>
> kernel patches apply to linux-kvm-next
>
> KVM: s390: Enable PCI instructions
> iommu: add iommu for s390 platform
> vfio: make vfio build on s390
>
> qemu patches apply to qemu-master
>
> s390: Add PCI bus support
> s390: implement pci instruction
> vfio: make vfio run on s390 platform
>
> Thx for feedback and review comments
Sending patches as attachments makes it difficult to comment inline.
2/6
- careful of the namespace as you're changing functions from static and
exporting them
- doesn't seem like functions need to be exported, just non-static to
call from s390-iommu.c
6/6
- We shouldn't need to globally disable mmap, each VFIO region reports
whether it supports mmap and vfio-pci on s390 should indicate mmap is
not supported on the platform.
- INTx should be done the same way, the interrupt index for INTx should
report 0 count. The current code likely doesn't handle this, but it
should be easy to fix.
- s390_msix_notify() vs msix_notify() should be abstracted somewhere
else. How would an emulated PCI device with MSI-X support work?
- same for add_msi_route
- We can probably come up with a better way to determine which address
space to connect to the memory listener.
Looks like a reasonable first pass, good re-use of vfio code. Thanks,
Alex
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-22 20:47 ` [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 Alex Williamson
@ 2014-09-22 22:08 ` Alexander Graf
2014-09-22 22:28 ` Alex Williamson
2014-09-24 8:47 ` Frank Blaschka
1 sibling, 1 reply; 20+ messages in thread
From: Alexander Graf @ 2014-09-22 22:08 UTC (permalink / raw)
To: Alex Williamson, frank.blaschka; +Cc: linux-s390, qemu-devel, kvm, pbonzini
On 22.09.14 22:47, Alex Williamson wrote:
> On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
>> This set of patches implements a vfio based solution for pci
>> pass-through on the s390 platform. The kernel stuff is pretty
>> much straight forward, but qemu needs more work.
>>
>> Most interesting patch is:
>> vfio: make vfio run on s390 platform
>>
>> I hope Alex & Alex can give me some guidance on how to do the changes
>> in an appropriate way. After creating a separate iommu address space
>> for each attached PCI device I can successfully run the vfio type1
>> iommu. So if we could extend type1 not registering all guest memory
>> (see patch) I think we do not need a special vfio iommu for s390
>> for the moment.
>>
>> The patches implement the base pass-through support. s390 specific
>> virtualization functions are currently not included. This would
>> be a second step after the base support is done.
>>
>> kernel patches apply to linux-kvm-next
>>
>> KVM: s390: Enable PCI instructions
>> iommu: add iommu for s390 platform
>> vfio: make vfio build on s390
>>
>> qemu patches apply to qemu-master
>>
>> s390: Add PCI bus support
>> s390: implement pci instruction
>> vfio: make vfio run on s390 platform
>>
>> Thx for feedback and review comments
>
> Sending patches as attachments makes it difficult to comment inline.
>
> 2/6
> - careful of the namespace as you're changing functions from static and
> exporting them
> - doesn't seem like functions need to be exported, just non-static to
> call from s390-iommu.c
>
> 6/6
> - We shouldn't need to globally disable mmap, each VFIO region reports
> whether it supports mmap and vfio-pci on s390 should indicate mmap is
> not supported on the platform.
Can we emulate MMIO on mmap'ed regions by routing every memory access
through the kernel? It would be slow, but it would at least keep the
existing VFIO code compatible.
> - INTx should be done the same way, the interrupt index for INTx should
> report 0 count. The current code likely doesn't handle this, but it
> should be easy to fix.
> - s390_msix_notify() vs msix_notify() should be abstracted somewhere
> else. How would an emulated PCI device with MSI-X support work?
> - same for add_msi_route
Yes, please implement emulated PCI device support first, then do VFIO.
Alex
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-22 22:08 ` Alexander Graf
@ 2014-09-22 22:28 ` Alex Williamson
2014-09-23 8:33 ` Alexander Graf
0 siblings, 1 reply; 20+ messages in thread
From: Alex Williamson @ 2014-09-22 22:28 UTC (permalink / raw)
To: Alexander Graf; +Cc: linux-s390, frank.blaschka, qemu-devel, kvm, pbonzini
On Tue, 2014-09-23 at 00:08 +0200, Alexander Graf wrote:
>
> On 22.09.14 22:47, Alex Williamson wrote:
> > On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> >> [...]
> >
> > Sending patches as attachments makes it difficult to comment inline.
> >
> > 2/6
> > - careful of the namespace as you're changing functions from static and
> > exporting them
> > - doesn't seem like functions need to be exported, just non-static to
> > call from s390-iommu.c
> >
> > 6/6
> > - We shouldn't need to globally disable mmap, each VFIO region reports
> > whether it supports mmap and vfio-pci on s390 should indicate mmap is
> > not supported on the platform.
>
> Can we emulate MMIO on mmap'ed regions by routing every memory access
> via the kernel? It'd be slow, but at least make existing VFIO code
> compatible.
Isn't that effectively what we do when we use memory_region_init_io() vs
memory_region_init_ram_ptr() or are you suggesting something that can
handle the MMIO without bouncing out to QEMU? VFIO is already
compatible with regions that cannot be mmap'd, the kernel just needs to
report it as such. Thanks,
Alex
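[Editorial note: the distinction Alex draws here can be sketched in miniature. A region created with memory_region_init_io() traps every access through callbacks, which is how QEMU emulates MMIO without mmap, while memory_region_init_ram_ptr() backs the region with directly accessible memory. The toy model below uses invented structures, not QEMU's actual API, to show the two access paths.]

```c
#include <stdint.h>

/* Trapped region: every access bounces through a callback, the moral
 * equivalent of a region created with memory_region_init_io(). */
struct io_region {
    uint64_t (*read)(void *opaque, uint64_t addr);
    void *opaque;
};

/* Direct region: accesses hit backing memory, like a region created
 * with memory_region_init_ram_ptr() over an mmap'ed BAR. */
struct ram_region {
    uint8_t *host_ptr;
};

static unsigned long trap_count;

/* Example device callback: count the trap and return a fixed value. */
static uint64_t dev_read(void *opaque, uint64_t addr)
{
    (void)opaque; (void)addr;
    trap_count++;
    return 0x42;
}

/* Slow path: each load exits to the emulator. */
static uint64_t io_read(struct io_region *r, uint64_t addr)
{
    return r->read(r->opaque, addr);
}

/* Fast path: a plain load from the mapped pointer. */
static uint64_t ram_read(struct ram_region *r, uint64_t addr)
{
    return r->host_ptr[addr];
}
```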
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-22 22:28 ` Alex Williamson
@ 2014-09-23 8:33 ` Alexander Graf
0 siblings, 0 replies; 20+ messages in thread
From: Alexander Graf @ 2014-09-23 8:33 UTC (permalink / raw)
To: Alex Williamson; +Cc: linux-s390, frank.blaschka, qemu-devel, kvm, pbonzini
On 23.09.14 00:28, Alex Williamson wrote:
> On Tue, 2014-09-23 at 00:08 +0200, Alexander Graf wrote:
>>
>> On 22.09.14 22:47, Alex Williamson wrote:
>>> On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
>>>> [...]
>>>
>>> [...]
>>> 6/6
>>> - We shouldn't need to globally disable mmap, each VFIO region reports
>>> whether it supports mmap and vfio-pci on s390 should indicate mmap is
>>> not supported on the platform.
>>
>> Can we emulate MMIO on mmap'ed regions by routing every memory access
>> via the kernel? It'd be slow, but at least make existing VFIO code
>> compatible.
>
> Isn't that effectively what we do when we use memory_region_init_io() vs
> memory_region_init_ram_ptr() or are you suggesting something that can
> handle the MMIO without bouncing out to QEMU? VFIO is already
> compatible with regions that cannot be mmap'd, the kernel just needs to
> report it as such. Thanks,
Ah, cool. I guess I missed that part :). Then all is well.
Alex
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-22 20:47 ` [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 Alex Williamson
2014-09-22 22:08 ` Alexander Graf
@ 2014-09-24 8:47 ` Frank Blaschka
2014-09-24 16:05 ` Alex Williamson
1 sibling, 1 reply; 20+ messages in thread
From: Frank Blaschka @ 2014-09-24 8:47 UTC (permalink / raw)
To: Alex Williamson
Cc: linux-s390, frank.blaschka, kvm, agraf, qemu-devel, pbonzini
On Mon, Sep 22, 2014 at 02:47:31PM -0600, Alex Williamson wrote:
> On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> > This set of patches implements a vfio based solution for pci
> > pass-through on the s390 platform. The kernel stuff is pretty
> > much straight forward, but qemu needs more work.
> >
> > Most interesting patch is:
> > vfio: make vfio run on s390 platform
> >
> > I hope Alex & Alex can give me some guidance how to do the changes
> > in an appropriate way. After creating a separate iommmu address space
> > for each attached PCI device I can successfully run the vfio type1
> > iommu. So If we could extend type1 not registering all guest memory
> > (see patch) I think we do not need a special vfio iommu for s390
> > for the moment.
> >
> > The patches implement the base pass-through support. s390 specific
> > virtualization functions are currently not included. This would
> > be a second step after the base support is done.
> >
> > kernel patches apply to linux-kvm-next
> >
> > KVM: s390: Enable PCI instructions
> > iommu: add iommu for s390 platform
> > vfio: make vfio build on s390
> >
> > qemu patches apply to qemu-master
> >
> > s390: Add PCI bus support
> > s390: implement pci instruction
> > vfio: make vfio run on s390 platform
> >
> > Thx for feedback and review comments
>
> Sending patches as attachments makes it difficult to comment inline.
>
Sorry, I don't understand this. I sent every patch as a separate email so
you can comment directly on each patch. What do you prefer?
> 2/6
> - careful of the namespace as you're changing functions from static and
> exporting them
> - doesn't seem like functions need to be exported, just non-static to
> call from s390-iommu.c
>
Ok, will change this.
> 6/6
> - We shouldn't need to globally disable mmap, each VFIO region reports
> whether it supports mmap and vfio-pci on s390 should indicate mmap is
> not supported on the platform.
Yes, it is even better to let the kernel announce that a BAR cannot be
mmap'ed. Checking the kernel code I realized the BARs would be valid for
mmap'ing, but the s390 platform simply does not allow it. So I feel we
have to introduce a platform switch in the kernel. How about this ...
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -377,9 +377,11 @@ static long vfio_pci_ioctl(void *device_
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
+#ifndef CONFIG_S390
if (pci_resource_flags(pdev, info.index) &
IORESOURCE_MEM && info.size >= PAGE_SIZE)
info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
+#endif
break;
case VFIO_PCI_ROM_REGION_INDEX:
{
> - INTx should be done the same way, the interrupt index for INTx should
> report 0 count. The current code likely doesn't handle this, but it
> should be easy to fix.
The current code is fine. The problem is that the card reports an
interrupt pin (PCI_INTERRUPT_PIN), but again the platform does not
support INTx at all. So we need a platform switch here as well.
> - s390_msix_notify() vs msix_notify() should be abstracted somewhere
The platform does not have an APIC, so there is nothing we could emulate
in qemu to make the existing msix_notify() work.
> else. How would an emulated PCI device with MSI-X support work?
> - same for add_msi_route
Same here, we have to set up an adapter route due to the fact that MSI-X
notifications are delivered as adapter (thin) interrupts on the platform.
Any suggestion or idea how a better abstraction could look?
With all the platform constraints I was not able to find a suitable
emulated device. Remember, s390:
- does not support IO BARs
- does not support INTx, only MSI-X
- in reality there is currently only a PCI network card available
- the platform does not support fancy I/O like USB or audio :-)
So we don't even have kernel (host and guest) support for this
kind of device.
> - We can probably come up with a better way to determine which address
> space to connect to the memory listener.
Any suggestion or idea for that?
>
> Looks like a reasonable first pass, good re-use of vfio code. Thanks,
>
> Alex
>
Thx,
Frank
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-24 8:47 ` Frank Blaschka
@ 2014-09-24 16:05 ` Alex Williamson
2014-09-26 6:45 ` Frank Blaschka
0 siblings, 1 reply; 20+ messages in thread
From: Alex Williamson @ 2014-09-24 16:05 UTC (permalink / raw)
To: Frank Blaschka
Cc: linux-s390, frank.blaschka, kvm, agraf, qemu-devel, pbonzini
On Wed, 2014-09-24 at 10:47 +0200, Frank Blaschka wrote:
> On Mon, Sep 22, 2014 at 02:47:31PM -0600, Alex Williamson wrote:
> > On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> > > [...]
> >
> > Sending patches as attachments makes it difficult to comment inline.
> >
> Sorry, don't understand this. I sent every patch as separate email so
> you can comment directly on the patch. What do you prefer?
The patches in each email are showing up as attachments in my mail
client. Is it just me?
> > [...]
>
> > 6/6
> > - We shouldn't need to globally disable mmap, each VFIO region reports
> > whether it supports mmap and vfio-pci on s390 should indicate mmap is
> > not supported on the platform.
> Yes, this is even better to let the kernel announce a BAR can not be
> mmap'ed. Checking the kernel code I realized the BARs are valid for
> mmap'ing but the s390 platform does simply not allow this. So I feal we
> have to introduce a platform switch in kernel. How about this ...
>
> --- a/drivers/vfio/pci/vfio_pci.c
> +++ b/drivers/vfio/pci/vfio_pci.c
> @@ -377,9 +377,11 @@ static long vfio_pci_ioctl(void *device_
>
> info.flags = VFIO_REGION_INFO_FLAG_READ |
> VFIO_REGION_INFO_FLAG_WRITE;
> +#ifndef CONFIG_S390
> if (pci_resource_flags(pdev, info.index) &
> IORESOURCE_MEM && info.size >= PAGE_SIZE)
> info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
> +#endif
> break;
> case VFIO_PCI_ROM_REGION_INDEX:
> {
Maybe pull it out into a function. Also, is there some capability or
feature we can test rather than just the architecture? I'd prefer it to
be excluded because of a platform feature that prevents it rather than
the overall architecture itself.
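[Editorial note: a sketch of what such a helper might look like, reduced here to self-contained logic rather than a real kernel patch. The function names and the platform_allows_mmap predicate are hypothetical; in the kernel the latter would query a platform feature, as Alex suggests, rather than #ifdef on the architecture. The flag values match the VFIO uapi.]

```c
#include <stdbool.h>
#include <stdint.h>

#define VFIO_REGION_INFO_FLAG_READ  (1 << 0)
#define VFIO_REGION_INFO_FLAG_WRITE (1 << 1)
#define VFIO_REGION_INFO_FLAG_MMAP  (1 << 2)

/* Hypothetical helper: decide whether a BAR may be mmap'ed, keyed on a
 * platform capability instead of #ifdef CONFIG_S390. */
static bool bar_mmap_supported(bool platform_allows_mmap,
                               bool is_mem_resource,
                               uint64_t size, uint64_t page_size)
{
    return platform_allows_mmap && is_mem_resource && size >= page_size;
}

/* What the VFIO_DEVICE_GET_REGION_INFO path would compute for a BAR. */
static uint32_t region_info_flags(bool platform_allows_mmap,
                                  bool is_mem_resource,
                                  uint64_t size, uint64_t page_size)
{
    uint32_t flags = VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE;

    if (bar_mmap_supported(platform_allows_mmap, is_mem_resource,
                           size, page_size))
        flags |= VFIO_REGION_INFO_FLAG_MMAP;
    return flags;
}
```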
> > - INTx should be done the same way, the interrupt index for INTx should
> > report 0 count. The current code likely doesn't handle this, but it
> > should be easy to fix.
> The current code is fine. Problem is the card reports an interrupt index
> (PCI_INTERRUPT_PIN) but again the platform does not support INTx at all.
> So we need a platform switch as well.
Yep, let's try to do something consistent with the MMAP testing.
> > - s390_msix_notify() vs msix_notify() should be abstracted somewhere
>
> Platform does not have have an apic so there is nothing we could emulate
> in qemu to make the existing msix_notify() work.
>
> > else. How would an emulated PCI device with MSI-X support work?
> > - same for add_msi_route
> Same here, we have to setup an adapter route due to the fact MSIX
> notifications are delivered as adapter/thin IRQs on the platform.
>
> Any suggestion or idea how a better abstraction could look like?
>
> With all the platform constraints I was not able to find a suitable
> emulated device. Remember s390:
> - does not support IO BARs
> - does not support INTx only MSIX
What about MSI (non-X)?
> - in reality currently there is only a PCI network card available
On the physical hardware?
> - platform does not support fancy I/O like usb or audio :-)
> So we don't even have kernel (host and guest) support for this
> kind of devices.
Does that mean you couldn't? What about virtio-net-pci with MSI-X
interrupts or emulated xhci with MSI-X interrupts, couldn't those be
supported if s390 MSI-X were properly integrated into the QEMU MSI-X
API? vfio-pci isn't the right level to be switching between the
standard API and the s390 API.
> > - We can probably come up with a better way to determine which address
> > space to connect to the memory listener.
> Any suggestion or idea for that?
I imagine you can tell by the address space of the device whether it
lives behind an emulated IOMMU or not and therefore pick the closest
address space for the notifier, the IOMMU or the system. Thanks,
Alex
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-24 16:05 ` Alex Williamson
@ 2014-09-26 6:45 ` Frank Blaschka
2014-09-26 19:59 ` Alex Williamson
0 siblings, 1 reply; 20+ messages in thread
From: Frank Blaschka @ 2014-09-26 6:45 UTC (permalink / raw)
To: Alex Williamson
Cc: linux-s390, frank.blaschka, kvm, agraf, qemu-devel, pbonzini
On Wed, Sep 24, 2014 at 10:05:57AM -0600, Alex Williamson wrote:
> On Wed, 2014-09-24 at 10:47 +0200, Frank Blaschka wrote:
> > On Mon, Sep 22, 2014 at 02:47:31PM -0600, Alex Williamson wrote:
> > > On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> > > > [...]
> > >
> > > [...]
> >
> > > [...]
>
> Maybe pull it out into a function. Also, is there some capability or
> feature we can test rather than just the architecture? I'd prefer it to
> be excluded because of a platform feature that prevents it rather than
> the overall architecture itself.
>
Ok, I understand. There is no such capability or feature, so I will go
with the function.
> > > - INTx should be done the same way, the interrupt index for INTx should
> > > report 0 count. The current code likely doesn't handle this, but it
> > > should be easy to fix.
> > The current code is fine. Problem is the card reports an interrupt index
> > (PCI_INTERRUPT_PIN) but again the platform does not support INTx at all.
> > So we need a platform switch as well.
>
> Yep, let's try to do something consistent with the MMAP testing.
>
Do you mean let the kernel announce this also?
> > > - s390_msix_notify() vs msix_notify() should be abstracted somewhere
> >
> > Platform does not have have an apic so there is nothing we could emulate
> > in qemu to make the existing msix_notify() work.
> >
> > > else. How would an emulated PCI device with MSI-X support work?
> > > - same for add_msi_route
> > Same here, we have to setup an adapter route due to the fact MSIX
> > notifications are delivered as adapter/thin IRQs on the platform.
> >
> > Any suggestion or idea how a better abstraction could look like?
> >
> > With all the platform constraints I was not able to find a suitable
> > emulated device. Remember s390:
> > - does not support IO BARs
> > - does not support INTx only MSIX
>
> What about MSI (non-X)?
In theory MSI should also work, but I have not seen it used in reality.
>
> > - in reality currently there is only a PCI network card available
>
> On the physical hardware?
>
yes
> > - platform does not support fancy I/O like usb or audio :-)
> > So we don't even have kernel (host and guest) support for this
> > kind of devices.
>
> Does that mean you couldn't? What about virtio-net-pci with MSI-X
> interrupts or emulated xhci with MSI-X interrupts, couldn't those be
> supported if s390 MSI-X were properly integrated into the QEMU MSI-X
> API? vfio-pci isn't the right level to be switching between the
> standard API and the s390 API.
>
Yes, I also think vfio might not be the best place to switch APIs. I will
try to move the s390 specifics to the MSI-X level.
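[Editorial note: one way to move the switch out of vfio-pci, sketched as a dispatch table: msix_notify() calls through per-platform ops, so an s390 adapter-interrupt backend and the default APIC-style backend can coexist behind one API. Everything below is invented for illustration; it is not QEMU's actual MSI-X code.]

```c
#include <stdint.h>

/* Per-platform MSI-X delivery ops (hypothetical abstraction layer). */
struct msix_ops {
    void (*notify)(unsigned vector);
};

static unsigned last_vector;
static const char *last_path;

/* Default backend: write the MSI-X message, which the emulated APIC
 * (or in-kernel irqchip) turns into a guest interrupt. */
static void apic_style_notify(unsigned vector)
{
    last_vector = vector;
    last_path = "apic";
}

/* s390 backend: deliver the same event as an adapter (thin) interrupt. */
static void s390_adapter_notify(unsigned vector)
{
    last_vector = vector;
    last_path = "adapter";
}

static const struct msix_ops apic_ops = { apic_style_notify };
static const struct msix_ops s390_ops = { s390_adapter_notify };

/* Selected once at machine setup time (sketch). */
static const struct msix_ops *cur_msix_ops;

/* The one entry point devices call; no platform #ifdefs at call sites. */
static void msix_notify(unsigned vector)
{
    cur_msix_ops->notify(vector);
}
```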
> > > - We can probably come up with a better way to determine which address
> > > space to connect to the memory listener.
> > Any suggestion or idea for that?
>
> I imagine you can tell by the address space of the device whether it
> lives behind an emulated IOMMU or not and therefore pick the closest
> address space for the notifier, the IOMMU or the system. Thanks,
>
I do not understand this in detail; can you elaborate a little bit more?
Or maybe provide a code snippet?
Thx Frank
> Alex
>
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-26 6:45 ` Frank Blaschka
@ 2014-09-26 19:59 ` Alex Williamson
2014-10-01 9:11 ` Frank Blaschka
0 siblings, 1 reply; 20+ messages in thread
From: Alex Williamson @ 2014-09-26 19:59 UTC (permalink / raw)
To: Frank Blaschka
Cc: linux-s390, frank.blaschka, kvm, agraf, qemu-devel, pbonzini
On Fri, 2014-09-26 at 08:45 +0200, Frank Blaschka wrote:
> On Wed, Sep 24, 2014 at 10:05:57AM -0600, Alex Williamson wrote:
> > On Wed, 2014-09-24 at 10:47 +0200, Frank Blaschka wrote:
> > > On Mon, Sep 22, 2014 at 02:47:31PM -0600, Alex Williamson wrote:
> > > > On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> > > > > [...]
> > > >
> > > > [...]
>
> > > > - INTx should be done the same way, the interrupt index for INTx should
> > > > report 0 count. The current code likely doesn't handle this, but it
> > > > should be easy to fix.
> > > The current code is fine. Problem is the card reports an interrupt index
> > > (PCI_INTERRUPT_PIN) but again the platform does not support INTx at all.
> > > So we need a platform switch as well.
> >
> > Yep, let's try to do something consistent with the MMAP testing.
> >
>
> Do you mean let the kernel announce this also?
Yes, the kernel reports a count of 0 in vfio_irq_info when the interrupt
type is not supported. We do this for MSI/X already, but it's assumed
that INTx is always present since it's part of what most platforms would
consider the minimal feature set.
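[Editorial note: what Alex describes might look like the following, reduced to the reporting logic. The indices mirror the VFIO_PCI_*_IRQ_INDEX values; the platform_has_intx flag is a hypothetical stand-in for whatever platform feature test the kernel would actually use.]

```c
#include <stdint.h>

enum {
    VFIO_PCI_INTX_IRQ_INDEX,
    VFIO_PCI_MSI_IRQ_INDEX,
    VFIO_PCI_MSIX_IRQ_INDEX,
};

/* Sketch of the count that VFIO_DEVICE_GET_IRQ_INFO would report: an
 * unsupported interrupt type reports 0, so userspace never tries to use
 * it -- the same convention MSI/MSI-X already follow when the capability
 * is absent. */
static uint32_t vfio_irq_count(int index, int platform_has_intx,
                               uint32_t msi_vectors, uint32_t msix_vectors)
{
    switch (index) {
    case VFIO_PCI_INTX_IRQ_INDEX:
        return platform_has_intx ? 1 : 0;   /* s390 would report 0 */
    case VFIO_PCI_MSI_IRQ_INDEX:
        return msi_vectors;                 /* 0 when capability absent */
    case VFIO_PCI_MSIX_IRQ_INDEX:
        return msix_vectors;
    default:
        return 0;
    }
}
```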
> > > > [...]
>
> > > > - We can probably come up with a better way to determine which address
> > > > space to connect to the memory listener.
> > > Any suggestion or idea for that?
> >
> > I imagine you can tell by the address space of the device whether it
> > lives behind an emulated IOMMU or not and therefore pick the closest
> > address space for the notifier, the IOMMU or the system. Thanks,
> >
>
> I do not undertand this in detail, can you elaborate a little bit more on this?
> Or maybe provide a code snip?
Well, I'm mostly making things up, but my assumption is that the device
appears behind an IOMMU in the guest and by walking through address
spaces from the device, we should be able to figure that out and avoid
using a platform #ifdef. IOW, it's not s390 that makes us need to use a
different address space, it's the guest topology of having an emulated
IOMMU for the device, and that's what we should be keying on rather than
the arch. Thanks,
Alex
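[Editorial note: one reading of Alex's suggestion, reduced to a sketch. The struct names and the iommu_as field are invented for illustration; in QEMU the lookup would go through the device's actual AddressSpace topology rather than a flag.]

```c
#include <stddef.h>

struct address_space { const char *name; };

/* System address space: guest physical memory, used when the device is
 * not behind an emulated IOMMU. */
static struct address_space address_space_memory = { "memory" };

struct pci_device {
    /* Non-NULL when the guest topology places the device behind an
     * emulated IOMMU (invented field for this sketch). */
    struct address_space *iommu_as;
};

/* Pick the address space the vfio memory listener should follow: key on
 * the guest topology (is there a vIOMMU in front of the device?), not on
 * the host architecture. */
static struct address_space *listener_address_space(struct pci_device *dev)
{
    return dev->iommu_as ? dev->iommu_as : &address_space_memory;
}
```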
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-09-26 19:59 ` Alex Williamson
@ 2014-10-01 9:11 ` Frank Blaschka
2014-10-01 17:26 ` Alex Williamson
0 siblings, 1 reply; 20+ messages in thread
From: Frank Blaschka @ 2014-10-01 9:11 UTC (permalink / raw)
To: Alex Williamson
Cc: linux-s390, frank.blaschka, kvm, agraf, qemu-devel, pbonzini
On Fri, Sep 26, 2014 at 01:59:40PM -0600, Alex Williamson wrote:
> On Fri, 2014-09-26 at 08:45 +0200, Frank Blaschka wrote:
> > On Wed, Sep 24, 2014 at 10:05:57AM -0600, Alex Williamson wrote:
> > > On Wed, 2014-09-24 at 10:47 +0200, Frank Blaschka wrote:
> > > > On Mon, Sep 22, 2014 at 02:47:31PM -0600, Alex Williamson wrote:
> > > > > On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> > > > > > This set of patches implements a vfio based solution for pci
> > > > > > pass-through on the s390 platform. The kernel stuff is pretty
> > > > > > much straight forward, but qemu needs more work.
> > > > > >
> > > > > > Most interesting patch is:
> > > > > > vfio: make vfio run on s390 platform
> > > > > >
> > > > > > I hope Alex & Alex can give me some guidance how to do the changes
> > > > > > in an appropriate way. After creating a separate iommu address space
> > > > > > for each attached PCI device I can successfully run the vfio type1
> > > > > > iommu. So if we could extend type1 to not register all guest memory
> > > > > > (see patch) I think we do not need a special vfio iommu for s390
> > > > > > for the moment.
> > > > > >
> > > > > > The patches implement the base pass-through support. s390 specific
> > > > > > virtualization functions are currently not included. This would
> > > > > > be a second step after the base support is done.
> > > > > >
> > > > > > kernel patches apply to linux-kvm-next
> > > > > >
> > > > > > KVM: s390: Enable PCI instructions
> > > > > > iommu: add iommu for s390 platform
> > > > > > vfio: make vfio build on s390
> > > > > >
> > > > > > qemu patches apply to qemu-master
> > > > > >
> > > > > > s390: Add PCI bus support
> > > > > > s390: implement pci instruction
> > > > > > vfio: make vfio run on s390 platform
> > > > > >
> > > > > > Thx for feedback and review comments
> > > > >
> > > > > Sending patches as attachments makes it difficult to comment inline.
> > > > >
> > > > Sorry, don't understand this. I sent every patch as separate email so
> > > > you can comment directly on the patch. What do you prefer?
> > >
> > > The patches in each email are showing up as attachments in my mail
> > > client. Is it just me?
> > >
> > > > > 2/6
> > > > > - careful of the namespace as you're changing functions from static and
> > > > > exporting them
> > > > > - doesn't seem like functions need to be exported, just non-static to
> > > > > call from s390-iommu.c
> > > > >
> > > > Ok, will change this.
> > > >
> > > > > 6/6
> > > > > - We shouldn't need to globally disable mmap, each VFIO region reports
> > > > > whether it supports mmap and vfio-pci on s390 should indicate mmap is
> > > > > not supported on the platform.
> > > > Yes, this is even better to let the kernel announce a BAR can not be
> > > > mmap'ed. Checking the kernel code I realized the BARs are valid for
> > > > mmap'ing but the s390 platform simply does not allow this. So I feel we
> > > > have to introduce a platform switch in the kernel. How about this ...
> > > >
> > > > --- a/drivers/vfio/pci/vfio_pci.c
> > > > +++ b/drivers/vfio/pci/vfio_pci.c
> > > > @@ -377,9 +377,11 @@ static long vfio_pci_ioctl(void *device_
> > > >
> > > > info.flags = VFIO_REGION_INFO_FLAG_READ |
> > > > VFIO_REGION_INFO_FLAG_WRITE;
> > > > +#ifndef CONFIG_S390
> > > > if (pci_resource_flags(pdev, info.index) &
> > > > IORESOURCE_MEM && info.size >= PAGE_SIZE)
> > > > info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
> > > > +#endif
> > > > break;
> > > > case VFIO_PCI_ROM_REGION_INDEX:
> > > > {
> > >
> > > Maybe pull it out into a function. Also, is there some capability or
> > > feature we can test rather than just the architecture? I'd prefer it to
> > > be excluded because of a platform feature that prevents it rather than
> > > the overall architecture itself.
> > >
> >
> > Ok, understand this. There is no capability or feature so I will go with
> > the function.
> >
> > > > > - INTx should be done the same way, the interrupt index for INTx should
> > > > > report 0 count. The current code likely doesn't handle this, but it
> > > > > should be easy to fix.
> > > > The current code is fine. Problem is the card reports an interrupt index
> > > > (PCI_INTERRUPT_PIN) but again the platform does not support INTx at all.
> > > > So we need a platform switch as well.
> > >
> > > Yep, let's try to do something consistent with the MMAP testing.
> > >
> >
> > Do you mean let the kernel announce this also?
>
> Yes, the kernel reports a count of 0 in vfio_irq_info when the interrupt
> type is not supported. We do this for MSI/X already, but it's assumed
> that INTx is always present since it's part of what most platforms would
> consider the minimal feature set.
>
> > > > > - s390_msix_notify() vs msix_notify() should be abstracted somewhere
> > > >
> > > > Platform does not have an APIC so there is nothing we could emulate
> > > > in qemu to make the existing msix_notify() work.
> > > >
> > > > > else. How would an emulated PCI device with MSI-X support work?
> > > > > - same for add_msi_route
> > > > Same here, we have to set up an adapter route due to the fact that MSI-X
> > > > notifications are delivered as adapter/thin IRQs on the platform.
> > > >
> > > > Any suggestion or idea what a better abstraction could look like?
> > > >
> > > > With all the platform constraints I was not able to find a suitable
> > > > emulated device. Remember s390:
> > > > - does not support IO BARs
> > > > - does not support INTx, only MSI-X
> > >
> > > What about MSI (non-X)?
> >
> > In theory MSI should also work, but I have not seen it in reality.
> >
> > >
> > > > - in reality currently there is only a PCI network card available
> > >
> > > On the physical hardware?
> > >
> >
> > yes
> >
> > > > - platform does not support fancy I/O like usb or audio :-)
> > > > So we don't even have kernel (host and guest) support for this
> > > > kind of devices.
> > >
> > > Does that mean you couldn't? What about virtio-net-pci with MSI-X
> > > interrupts or emulated xhci with MSI-X interrupts, couldn't those be
> > > supported if s390 MSI-X were properly integrated into the QEMU MSI-X
> > > API? vfio-pci isn't the right level to be switching between the
> > > standard API and the s390 API.
> > >
> >
> > Yes, I also think vfio might not be the best place to switch API. Will try
> > to move s390 specifics to MSI-X level.
> >
> > > > > - We can probably come up with a better way to determine which address
> > > > > space to connect to the memory listener.
> > > > Any suggestion or idea for that?
> > >
> > > I imagine you can tell by the address space of the device whether it
> > > lives behind an emulated IOMMU or not and therefore pick the closest
> > > address space for the notifier, the IOMMU or the system. Thanks,
> > >
> >
> > I do not understand this in detail; can you elaborate a little bit more on this?
> > Or maybe provide a code snippet?
>
> Well, I'm mostly making things up, but my assumption is that the device
> appears behind an IOMMU in the guest and by walking through address
> spaces from the device, we should be able to figure that out and avoid
> using a platform #ifdef. IOW, it's not s390 that makes us need to use a
> different address space, it's the guest topology of having an emulated
> IOMMU for the device, and that's what we should be keying on rather than
> the arch. Thanks,
>
Do you think this would be sufficient?
@@ -3689,8 +3701,13 @@ static int vfio_connect_container(VFIOGr
container->iommu_data.type1.listener = vfio_memory_listener;
container->iommu_data.release = vfio_listener_release;
- memory_listener_register(&container->iommu_data.type1.listener,
- &address_space_memory);
+ if (memory_region_is_iommu(as->root)) {
+ memory_listener_register(&container->iommu_data.type1.listener,
+ container->space->as);
+ } else {
+ memory_listener_register(&container->iommu_data.type1.listener,
+ &address_space_memory);
+ }
if (container->iommu_data.type1.error) {
ret = container->iommu_data.type1.error;
If not, what else has to be checked? What are the indications for adding the memory
listener to the container address space versus address_space_memory?
Thx for your help.
> Alex
>
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-10-01 9:11 ` Frank Blaschka
@ 2014-10-01 17:26 ` Alex Williamson
2014-10-02 7:21 ` Frank Blaschka
0 siblings, 1 reply; 20+ messages in thread
From: Alex Williamson @ 2014-10-01 17:26 UTC (permalink / raw)
To: Frank Blaschka
Cc: linux-s390, frank.blaschka, kvm, Alexey Kardashevskiy, agraf,
qemu-devel, pbonzini
On Wed, 2014-10-01 at 11:11 +0200, Frank Blaschka wrote:
> On Fri, Sep 26, 2014 at 01:59:40PM -0600, Alex Williamson wrote:
> > On Fri, 2014-09-26 at 08:45 +0200, Frank Blaschka wrote:
> > > On Wed, Sep 24, 2014 at 10:05:57AM -0600, Alex Williamson wrote:
> > > > On Wed, 2014-09-24 at 10:47 +0200, Frank Blaschka wrote:
> > > > > On Mon, Sep 22, 2014 at 02:47:31PM -0600, Alex Williamson wrote:
> > > > > > On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> > > > > > > This set of patches implements a vfio based solution for pci
> > > > > > > pass-through on the s390 platform. The kernel stuff is pretty
> > > > > > > much straight forward, but qemu needs more work.
> > > > > > >
> > > > > > > Most interesting patch is:
> > > > > > > vfio: make vfio run on s390 platform
> > > > > > >
> > > > > > > I hope Alex & Alex can give me some guidance how to do the changes
> > > > > > > in an appropriate way. After creating a separate iommu address space
> > > > > > > for each attached PCI device I can successfully run the vfio type1
> > > > > > > iommu. So if we could extend type1 to not register all guest memory
> > > > > > > (see patch) I think we do not need a special vfio iommu for s390
> > > > > > > for the moment.
> > > > > > >
> > > > > > > The patches implement the base pass-through support. s390 specific
> > > > > > > virtualization functions are currently not included. This would
> > > > > > > be a second step after the base support is done.
> > > > > > >
> > > > > > > kernel patches apply to linux-kvm-next
> > > > > > >
> > > > > > > KVM: s390: Enable PCI instructions
> > > > > > > iommu: add iommu for s390 platform
> > > > > > > vfio: make vfio build on s390
> > > > > > >
> > > > > > > qemu patches apply to qemu-master
> > > > > > >
> > > > > > > s390: Add PCI bus support
> > > > > > > s390: implement pci instruction
> > > > > > > vfio: make vfio run on s390 platform
> > > > > > >
> > > > > > > Thx for feedback and review comments
> > > > > >
> > > > > > Sending patches as attachments makes it difficult to comment inline.
> > > > > >
> > > > > Sorry, don't understand this. I sent every patch as separate email so
> > > > > you can comment directly on the patch. What do you prefer?
> > > >
> > > > The patches in each email are showing up as attachments in my mail
> > > > client. Is it just me?
> > > >
> > > > > > 2/6
> > > > > > - careful of the namespace as you're changing functions from static and
> > > > > > exporting them
> > > > > > - doesn't seem like functions need to be exported, just non-static to
> > > > > > call from s390-iommu.c
> > > > > >
> > > > > Ok, will change this.
> > > > >
> > > > > > 6/6
> > > > > > - We shouldn't need to globally disable mmap, each VFIO region reports
> > > > > > whether it supports mmap and vfio-pci on s390 should indicate mmap is
> > > > > > not supported on the platform.
> > > > > Yes, this is even better to let the kernel announce a BAR can not be
> > > > > mmap'ed. Checking the kernel code I realized the BARs are valid for
> > > > > mmap'ing but the s390 platform simply does not allow this. So I feel we
> > > > > have to introduce a platform switch in the kernel. How about this ...
> > > > >
> > > > > --- a/drivers/vfio/pci/vfio_pci.c
> > > > > +++ b/drivers/vfio/pci/vfio_pci.c
> > > > > @@ -377,9 +377,11 @@ static long vfio_pci_ioctl(void *device_
> > > > >
> > > > > info.flags = VFIO_REGION_INFO_FLAG_READ |
> > > > > VFIO_REGION_INFO_FLAG_WRITE;
> > > > > +#ifndef CONFIG_S390
> > > > > if (pci_resource_flags(pdev, info.index) &
> > > > > IORESOURCE_MEM && info.size >= PAGE_SIZE)
> > > > > info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
> > > > > +#endif
> > > > > break;
> > > > > case VFIO_PCI_ROM_REGION_INDEX:
> > > > > {
> > > >
> > > > Maybe pull it out into a function. Also, is there some capability or
> > > > feature we can test rather than just the architecture? I'd prefer it to
> > > > be excluded because of a platform feature that prevents it rather than
> > > > the overall architecture itself.
> > > >
> > >
> > > Ok, understand this. There is no capability or feature so I will go with
> > > the function.
> > >
> > > > > > - INTx should be done the same way, the interrupt index for INTx should
> > > > > > report 0 count. The current code likely doesn't handle this, but it
> > > > > > should be easy to fix.
> > > > > The current code is fine. Problem is the card reports an interrupt index
> > > > > (PCI_INTERRUPT_PIN) but again the platform does not support INTx at all.
> > > > > So we need a platform switch as well.
> > > >
> > > > Yep, let's try to do something consistent with the MMAP testing.
> > > >
> > >
> > > Do you mean let the kernel announce this also?
> >
> > Yes, the kernel reports a count of 0 in vfio_irq_info when the interrupt
> > type is not supported. We do this for MSI/X already, but it's assumed
> > that INTx is always present since it's part of what most platforms would
> > consider the minimal feature set.
> >
> > > > > > - s390_msix_notify() vs msix_notify() should be abstracted somewhere
> > > > >
> > > > > Platform does not have an APIC so there is nothing we could emulate
> > > > > in qemu to make the existing msix_notify() work.
> > > > >
> > > > > > else. How would an emulated PCI device with MSI-X support work?
> > > > > > - same for add_msi_route
> > > > > Same here, we have to set up an adapter route due to the fact that MSI-X
> > > > > notifications are delivered as adapter/thin IRQs on the platform.
> > > > >
> > > > > Any suggestion or idea what a better abstraction could look like?
> > > > >
> > > > > With all the platform constraints I was not able to find a suitable
> > > > > emulated device. Remember s390:
> > > > > - does not support IO BARs
> > > > > - does not support INTx, only MSI-X
> > > >
> > > > What about MSI (non-X)?
> > >
> > > In theory MSI should also work, but I have not seen it in reality.
> > >
> > > >
> > > > > - in reality currently there is only a PCI network card available
> > > >
> > > > On the physical hardware?
> > > >
> > >
> > > yes
> > >
> > > > > - platform does not support fancy I/O like usb or audio :-)
> > > > > So we don't even have kernel (host and guest) support for this
> > > > > kind of devices.
> > > >
> > > > Does that mean you couldn't? What about virtio-net-pci with MSI-X
> > > > interrupts or emulated xhci with MSI-X interrupts, couldn't those be
> > > > supported if s390 MSI-X were properly integrated into the QEMU MSI-X
> > > > API? vfio-pci isn't the right level to be switching between the
> > > > standard API and the s390 API.
> > > >
> > >
> > > Yes, I also think vfio might not be the best place to switch API. Will try
> > > to move s390 specifics to MSI-X level.
> > >
> > > > > > - We can probably come up with a better way to determine which address
> > > > > > space to connect to the memory listener.
> > > > > Any suggestion or idea for that?
> > > >
> > > > I imagine you can tell by the address space of the device whether it
> > > > lives behind an emulated IOMMU or not and therefore pick the closest
> > > > address space for the notifier, the IOMMU or the system. Thanks,
> > > >
> > >
> > > I do not understand this in detail; can you elaborate a little bit more on this?
> > > Or maybe provide a code snippet?
> >
> > Well, I'm mostly making things up, but my assumption is that the device
> > appears behind an IOMMU in the guest and by walking through address
> > spaces from the device, we should be able to figure that out and avoid
> > using a platform #ifdef. IOW, it's not s390 that makes us need to use a
> > different address space, it's the guest topology of having an emulated
> > IOMMU for the device, and that's what we should be keying on rather than
> > the arch. Thanks,
> >
>
> Do you think this would be sufficient?
>
> @@ -3689,8 +3701,13 @@ static int vfio_connect_container(VFIOGr
> container->iommu_data.type1.listener = vfio_memory_listener;
> container->iommu_data.release = vfio_listener_release;
>
> - memory_listener_register(&container->iommu_data.type1.listener,
> - &address_space_memory);
> + if (memory_region_is_iommu(as->root)) {
> + memory_listener_register(&container->iommu_data.type1.listener,
> + container->space->as);
> + } else {
> + memory_listener_register(&container->iommu_data.type1.listener,
> + &address_space_memory);
> + }
>
> if (container->iommu_data.type1.error) {
> ret = container->iommu_data.type1.error;
>
> If not, what else has to be checked? What are the indications for adding the memory
> listener to the container address space versus address_space_memory?
> Thx for your help.
Sure, that's what I was asking for, but as I'm looking at it, shouldn't
we be able to use container->space->as regardless? It seems like an
oversight that address_space_memory wasn't replaced with
container->space->as when Alexey added support for multiple address
spaces.
The container->space->as value comes from
pci_device_iommu_address_space() which walks the PCI bus up from the
device looking for an IOMMU address space. If it doesn't find one
(likely) it uses address_space_memory. So I suspect if we fix the code,
there's no need for any sort of switch. Thanks,
Alex
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390
2014-10-01 17:26 ` Alex Williamson
@ 2014-10-02 7:21 ` Frank Blaschka
0 siblings, 0 replies; 20+ messages in thread
From: Frank Blaschka @ 2014-10-02 7:21 UTC (permalink / raw)
To: Alex Williamson
Cc: linux-s390, frank.blaschka, kvm, Alexey Kardashevskiy, agraf,
qemu-devel, pbonzini
On Wed, Oct 01, 2014 at 11:26:51AM -0600, Alex Williamson wrote:
> On Wed, 2014-10-01 at 11:11 +0200, Frank Blaschka wrote:
> > On Fri, Sep 26, 2014 at 01:59:40PM -0600, Alex Williamson wrote:
> > > On Fri, 2014-09-26 at 08:45 +0200, Frank Blaschka wrote:
> > > > On Wed, Sep 24, 2014 at 10:05:57AM -0600, Alex Williamson wrote:
> > > > > On Wed, 2014-09-24 at 10:47 +0200, Frank Blaschka wrote:
> > > > > > On Mon, Sep 22, 2014 at 02:47:31PM -0600, Alex Williamson wrote:
> > > > > > > On Fri, 2014-09-19 at 13:54 +0200, frank.blaschka@de.ibm.com wrote:
> > > > > > > > This set of patches implements a vfio based solution for pci
> > > > > > > > pass-through on the s390 platform. The kernel stuff is pretty
> > > > > > > > much straight forward, but qemu needs more work.
> > > > > > > >
> > > > > > > > Most interesting patch is:
> > > > > > > > vfio: make vfio run on s390 platform
> > > > > > > >
> > > > > > > > I hope Alex & Alex can give me some guidance how to do the changes
> > > > > > > > in an appropriate way. After creating a separate iommu address space
> > > > > > > > for each attached PCI device I can successfully run the vfio type1
> > > > > > > > iommu. So if we could extend type1 to not register all guest memory
> > > > > > > > (see patch) I think we do not need a special vfio iommu for s390
> > > > > > > > for the moment.
> > > > > > > >
> > > > > > > > The patches implement the base pass-through support. s390 specific
> > > > > > > > virtualization functions are currently not included. This would
> > > > > > > > be a second step after the base support is done.
> > > > > > > >
> > > > > > > > kernel patches apply to linux-kvm-next
> > > > > > > >
> > > > > > > > KVM: s390: Enable PCI instructions
> > > > > > > > iommu: add iommu for s390 platform
> > > > > > > > vfio: make vfio build on s390
> > > > > > > >
> > > > > > > > qemu patches apply to qemu-master
> > > > > > > >
> > > > > > > > s390: Add PCI bus support
> > > > > > > > s390: implement pci instruction
> > > > > > > > vfio: make vfio run on s390 platform
> > > > > > > >
> > > > > > > > Thx for feedback and review comments
> > > > > > >
> > > > > > > Sending patches as attachments makes it difficult to comment inline.
> > > > > > >
> > > > > > Sorry, don't understand this. I sent every patch as separate email so
> > > > > > you can comment directly on the patch. What do you prefer?
> > > > >
> > > > > The patches in each email are showing up as attachments in my mail
> > > > > client. Is it just me?
> > > > >
> > > > > > > 2/6
> > > > > > > - careful of the namespace as you're changing functions from static and
> > > > > > > exporting them
> > > > > > > - doesn't seem like functions need to be exported, just non-static to
> > > > > > > call from s390-iommu.c
> > > > > > >
> > > > > > Ok, will change this.
> > > > > >
> > > > > > > 6/6
> > > > > > > - We shouldn't need to globally disable mmap, each VFIO region reports
> > > > > > > whether it supports mmap and vfio-pci on s390 should indicate mmap is
> > > > > > > not supported on the platform.
> > > > > > Yes, this is even better to let the kernel announce a BAR can not be
> > > > > > mmap'ed. Checking the kernel code I realized the BARs are valid for
> > > > > > mmap'ing but the s390 platform simply does not allow this. So I feel we
> > > > > > have to introduce a platform switch in the kernel. How about this ...
> > > > > >
> > > > > > --- a/drivers/vfio/pci/vfio_pci.c
> > > > > > +++ b/drivers/vfio/pci/vfio_pci.c
> > > > > > @@ -377,9 +377,11 @@ static long vfio_pci_ioctl(void *device_
> > > > > >
> > > > > > info.flags = VFIO_REGION_INFO_FLAG_READ |
> > > > > > VFIO_REGION_INFO_FLAG_WRITE;
> > > > > > +#ifndef CONFIG_S390
> > > > > > if (pci_resource_flags(pdev, info.index) &
> > > > > > IORESOURCE_MEM && info.size >= PAGE_SIZE)
> > > > > > info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
> > > > > > +#endif
> > > > > > break;
> > > > > > case VFIO_PCI_ROM_REGION_INDEX:
> > > > > > {
> > > > >
> > > > > Maybe pull it out into a function. Also, is there some capability or
> > > > > feature we can test rather than just the architecture? I'd prefer it to
> > > > > be excluded because of a platform feature that prevents it rather than
> > > > > the overall architecture itself.
> > > > >
> > > >
> > > > Ok, understand this. There is no capability or feature so I will go with
> > > > the function.
> > > >
> > > > > > > - INTx should be done the same way, the interrupt index for INTx should
> > > > > > > report 0 count. The current code likely doesn't handle this, but it
> > > > > > > should be easy to fix.
> > > > > > The current code is fine. Problem is the card reports an interrupt index
> > > > > > (PCI_INTERRUPT_PIN) but again the platform does not support INTx at all.
> > > > > > So we need a platform switch as well.
> > > > >
> > > > > Yep, let's try to do something consistent with the MMAP testing.
> > > > >
> > > >
> > > > Do you mean let the kernel announce this also?
> > >
> > > Yes, the kernel reports a count of 0 in vfio_irq_info when the interrupt
> > > type is not supported. We do this for MSI/X already, but it's assumed
> > > that INTx is always present since it's part of what most platforms would
> > > consider the minimal feature set.
> > >
> > > > > > > - s390_msix_notify() vs msix_notify() should be abstracted somewhere
> > > > > >
> > > > > > Platform does not have an APIC so there is nothing we could emulate
> > > > > > in qemu to make the existing msix_notify() work.
> > > > > >
> > > > > > > else. How would an emulated PCI device with MSI-X support work?
> > > > > > > - same for add_msi_route
> > > > > > Same here, we have to set up an adapter route due to the fact that MSI-X
> > > > > > notifications are delivered as adapter/thin IRQs on the platform.
> > > > > >
> > > > > > Any suggestion or idea what a better abstraction could look like?
> > > > > >
> > > > > > With all the platform constraints I was not able to find a suitable
> > > > > > emulated device. Remember s390:
> > > > > > - does not support IO BARs
> > > > > > - does not support INTx, only MSI-X
> > > > >
> > > > > What about MSI (non-X)?
> > > >
> > > > In theory MSI should also work, but I have not seen it in reality.
> > > >
> > > > >
> > > > > > - in reality currently there is only a PCI network card available
> > > > >
> > > > > On the physical hardware?
> > > > >
> > > >
> > > > yes
> > > >
> > > > > > - platform does not support fancy I/O like usb or audio :-)
> > > > > > So we don't even have kernel (host and guest) support for this
> > > > > > kind of devices.
> > > > >
> > > > > Does that mean you couldn't? What about virtio-net-pci with MSI-X
> > > > > interrupts or emulated xhci with MSI-X interrupts, couldn't those be
> > > > > supported if s390 MSI-X were properly integrated into the QEMU MSI-X
> > > > > API? vfio-pci isn't the right level to be switching between the
> > > > > standard API and the s390 API.
> > > > >
> > > >
> > > > Yes, I also think vfio might not be the best place to switch API. Will try
> > > > to move s390 specifics to MSI-X level.
> > > >
> > > > > > > - We can probably come up with a better way to determine which address
> > > > > > > space to connect to the memory listener.
> > > > > > Any suggestion or idea for that?
> > > > >
> > > > > I imagine you can tell by the address space of the device whether it
> > > > > lives behind an emulated IOMMU or not and therefore pick the closest
> > > > > address space for the notifier, the IOMMU or the system. Thanks,
> > > > >
> > > >
> > > > I do not understand this in detail; can you elaborate a little bit more on this?
> > > > Or maybe provide a code snippet?
> > >
> > > Well, I'm mostly making things up, but my assumption is that the device
> > > appears behind an IOMMU in the guest and by walking through address
> > > spaces from the device, we should be able to figure that out and avoid
> > > using a platform #ifdef. IOW, it's not s390 that makes us need to use a
> > > different address space, it's the guest topology of having an emulated
> > > IOMMU for the device, and that's what we should be keying on rather than
> > > the arch. Thanks,
> > >
> >
> > Do you think this would be sufficient?
> >
> > @@ -3689,8 +3701,13 @@ static int vfio_connect_container(VFIOGr
> > container->iommu_data.type1.listener = vfio_memory_listener;
> > container->iommu_data.release = vfio_listener_release;
> >
> > - memory_listener_register(&container->iommu_data.type1.listener,
> > - &address_space_memory);
> > + if (memory_region_is_iommu(as->root)) {
> > + memory_listener_register(&container->iommu_data.type1.listener,
> > + container->space->as);
> > + } else {
> > + memory_listener_register(&container->iommu_data.type1.listener,
> > + &address_space_memory);
> > + }
> >
> > if (container->iommu_data.type1.error) {
> > ret = container->iommu_data.type1.error;
> >
> > If not, what else has to be checked? What are the indications for adding the memory
> > listener to the container address space versus address_space_memory?
> > Thx for your help.
>
> Sure, that's what I was asking for, but as I'm looking at it, shouldn't
> we be able to use container->space->as regardless? It seems like an
> oversight that address_space_memory wasn't replaced with
> container->space->as when Alexey added support for multiple address
> spaces.
>
> The container->space->as value comes from
> pci_device_iommu_address_space() which walks the PCI bus up from the
> device looking for an IOMMU address space. If it doesn't find one
> (likely) it uses address_space_memory. So I suspect if we fix the code,
> there's no need for any sort of switch. Thanks,
>
This was my impression too, but I was not completely sure about it. Thx for
the confirmation; I will change this. Now, with all that work, my changes in
vfio are minimal and without any platform #ifdef. I will post the complete
patch set again once I have a solution (or at least an idea) for the MSI-X
stuff.
Thanks again for your help and patience,
Frank
> Alex
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2014-10-02 7:21 UTC | newest]
Thread overview: 20+ messages
2014-09-19 11:54 [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 1/6] KVM: s390: Enable PCI instructions frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 2/6] iommu: add iommu for s390 platform frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 3/6] vfio: make vfio build on s390 frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 4/6] s390: Add PCI bus support frank.blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 5/6] s390: implement pci instruction frank.blaschka
2014-09-19 15:12 ` Thomas Huth
2014-09-22 7:40 ` Frank Blaschka
2014-09-19 11:54 ` [Qemu-devel] [RFC patch 6/6] vfio: make vfio run on s390 platform frank.blaschka
2014-09-22 20:47 ` [Qemu-devel] [RFC patch 0/6] vfio based pci pass-through for qemu/KVM on s390 Alex Williamson
2014-09-22 22:08 ` Alexander Graf
2014-09-22 22:28 ` Alex Williamson
2014-09-23 8:33 ` Alexander Graf
2014-09-24 8:47 ` Frank Blaschka
2014-09-24 16:05 ` Alex Williamson
2014-09-26 6:45 ` Frank Blaschka
2014-09-26 19:59 ` Alex Williamson
2014-10-01 9:11 ` Frank Blaschka
2014-10-01 17:26 ` Alex Williamson
2014-10-02 7:21 ` Frank Blaschka