* [Qemu-devel] [PATCH RFC 1/4] linux-headers: Sync vfio.h
2015-09-18 6:30 [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB Gavin Shan
@ 2015-09-18 6:30 ` Gavin Shan
2015-09-18 6:30 ` [Qemu-devel] [PATCH RFC 2/4] VFIO: Introduce vfio_get_group_id() Gavin Shan
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Gavin Shan @ 2015-09-18 6:30 UTC
To: qemu-devel; +Cc: alex.williamson, qemu-ppc, Gavin Shan, david
This synchronizes the Linux header vfio.h to pick up the changes
introduced by the following Linux commits:
900facd ("drivers/vfio: Support IOMMU group for EEH operations")
108f78d ("drivers/vfio: Support EEH API revision")
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
---
linux-headers/linux/vfio.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
index aa276bc..0cf1c57 100644
--- a/linux-headers/linux/vfio.h
+++ b/linux-headers/linux/vfio.h
@@ -495,6 +495,10 @@ struct vfio_iommu_spapr_tce_info {
* - configure PE;
* - inject EEH error.
*/
+#define VFIO_EEH_DISABLED 0
+#define VFIO_EEH_ENABLED_V1 1
+#define VFIO_EEH_ENABLED_V2 2
+
struct vfio_eeh_pe_err {
__u32 type;
__u32 func;
@@ -505,7 +509,9 @@ struct vfio_eeh_pe_err {
struct vfio_eeh_pe_op {
__u32 argsz;
__u32 flags;
+#define VFIO_EEH_ENABLED_MASK 0xFF
__u32 op;
+ __u32 groupid;
union {
struct vfio_eeh_pe_err err;
};
--
2.1.0
* [Qemu-devel] [PATCH RFC 2/4] VFIO: Introduce vfio_get_group_id()
2015-09-18 6:30 [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB Gavin Shan
2015-09-18 6:30 ` [Qemu-devel] [PATCH RFC 1/4] linux-headers: Sync vfio.h Gavin Shan
@ 2015-09-18 6:30 ` Gavin Shan
2015-09-18 6:30 ` [Qemu-devel] [PATCH RFC 3/4] sPAPR: Support multiple IOMMU groups in PHB for EEH operations Gavin Shan
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Gavin Shan @ 2015-09-18 6:30 UTC
To: qemu-devel; +Cc: alex.williamson, qemu-ppc, Gavin Shan, david
This introduces vfio_get_group_id() to retrieve the IOMMU group ID
of the specified PCI device. The function will be used by subsequent
patches to apply EEH operations to a specific IOMMU group.
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
---
hw/vfio/pci.c | 12 ++++++++++++
include/hw/vfio/vfio.h | 1 +
2 files changed, 13 insertions(+)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 73d34b9..27d7842 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -3507,6 +3507,18 @@ static void vfio_setup_resetfn(VFIOPCIDevice *vdev)
}
}
+int vfio_get_group_id(PCIDevice *pdev)
+{
+ VFIOPCIDevice *vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
+
+ if (!pdev || !object_dynamic_cast(OBJECT(pdev), "vfio-pci") ||
+ !vdev->vbasedev.group) {
+ return -1;
+ }
+
+ return vdev->vbasedev.group->groupid;
+}
+
static int vfio_initfn(PCIDevice *pdev)
{
VFIOPCIDevice *vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
diff --git a/include/hw/vfio/vfio.h b/include/hw/vfio/vfio.h
index 0b26cd8..6289b96 100644
--- a/include/hw/vfio/vfio.h
+++ b/include/hw/vfio/vfio.h
@@ -5,5 +5,6 @@
extern int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
int req, void *param);
+extern int vfio_get_group_id(PCIDevice *pdev);
#endif
--
2.1.0
* [Qemu-devel] [PATCH RFC 3/4] sPAPR: Support multiple IOMMU groups in PHB for EEH operations
2015-09-18 6:30 [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB Gavin Shan
2015-09-18 6:30 ` [Qemu-devel] [PATCH RFC 1/4] linux-headers: Sync vfio.h Gavin Shan
2015-09-18 6:30 ` [Qemu-devel] [PATCH RFC 2/4] VFIO: Introduce vfio_get_group_id() Gavin Shan
@ 2015-09-18 6:30 ` Gavin Shan
2015-09-18 6:30 ` [Qemu-devel] [PATCH RFC 4/4] sPAPR: Remove EEH callbacks in sPAPRPHBClass Gavin Shan
2015-09-19 6:28 ` [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB David Gibson
4 siblings, 0 replies; 6+ messages in thread
From: Gavin Shan @ 2015-09-18 6:30 UTC
To: qemu-devel; +Cc: alex.williamson, qemu-ppc, Gavin Shan, david
Currently, EEH works on the assumption that every VFIO PHB has only
one attached IOMMU group. That assumption will no longer hold: a PHB
may have multiple attached IOMMU groups. In order to apply a requested
EEH operation to the specified IOMMU group (PE), changes to the host
kernel are required, and QEMU needs to provide the IOMMU group ID to
identify the precise target of the EEH operation.

This reuses the PE address to carry the IOMMU group (PE) ID, which is
passed to the host kernel to specify the precise target of the EEH
operations.
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
---
hw/ppc/spapr_pci.c | 30 ++++++++++++++-------
hw/ppc/spapr_pci_vfio.c | 64 ++++++++++++++++++++++++++++++++++++++++-----
include/hw/pci-host/spapr.h | 6 ++---
3 files changed, 82 insertions(+), 18 deletions(-)
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index a0cca22..93d55ab 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -30,6 +30,7 @@
#include "hw/pci/pci_host.h"
#include "hw/ppc/spapr.h"
#include "hw/pci-host/spapr.h"
+#include "hw/vfio/vfio.h"
#include "exec/address-spaces.h"
#include <libfdt.h>
#include "trace.h"
@@ -477,6 +478,7 @@ static void rtas_ibm_get_config_addr_info2(PowerPCCPU *cpu,
sPAPRPHBState *sphb;
sPAPRPHBClass *spc;
PCIDevice *pdev;
+ int groupid;
uint32_t addr, option;
uint64_t buid;
@@ -495,10 +497,6 @@ static void rtas_ibm_get_config_addr_info2(PowerPCCPU *cpu,
goto param_error_exit;
}
- /*
- * We always have PE address of form "00BB0001". "BB"
- * represents the bus number of PE's primary bus.
- */
option = rtas_ld(args, 3);
switch (option) {
case RTAS_GET_PE_ADDR:
@@ -508,7 +506,16 @@ static void rtas_ibm_get_config_addr_info2(PowerPCCPU *cpu,
goto param_error_exit;
}
- rtas_st(rets, 1, (pci_bus_num(pdev->bus) << 16) + 1);
+ /*
+ * We treat the (IOMMU group ID + 1) as the PE address
+ * because zero is invalid PE address.
+ */
+ groupid = vfio_get_group_id(pdev);
+ if (groupid < 0) {
+ goto param_error_exit;
+ }
+
+ rtas_st(rets, 1, groupid + 1);
break;
case RTAS_GET_PE_MODE:
rtas_st(rets, 1, RTAS_PE_MODE_SHARED);
@@ -532,6 +539,7 @@ static void rtas_ibm_read_slot_reset_state2(PowerPCCPU *cpu,
{
sPAPRPHBState *sphb;
sPAPRPHBClass *spc;
+ uint32_t addr;
uint64_t buid;
int state, ret;
@@ -539,6 +547,7 @@ static void rtas_ibm_read_slot_reset_state2(PowerPCCPU *cpu,
goto param_error_exit;
}
+ addr = rtas_ld(args, 0);
buid = rtas_ldq(args, 1);
sphb = spapr_pci_find_phb(spapr, buid);
if (!sphb) {
@@ -550,7 +559,7 @@ static void rtas_ibm_read_slot_reset_state2(PowerPCCPU *cpu,
goto param_error_exit;
}
- ret = spc->eeh_get_state(sphb, &state);
+ ret = spc->eeh_get_state(sphb, addr, &state);
rtas_st(rets, 0, ret);
if (ret != RTAS_OUT_SUCCESS) {
return;
@@ -576,7 +585,7 @@ static void rtas_ibm_set_slot_reset(PowerPCCPU *cpu,
{
sPAPRPHBState *sphb;
sPAPRPHBClass *spc;
- uint32_t option;
+ uint32_t addr, option;
uint64_t buid;
int ret;
@@ -584,6 +593,7 @@ static void rtas_ibm_set_slot_reset(PowerPCCPU *cpu,
goto param_error_exit;
}
+ addr = rtas_ld(args, 0);
buid = rtas_ldq(args, 1);
option = rtas_ld(args, 3);
sphb = spapr_pci_find_phb(spapr, buid);
@@ -596,7 +606,7 @@ static void rtas_ibm_set_slot_reset(PowerPCCPU *cpu,
goto param_error_exit;
}
- ret = spc->eeh_reset(sphb, option);
+ ret = spc->eeh_reset(sphb, addr, option);
rtas_st(rets, 0, ret);
return;
@@ -612,6 +622,7 @@ static void rtas_ibm_configure_pe(PowerPCCPU *cpu,
{
sPAPRPHBState *sphb;
sPAPRPHBClass *spc;
+ uint32_t addr;
uint64_t buid;
int ret;
@@ -619,6 +630,7 @@ static void rtas_ibm_configure_pe(PowerPCCPU *cpu,
goto param_error_exit;
}
+ addr = rtas_ld(args, 0);
buid = rtas_ldq(args, 1);
sphb = spapr_pci_find_phb(spapr, buid);
if (!sphb) {
@@ -630,7 +642,7 @@ static void rtas_ibm_configure_pe(PowerPCCPU *cpu,
goto param_error_exit;
}
- ret = spc->eeh_configure(sphb);
+ ret = spc->eeh_configure(sphb, addr);
rtas_st(rets, 0, ret);
return;
diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index cca45ed..8579ace 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -72,10 +72,24 @@ static void spapr_phb_vfio_finish_realize(sPAPRPHBState *sphb, Error **errp)
spapr_tce_get_iommu(tcet));
}
+static int spapr_phb_vfio_eeh_enabled(sPAPRPHBState *sphb, int groupid)
+{
+ static int eeh_enabled_flag = -1;
+
+ if (eeh_enabled_flag >= 0)
+ return eeh_enabled_flag;
+
+ eeh_enabled_flag = vfio_container_ioctl(&sphb->iommu_as, groupid,
+ VFIO_CHECK_EXTENSION,
+ (void *)VFIO_EEH);
+ return eeh_enabled_flag;
+}
+
static void spapr_phb_vfio_eeh_reenable(sPAPRPHBVFIOState *svphb)
{
struct vfio_eeh_pe_op op = {
.argsz = sizeof(op),
+ .flags = 0,
.op = VFIO_EEH_PE_ENABLE
};
@@ -98,6 +112,7 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
unsigned int addr, int option)
{
sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
+ int flags, groupid = addr - 1;
struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
int ret;
@@ -121,6 +136,11 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
return RTAS_OUT_PARAM_ERROR;
}
+ groupid = vfio_get_group_id(pdev);
+ if (groupid < 0) {
+ return RTAS_OUT_PARAM_ERROR;
+ }
+
op.op = VFIO_EEH_PE_ENABLE;
break;
}
@@ -134,6 +154,13 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
return RTAS_OUT_PARAM_ERROR;
}
+ flags = spapr_phb_vfio_eeh_enabled(sphb, groupid);
+ if (flags == VFIO_EEH_DISABLED) {
+ return RTAS_OUT_HW_ERROR;
+ }
+
+ op.flags |= flags;
+ op.groupid = groupid;
ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
VFIO_EEH_PE_OP, &op);
if (ret < 0) {
@@ -143,13 +170,22 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
return RTAS_OUT_SUCCESS;
}
-static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
+static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb,
+ unsigned int addr,
+ int *state)
{
sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
- int ret;
+ int flags, ret;
+
+ flags = spapr_phb_vfio_eeh_enabled(sphb, addr - 1);
+ if (flags == VFIO_EEH_DISABLED) {
+ return RTAS_OUT_PARAM_ERROR;
+ }
op.op = VFIO_EEH_PE_GET_STATE;
+ op.flags |= flags;
+ op.groupid = addr - 1;
ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
VFIO_EEH_PE_OP, &op);
if (ret < 0) {
@@ -203,11 +239,12 @@ static void spapr_phb_vfio_eeh_pre_reset(sPAPRPHBState *sphb)
pci_for_each_bus(phb->bus, spapr_phb_vfio_eeh_clear_bus_msix, NULL);
}
-static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
+static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb,
+ unsigned int addr, int option)
{
sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
- int ret;
+ int flags, ret;
switch (option) {
case RTAS_SLOT_RESET_DEACTIVATE:
@@ -225,6 +262,13 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
return RTAS_OUT_PARAM_ERROR;
}
+ flags = spapr_phb_vfio_eeh_enabled(sphb, addr - 1);
+ if (flags == VFIO_EEH_DISABLED) {
+ return RTAS_OUT_HW_ERROR;
+ }
+
+ op.flags |= flags;
+ op.groupid = addr - 1;
ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
VFIO_EEH_PE_OP, &op);
if (ret < 0) {
@@ -234,13 +278,21 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
return RTAS_OUT_SUCCESS;
}
-static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb)
+static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb,
+ unsigned int addr)
{
sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
- int ret;
+ int flags, ret;
+
+ flags = spapr_phb_vfio_eeh_enabled(sphb, addr - 1);
+ if (flags == VFIO_EEH_DISABLED) {
+ return RTAS_OUT_HW_ERROR;
+ }
op.op = VFIO_EEH_PE_CONFIGURE;
+ op.flags |= flags;
+ op.groupid = addr - 1;
ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
VFIO_EEH_PE_OP, &op);
if (ret < 0) {
diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
index 7de5e02..e01bb74 100644
--- a/include/hw/pci-host/spapr.h
+++ b/include/hw/pci-host/spapr.h
@@ -50,9 +50,9 @@ struct sPAPRPHBClass {
void (*finish_realize)(sPAPRPHBState *sphb, Error **errp);
int (*eeh_set_option)(sPAPRPHBState *sphb, unsigned int addr, int option);
- int (*eeh_get_state)(sPAPRPHBState *sphb, int *state);
- int (*eeh_reset)(sPAPRPHBState *sphb, int option);
- int (*eeh_configure)(sPAPRPHBState *sphb);
+ int (*eeh_get_state)(sPAPRPHBState *sphb, unsigned int addr, int *state);
+ int (*eeh_reset)(sPAPRPHBState *sphb, unsigned int addr, int option);
+ int (*eeh_configure)(sPAPRPHBState *sphb, unsigned int addr);
};
typedef struct spapr_pci_msi {
--
2.1.0
* [Qemu-devel] [PATCH RFC 4/4] sPAPR: Remove EEH callbacks in sPAPRPHBClass
2015-09-18 6:30 [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB Gavin Shan
` (2 preceding siblings ...)
2015-09-18 6:30 ` [Qemu-devel] [PATCH RFC 3/4] sPAPR: Support multiple IOMMU groups in PHB for EEH operations Gavin Shan
@ 2015-09-18 6:30 ` Gavin Shan
2015-09-19 6:28 ` [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB David Gibson
4 siblings, 0 replies; 6+ messages in thread
From: Gavin Shan @ 2015-09-18 6:30 UTC
To: qemu-devel; +Cc: alex.williamson, qemu-ppc, Gavin Shan, david
Currently, the EEH operations are implemented as callbacks in
sPAPRPHBClass, which will be dropped soon. This makes the functions
backing those EEH callbacks public so that they can be called
directly.
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
---
hw/ppc/spapr_pci.c | 44 ++++----------------------------------------
hw/ppc/spapr_pci_vfio.c | 33 +++++++++++----------------------
include/hw/pci-host/spapr.h | 11 +++++++----
3 files changed, 22 insertions(+), 66 deletions(-)
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index 93d55ab..5e63ee5 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -431,7 +431,6 @@ static void rtas_ibm_set_eeh_option(PowerPCCPU *cpu,
target_ulong rets)
{
sPAPRPHBState *sphb;
- sPAPRPHBClass *spc;
PCIDevice *pdev;
uint32_t addr, option;
uint64_t buid;
@@ -456,12 +455,7 @@ static void rtas_ibm_set_eeh_option(PowerPCCPU *cpu,
goto param_error_exit;
}
- spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
- if (!spc->eeh_set_option) {
- goto param_error_exit;
- }
-
- ret = spc->eeh_set_option(sphb, addr, option);
+ ret = spapr_phb_vfio_eeh_set_option(sphb, addr, option);
rtas_st(rets, 0, ret);
return;
@@ -476,7 +470,6 @@ static void rtas_ibm_get_config_addr_info2(PowerPCCPU *cpu,
target_ulong rets)
{
sPAPRPHBState *sphb;
- sPAPRPHBClass *spc;
PCIDevice *pdev;
int groupid;
uint32_t addr, option;
@@ -492,11 +485,6 @@ static void rtas_ibm_get_config_addr_info2(PowerPCCPU *cpu,
goto param_error_exit;
}
- spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
- if (!spc->eeh_set_option) {
- goto param_error_exit;
- }
-
option = rtas_ld(args, 3);
switch (option) {
case RTAS_GET_PE_ADDR:
@@ -538,7 +526,6 @@ static void rtas_ibm_read_slot_reset_state2(PowerPCCPU *cpu,
target_ulong rets)
{
sPAPRPHBState *sphb;
- sPAPRPHBClass *spc;
uint32_t addr;
uint64_t buid;
int state, ret;
@@ -554,12 +541,7 @@ static void rtas_ibm_read_slot_reset_state2(PowerPCCPU *cpu,
goto param_error_exit;
}
- spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
- if (!spc->eeh_get_state) {
- goto param_error_exit;
- }
-
- ret = spc->eeh_get_state(sphb, addr, &state);
+ ret = spapr_phb_vfio_eeh_get_state(sphb, addr, &state);
rtas_st(rets, 0, ret);
if (ret != RTAS_OUT_SUCCESS) {
return;
@@ -584,7 +566,6 @@ static void rtas_ibm_set_slot_reset(PowerPCCPU *cpu,
target_ulong rets)
{
sPAPRPHBState *sphb;
- sPAPRPHBClass *spc;
uint32_t addr, option;
uint64_t buid;
int ret;
@@ -601,12 +582,7 @@ static void rtas_ibm_set_slot_reset(PowerPCCPU *cpu,
goto param_error_exit;
}
- spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
- if (!spc->eeh_reset) {
- goto param_error_exit;
- }
-
- ret = spc->eeh_reset(sphb, addr, option);
+ ret = spapr_phb_vfio_eeh_reset(sphb, addr, option);
rtas_st(rets, 0, ret);
return;
@@ -621,7 +597,6 @@ static void rtas_ibm_configure_pe(PowerPCCPU *cpu,
target_ulong rets)
{
sPAPRPHBState *sphb;
- sPAPRPHBClass *spc;
uint32_t addr;
uint64_t buid;
int ret;
@@ -637,12 +612,7 @@ static void rtas_ibm_configure_pe(PowerPCCPU *cpu,
goto param_error_exit;
}
- spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
- if (!spc->eeh_configure) {
- goto param_error_exit;
- }
-
- ret = spc->eeh_configure(sphb, addr);
+ ret = spapr_phb_vfio_eeh_configure(sphb, addr);
rtas_st(rets, 0, ret);
return;
@@ -658,7 +628,6 @@ static void rtas_ibm_slot_error_detail(PowerPCCPU *cpu,
target_ulong rets)
{
sPAPRPHBState *sphb;
- sPAPRPHBClass *spc;
int option;
uint64_t buid;
@@ -672,11 +641,6 @@ static void rtas_ibm_slot_error_detail(PowerPCCPU *cpu,
goto param_error_exit;
}
- spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
- if (!spc->eeh_set_option) {
- goto param_error_exit;
- }
-
option = rtas_ld(args, 7);
switch (option) {
case RTAS_SLOT_TEMP_ERR_LOG:
diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index 8579ace..a99c7b3 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -108,10 +108,9 @@ static void spapr_phb_vfio_reset(DeviceState *qdev)
spapr_phb_vfio_eeh_reenable(SPAPR_PCI_VFIO_HOST_BRIDGE(qdev));
}
-static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
- unsigned int addr, int option)
+int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
+ unsigned int addr, int option)
{
- sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
int flags, groupid = addr - 1;
struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
int ret;
@@ -161,8 +160,7 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
op.flags |= flags;
op.groupid = groupid;
- ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
- VFIO_EEH_PE_OP, &op);
+ ret = vfio_container_ioctl(&sphb->iommu_as, groupid, VFIO_EEH_PE_OP, &op);
if (ret < 0) {
return RTAS_OUT_HW_ERROR;
}
@@ -170,11 +168,9 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
return RTAS_OUT_SUCCESS;
}
-static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb,
- unsigned int addr,
- int *state)
+int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb,
+ unsigned int addr, int *state)
{
- sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
int flags, ret;
@@ -186,7 +182,7 @@ static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb,
op.op = VFIO_EEH_PE_GET_STATE;
op.flags |= flags;
op.groupid = addr - 1;
- ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+ ret = vfio_container_ioctl(&sphb->iommu_as, op.groupid,
VFIO_EEH_PE_OP, &op);
if (ret < 0) {
return RTAS_OUT_PARAM_ERROR;
@@ -239,10 +235,9 @@ static void spapr_phb_vfio_eeh_pre_reset(sPAPRPHBState *sphb)
pci_for_each_bus(phb->bus, spapr_phb_vfio_eeh_clear_bus_msix, NULL);
}
-static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb,
- unsigned int addr, int option)
+int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb,
+ unsigned int addr, int option)
{
- sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
int flags, ret;
@@ -269,7 +264,7 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb,
op.flags |= flags;
op.groupid = addr - 1;
- ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+ ret = vfio_container_ioctl(&sphb->iommu_as, op.groupid,
VFIO_EEH_PE_OP, &op);
if (ret < 0) {
return RTAS_OUT_HW_ERROR;
@@ -278,10 +273,8 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb,
return RTAS_OUT_SUCCESS;
}
-static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb,
- unsigned int addr)
+int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb, unsigned int addr)
{
- sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
int flags, ret;
@@ -293,7 +286,7 @@ static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb,
op.op = VFIO_EEH_PE_CONFIGURE;
op.flags |= flags;
op.groupid = addr - 1;
- ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+ ret = vfio_container_ioctl(&sphb->iommu_as, op.groupid,
VFIO_EEH_PE_OP, &op);
if (ret < 0) {
return RTAS_OUT_PARAM_ERROR;
@@ -310,10 +303,6 @@ static void spapr_phb_vfio_class_init(ObjectClass *klass, void *data)
dc->props = spapr_phb_vfio_properties;
dc->reset = spapr_phb_vfio_reset;
spc->finish_realize = spapr_phb_vfio_finish_realize;
- spc->eeh_set_option = spapr_phb_vfio_eeh_set_option;
- spc->eeh_get_state = spapr_phb_vfio_eeh_get_state;
- spc->eeh_reset = spapr_phb_vfio_eeh_reset;
- spc->eeh_configure = spapr_phb_vfio_eeh_configure;
}
static const TypeInfo spapr_phb_vfio_info = {
diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
index e01bb74..488d807 100644
--- a/include/hw/pci-host/spapr.h
+++ b/include/hw/pci-host/spapr.h
@@ -49,10 +49,6 @@ struct sPAPRPHBClass {
PCIHostBridgeClass parent_class;
void (*finish_realize)(sPAPRPHBState *sphb, Error **errp);
- int (*eeh_set_option)(sPAPRPHBState *sphb, unsigned int addr, int option);
- int (*eeh_get_state)(sPAPRPHBState *sphb, unsigned int addr, int *state);
- int (*eeh_reset)(sPAPRPHBState *sphb, unsigned int addr, int option);
- int (*eeh_configure)(sPAPRPHBState *sphb, unsigned int addr);
};
typedef struct spapr_pci_msi {
@@ -136,5 +132,12 @@ void spapr_pci_rtas_init(void);
sPAPRPHBState *spapr_pci_find_phb(sPAPRMachineState *spapr, uint64_t buid);
PCIDevice *spapr_pci_find_dev(sPAPRMachineState *spapr, uint64_t buid,
uint32_t config_addr);
+int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
+ unsigned int addr, int option);
+int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb,
+ unsigned int addr, int *state);
+int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb,
+ unsigned int addr, int option);
+int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb, unsigned int addr);
#endif /* __HW_SPAPR_PCI_H__ */
--
2.1.0
* Re: [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB
2015-09-18 6:30 [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB Gavin Shan
` (3 preceding siblings ...)
2015-09-18 6:30 ` [Qemu-devel] [PATCH RFC 4/4] sPAPR: Remove EEH callbacks in sPAPRPHBClass Gavin Shan
@ 2015-09-19 6:28 ` David Gibson
4 siblings, 0 replies; 6+ messages in thread
From: David Gibson @ 2015-09-19 6:28 UTC
To: Gavin Shan; +Cc: alex.williamson, qemu-ppc, qemu-devel
On Fri, Sep 18, 2015 at 04:30:12PM +1000, Gavin Shan wrote:
> This patchset is based on David Gibson's git tree: git://github.com/dgibson/qemu.git
> (branch: vfio). It requires host kernel changes which are being reviewed
> at the moment.
>
> https://patchwork.ozlabs.org/patch/519135/
> https://patchwork.ozlabs.org/patch/519136/
>
> Currently, EEH works with the assumption that every sPAPRPHBState, which
> is associated with a VFIO container in the VFIO case, only has one attached
> IOMMU group (PE). The requested EEH operation (like reset) is applied to
> all PEs attached to the specified sPAPRPHBState. This breaks the affected
> boundary of the EEH operation if the sPAPRPHBState supports multiple
> IOMMU groups (PEs).
>
> The patchset intends to resolve the above issue by using the newly exposed
> EEH v2 API interface, which accepts an IOMMU group (PE) to specify the
> affected domain of the requested EEH operation. Every PE is identified
> by a PE address, which was previously (PE's primary bus ID + 1);
> after this patchset, it is (IOMMU group ID + 1). The PE address
> is passed with every EEH operation requested by the guest so that it can
> be passed on to the host to affect the target PE only.
Sorry Gavin,
I've been working on this problem from the other end - trying to get
qemu to work safely with EEH to the limited extent that's possible
with the existing kernel interface, and also getting rid of the
special VFIO host bridge nonsense at the same time.
My code is at git://github.com/dgibson/qemu.git, branch 'eeh'. I plan
to post as soon as I've given it at least some minimal testing.
It collides with these patches so they'll need a substantial
reworking.
Fwiw, I'd also prefer if you tackled this by first altering the PE
config_addr allocation in qemu so that we can get it working with the
existing broken kernel interface, as long as there is only one vfio
group per container (but there could be multiple containers on a PHB).
Then, the fixed kernel interface can be added on top of that, to
remove the one-group-per-container restriction.
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson