qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
@ 2012-10-02 19:21 Alex Williamson
  2012-10-02 19:21 ` [Qemu-devel] [PATCH 1/6] pci: Add INTx no-route option Alex Williamson
                   ` (6 more replies)
  0 siblings, 7 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-02 19:21 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: jan.kiszka, alex.williamson

A few cleanups that I'll also apply to vfio-pci.  First make intx
route checking non-fatal.  vfio-pci has a fallback INTx mechanism
that doesn't rely on this, so we can already run on q35, but not if
we can't even probe for intx routing w/o blowing up.  Next, both
vfio-pci and pci-assign test whether INTx routing has changed using
similar functions.  Make this common.  Finally, expose a way to
get the MSI message for an MSI vector.  Again, both pci-assign and
vfio-pci need to do this to program the vector for KVM injection.
Thanks,

Alex

---

Alex Williamson (6):
      pci-assign: Use msi_get_message()
      msi: Add msi_get_message()
      pci-assign: Use pci_intx_route_changed()
      pci: Helper function for testing if an INTx route changed
      pci-assign: Add support for no-route
      pci: Add INTx no-route option


 hw/kvm/pci-assign.c |   14 ++++++++------
 hw/msi.c            |   45 +++++++++++++++++++++++++++++----------------
 hw/msi.h            |    1 +
 hw/pci.c            |   13 +++++++++++--
 hw/pci.h            |    2 ++
 5 files changed, 51 insertions(+), 24 deletions(-)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [Qemu-devel] [PATCH 1/6] pci: Add INTx no-route option
  2012-10-02 19:21 [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
@ 2012-10-02 19:21 ` Alex Williamson
  2012-10-02 19:21 ` [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route Alex Williamson
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-02 19:21 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: jan.kiszka, alex.williamson

pci_device_route_intx_to_irq() has no probe capability.  vfio-pci
can make use of KVM acceleration if this information is available,
but can still operate without it.  Make it non-fatal to call this
on a platform or chipset where it hasn't been implemented yet.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---

 hw/pci.c |    8 ++++++--
 hw/pci.h |    1 +
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/hw/pci.c b/hw/pci.c
index f855cf3..9cb0ad4 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -1094,8 +1094,12 @@ PCIINTxRoute pci_device_route_intx_to_irq(PCIDevice *dev, int pin)
          pin = bus->map_irq(dev, pin);
          dev = bus->parent_dev;
     } while (dev);
-    assert(bus->route_intx_to_irq);
-    return bus->route_intx_to_irq(bus->irq_opaque, pin);
+
+    if (bus->route_intx_to_irq) {
+        return bus->route_intx_to_irq(bus->irq_opaque, pin);
+    }
+
+    return (PCIINTxRoute) { PCI_INTX_NOROUTE, -1 };
 }
 
 void pci_bus_fire_intx_routing_notifier(PCIBus *bus)
diff --git a/hw/pci.h b/hw/pci.h
index 4b6ab3d..ed1a372 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -146,6 +146,7 @@ typedef struct PCIINTxRoute {
         PCI_INTX_ENABLED,
         PCI_INTX_INVERTED,
         PCI_INTX_DISABLED,
+        PCI_INTX_NOROUTE,
     } mode;
     int irq;
 } PCIINTxRoute;


* [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route
  2012-10-02 19:21 [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
  2012-10-02 19:21 ` [Qemu-devel] [PATCH 1/6] pci: Add INTx no-route option Alex Williamson
@ 2012-10-02 19:21 ` Alex Williamson
  2012-10-02 19:55   ` Anthony Liguori
  2012-10-02 19:21 ` [Qemu-devel] [PATCH 3/6] pci: Helper function for testing if an INTx route changed Alex Williamson
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-10-02 19:21 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: jan.kiszka, alex.williamson

In the event that a pci-assign device is added to a chipset that
hasn't yet implemented the INTx routing interface, exit gracefully
instead of killing the VM.  I'm sure we'll implement this for q35,
but there's no reason for such a harsh response.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---

 hw/kvm/pci-assign.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/kvm/pci-assign.c b/hw/kvm/pci-assign.c
index 05b93d9..7ce0f37 100644
--- a/hw/kvm/pci-assign.c
+++ b/hw/kvm/pci-assign.c
@@ -887,6 +887,12 @@ static int assign_intx(AssignedDevice *dev)
 
     intx_route = pci_device_route_intx_to_irq(&dev->dev, dev->intpin);
     assert(intx_route.mode != PCI_INTX_INVERTED);
+    if (intx_route.mode == PCI_INTX_NOROUTE) {
+        error_report("pci-assign: chipset provides no INTx routing "
+                     "information, but device supports INTx interrupt mode.\n");
+        pci_device_set_intx_routing_notifier(&dev->dev, NULL);
+        return -ENOTSUP;
+    }
 
     if (dev->intx_route.mode == intx_route.mode &&
         dev->intx_route.irq == intx_route.irq) {


* [Qemu-devel] [PATCH 3/6] pci: Helper function for testing if an INTx route changed
  2012-10-02 19:21 [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
  2012-10-02 19:21 ` [Qemu-devel] [PATCH 1/6] pci: Add INTx no-route option Alex Williamson
  2012-10-02 19:21 ` [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route Alex Williamson
@ 2012-10-02 19:21 ` Alex Williamson
  2012-10-02 19:22 ` [Qemu-devel] [PATCH 4/6] pci-assign: Use pci_intx_route_changed() Alex Williamson
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-02 19:21 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: jan.kiszka, alex.williamson

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---

 hw/pci.c |    5 +++++
 hw/pci.h |    1 +
 2 files changed, 6 insertions(+)

diff --git a/hw/pci.c b/hw/pci.c
index 9cb0ad4..6adb3b7 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -1102,6 +1102,11 @@ PCIINTxRoute pci_device_route_intx_to_irq(PCIDevice *dev, int pin)
     return (PCIINTxRoute) { PCI_INTX_NOROUTE, -1 };
 }
 
+bool pci_intx_route_changed(PCIINTxRoute *old, PCIINTxRoute *new)
+{
+    return old->mode != new->mode || old->irq != new->irq;
+}
+
 void pci_bus_fire_intx_routing_notifier(PCIBus *bus)
 {
     PCIDevice *dev;
diff --git a/hw/pci.h b/hw/pci.h
index ed1a372..14ebb57 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -325,6 +325,7 @@ PCIBus *pci_register_bus(DeviceState *parent, const char *name,
                          uint8_t devfn_min, int nirq);
 void pci_bus_set_route_irq_fn(PCIBus *, pci_route_irq_fn);
 PCIINTxRoute pci_device_route_intx_to_irq(PCIDevice *dev, int pin);
+bool pci_intx_route_changed(PCIINTxRoute *old, PCIINTxRoute *new);
 void pci_bus_fire_intx_routing_notifier(PCIBus *bus);
 void pci_device_set_intx_routing_notifier(PCIDevice *dev,
                                           PCIINTxRoutingNotifier notifier);


* [Qemu-devel] [PATCH 4/6] pci-assign: Use pci_intx_route_changed()
  2012-10-02 19:21 [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
                   ` (2 preceding siblings ...)
  2012-10-02 19:21 ` [Qemu-devel] [PATCH 3/6] pci: Helper function for testing if an INTx route changed Alex Williamson
@ 2012-10-02 19:22 ` Alex Williamson
  2012-10-02 19:22 ` [Qemu-devel] [PATCH 5/6] msi: Add msi_get_message() Alex Williamson
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-02 19:22 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: jan.kiszka, alex.williamson

Replace the open-coded version.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---

 hw/kvm/pci-assign.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/kvm/pci-assign.c b/hw/kvm/pci-assign.c
index 7ce0f37..ddde573 100644
--- a/hw/kvm/pci-assign.c
+++ b/hw/kvm/pci-assign.c
@@ -894,8 +894,7 @@ static int assign_intx(AssignedDevice *dev)
         return -ENOTSUP;
     }
 
-    if (dev->intx_route.mode == intx_route.mode &&
-        dev->intx_route.irq == intx_route.irq) {
+    if (!pci_intx_route_changed(&dev->intx_route, &intx_route)) {
         return 0;
     }
 


* [Qemu-devel] [PATCH 5/6] msi: Add msi_get_message()
  2012-10-02 19:21 [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
                   ` (3 preceding siblings ...)
  2012-10-02 19:22 ` [Qemu-devel] [PATCH 4/6] pci-assign: Use pci_intx_route_changed() Alex Williamson
@ 2012-10-02 19:22 ` Alex Williamson
  2012-10-02 19:22 ` [Qemu-devel] [PATCH 6/6] pci-assign: Use msi_get_message() Alex Williamson
  2012-10-08 15:58 ` [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
  6 siblings, 0 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-02 19:22 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: jan.kiszka, alex.williamson

vfio-pci and pci-assign both do this on their own for setting up
direct MSI injection through KVM.  Provide a helper function for
this in MSI code.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---

 hw/msi.c |   45 +++++++++++++++++++++++++++++----------------
 hw/msi.h |    1 +
 2 files changed, 30 insertions(+), 16 deletions(-)

diff --git a/hw/msi.c b/hw/msi.c
index e2273a0..33037a8 100644
--- a/hw/msi.c
+++ b/hw/msi.c
@@ -122,6 +122,31 @@ void msi_set_message(PCIDevice *dev, MSIMessage msg)
     pci_set_word(dev->config + msi_data_off(dev, msi64bit), msg.data);
 }
 
+MSIMessage msi_get_message(PCIDevice *dev, unsigned int vector)
+{
+    uint16_t flags = pci_get_word(dev->config + msi_flags_off(dev));
+    bool msi64bit = flags & PCI_MSI_FLAGS_64BIT;
+    unsigned int nr_vectors = msi_nr_vectors(flags);
+    MSIMessage msg;
+
+    assert(vector < nr_vectors);
+
+    if (msi64bit) {
+        msg.address = pci_get_quad(dev->config + msi_address_lo_off(dev));
+    } else {
+        msg.address = pci_get_long(dev->config + msi_address_lo_off(dev));
+    }
+
+    /* upper bit 31:16 is zero */
+    msg.data = pci_get_word(dev->config + msi_data_off(dev, msi64bit));
+    if (nr_vectors > 1) {
+        msg.data &= ~(nr_vectors - 1);
+        msg.data |= vector;
+    }
+
+    return msg;
+}
+
 bool msi_enabled(const PCIDevice *dev)
 {
     return msi_present(dev) &&
@@ -249,8 +274,7 @@ void msi_notify(PCIDevice *dev, unsigned int vector)
     uint16_t flags = pci_get_word(dev->config + msi_flags_off(dev));
     bool msi64bit = flags & PCI_MSI_FLAGS_64BIT;
     unsigned int nr_vectors = msi_nr_vectors(flags);
-    uint64_t address;
-    uint32_t data;
+    MSIMessage msg;
 
     assert(vector < nr_vectors);
     if (msi_is_masked(dev, vector)) {
@@ -261,24 +285,13 @@ void msi_notify(PCIDevice *dev, unsigned int vector)
         return;
     }
 
-    if (msi64bit) {
-        address = pci_get_quad(dev->config + msi_address_lo_off(dev));
-    } else {
-        address = pci_get_long(dev->config + msi_address_lo_off(dev));
-    }
-
-    /* upper bit 31:16 is zero */
-    data = pci_get_word(dev->config + msi_data_off(dev, msi64bit));
-    if (nr_vectors > 1) {
-        data &= ~(nr_vectors - 1);
-        data |= vector;
-    }
+    msg = msi_get_message(dev, vector);
 
     MSI_DEV_PRINTF(dev,
                    "notify vector 0x%x"
                    " address: 0x%"PRIx64" data: 0x%"PRIx32"\n",
-                   vector, address, data);
-    stl_le_phys(address, data);
+                   vector, msg.address, msg.data);
+    stl_le_phys(msg.address, msg.data);
 }
 
 /* Normally called by pci_default_write_config(). */
diff --git a/hw/msi.h b/hw/msi.h
index 6ec1f99..150b09a 100644
--- a/hw/msi.h
+++ b/hw/msi.h
@@ -32,6 +32,7 @@ struct MSIMessage {
 extern bool msi_supported;
 
 void msi_set_message(PCIDevice *dev, MSIMessage msg);
+MSIMessage msi_get_message(PCIDevice *dev, unsigned int vector);
 bool msi_enabled(const PCIDevice *dev);
 int msi_init(struct PCIDevice *dev, uint8_t offset,
              unsigned int nr_vectors, bool msi64bit, bool msi_per_vector_mask);


* [Qemu-devel] [PATCH 6/6] pci-assign: Use msi_get_message()
  2012-10-02 19:21 [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
                   ` (4 preceding siblings ...)
  2012-10-02 19:22 ` [Qemu-devel] [PATCH 5/6] msi: Add msi_get_message() Alex Williamson
@ 2012-10-02 19:22 ` Alex Williamson
  2012-10-08 15:58 ` [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
  6 siblings, 0 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-02 19:22 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: jan.kiszka, alex.williamson

pci-assign only uses a subset of the flexibility msi_get_message()
provides, but it's still worthwhile to use it.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---

 hw/kvm/pci-assign.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/hw/kvm/pci-assign.c b/hw/kvm/pci-assign.c
index ddde573..d76b1bd 100644
--- a/hw/kvm/pci-assign.c
+++ b/hw/kvm/pci-assign.c
@@ -1008,12 +1008,9 @@ static void assigned_dev_update_msi(PCIDevice *pci_dev)
     }
 
     if (ctrl_byte & PCI_MSI_FLAGS_ENABLE) {
-        uint8_t *pos = pci_dev->config + pci_dev->msi_cap;
-        MSIMessage msg;
+        MSIMessage msg = msi_get_message(pci_dev, 0);
         int virq;
 
-        msg.address = pci_get_long(pos + PCI_MSI_ADDRESS_LO);
-        msg.data = pci_get_word(pos + PCI_MSI_DATA_32);
         virq = kvm_irqchip_add_msi_route(kvm_state, msg);
         if (virq < 0) {
             perror("assigned_dev_update_msi: kvm_irqchip_add_msi_route");


* Re: [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route
  2012-10-02 19:21 ` [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route Alex Williamson
@ 2012-10-02 19:55   ` Anthony Liguori
  2012-10-02 20:12     ` Alex Williamson
  0 siblings, 1 reply; 28+ messages in thread
From: Anthony Liguori @ 2012-10-02 19:55 UTC (permalink / raw)
  To: Alex Williamson, mst, qemu-devel; +Cc: jan.kiszka

Alex Williamson <alex.williamson@redhat.com> writes:

> In the event that a pci-assign device is added to a chipset that
> hasn't yet implemented the INTx routing interface, exit gracefully
> instead of killing the VM.  I'm sure we'll implement this for q35,
> but there's no reason for such a harsh response.
>
> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> ---
>
>  hw/kvm/pci-assign.c |    6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/hw/kvm/pci-assign.c b/hw/kvm/pci-assign.c
> index 05b93d9..7ce0f37 100644
> --- a/hw/kvm/pci-assign.c
> +++ b/hw/kvm/pci-assign.c
> @@ -887,6 +887,12 @@ static int assign_intx(AssignedDevice *dev)
>  
>      intx_route = pci_device_route_intx_to_irq(&dev->dev, dev->intpin);
>      assert(intx_route.mode != PCI_INTX_INVERTED);
> +    if (intx_route.mode == PCI_INTX_NOROUTE) {
> +        error_report("pci-assign: chipset provides no INTx routing "
> +                     "information, but device supports INTx interrupt mode.\n");
> +        pci_device_set_intx_routing_notifier(&dev->dev, NULL);
> +        return -ENOTSUP;
> +    }

Please don't use error_report() in new code.  Propagate an Error object
using error_setg() and friends.

Regards,

Anthony Liguori

>  
>      if (dev->intx_route.mode == intx_route.mode &&
>          dev->intx_route.irq == intx_route.irq) {


* Re: [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route
  2012-10-02 19:55   ` Anthony Liguori
@ 2012-10-02 20:12     ` Alex Williamson
  2012-10-03  9:52       ` Paolo Bonzini
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-10-02 20:12 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: jan.kiszka, qemu-devel, mst

On Tue, 2012-10-02 at 14:55 -0500, Anthony Liguori wrote:
> Alex Williamson <alex.williamson@redhat.com> writes:
> 
> > In the event that a pci-assign device is added to a chipset that
> > hasn't yet implemented the INTx routing interface, exit gracefully
> > instead of killing the VM.  I'm sure we'll implement this for q35,
> > but there's no reason for such a harsh response.
> >
> > Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> > ---
> >
> >  hw/kvm/pci-assign.c |    6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/hw/kvm/pci-assign.c b/hw/kvm/pci-assign.c
> > index 05b93d9..7ce0f37 100644
> > --- a/hw/kvm/pci-assign.c
> > +++ b/hw/kvm/pci-assign.c
> > @@ -887,6 +887,12 @@ static int assign_intx(AssignedDevice *dev)
> >  
> >      intx_route = pci_device_route_intx_to_irq(&dev->dev, dev->intpin);
> >      assert(intx_route.mode != PCI_INTX_INVERTED);
> > +    if (intx_route.mode == PCI_INTX_NOROUTE) {
> > +        error_report("pci-assign: chipset provides no INTx routing "
> > +                     "information, but device supports INTx interrupt mode.\n");
> > +        pci_device_set_intx_routing_notifier(&dev->dev, NULL);
> > +        return -ENOTSUP;
> > +    }
> 
> Please don't use error_report() in new code.  Propagate an Error object
> using error_setg() and friends.

That doesn't really seem to be an option since this function is called
both from the driver initfn and the interrupt routing change notifier.
Neither of those give me an Error object... suggestions?  Thanks,

Alex


* Re: [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route
  2012-10-02 20:12     ` Alex Williamson
@ 2012-10-03  9:52       ` Paolo Bonzini
  2012-10-03 12:32         ` Anthony Liguori
  0 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2012-10-03  9:52 UTC (permalink / raw)
  To: Alex Williamson; +Cc: jan.kiszka, qemu-devel, Anthony Liguori, mst

Il 02/10/2012 22:12, Alex Williamson ha scritto:
> On Tue, 2012-10-02 at 14:55 -0500, Anthony Liguori wrote:
>> Alex Williamson <alex.williamson@redhat.com> writes:
>>
>>> In the event that a pci-assign device is added to a chipset that
>>> hasn't yet implemented the INTx routing interface, exit gracefully
>>> instead of killing the VM.  I'm sure we'll implement this for q35,
>>> but there's no reason for such a harsh response.
>>>
>>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
>>> ---
>>>
>>>  hw/kvm/pci-assign.c |    6 ++++++
>>>  1 file changed, 6 insertions(+)
>>>
>>> diff --git a/hw/kvm/pci-assign.c b/hw/kvm/pci-assign.c
>>> index 05b93d9..7ce0f37 100644
>>> --- a/hw/kvm/pci-assign.c
>>> +++ b/hw/kvm/pci-assign.c
>>> @@ -887,6 +887,12 @@ static int assign_intx(AssignedDevice *dev)
>>>  
>>>      intx_route = pci_device_route_intx_to_irq(&dev->dev, dev->intpin);
>>>      assert(intx_route.mode != PCI_INTX_INVERTED);
>>> +    if (intx_route.mode == PCI_INTX_NOROUTE) {
>>> +        error_report("pci-assign: chipset provides no INTx routing "
>>> +                     "information, but device supports INTx interrupt mode.\n");
>>> +        pci_device_set_intx_routing_notifier(&dev->dev, NULL);
>>> +        return -ENOTSUP;
>>> +    }
>>
>> Please don't use error_report() in new code.  Propagate an Error object
>> using error_setg() and friends.
> 
> That doesn't really seem to be an option since this function is called
> both from the driver initfn and the interrupt routing change notifier.
> Neither of those give me an Error object... suggestions?  Thanks,

Both assigned_dev_update_irq_routing and assigned_dev_update_msi would
have to print the error themselves (qerror_report_err + error_free).
Looks like a waste of time, frankly.

Paolo


* Re: [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route
  2012-10-03  9:52       ` Paolo Bonzini
@ 2012-10-03 12:32         ` Anthony Liguori
  0 siblings, 0 replies; 28+ messages in thread
From: Anthony Liguori @ 2012-10-03 12:32 UTC (permalink / raw)
  To: Paolo Bonzini, Alex Williamson; +Cc: jan.kiszka, qemu-devel, mst

Paolo Bonzini <pbonzini@redhat.com> writes:

> Il 02/10/2012 22:12, Alex Williamson ha scritto:
>> On Tue, 2012-10-02 at 14:55 -0500, Anthony Liguori wrote:
>>> Alex Williamson <alex.williamson@redhat.com> writes:
>>>
>>>> In the event that a pci-assign device is added to a chipset that
>>>> hasn't yet implemented the INTx routing interface, exit gracefully
>>>> instead of killing the VM.  I'm sure we'll implement this for q35,
>>>> but there's no reason for such a harsh response.
>>>>
>>>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
>>>> ---
>>>>
>>>>  hw/kvm/pci-assign.c |    6 ++++++
>>>>  1 file changed, 6 insertions(+)
>>>>
>>>> diff --git a/hw/kvm/pci-assign.c b/hw/kvm/pci-assign.c
>>>> index 05b93d9..7ce0f37 100644
>>>> --- a/hw/kvm/pci-assign.c
>>>> +++ b/hw/kvm/pci-assign.c
>>>> @@ -887,6 +887,12 @@ static int assign_intx(AssignedDevice *dev)
>>>>  
>>>>      intx_route = pci_device_route_intx_to_irq(&dev->dev, dev->intpin);
>>>>      assert(intx_route.mode != PCI_INTX_INVERTED);
>>>> +    if (intx_route.mode == PCI_INTX_NOROUTE) {
>>>> +        error_report("pci-assign: chipset provides no INTx routing "
>>>> +                     "information, but device supports INTx interrupt mode.\n");
>>>> +        pci_device_set_intx_routing_notifier(&dev->dev, NULL);
>>>> +        return -ENOTSUP;
>>>> +    }
>>>
>>> Please don't use error_report() in new code.  Propagate an Error object
>>> using error_setg() and friends.
>> 
>> That doesn't really seem to be an option since this function is called
>> both from the driver initfn and the interrupt routing change notifier.
>> Neither of those give me an Error object... suggestions?  Thanks,
>
> Both assigned_dev_update_irq_routing and assigned_dev_update_msi would
> have to print the error themselves (qerror_report_err + error_free).
> Looks like a waste of time, frankly.

That's a pity but I do agree.

Regards,

Anthony Liguori

>
> Paolo


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-02 19:21 [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
                   ` (5 preceding siblings ...)
  2012-10-02 19:22 ` [Qemu-devel] [PATCH 6/6] pci-assign: Use msi_get_message() Alex Williamson
@ 2012-10-08 15:58 ` Alex Williamson
  2012-10-08 16:01   ` Jan Kiszka
  2012-10-08 20:15   ` Michael S. Tsirkin
  6 siblings, 2 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-08 15:58 UTC (permalink / raw)
  To: mst; +Cc: jan.kiszka, qemu-devel

Michael, Jan,

Any comments on these?  I'd like to make the PCI changes before I update
vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
anyone officially listed as maintainer of pci-assign since it's been
moved to qemu.  I could include the pci-assign patches in my tree if you
prefer.  Thanks,

Alex

On Tue, 2012-10-02 at 13:21 -0600, Alex Williamson wrote:
> A few cleanups that I'll also apply to vfio-pci.  First make intx
> route checking non-fatal.  vfio-pci has a fallback INTx mechanism
> that doesn't rely on this, so we can already run on q35, but not if
> we can't even probe for intx routing w/o blowing up.  Next, both
> vfio-pci and pci-assign test whether INTx routing has changed using
> similar functions.  Make this common.  Finally, expose a way to
> get the MSI message for an MSI vector.  Again, both pci-assign and
> vfio-pci need to do this to program the vector for KVM injection.
> Thanks,
> 
> Alex
> 
> ---
> 
> Alex Williamson (6):
>       pci-assign: Use msi_get_message()
>       msi: Add msi_get_message()
>       pci-assign: Use pci_intx_route_changed()
>       pci: Helper function for testing if an INTx route changed
>       pci-assign: Add support for no-route
>       pci: Add INTx no-route option
> 
> 
>  hw/kvm/pci-assign.c |   14 ++++++++------
>  hw/msi.c            |   45 +++++++++++++++++++++++++++++----------------
>  hw/msi.h            |    1 +
>  hw/pci.c            |   13 +++++++++++--
>  hw/pci.h            |    2 ++
>  5 files changed, 51 insertions(+), 24 deletions(-)


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 15:58 ` [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
@ 2012-10-08 16:01   ` Jan Kiszka
  2012-10-08 20:15   ` Michael S. Tsirkin
  1 sibling, 0 replies; 28+ messages in thread
From: Jan Kiszka @ 2012-10-08 16:01 UTC (permalink / raw)
  To: Alex Williamson; +Cc: qemu-devel@nongnu.org, mst@redhat.com

On 2012-10-08 17:58, Alex Williamson wrote:
> Michael, Jan,
> 
> Any comments on these?  I'd like to make the PCI changes before I update
> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> anyone officially listed as maintainer of pci-assign since it's been
> moved to qemu.  I could include the pci-assign patches in my tree if you
> prefer.  Thanks,

Sorry, just briefly looked at them, and from that they made sense.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 20:15   ` Michael S. Tsirkin
@ 2012-10-08 19:27     ` Alex Williamson
  2012-10-08 21:40       ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-10-08 19:27 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: jan.kiszka, qemu-devel

On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > Michael, Jan,
> > 
> > Any comments on these?  I'd like to make the PCI changes before I update
> > vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > anyone officially listed as maintainer of pci-assign since it's been
> > moved to qemu.  I could include the pci-assign patches in my tree if you
> > prefer.  Thanks,
> > 
> > Alex
> 
> Patches themselves look fine, but I'd like to
> better understand why do we want the INTx fallback.
> Isn't it easier to add intx routing support?

vfio-pci can work with or without intx routing support.  Its presence is
just one requirement to enable kvm accelerated intx support.  Regardless
of whether it's easy or hard to implement intx routing in a given
chipset, I currently can't probe for it and make useful decisions about
whether or not to enable kvm support without potentially hitting an
assert.  It's arguable how important intx acceleration is for specific
applications, so while I'd like all chipsets to implement it, I don't
know that it should be a gating factor to chipset integration.  Thanks,

Alex

> > On Tue, 2012-10-02 at 13:21 -0600, Alex Williamson wrote:
> > > A few cleanups that I'll also apply to vfio-pci.  First make intx
> > > route checking non-fatal.  vfio-pci has a fallback INTx mechanism
> > > that doesn't rely on this, so we can already run on q35, but not if
> > > we can't even probe for intx routing w/o blowing up.  Next, both
> > > vfio-pci and pci-assign test whether INTx routing has changed using
> > > similar functions.  Make this common.  Finally, expose a way to
> > > get the MSI message for an MSI vector.  Again, both pci-assign and
> > > vfio-pci need to do this to program the vector for KVM injection.
> > > Thanks,
> > > 
> > > Alex
> > > 
> > > ---
> > > 
> > > Alex Williamson (6):
> > >       pci-assign: Use msi_get_message()
> > >       msi: Add msi_get_message()
> > >       pci-assign: Use pci_intx_route_changed()
> > >       pci: Helper function for testing if an INTx route changed
> > >       pci-assign: Add support for no-route
> > >       pci: Add INTx no-route option
> > > 
> > > 
> > >  hw/kvm/pci-assign.c |   14 ++++++++------
> > >  hw/msi.c            |   45 +++++++++++++++++++++++++++++----------------
> > >  hw/msi.h            |    1 +
> > >  hw/pci.c            |   13 +++++++++++--
> > >  hw/pci.h            |    2 ++
> > >  5 files changed, 51 insertions(+), 24 deletions(-)
> > 
> > 


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 15:58 ` [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
  2012-10-08 16:01   ` Jan Kiszka
@ 2012-10-08 20:15   ` Michael S. Tsirkin
  2012-10-08 19:27     ` Alex Williamson
  1 sibling, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-10-08 20:15 UTC (permalink / raw)
  To: Alex Williamson; +Cc: jan.kiszka, qemu-devel

On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> Michael, Jan,
> 
> Any comments on these?  I'd like to make the PCI changes before I update
> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> anyone officially listed as maintainer of pci-assign since it's been
> moved to qemu.  I could include the pci-assign patches in my tree if you
> prefer.  Thanks,
> 
> Alex

Patches themselves look fine, but I'd like to
better understand why do we want the INTx fallback.
Isn't it easier to add intx routing support?

> On Tue, 2012-10-02 at 13:21 -0600, Alex Williamson wrote:
> > A few cleanups that I'll also apply to vfio-pci.  First make intx
> > route checking non-fatal.  vfio-pci has a fallback INTx mechanism
> > that doesn't rely on this, so we can already run on q35, but not if
> > we can't even probe for intx routing w/o blowing up.  Next, both
> > vfio-pci and pci-assign test whether INTx routing has changed using
> > similar functions.  Make this common.  Finally, expose a way to
> > get the MSI message for an MSI vector.  Again, both pci-assign and
> > vfio-pci need to do this to program the vector for KVM injection.
> > Thanks,
> > 
> > Alex
> > 
> > ---
> > 
> > Alex Williamson (6):
> >       pci-assign: Use msi_get_message()
> >       msi: Add msi_get_message()
> >       pci-assign: Use pci_intx_route_changed()
> >       pci: Helper function for testing if an INTx route changed
> >       pci-assign: Add support for no-route
> >       pci: Add INTx no-route option
> > 
> > 
> >  hw/kvm/pci-assign.c |   14 ++++++++------
> >  hw/msi.c            |   45 +++++++++++++++++++++++++++++----------------
> >  hw/msi.h            |    1 +
> >  hw/pci.c            |   13 +++++++++++--
> >  hw/pci.h            |    2 ++
> >  5 files changed, 51 insertions(+), 24 deletions(-)
> 
> 


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 21:40       ` Michael S. Tsirkin
@ 2012-10-08 21:11         ` Alex Williamson
  2012-10-08 21:50           ` Michael S. Tsirkin
  2012-10-09  7:09           ` Jan Kiszka
  0 siblings, 2 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-08 21:11 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: jan.kiszka, qemu-devel

On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > Michael, Jan,
> > > > 
> > > > Any comments on these?  I'd like to make the PCI changes before I update
> > > > vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > > anyone officially listed as maintainer of pci-assign since it's been
> > > > moved to qemu.  I could include the pci-assign patches in my tree if you
> > > > prefer.  Thanks,
> > > > 
> > > > Alex
> > > 
> > > Patches themselves look fine, but I'd like to
> > > better understand why do we want the INTx fallback.
> > > Isn't it easier to add intx routing support?
> > 
> > vfio-pci can work with or without intx routing support.  Its presence is
> > just one requirement to enable kvm accelerated intx support.  Regardless
> > of whether it's easy or hard to implement intx routing in a given
> > chipset, I currently can't probe for it and make useful decisions about
> > whether or not to enable kvm support without potentially hitting an
> > assert.  It's arguable how important intx acceleration is for specific
> > applications, so while I'd like all chipsets to implement it, I don't
> > know that it should be a gating factor to chipset integration.  Thanks,
> > 
> > Alex
> 
> Yes but there's nothing kvm specific in the routing API,
> and IIRC it actually works fine without kvm.

Correct, but intx routing isn't very useful without kvm.

> As I see it, if some chipset does not expose it, it's a bug, and the
> reason for lack of support is because no one cares about supporting
> device assignment there.

Should we not have a more robust response to bugs than to kill the VM?
Especially when it's as trivial as using the non-accelerated intx mode
(vfio-pci) or having device init fail (pci-assign).  Calling assert is
lazy.

> So this API is not something devices should probe for.
> How about just assuming it works?

If that's the case, why test for any capabilities?  Let's just assume
everything is there and litter the code with asserts rather than
robustly deal with errors. </sarcasm>

> Otherwise, you are adding code and API that will become dead code
> when everyone supports the required API.

So q35 is going to be the last ever chipset?  It may be the next one to
implement intx routing, but hopefully not the last.

We're talking about this:

 hw/pci.c |    8 ++++++--
 hw/pci.h |    1 +
 2 files changed, 7 insertions(+), 2 deletions(-)

Is this somehow tipping us over the edge and creating an unmaintainable
API?  Does it somehow prevent a driver from being just as lazy and doing
assert(intx.mode == PCI_INTX_NOROUTE)?  The API is obviously not
complete as is if I have to assume it's there, call into it and hope for
the best.  Thanks,

Alex

> > > > On Tue, 2012-10-02 at 13:21 -0600, Alex Williamson wrote:
> > > > > A few cleanups that I'll also apply to vfio-pci.  First make intx
> > > > > route checking non-fatal.  vfio-pci has a fallback INTx mechanism
> > > > > that doesn't rely on this, so we can already run on q35, but not if
> > > > > we can't even probe for intx routing w/o blowing up.  Next, both
> > > > > vfio-pci and pci-assign test whether INTx routing has changed using
> > > > > similar functions.  Make this common.  Finally, expose a way to
> > > > > get the MSI message for an MSI vector.  Again, both pci-assign and
> > > > > vfio-pci need to do this to program the vector for KVM injection.
> > > > > Thanks,
> > > > > 
> > > > > Alex
> > > > > 
> > > > > ---
> > > > > 
> > > > > Alex Williamson (6):
> > > > >       pci-assign: Use msi_get_message()
> > > > >       msi: Add msi_get_message()
> > > > >       pci-assign: Use pci_intx_route_changed()
> > > > >       pci: Helper function for testing if an INTx route changed
> > > > >       pci-assign: Add support for no-route
> > > > >       pci: Add INTx no-route option
> > > > > 
> > > > > 
> > > > >  hw/kvm/pci-assign.c |   14 ++++++++------
> > > > >  hw/msi.c            |   45 +++++++++++++++++++++++++++++----------------
> > > > >  hw/msi.h            |    1 +
> > > > >  hw/pci.c            |   13 +++++++++++--
> > > > >  hw/pci.h            |    2 ++
> > > > >  5 files changed, 51 insertions(+), 24 deletions(-)
> > > > 
> > > > 
> > 
> > 


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 19:27     ` Alex Williamson
@ 2012-10-08 21:40       ` Michael S. Tsirkin
  2012-10-08 21:11         ` Alex Williamson
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-10-08 21:40 UTC (permalink / raw)
  To: Alex Williamson; +Cc: jan.kiszka, qemu-devel

On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > Michael, Jan,
> > > 
> > > Any comments on these?  I'd like to make the PCI changes before I update
> > > vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > anyone officially listed as maintainer of pci-assign since it's been
> > > moved to qemu.  I could include the pci-assign patches in my tree if you
> > > prefer.  Thanks,
> > > 
> > > Alex
> > 
> > Patches themselves look fine, but I'd like to
> > better understand why we want the INTx fallback.
> > Isn't it easier to add intx routing support?
> 
> vfio-pci can work with or without intx routing support.  Its presence is
> just one requirement to enable kvm accelerated intx support.  Regardless
> of whether it's easy or hard to implement intx routing in a given
> chipset, I currently can't probe for it and make useful decisions about
> whether or not to enable kvm support without potentially hitting an
> assert.  It's arguable how important intx acceleration is for specific
> applications, so while I'd like all chipsets to implement it, I don't
> know that it should be a gating factor to chipset integration.  Thanks,
> 
> Alex

Yes but there's nothing kvm specific in the routing API,
and IIRC it actually works fine without kvm.

As I see it, if some chipset does not expose it, it's a bug, and the
reason for the lack of support is that no one cares about supporting
device assignment there.

So this API is not something devices should probe for.
How about just assuming it works?

Otherwise, you are adding code and API that will become dead code
when everyone supports the required API.

> > > On Tue, 2012-10-02 at 13:21 -0600, Alex Williamson wrote:
> > > > A few cleanups that I'll also apply to vfio-pci.  First make intx
> > > > route checking non-fatal.  vfio-pci has a fallback INTx mechanism
> > > > that doesn't rely on this, so we can already run on q35, but not if
> > > > we can't even probe for intx routing w/o blowing up.  Next, both
> > > > vfio-pci and pci-assign test whether INTx routing has changed using
> > > > similar functions.  Make this common.  Finally, expose a way to
> > > > get the MSI message for an MSI vector.  Again, both pci-assign and
> > > > vfio-pci need to do this to program the vector for KVM injection.
> > > > Thanks,
> > > > 
> > > > Alex
> > > > 
> > > > ---
> > > > 
> > > > Alex Williamson (6):
> > > >       pci-assign: Use msi_get_message()
> > > >       msi: Add msi_get_message()
> > > >       pci-assign: Use pci_intx_route_changed()
> > > >       pci: Helper function for testing if an INTx route changed
> > > >       pci-assign: Add support for no-route
> > > >       pci: Add INTx no-route option
> > > > 
> > > > 
> > > >  hw/kvm/pci-assign.c |   14 ++++++++------
> > > >  hw/msi.c            |   45 +++++++++++++++++++++++++++++----------------
> > > >  hw/msi.h            |    1 +
> > > >  hw/pci.c            |   13 +++++++++++--
> > > >  hw/pci.h            |    2 ++
> > > >  5 files changed, 51 insertions(+), 24 deletions(-)
> > > 
> > > 
> 
> 


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 21:11         ` Alex Williamson
@ 2012-10-08 21:50           ` Michael S. Tsirkin
  2012-10-08 22:09             ` Alex Williamson
  2012-10-09  7:09           ` Jan Kiszka
  1 sibling, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-10-08 21:50 UTC (permalink / raw)
  To: Alex Williamson; +Cc: jan.kiszka, qemu-devel

On Mon, Oct 08, 2012 at 03:11:37PM -0600, Alex Williamson wrote:
> On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > > Michael, Jan,
> > > > > 
> > > > > Any comments on these?  I'd like to make the PCI changes before I update
> > > > > vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > > > anyone officially listed as maintainer of pci-assign since it's been
> > > > > moved to qemu.  I could include the pci-assign patches in my tree if you
> > > > > prefer.  Thanks,
> > > > > 
> > > > > Alex
> > > > 
> > > > Patches themselves look fine, but I'd like to
> > > > better understand why we want the INTx fallback.
> > > > Isn't it easier to add intx routing support?
> > > 
> > > vfio-pci can work with or without intx routing support.  Its presence is
> > > just one requirement to enable kvm accelerated intx support.  Regardless
> > > of whether it's easy or hard to implement intx routing in a given
> > > chipset, I currently can't probe for it and make useful decisions about
> > > whether or not to enable kvm support without potentially hitting an
> > > assert.  It's arguable how important intx acceleration is for specific
> > > applications, so while I'd like all chipsets to implement it, I don't
> > > know that it should be a gating factor to chipset integration.  Thanks,
> > > 
> > > Alex
> > 
> > Yes but there's nothing kvm specific in the routing API,
> > and IIRC it actually works fine without kvm.
> 
> Correct, but intx routing isn't very useful without kvm.
> 
> > As I see it, if some chipset does not expose it, it's a bug, and the
> > reason for lack of support is because no one cares about supporting
> > device assignment there.
> 
> Should we not have a more robust response to bugs than to kill the VM?
> Especially when it's as trivial as using the non-accelerated intx mode
> (vfio-pci) or having device init fail (pci-assign).  Calling assert is
> lazy.

This API has nothing to do with acceleration that I can see.

> > So this API is not something devices should probe for.
> > How about just assuming it works?
> 
> If that's the case, why test for any capabilities?  Let's just assume
> everything is there and litter the code with asserts rather than
> robustly deal with errors. </sarcasm>
> 
> > Otherwise, you are adding code and API that will become dead code
> > when everyone supports the required API.
> 
> So q35 is going to be the last ever chipset?  It may be the next one to
> implement intx routing, but hopefully not the last.

Everyone should just implement the same APIs.  This was exactly why I asked
Jan to change routing so everyone does the same thing, instead of this
bolted-on hack just for assignment.

I merged this anyway so we can stop maintaining two trees which is even
more painful.

> We're talking about this:
> 
>  hw/pci.c |    8 ++++++--
>  hw/pci.h |    1 +
>  2 files changed, 7 insertions(+), 2 deletions(-)
> 
> Is this somehow tipping us over the edge and creating an unmaintainable
> API?


One has to draw a line in the sand somewhere, and I think it's a good
idea to stop now before this metastasizes all over the code.

> Does it somehow prevent a driver from being just as lazy and doing
> assert(intx.mode == PCI_INTX_NOROUTE)?

What does this return code mean? How can there be no route?

> The API is obviously not
> complete as is if I have to assume it's there, call into it and hope for
> the best.  Thanks,
> 
> Alex

Why can't all chipsets implement this? You are just working
around broken chipsets in your device, this is wrong.

> > > > > On Tue, 2012-10-02 at 13:21 -0600, Alex Williamson wrote:
> > > > > > A few cleanups that I'll also apply to vfio-pci.  First make intx
> > > > > > route checking non-fatal.  vfio-pci has a fallback INTx mechanism
> > > > > > that doesn't rely on this, so we can already run on q35, but not if
> > > > > > we can't even probe for intx routing w/o blowing up.  Next, both
> > > > > > vfio-pci and pci-assign test whether INTx routing has changed using
> > > > > > similar functions.  Make this common.  Finally, expose a way to
> > > > > > get the MSI message for an MSI vector.  Again, both pci-assign and
> > > > > > vfio-pci need to do this to program the vector for KVM injection.
> > > > > > Thanks,
> > > > > > 
> > > > > > Alex
> > > > > > 
> > > > > > ---
> > > > > > 
> > > > > > Alex Williamson (6):
> > > > > >       pci-assign: Use msi_get_message()
> > > > > >       msi: Add msi_get_message()
> > > > > >       pci-assign: Use pci_intx_route_changed()
> > > > > >       pci: Helper function for testing if an INTx route changed
> > > > > >       pci-assign: Add support for no-route
> > > > > >       pci: Add INTx no-route option
> > > > > > 
> > > > > > 
> > > > > >  hw/kvm/pci-assign.c |   14 ++++++++------
> > > > > >  hw/msi.c            |   45 +++++++++++++++++++++++++++++----------------
> > > > > >  hw/msi.h            |    1 +
> > > > > >  hw/pci.c            |   13 +++++++++++--
> > > > > >  hw/pci.h            |    2 ++
> > > > > >  5 files changed, 51 insertions(+), 24 deletions(-)
> > > > > 
> > > > > 
> > > 
> > > 
> 
> 


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 21:50           ` Michael S. Tsirkin
@ 2012-10-08 22:09             ` Alex Williamson
  2012-10-08 22:44               ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-10-08 22:09 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: jan.kiszka, qemu-devel

On Mon, 2012-10-08 at 23:50 +0200, Michael S. Tsirkin wrote:
> On Mon, Oct 08, 2012 at 03:11:37PM -0600, Alex Williamson wrote:
> > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > > > Michael, Jan,
> > > > > > 
> > > > > > Any comments on these?  I'd like to make the PCI changes before I update
> > > > > > vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > > > > anyone officially listed as maintainer of pci-assign since it's been
> > > > > > moved to qemu.  I could include the pci-assign patches in my tree if you
> > > > > > prefer.  Thanks,
> > > > > > 
> > > > > > Alex
> > > > > 
> > > > > Patches themselves look fine, but I'd like to
> > > > > better understand why we want the INTx fallback.
> > > > > Isn't it easier to add intx routing support?
> > > > 
> > > > vfio-pci can work with or without intx routing support.  Its presence is
> > > > just one requirement to enable kvm accelerated intx support.  Regardless
> > > > of whether it's easy or hard to implement intx routing in a given
> > > > chipset, I currently can't probe for it and make useful decisions about
> > > > whether or not to enable kvm support without potentially hitting an
> > > > assert.  It's arguable how important intx acceleration is for specific
> > > > applications, so while I'd like all chipsets to implement it, I don't
> > > > know that it should be a gating factor to chipset integration.  Thanks,
> > > > 
> > > > Alex
> > > 
> > > Yes but there's nothing kvm specific in the routing API,
> > > and IIRC it actually works fine without kvm.
> > 
> > Correct, but intx routing isn't very useful without kvm.
> > 
> > > As I see it, if some chipset does not expose it, it's a bug, and the
> > > reason for lack of support is because no one cares about supporting
> > > device assignment there.
> > 
> > Should we not have a more robust response to bugs than to kill the VM?
> > Especially when it's as trivial as using the non-accelerated intx mode
> > (vfio-pci) or having device init fail (pci-assign).  Calling assert is
> > lazy.
> 
> This API has nothing to do with acceleration that I can see.

Ok, let's build on that...

> > > So this API is not something devices should probe for.
> > > How about just assuming it works?
> > 
> > If that's the case, why test for any capabilities?  Let's just assume
> > everything is there and litter the code with asserts rather than
> > robustly deal with errors. </sarcasm>
> > 
> > > Otherwise, you are adding code and API that will become dead code
> > > when everyone supports the required API.
> > 
> > So q35 is going to be the last ever chipset?  It may be the next one to
> > implement intx routing, but hopefully not the last.
> 
> Everyone should just implement same APIs.  This was exactly why I asked
> Jan to change routing so everyone does the same thing, instead of this
> bolted-on hack just for assignment.

Do you have any idea how to implement intx routing on a g3beige
platform?  Me neither, but vfio-pci works just fine on it if I don't
call pci_device_route_intx_to_irq on that platform.  I could test kvm
support before calling it, but we both agree that intx routing isn't
tied to kvm support, same for kvm irq chip support.  Now I'm in a pickle
of what I can test for to see if intx routing is in place that won't
break some day if we run g3beige on a ppc host w/ kvm acceleration, and
heaven forbid that it should ever implement something we call an
irqchip.

> I merged this anyway so we can stop maintaining two trees which is even
> more painful.
> 
> > We're talking about this:
> > 
> >  hw/pci.c |    8 ++++++--
> >  hw/pci.h |    1 +
> >  2 files changed, 7 insertions(+), 2 deletions(-)
> > 
> > Is this somehow tipping us over the edge and creating an unmaintainable
> > API?
> 
> 
> One has to draw a line in the sand somewhere, and I think it's a good
> idea to stop now before this metastasizes all over the code.

I think the API is broken w/o this, so...

> > Does it somehow prevent a driver from being just as lazy and doing
> > assert(intx.mode == PCI_INTX_NOROUTE)?
> 
> What does this return code mean? How can there be no route?

The function returns PCIINTxRoute, so I can't very well return -1 or
-ENODEV if we don't know how to generate the route.  Obviously the route
exists somewhere in the depths of qemu_irq, but we don't know how to get
it.  Therefore, we return No Route.  It's far more obvious than trying
to figure out what INVERTED means.

> > The API is obviously not
> > complete as is if I have to assume it's there, call into it and hope for
> > the best.  Thanks,
> > 
> > Alex
> 
> Why can't all chipsets implement this? You are just working
> around broken chipsets in your device, this is wrong.

Like I asked, do you know how to make intx route work on g3beige or
define a test that doesn't pull in unrelated features to avoid calling
it there?  Implementing intx routing on a chipset that has no hopes of
qemu bypass is wasted effort, therefore we need an API that allows us to
test it.  Thanks,

Alex


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 22:09             ` Alex Williamson
@ 2012-10-08 22:44               ` Michael S. Tsirkin
  2012-10-08 23:54                 ` Alex Williamson
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-10-08 22:44 UTC (permalink / raw)
  To: Alex Williamson; +Cc: jan.kiszka, qemu-devel

On Mon, Oct 08, 2012 at 04:09:48PM -0600, Alex Williamson wrote:
> On Mon, 2012-10-08 at 23:50 +0200, Michael S. Tsirkin wrote:
> > On Mon, Oct 08, 2012 at 03:11:37PM -0600, Alex Williamson wrote:
> > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > > On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > > On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > > > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > > > > Michael, Jan,
> > > > > > > 
> > > > > > > Any comments on these?  I'd like to make the PCI changes before I update
> > > > > > > vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > > > > > anyone officially listed as maintainer of pci-assign since it's been
> > > > > > > moved to qemu.  I could include the pci-assign patches in my tree if you
> > > > > > > prefer.  Thanks,
> > > > > > > 
> > > > > > > Alex
> > > > > > 
> > > > > > Patches themselves look fine, but I'd like to
> > > > > > better understand why we want the INTx fallback.
> > > > > > Isn't it easier to add intx routing support?
> > > > > 
> > > > > vfio-pci can work with or without intx routing support.  Its presence is
> > > > > just one requirement to enable kvm accelerated intx support.  Regardless
> > > > > of whether it's easy or hard to implement intx routing in a given
> > > > > chipset, I currently can't probe for it and make useful decisions about
> > > > > whether or not to enable kvm support without potentially hitting an
> > > > > assert.  It's arguable how important intx acceleration is for specific
> > > > > applications, so while I'd like all chipsets to implement it, I don't
> > > > > know that it should be a gating factor to chipset integration.  Thanks,
> > > > > 
> > > > > Alex
> > > > 
> > > > Yes but there's nothing kvm specific in the routing API,
> > > > and IIRC it actually works fine without kvm.
> > > 
> > > Correct, but intx routing isn't very useful without kvm.
> > > 
> > > > As I see it, if some chipset does not expose it, it's a bug, and the
> > > > reason for lack of support is because no one cares about supporting
> > > > device assignment there.
> > > 
> > > Should we not have a more robust response to bugs than to kill the VM?
> > > Especially when it's as trivial as using the non-accelerated intx mode
> > > (vfio-pci) or having device init fail (pci-assign).  Calling assert is
> > > lazy.
> > 
> > This API has nothing to do with acceleration that I can see.
> 
> Ok, let's build on that...
> 
> > > > So this API is not something devices should probe for.
> > > > How about just assuming it works?
> > > 
> > > If that's the case, why test for any capabilities?  Let's just assume
> > > everything is there and litter the code with asserts rather than
> > > robustly deal with errors. </sarcasm>
> > > 
> > > > Otherwise, you are adding code and API that will become dead code
> > > > when everyone supports the required API.
> > > 
> > > So q35 is going to be the last ever chipset?  It may be the next one to
> > > implement intx routing, but hopefully not the last.
> > 
> > Everyone should just implement same APIs.  This was exactly why I asked
> > Jan to change routing so everyone does the same thing, instead of this
> > bolted-on hack just for assignment.
> 
> Do you have any idea how to implement intx routing on a g3beige
> platform?  Me neither, but vfio-pci works just fine on it if I don't
> call pci_device_route_intx_to_irq on that platform.  I could test kvm
> support before calling it, but we both agree that intx routing isn't
> tied to kvm support, same for kvm irq chip support.  Now I'm in a pickle
> > of what I can test for to see if intx routing is in place that won't
> break some day if we run g3beige on a ppc host w/ kvm acceleration, and
> heaven forbid that it should ever implement something we call an
> irqchip.
> 
> > I merged this anyway so we can stop maintaining two trees which is even
> > more painful.
> > 
> > > We're talking about this:
> > > 
> > >  hw/pci.c |    8 ++++++--
> > >  hw/pci.h |    1 +
> > >  2 files changed, 7 insertions(+), 2 deletions(-)
> > > 
> > > Is this somehow tipping us over the edge and creating an unmaintainable
> > > API?
> > 
> > 
> > One has to draw a line in the sand somewhere, and I think it's a good
> > idea to stop now before this metastasizes all over the code.
> 
> I think the API is broken w/o this, so...
> 
> > > Does it somehow prevent a driver from being just as lazy and doing
> > > assert(intx.mode == PCI_INTX_NOROUTE)?
> > 
> > What does this return code mean? How can there be no route?
> 
> The function returns PCIINTxRoute, so I can't very well return -1 or
> -ENODEV if we don't know how to generate the route.  Obviously the route
> exists somewhere in the depths of qemu_irq, but we don't know how to get
> it.  Therefore, we return No Route.

In other words it means there's a chipset bug. Why would
a device want to assert on this? Let the chipset assert where
there's a chance to debug it.

>  It's far more obvious than trying
> to figure out what INVERTED means.
> 
> > > The API is obviously not
> > > complete as is if I have to assume it's there, call into it and hope for
> > > the best.  Thanks,
> > > 
> > > Alex
> > 
> > Why can't all chipsets implement this? You are just working
> > around broken chipsets in your device, this is wrong.
> 
> Like I asked, do you know how to make intx route work on g3beige or
> define a test that doesn't pull in unrelated features to avoid calling
> it there?  Implementing intx routing on a chipset that has no hopes of
> qemu bypass is wasted effort,
> therefore we need an API that allows us to
> test it.  Thanks,
> 
> Alex
> 

And this is why the intx API is the wrong thing to keep long term.
We end up with code like
	if (fast thing works) {
		do fast thing
	} else
		do slow thing
in many devices, which is not where I want the complexity
to be; I want to keep it in the core.

I plan to rework this to make all devices do something like what Avi's
memory API does, flattening the hierarchy at registration time, and I
prefer not to add more dead code that will need to be removed.

Is there any rush to use vfio on g3beige?  If yes I could merge
something like this patch as a temporary measure but I prefer not to.

-- 
MST


* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 22:44               ` Michael S. Tsirkin
@ 2012-10-08 23:54                 ` Alex Williamson
  2012-10-11 14:11                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-10-08 23:54 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: jan.kiszka, qemu-devel

On Tue, 2012-10-09 at 00:44 +0200, Michael S. Tsirkin wrote:
> On Mon, Oct 08, 2012 at 04:09:48PM -0600, Alex Williamson wrote:
> > On Mon, 2012-10-08 at 23:50 +0200, Michael S. Tsirkin wrote:
> > > On Mon, Oct 08, 2012 at 03:11:37PM -0600, Alex Williamson wrote:
> > > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > > > On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > > > On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > > > > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > > > > > Michael, Jan,
> > > > > > > > 
> > > > > > > > Any comments on these?  I'd like to make the PCI changes before I update
> > > > > > > > vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > > > > > > anyone officially listed as maintainer of pci-assign since it's been
> > > > > > > > moved to qemu.  I could include the pci-assign patches in my tree if you
> > > > > > > > prefer.  Thanks,
> > > > > > > > 
> > > > > > > > Alex
> > > > > > > 
> > > > > > > Patches themselves look fine, but I'd like to
> > > > > > > better understand why we want the INTx fallback.
> > > > > > > Isn't it easier to add intx routing support?
> > > > > > 
> > > > > > vfio-pci can work with or without intx routing support.  Its presence is
> > > > > > just one requirement to enable kvm accelerated intx support.  Regardless
> > > > > > of whether it's easy or hard to implement intx routing in a given
> > > > > > chipset, I currently can't probe for it and make useful decisions about
> > > > > > whether or not to enable kvm support without potentially hitting an
> > > > > > assert.  It's arguable how important intx acceleration is for specific
> > > > > > applications, so while I'd like all chipsets to implement it, I don't
> > > > > > know that it should be a gating factor to chipset integration.  Thanks,
> > > > > > 
> > > > > > Alex
> > > > > 
> > > > > Yes but there's nothing kvm specific in the routing API,
> > > > > and IIRC it actually works fine without kvm.
> > > > 
> > > > Correct, but intx routing isn't very useful without kvm.
> > > > 
> > > > > As I see it, if some chipset does not expose it, it's a bug, and the
> > > > > reason for lack of support is because no one cares about supporting
> > > > > device assignment there.
> > > > 
> > > > Should we not have a more robust response to bugs than to kill the VM?
> > > > Especially when it's as trivial as using the non-accelerated intx mode
> > > > (vfio-pci) or having device init fail (pci-assign).  Calling assert is
> > > > lazy.
> > > 
> > > This API has nothing to do with acceleration that I can see.
> > 
> > Ok, let's build on that...
> > 
> > > > > So this API is not something devices should probe for.
> > > > > How about just assuming it works?
> > > > 
> > > > If that's the case, why test for any capabilities?  Let's just assume
> > > > everything is there and litter the code with asserts rather than
> > > > robustly deal with errors. </sarcasm>
> > > > 
> > > > > Otherwise, you are adding code and API that will become dead code
> > > > > when everyone supports the required API.
> > > > 
> > > > So q35 is going to be the last ever chipset?  It may be the next one to
> > > > implement intx routing, but hopefully not the last.
> > > 
> > > Everyone should just implement same APIs.  This was exactly why I asked
> > > Jan to change routing so everyone does the same thing, instead of this
> > > bolted-on hack just for assignment.
> > 
> > Do you have any idea how to implement intx routing on a g3beige
> > platform?  Me neither, but vfio-pci works just fine on it if I don't
> > call pci_device_route_intx_to_irq on that platform.  I could test kvm
> > support before calling it, but we both agree that intx routing isn't
> > tied to kvm support, same for kvm irq chip support.  Now I'm in a pickle
> > > of what I can test for to see if intx routing is in place that won't
> > break some day if we run g3beige on a ppc host w/ kvm acceleration, and
> > heaven forbid that it should ever implement something we call an
> > irqchip.
> > 
> > > I merged this anyway so we can stop maintaining two trees which is even
> > > more painful.
> > > 
> > > > We're talking about this:
> > > > 
> > > >  hw/pci.c |    8 ++++++--
> > > >  hw/pci.h |    1 +
> > > >  2 files changed, 7 insertions(+), 2 deletions(-)
> > > > 
> > > > Is this somehow tipping us over the edge and creating an unmaintainable
> > > > API?
> > > 
> > > 
> > > One has to draw a line in the sand somewhere, and I think it's a good
> > > idea to stop now before this metastasizes all over the code.
> > 
> > I think the API is broken w/o this, so...
> > 
> > > > Does it somehow prevent a driver from being just as lazy and doing
> > > > assert(intx.mode == PCI_INTX_NOROUTE)?
> > > 
> > > What does this return code mean? How can there be no route?
> > 
> > The function returns PCIINTxRoute, so I can't very well return -1 or
> > -ENODEV if we don't know how to generate the route.  Obviously the route
> > exists somewhere in the depths of qemu_irq, but we don't know how to get
> > it.  Therefore, we return No Route.
> 
> In other words it means there's a chipset bug. Why would
> a device want to assert on this? Let the chipset assert where
> there's a chance to debug it.

Is it a chipset bug not to support a feature that has no value on that
platform?  Is it useful to assert on a known unimplemented feature?  I
say no to both.

> >  It's far more obvious than trying
> > to figure out what INVERTED means.
> > 
> > > > The API is obviously not
> > > > complete as is if I have to assume it's there, call into it and hope for
> > > > the best.  Thanks,
> > > > 
> > > > Alex
> > > 
> > > Why can't all chipsets implement this? You are just working
> > > around broken chipsets in your device, this is wrong.
> > 
> > Like I asked, do you know how to make intx route work on g3beige or
> > define a test that doesn't pull in unrelated features to avoid calling
> > it there?  Implementing intx routing on a chipset that has no hopes of
> > qemu bypass is wasted effort,
> > therefore we need an API that allows us to
> > test it.  Thanks,
> > 
> > Alex
> > 
> 
> And this is why the intx API is the wrong thing to keep long term.
> We end up with code like
> 	if (fast thing works) {
> 		do fast thing
> 	} else
> 		do slow thing
> in many devices which is not where I want the complexity
> to be, I want to keep it in the core.
>
> I plan to rework this, to make all devices do something like what Avi's
> memory API does, flattening the hierarchy at registration time, and
> prefer not to add more dead code that will need to be removed.

While I agree qemu_irq needs a re-write, I have a hard time seeing
anything other than device assignment that needs this particular
feature, so I don't think there are "many devices" that will be doing
the above.  Even if there were, it's pretty common to test features and
enable fast paths based on availability.  So long as there are easy ways
to test both paths, which there are with vfio-pci, I don't see a problem
with that.

> Is there any rush to use vfio on g3beige?  If yes I could merge
> something like this patch as a temporary measure but I prefer not to.

Yes, I already use it as a non-x86 test platform and I'd like to not
break it or create tests on non-related features to avoid intx routing
code.  I'd also like to enable vfio-pci on any other platforms that
support PCI and don't want to deal with this on every chipset.  Thanks,

Alex

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 21:11         ` Alex Williamson
  2012-10-08 21:50           ` Michael S. Tsirkin
@ 2012-10-09  7:09           ` Jan Kiszka
  2012-10-10 19:31             ` Alex Williamson
  1 sibling, 1 reply; 28+ messages in thread
From: Jan Kiszka @ 2012-10-09  7:09 UTC (permalink / raw)
  To: Alex Williamson; +Cc: qemu-devel@nongnu.org, Michael S. Tsirkin

On 2012-10-08 23:11, Alex Williamson wrote:
> On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
>> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
>>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
>>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
>>>>> Michael, Jan,
>>>>>
>>>>> Any comments on these?  I'd like to make the PCI changes before I update
>>>>> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
>>>>> anyone officially listed as maintainer of pci-assign since it's been
>>>>> moved to qemu.  I could include the pci-assign patches in my tree if you
>>>>> prefer.  Thanks,
>>>>>
>>>>> Alex
>>>>
>>>> Patches themselves look fine, but I'd like to
>>>> better understand why we want the INTx fallback.
>>>> Isn't it easier to add intx routing support?
>>>
>>> vfio-pci can work with or without intx routing support.  Its presence is
>>> just one requirement to enable kvm accelerated intx support.  Regardless
>>> of whether it's easy or hard to implement intx routing in a given
>>> chipset, I currently can't probe for it and make useful decisions about
>>> whether or not to enable kvm support without potentially hitting an
>>> assert.  It's arguable how important intx acceleration is for specific
>>> applications, so while I'd like all chipsets to implement it, I don't
>>> know that it should be a gating factor to chipset integration.  Thanks,
>>>
>>> Alex
>>
>> Yes but there's nothing kvm specific in the routing API,
>> and IIRC it actually works fine without kvm.
> 
> Correct, but intx routing isn't very useful without kvm.

Right now: yes. Long-term: no. The concept in general is also required
for decoupling I/O paths lock-wise from our main thread. We need to
explore the IRQ path and cache it in order to avoid taking lots of locks
on each delivery, possibly even the BQL. But we will likely need
something smarter at that point, i.e. something PCI-independent.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux

* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-09  7:09           ` Jan Kiszka
@ 2012-10-10 19:31             ` Alex Williamson
  2012-10-11 10:37               ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-10-10 19:31 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: qemu-devel@nongnu.org, Michael S. Tsirkin

On Tue, 2012-10-09 at 09:09 +0200, Jan Kiszka wrote:
> On 2012-10-08 23:11, Alex Williamson wrote:
> > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> >> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> >>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> >>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> >>>>> Michael, Jan,
> >>>>>
> >>>>> Any comments on these?  I'd like to make the PCI changes before I update
> >>>>> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> >>>>> anyone officially listed as maintainer of pci-assign since it's been
> >>>>> moved to qemu.  I could include the pci-assign patches in my tree if you
> >>>>> prefer.  Thanks,
> >>>>>
> >>>>> Alex
> >>>>
> >>>> Patches themselves look fine, but I'd like to
> >>>> better understand why we want the INTx fallback.
> >>>> Isn't it easier to add intx routing support?
> >>>
> >>> vfio-pci can work with or without intx routing support.  Its presence is
> >>> just one requirement to enable kvm accelerated intx support.  Regardless
> >>> of whether it's easy or hard to implement intx routing in a given
> >>> chipset, I currently can't probe for it and make useful decisions about
> >>> whether or not to enable kvm support without potentially hitting an
> >>> assert.  It's arguable how important intx acceleration is for specific
> >>> applications, so while I'd like all chipsets to implement it, I don't
> >>> know that it should be a gating factor to chipset integration.  Thanks,
> >>>
> >>> Alex
> >>
> >> Yes but there's nothing kvm specific in the routing API,
> >> and IIRC it actually works fine without kvm.
> > 
> > Correct, but intx routing isn't very useful without kvm.
> 
> Right now: yes. Long-term: no. The concept in general is also required
> for decoupling I/O paths lock-wise from our main thread. We need to
> explore the IRQ path and cache it in order to avoid taking lots of locks
> on each delivery, possibly even the BQL. But we will likely need
> something smarter at that point, i.e. something PCI-independent.

That sounds great long term, but in the interim I think this trivial
extension to the API is more than justified.  I hope that it can go in
soon so we can get vfio-pci kvm intx acceleration in before freeze
deadlines get much closer.  Thanks,

Alex

* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-10 19:31             ` Alex Williamson
@ 2012-10-11 10:37               ` Michael S. Tsirkin
  2012-10-11 13:38                 ` Alex Williamson
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-10-11 10:37 UTC (permalink / raw)
  To: Alex Williamson; +Cc: Jan Kiszka, qemu-devel@nongnu.org

On Wed, Oct 10, 2012 at 01:31:52PM -0600, Alex Williamson wrote:
> On Tue, 2012-10-09 at 09:09 +0200, Jan Kiszka wrote:
> > On 2012-10-08 23:11, Alex Williamson wrote:
> > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > >> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > >>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > >>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > >>>>> Michael, Jan,
> > >>>>>
> > >>>>> Any comments on these?  I'd like to make the PCI changes before I update
> > >>>>> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > >>>>> anyone officially listed as maintainer of pci-assign since it's been
> > >>>>> moved to qemu.  I could include the pci-assign patches in my tree if you
> > >>>>> prefer.  Thanks,
> > >>>>>
> > >>>>> Alex
> > >>>>
> > >>>> Patches themselves look fine, but I'd like to
> > >>>> better understand why we want the INTx fallback.
> > >>>> Isn't it easier to add intx routing support?
> > >>>
> > >>> vfio-pci can work with or without intx routing support.  Its presence is
> > >>> just one requirement to enable kvm accelerated intx support.  Regardless
> > >>> of whether it's easy or hard to implement intx routing in a given
> > >>> chipset, I currently can't probe for it and make useful decisions about
> > >>> whether or not to enable kvm support without potentially hitting an
> > >>> assert.  It's arguable how important intx acceleration is for specific
> > >>> applications, so while I'd like all chipsets to implement it, I don't
> > >>> know that it should be a gating factor to chipset integration.  Thanks,
> > >>>
> > >>> Alex
> > >>
> > >> Yes but there's nothing kvm specific in the routing API,
> > >> and IIRC it actually works fine without kvm.
> > > 
> > > Correct, but intx routing isn't very useful without kvm.
> > 
> > Right now: yes. Long-term: no. The concept in general is also required
> > for decoupling I/O paths lock-wise from our main thread. We need to
> > explore the IRQ path and cache it in order to avoid taking lots of locks
> > on each delivery, possibly even the BQL. But we will likely need
> > something smarter at that point, i.e. something PCI-independent.
> 
> That sounds great long term, but in the interim I think this trivial
> extension to the API is more than justified.  I hope that it can go in
> soon so we can get vfio-pci kvm intx acceleration in before freeze
> deadlines get much closer.  Thanks,
> 
> Alex

Simply reorder the patches:
1. add vfio acceleration with no fallback
2. add way for intx routing to fail
3. add vfio fallback if intx routing fails

Then we can apply 1 and argue about the need for 2/3
afterwards.

-- 
MST

* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-11 10:37               ` Michael S. Tsirkin
@ 2012-10-11 13:38                 ` Alex Williamson
  2012-10-11 13:49                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Alex Williamson @ 2012-10-11 13:38 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Jan Kiszka, qemu-devel@nongnu.org

On Thu, 2012-10-11 at 12:37 +0200, Michael S. Tsirkin wrote:
> On Wed, Oct 10, 2012 at 01:31:52PM -0600, Alex Williamson wrote:
> > On Tue, 2012-10-09 at 09:09 +0200, Jan Kiszka wrote:
> > > On 2012-10-08 23:11, Alex Williamson wrote:
> > > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > >> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > >>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > >>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > >>>>> Michael, Jan,
> > > >>>>>
> > > >>>>> Any comments on these?  I'd like to make the PCI changes before I update
> > > >>>>> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > >>>>> anyone officially listed as maintainer of pci-assign since it's been
> > > >>>>> moved to qemu.  I could include the pci-assign patches in my tree if you
> > > >>>>> prefer.  Thanks,
> > > >>>>>
> > > >>>>> Alex
> > > >>>>
> > > >>>> Patches themselves look fine, but I'd like to
> > > >>>> better understand why we want the INTx fallback.
> > > >>>> Isn't it easier to add intx routing support?
> > > >>>
> > > >>> vfio-pci can work with or without intx routing support.  Its presence is
> > > >>> just one requirement to enable kvm accelerated intx support.  Regardless
> > > >>> of whether it's easy or hard to implement intx routing in a given
> > > >>> chipset, I currently can't probe for it and make useful decisions about
> > > >>> whether or not to enable kvm support without potentially hitting an
> > > >>> assert.  It's arguable how important intx acceleration is for specific
> > > >>> applications, so while I'd like all chipsets to implement it, I don't
> > > >>> know that it should be a gating factor to chipset integration.  Thanks,
> > > >>>
> > > >>> Alex
> > > >>
> > > >> Yes but there's nothing kvm specific in the routing API,
> > > >> and IIRC it actually works fine without kvm.
> > > > 
> > > > Correct, but intx routing isn't very useful without kvm.
> > > 
> > > Right now: yes. Long-term: no. The concept in general is also required
> > > for decoupling I/O paths lock-wise from our main thread. We need to
> > > explore the IRQ path and cache it in order to avoid taking lots of locks
> > > on each delivery, possibly even the BQL. But we will likely need
> > > something smarter at that point, i.e. something PCI-independent.
> > 
> > That sounds great long term, but in the interim I think this trivial
> > extension to the API is more than justified.  I hope that it can go in
> > soon so we can get vfio-pci kvm intx acceleration in before freeze
> > deadlines get much closer.  Thanks,
> > 
> > Alex
> 
> Simply reorder the patches:
> 1. add vfio acceleration with no fallback
> 2. add way for intx routing to fail
> 3. add vfio fallback if intx routing fails
> 
> Then we can apply 1 and argue about the need for 2/3
> afterwards.

And patches 2-6 of this series; are they also far too controversial to
consider applying now?

* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-11 13:38                 ` Alex Williamson
@ 2012-10-11 13:49                   ` Michael S. Tsirkin
  2012-10-11 14:10                     ` Alex Williamson
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-10-11 13:49 UTC (permalink / raw)
  To: Alex Williamson; +Cc: Jan Kiszka, qemu-devel@nongnu.org

On Thu, Oct 11, 2012 at 07:38:54AM -0600, Alex Williamson wrote:
> On Thu, 2012-10-11 at 12:37 +0200, Michael S. Tsirkin wrote:
> > On Wed, Oct 10, 2012 at 01:31:52PM -0600, Alex Williamson wrote:
> > > On Tue, 2012-10-09 at 09:09 +0200, Jan Kiszka wrote:
> > > > On 2012-10-08 23:11, Alex Williamson wrote:
> > > > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > > >> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > >>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > >>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > >>>>> Michael, Jan,
> > > > >>>>>
> > > > >>>>> Any comments on these?  I'd like to make the PCI changes before I update
> > > > >>>>> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > > >>>>> anyone officially listed as maintainer of pci-assign since it's been
> > > > >>>>> moved to qemu.  I could include the pci-assign patches in my tree if you
> > > > >>>>> prefer.  Thanks,
> > > > >>>>>
> > > > >>>>> Alex
> > > > >>>>
> > > > >>>> Patches themselves look fine, but I'd like to
> > > > >>>> better understand why we want the INTx fallback.
> > > > >>>> Isn't it easier to add intx routing support?
> > > > >>>
> > > > >>> vfio-pci can work with or without intx routing support.  Its presence is
> > > > >>> just one requirement to enable kvm accelerated intx support.  Regardless
> > > > >>> of whether it's easy or hard to implement intx routing in a given
> > > > >>> chipset, I currently can't probe for it and make useful decisions about
> > > > >>> whether or not to enable kvm support without potentially hitting an
> > > > >>> assert.  It's arguable how important intx acceleration is for specific
> > > > >>> applications, so while I'd like all chipsets to implement it, I don't
> > > > >>> know that it should be a gating factor to chipset integration.  Thanks,
> > > > >>>
> > > > >>> Alex
> > > > >>
> > > > >> Yes but there's nothing kvm specific in the routing API,
> > > > >> and IIRC it actually works fine without kvm.
> > > > > 
> > > > > Correct, but intx routing isn't very useful without kvm.
> > > > 
> > > > Right now: yes. Long-term: no. The concept in general is also required
> > > > for decoupling I/O paths lock-wise from our main thread. We need to
> > > > explore the IRQ path and cache it in order to avoid taking lots of locks
> > > > on each delivery, possibly even the BQL. But we will likely need
> > > > something smarter at that point, i.e. something PCI-independent.
> > > 
> > > That sounds great long term, but in the interim I think this trivial
> > > extension to the API is more than justified.  I hope that it can go in
> > > soon so we can get vfio-pci kvm intx acceleration in before freeze
> > > deadlines get much closer.  Thanks,
> > > 
> > > Alex
> > 
> > Simply reorder the patches:
> > 1. add vfio acceleration with no fallback
> > 2. add way for intx routing to fail
> > 3. add vfio fallback if intx routing fails
> > 
> > Then we can apply 1 and argue about the need for 2/3
> > afterwards.
> 
> And patches 2-6 of this series; are they also far too controversial to
> consider applying now?

You mean 3-6, right?
I think they are fine. Sorry about not making this clear.
Would you like me to apply them right away?
I delayed that in case there's fallout from splitting patches 1-2
but looking at it more closely they are unrelated so this seems unlikely.
Pls let me know.

-- 
MST

* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-11 13:49                   ` Michael S. Tsirkin
@ 2012-10-11 14:10                     ` Alex Williamson
  0 siblings, 0 replies; 28+ messages in thread
From: Alex Williamson @ 2012-10-11 14:10 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Jan Kiszka, qemu-devel@nongnu.org

On Thu, 2012-10-11 at 15:49 +0200, Michael S. Tsirkin wrote:
> On Thu, Oct 11, 2012 at 07:38:54AM -0600, Alex Williamson wrote:
> > On Thu, 2012-10-11 at 12:37 +0200, Michael S. Tsirkin wrote:
> > > On Wed, Oct 10, 2012 at 01:31:52PM -0600, Alex Williamson wrote:
> > > > On Tue, 2012-10-09 at 09:09 +0200, Jan Kiszka wrote:
> > > > > On 2012-10-08 23:11, Alex Williamson wrote:
> > > > > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > > > >> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > > >>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > > >>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > > >>>>> Michael, Jan,
> > > > > >>>>>
> > > > > >>>>> Any comments on these?  I'd like to make the PCI changes before I update
> > > > > >>>>> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > > > >>>>> anyone officially listed as maintainer of pci-assign since it's been
> > > > > >>>>> moved to qemu.  I could include the pci-assign patches in my tree if you
> > > > > >>>>> prefer.  Thanks,
> > > > > >>>>>
> > > > > >>>>> Alex
> > > > > >>>>
> > > > > >>>> Patches themselves look fine, but I'd like to
> > > > > >>>> better understand why we want the INTx fallback.
> > > > > >>>> Isn't it easier to add intx routing support?
> > > > > >>>
> > > > > >>> vfio-pci can work with or without intx routing support.  Its presence is
> > > > > >>> just one requirement to enable kvm accelerated intx support.  Regardless
> > > > > >>> of whether it's easy or hard to implement intx routing in a given
> > > > > >>> chipset, I currently can't probe for it and make useful decisions about
> > > > > >>> whether or not to enable kvm support without potentially hitting an
> > > > > >>> assert.  It's arguable how important intx acceleration is for specific
> > > > > >>> applications, so while I'd like all chipsets to implement it, I don't
> > > > > >>> know that it should be a gating factor to chipset integration.  Thanks,
> > > > > >>>
> > > > > >>> Alex
> > > > > >>
> > > > > >> Yes but there's nothing kvm specific in the routing API,
> > > > > >> and IIRC it actually works fine without kvm.
> > > > > > 
> > > > > > Correct, but intx routing isn't very useful without kvm.
> > > > > 
> > > > > Right now: yes. Long-term: no. The concept in general is also required
> > > > > for decoupling I/O paths lock-wise from our main thread. We need to
> > > > > explore the IRQ path and cache it in order to avoid taking lots of locks
> > > > > on each delivery, possibly even the BQL. But we will likely need
> > > > > something smarter at that point, i.e. something PCI-independent.
> > > > 
> > > > That sounds great long term, but in the interim I think this trivial
> > > > extension to the API is more than justified.  I hope that it can go in
> > > > soon so we can get vfio-pci kvm intx acceleration in before freeze
> > > > deadlines get much closer.  Thanks,
> > > > 
> > > > Alex
> > > 
> > > Simply reorder the patches:
> > > 1. add vfio acceleration with no fallback
> > > 2. add way for intx routing to fail
> > > 3. add vfio fallback if intx routing fails
> > > 
> > > Then we can apply 1 and argue about the need for 2/3
> > > afterwards.
> > 
> > And patches 2-6 of this series; are they also far too controversial to
> > consider applying now?
> 
> You mean 3-6, right?
> I think they are fine. Sorry about not making this clear.
> Would you like me to apply them right away?
> I delayed that in case there's fallout from splitting patches 1-2
> but looking at it more closely they are unrelated so this seems unlikely.
> Pls let me know.
> 

Yes, please still apply 3-6.

* Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
  2012-10-08 23:54                 ` Alex Williamson
@ 2012-10-11 14:11                   ` Michael S. Tsirkin
  0 siblings, 0 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-10-11 14:11 UTC (permalink / raw)
  To: Alex Williamson; +Cc: jan.kiszka, qemu-devel

On Mon, Oct 08, 2012 at 05:54:58PM -0600, Alex Williamson wrote:
> On Tue, 2012-10-09 at 00:44 +0200, Michael S. Tsirkin wrote:
> > On Mon, Oct 08, 2012 at 04:09:48PM -0600, Alex Williamson wrote:
> > > On Mon, 2012-10-08 at 23:50 +0200, Michael S. Tsirkin wrote:
> > > > On Mon, Oct 08, 2012 at 03:11:37PM -0600, Alex Williamson wrote:
> > > > > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > > > > On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > > > > On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > > > > > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > > > > > > Michael, Jan,
> > > > > > > > > 
> > > > > > > > > Any comments on these?  I'd like to make the PCI changes before I update
> > > > > > > > > vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> > > > > > > > > anyone officially listed as maintainer of pci-assign since it's been
> > > > > > > > > moved to qemu.  I could include the pci-assign patches in my tree if you
> > > > > > > > > prefer.  Thanks,
> > > > > > > > > 
> > > > > > > > > Alex
> > > > > > > > 
> > > > > > > > Patches themselves look fine, but I'd like to
> > > > > > > > better understand why we want the INTx fallback.
> > > > > > > > Isn't it easier to add intx routing support?
> > > > > > > 
> > > > > > > vfio-pci can work with or without intx routing support.  Its presence is
> > > > > > > just one requirement to enable kvm accelerated intx support.  Regardless
> > > > > > > of whether it's easy or hard to implement intx routing in a given
> > > > > > > chipset, I currently can't probe for it and make useful decisions about
> > > > > > > whether or not to enable kvm support without potentially hitting an
> > > > > > > assert.  It's arguable how important intx acceleration is for specific
> > > > > > > applications, so while I'd like all chipsets to implement it, I don't
> > > > > > > know that it should be a gating factor to chipset integration.  Thanks,
> > > > > > > 
> > > > > > > Alex
> > > > > > 
> > > > > > Yes but there's nothing kvm specific in the routing API,
> > > > > > and IIRC it actually works fine without kvm.
> > > > > 
> > > > > Correct, but intx routing isn't very useful without kvm.
> > > > > 
> > > > > > As I see it, if some chipset does not expose it, it's a bug, and the
> > > > > > reason for lack of support is because no one cares about supporting
> > > > > > device assignment there.
> > > > > 
> > > > > Should we not have a more robust response to bugs than to kill the VM?
> > > > > Especially when it's as trivial as using the non-accelerated intx mode
> > > > > (vfio-pci) or having device init fail (pci-assign).  Calling assert is
> > > > > lazy.
> > > > 
> > > > This API has nothing to do with acceleration that I can see.
> > > 
> > > Ok, let's build on that...
> > > 
> > > > > > So this API is not something devices should probe for.
> > > > > > How about just assuming it works?
> > > > > 
> > > > > If that's the case, why test for any capabilities?  Let's just assume
> > > > > everything is there and litter the code with asserts rather than
> > > > > robustly deal with errors. </sarcasm>
> > > > > 
> > > > > > Otherwise, you are adding code and API that will become dead code
> > > > > > when everyone supports the required API.
> > > > > 
> > > > > So q35 is going to be the last ever chipset?  It may be the next one to
> > > > > implement intx routing, but hopefully not the last.
> > > > 
> > > > Everyone should just implement same APIs.  This was exactly why I asked
> > > > Jan to change routing so everyone does the same thing, instead of this
> > > > bolted-on hack just for assignment.
> > > 
> > > Do you have any idea how to implement intx routing on a g3beige
> > > platform?  Me neither, but vfio-pci works just fine on it if I don't
> > > call pci_device_route_intx_to_irq on that platform.  I could test kvm
> > > support before calling it, but we both agree that intx routing isn't
> > > tied to kvm support, same for kvm irq chip support.  Now I'm in a pickle
> > > of what can I test for to see if intx routing is in place that won't
> > > break some day if we run g3beige on a ppc host w/ kvm acceleration, and
> > > heaven forbid that it should ever implement something we call an
> > > irqchip.
> > > 
> > > > I merged this anyway so we can stop maintaining two trees which is even
> > > > more painful.
> > > > 
> > > > > We're talking about this:
> > > > > 
> > > > >  hw/pci.c |    8 ++++++--
> > > > >  hw/pci.h |    1 +
> > > > >  2 files changed, 7 insertions(+), 2 deletions(-)
> > > > > 
> > > > > Is this somehow tipping us over the edge and creating an unmaintainable
> > > > > API?
> > > > 
> > > > 
> > > > One has to draw a line in the sand somewhere, and I think it's a good
> > > > idea to stop now before this metastasizes all over the code.
> > > 
> > > I think the API is broken w/o this, so...
> > > 
> > > > > Does it somehow prevent a driver from being just as lazy and doing
> > > > > assert(intx.mode == PCI_INTX_NOROUTE)?
> > > > 
> > > > What does this return code mean? How can there be no route?
> > > 
> > > The function returns PCIINTxRoute, so I can't very well return -1 or
> > > -ENODEV if we don't know how to generate the route.  Obviously the route
> > > exists somewhere in the depths of qemu_irq, but we don't know how to get
> > > it.  Therefore, we return No Route.
> > 
> > In other words it means there's a chipset bug. Why would
> > a device want to assert on this? Let chipset assert where
> > there's a chance to debug it.
> 
> Is it a chipset bug not to support a feature that has no value on that
> platform?  Is it useful to assert on a known unimplemented feature?  I
> say no to both.

The point all along was that the assert is a temporary measure to
catch missing code.  You are basically saying this API has no value
without level irqfd support in kvm, so vfio should just limit this
call to that condition.  Then, when people add kvm level irqfd on ppc
and this assert suddenly starts triggering, it will serve its intended
purpose and alert us that this API needs to be implemented for level
irqfd to actually work.


> > >  It's far more obvious than trying
> > > to figure out what INVERTED means.
> > > 
> > > > > The API is obviously not
> > > > > complete as is if I have to assume it's there, call into it and hope for
> > > > > the best.  Thanks,
> > > > > 
> > > > > Alex
> > > > 
> > > > Why can't all chipsets implement this? You are just working
> > > > around broken chipsets in your device, this is wrong.
> > > 
> > > Like I asked, do you know how to make intx route work on g3beige or
> > > define a test that doesn't pull in unrelated features to avoid calling
> > > it there?  Implementing intx routing on a chipset that has no hopes of
> > > qemu bypass is wasted effort,
> > > therefore we need an API that allows us to
> > > test it.  Thanks,
> > > 
> > > Alex
> > > 
> > 
> > And this is why the intx API is the wrong thing to keep long term.
> > We end up with code like
> > 	if (fast thing works) {
> > 		do fast thing
> > 	} else
> > 		do slow thing
> > in many devices which is not where I want the complexity
> > to be, I want to keep it in the core.
> >
> > I plan to rework this, to make all devices do something like what Avi's
> > memory API does, flattening the hierarchy at registration time, and
> > prefer not to add more dead code that will need to be removed.
> 
> While I agree qemu_irq needs a re-write, I have a hard time seeing
> anything other than device assignment that needs this particular
> feature, so I don't think there are "many devices" that will be doing
> the above.  Even if there were, it's pretty common to test features and
> enable fast paths based on availability.

Yes but this API has nothing to do with fast path. It is simply
missing code, assert is there exactly so we can catch it.

>  So long as there are easy ways
> to test both paths, which there are with vfio-pci, I don't see a problem
> with that.
> > Is there any rush to use vfio on g3beige?  If yes I could merge
> > something like this patch as a temporary measure but I prefer not to.
> 
> Yes, I already use it as a non-x86 test platform and I'd like to not
> break it or create tests on non-related features to avoid intx routing
> code.  I'd also like to enable vfio-pci on any other platforms that
> support PCI and don't want to deal with this on every chipset.  Thanks,
> 
> Alex

Actually, if (kvm_irqchip && kvm_level_irq) looks like more or less
exactly what you need ATM.  I would go for this for now; I understand
you think this is ugly, but the alternatives look ugly too.


-- 
MST

end of thread, other threads:[~2012-10-11 14:10 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-10-02 19:21 [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
2012-10-02 19:21 ` [Qemu-devel] [PATCH 1/6] pci: Add INTx no-route option Alex Williamson
2012-10-02 19:21 ` [Qemu-devel] [PATCH 2/6] pci-assign: Add support for no-route Alex Williamson
2012-10-02 19:55   ` Anthony Liguori
2012-10-02 20:12     ` Alex Williamson
2012-10-03  9:52       ` Paolo Bonzini
2012-10-03 12:32         ` Anthony Liguori
2012-10-02 19:21 ` [Qemu-devel] [PATCH 3/6] pci: Helper function for testing if an INTx route changed Alex Williamson
2012-10-02 19:22 ` [Qemu-devel] [PATCH 4/6] pci-assign: Use pci_intx_route_changed() Alex Williamson
2012-10-02 19:22 ` [Qemu-devel] [PATCH 5/6] msi: Add msi_get_message() Alex Williamson
2012-10-02 19:22 ` [Qemu-devel] [PATCH 6/6] pci-assign: Use msi_get_message() Alex Williamson
2012-10-08 15:58 ` [Qemu-devel] [PATCH 0/6] Misc PCI cleanups Alex Williamson
2012-10-08 16:01   ` Jan Kiszka
2012-10-08 20:15   ` Michael S. Tsirkin
2012-10-08 19:27     ` Alex Williamson
2012-10-08 21:40       ` Michael S. Tsirkin
2012-10-08 21:11         ` Alex Williamson
2012-10-08 21:50           ` Michael S. Tsirkin
2012-10-08 22:09             ` Alex Williamson
2012-10-08 22:44               ` Michael S. Tsirkin
2012-10-08 23:54                 ` Alex Williamson
2012-10-11 14:11                   ` Michael S. Tsirkin
2012-10-09  7:09           ` Jan Kiszka
2012-10-10 19:31             ` Alex Williamson
2012-10-11 10:37               ` Michael S. Tsirkin
2012-10-11 13:38                 ` Alex Williamson
2012-10-11 13:49                   ` Michael S. Tsirkin
2012-10-11 14:10                     ` Alex Williamson
