* [PATCH 0/3] VMD fixes
From: Jon Derrick @ 2016-06-20 15:39 UTC
To: helgaas; +Cc: Jon Derrick, keith.busch, linux-pci
Hi Bjorn,
This set has a few miscellaneous fixes for VMD, addressing issues that
fell out of various testing configurations.
Jon Derrick (1):
vmd: Use lock save/restore in interrupt enable path
Keith Busch (2):
vmd: Use x86_vector_domain as parent domain
vmd: Separate MSI and MSI-x vector sharing
arch/x86/pci/vmd.c | 34 +++++++++++++++++++++-------------
1 file changed, 21 insertions(+), 13 deletions(-)
--
1.8.3.1
* [PATCH 1/3] vmd: Use lock save/restore in interrupt enable path
From: Jon Derrick @ 2016-06-20 15:39 UTC
To: helgaas; +Cc: Jon Derrick, keith.busch, linux-pci
An interrupt may be raised and serviced while VMD holds list_lock, and
the servicing path can then contend for a lock already held by the
interrupted context. Fix this by taking list_lock with the
irqsave/irqrestore spin lock variants, which disable interrupts and
save/restore the interrupt state across each critical section.
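As a minimal sketch (illustrative only, not code from this patch), the
pattern applied to each list_lock critical section is:

	unsigned long flags;

	/*
	 * Disable interrupts on this CPU and remember their previous
	 * state, so that no interrupt can be serviced while list_lock
	 * is held.
	 */
	raw_spin_lock_irqsave(&list_lock, flags);
	/* ... modify the vmdirq lists ... */
	raw_spin_unlock_irqrestore(&list_lock, flags);	/* restore saved state */

Restoring the saved flags, rather than unconditionally re-enabling
interrupts, keeps the pattern correct in paths such as __setup_irq()
that already run with interrupts disabled.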
Fixes the following lockdep warning:
[ 52.340274] ======================================================
[ 52.347381] [ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
[ 52.355129] 4.6.0-2016-06-16-lockdep+ #47 Tainted: G E
[ 52.362334] ------------------------------------------------------
[ 52.369443] kworker/0:1/447 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
[ 52.377245] (list_lock){+.+...}, at: [<ffffffffa04eb8fc>] vmd_irq_enable+0x3c/0x70 [vmd]
[ 52.386736]
[ 52.386736] and this task is already holding:
[ 52.393427] (&irq_desc_lock_class){-.-...}, at: [<ffffffff810e1ff6>] __setup_irq+0xa6/0x610
[ 52.403204] which would create a new lock dependency:
[ 52.409004] (&irq_desc_lock_class){-.-...} -> (list_lock){+.+...}
[ 52.416265]
[ 52.416265] but this new dependency connects a HARDIRQ-irq-safe lock:
[ 52.425328] (&irq_desc_lock_class){-.-...}
... which became HARDIRQ-irq-safe at:
[ 52.434084] [<ffffffff810c9f21>] __lock_acquire+0x981/0xe00
[ 52.440702] [<ffffffff810cb039>] lock_acquire+0x119/0x220
[ 52.447109] [<ffffffff8167294d>] _raw_spin_lock+0x3d/0x80
[ 52.453519] [<ffffffff810e36d4>] handle_level_irq+0x24/0x110
[ 52.460232] [<ffffffff8101f20a>] handle_irq+0x1a/0x30
[ 52.466248] [<ffffffff81675fc1>] do_IRQ+0x61/0x120
[ 52.471966] [<ffffffff8167404c>] ret_from_intr+0x0/0x20
[ 52.478180] [<ffffffff81672e30>] _raw_spin_unlock_irqrestore+0x40/0x60
[ 52.485893] [<ffffffff810e21ee>] __setup_irq+0x29e/0x610
[ 52.492210] [<ffffffff810e25a1>] setup_irq+0x41/0x90
[ 52.498133] [<ffffffff81f5777f>] setup_default_timer_irq+0x1e/0x20
[ 52.505450] [<ffffffff81f57798>] hpet_time_init+0x17/0x19
[ 52.511870] [<ffffffff81f5775a>] x86_late_time_init+0xa/0x11
[ 52.518587] [<ffffffff81f51e9b>] start_kernel+0x382/0x436
[ 52.525006] [<ffffffff81f51308>] x86_64_start_reservations+0x2a/0x2c
[ 52.532548] [<ffffffff81f51445>] x86_64_start_kernel+0x13b/0x14a
[ 52.539669]
[ 52.539669] to a HARDIRQ-irq-unsafe lock:
[ 52.545955] (list_lock){+.+...}
... which became HARDIRQ-irq-unsafe at:
[ 52.553801] ... [<ffffffff810c9d8e>] __lock_acquire+0x7ee/0xe00
[ 52.560732] [<ffffffff810cb039>] lock_acquire+0x119/0x220
[ 52.567147] [<ffffffff8167294d>] _raw_spin_lock+0x3d/0x80
[ 52.573556] [<ffffffffa04eba42>] vmd_msi_init+0x72/0x150 [vmd]
[ 52.580465] [<ffffffff810e8597>] msi_domain_alloc+0xb7/0x140
[ 52.587182] [<ffffffff810e6b10>] irq_domain_alloc_irqs_recursive+0x40/0xa0
[ 52.595295] [<ffffffff810e6cea>] __irq_domain_alloc_irqs+0x14a/0x330
[ 52.602815] [<ffffffff810e8a8c>] msi_domain_alloc_irqs+0x8c/0x1d0
[ 52.616372] [<ffffffff813ca4e3>] pci_msi_setup_msi_irqs+0x43/0x70
[ 52.629967] [<ffffffff813cada1>] pci_enable_msi_range+0x131/0x280
[ 52.643533] [<ffffffff813bf5e0>] pcie_port_device_register+0x320/0x4e0
[ 52.657800] [<ffffffff813bf9a4>] pcie_portdrv_probe+0x34/0x60
[ 52.670985] [<ffffffff813b0e85>] local_pci_probe+0x45/0xa0
[ 52.683782] [<ffffffff813b226b>] pci_device_probe+0xdb/0x130
[ 52.696720] [<ffffffff8149e3cc>] driver_probe_device+0x22c/0x440
[ 52.710053] [<ffffffff8149e774>] __device_attach_driver+0x94/0x110
[ 52.723578] [<ffffffff8149bfad>] bus_for_each_drv+0x5d/0x90
[ 52.736258] [<ffffffff8149e030>] __device_attach+0xc0/0x140
[ 52.748793] [<ffffffff8149e0c0>] device_attach+0x10/0x20
[ 52.760936] [<ffffffff813a77f7>] pci_bus_add_device+0x47/0x90
[ 52.773563] [<ffffffff813a7879>] pci_bus_add_devices+0x39/0x70
[ 52.786176] [<ffffffff813aaba7>] pci_rescan_bus+0x27/0x30
[ 52.798099] [<ffffffffa04ec1af>] vmd_probe+0x68f/0x76c [vmd]
[ 52.810302] [<ffffffff813b0e85>] local_pci_probe+0x45/0xa0
[ 52.822584] [<ffffffff81088064>] work_for_cpu_fn+0x14/0x20
[ 52.834688] [<ffffffff8108c244>] process_one_work+0x1f4/0x740
[ 52.846834] [<ffffffff8108c9c6>] worker_thread+0x236/0x4f0
[ 52.858896] [<ffffffff810935c2>] kthread+0xf2/0x110
[ 52.869910] [<ffffffff816738f2>] ret_from_fork+0x22/0x50
[ 52.881391]
[ 52.881391] other info that might help us debug this:
[ 52.881391]
[ 52.906283] Possible interrupt unsafe locking scenario:
[ 52.906283]
[ 52.924922]        CPU0                    CPU1
[ 52.935372]        ----                    ----
[ 52.945702]   lock(list_lock);
[ 52.954329]                                local_irq_disable();
[ 52.966240]                                lock(&irq_desc_lock_class);
[ 52.978859]                                lock(list_lock);
[ 52.990325]   <Interrupt>
[ 52.998369]     lock(&irq_desc_lock_class);
[ 53.008269]
[ 53.008269] *** DEADLOCK ***
Acked-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
arch/x86/pci/vmd.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/pci/vmd.c b/arch/x86/pci/vmd.c
index 2c82b19..bd81393 100644
--- a/arch/x86/pci/vmd.c
+++ b/arch/x86/pci/vmd.c
@@ -119,10 +119,11 @@ static void vmd_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 static void vmd_irq_enable(struct irq_data *data)
 {
 	struct vmd_irq *vmdirq = data->chip_data;
+	unsigned long flags;
 
-	raw_spin_lock(&list_lock);
+	raw_spin_lock_irqsave(&list_lock, flags);
 	list_add_tail_rcu(&vmdirq->node, &vmdirq->irq->irq_list);
-	raw_spin_unlock(&list_lock);
+	raw_spin_unlock_irqrestore(&list_lock, flags);
 
 	data->chip->irq_unmask(data);
 }
@@ -130,13 +131,14 @@ static void vmd_irq_enable(struct irq_data *data)
 static void vmd_irq_disable(struct irq_data *data)
 {
 	struct vmd_irq *vmdirq = data->chip_data;
+	unsigned long flags;
 
 	data->chip->irq_mask(data);
 
-	raw_spin_lock(&list_lock);
+	raw_spin_lock_irqsave(&list_lock, flags);
 	list_del_rcu(&vmdirq->node);
 	INIT_LIST_HEAD_RCU(&vmdirq->node);
-	raw_spin_unlock(&list_lock);
+	raw_spin_unlock_irqrestore(&list_lock, flags);
 }
 
 /*
@@ -170,13 +172,14 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
 static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd)
 {
 	int i, best = 0;
+	unsigned long flags;
 
-	raw_spin_lock(&list_lock);
+	raw_spin_lock_irqsave(&list_lock, flags);
 	for (i = 1; i < vmd->msix_count; i++)
 		if (vmd->irqs[i].count < vmd->irqs[best].count)
 			best = i;
 	vmd->irqs[best].count++;
-	raw_spin_unlock(&list_lock);
+	raw_spin_unlock_irqrestore(&list_lock, flags);
 
 	return &vmd->irqs[best];
 }
@@ -204,11 +207,12 @@ static void vmd_msi_free(struct irq_domain *domain,
 			struct msi_domain_info *info, unsigned int virq)
 {
 	struct vmd_irq *vmdirq = irq_get_chip_data(virq);
+	unsigned long flags;
 
 	/* XXX: Potential optimization to rebalance */
-	raw_spin_lock(&list_lock);
+	raw_spin_lock_irqsave(&list_lock, flags);
 	vmdirq->irq->count--;
-	raw_spin_unlock(&list_lock);
+	raw_spin_unlock_irqrestore(&list_lock, flags);
 
 	kfree_rcu(vmdirq, rcu);
 }
--
1.8.3.1
* [PATCH 2/3] vmd: Use x86_vector_domain as parent domain
From: Jon Derrick @ 2016-06-20 15:39 UTC
To: helgaas; +Cc: Keith Busch, linux-pci, Jon Derrick
From: Keith Busch <keith.busch@intel.com>
Otherwise the APIC code assumes VMD's IRQ domain can be managed by the
APIC, resulting in an invalid cast of irq_data during
irq_force_complete_move().
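For context, a rough sketch of why the parent matters (my reading, not
code from this patch): each level of a hierarchical IRQ domain
contributes an irq_data linked through ->parent_data, and the x86
vector code expects to find its own apic_chip_data at the vector-domain
level rather than casting whatever chip_data the outermost domain
installed:

	/* Illustrative only; assumes CONFIG_IRQ_DOMAIN_HIERARCHY. */
	struct irq_data *d = irq_get_irq_data(virq);	/* outermost (VMD) level */

	while (d->parent_data)		/* walk down toward x86_vector_domain */
		d = d->parent_data;

	/* Meaningful only now that x86_vector_domain is VMD's parent: */
	struct apic_chip_data *apicd = d->chip_data;

With a NULL parent, the walk ends at VMD's own irq_data, whose
chip_data is a struct vmd_irq; interpreting that as apic data is the
invalid cast described above.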
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
---
arch/x86/pci/vmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/pci/vmd.c b/arch/x86/pci/vmd.c
index bd81393..a651eb0 100644
--- a/arch/x86/pci/vmd.c
+++ b/arch/x86/pci/vmd.c
@@ -599,7 +599,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd)
 	sd->node = pcibus_to_node(vmd->dev->bus);
 
 	vmd->irq_domain = pci_msi_create_irq_domain(NULL, &vmd_msi_domain_info,
-						    NULL);
+						    x86_vector_domain);
 	if (!vmd->irq_domain)
 		return -ENODEV;
 
--
1.8.3.1
* [PATCH 3/3] vmd: Separate MSI and MSI-x vector sharing
From: Jon Derrick @ 2016-06-20 15:39 UTC
To: helgaas; +Cc: Keith Busch, linux-pci
From: Keith Busch <keith.busch@intel.com>
Child devices in a VMD domain that use MSI slow down MSI-X-using
devices that share the same VMD vectors. Move all MSI usage to a single
VMD vector and let MSI-X devices share the rest.
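The resulting selection policy, restated as a comment (not code added
by the patch):

	/*
	 * vmd_next_irq() after this patch:
	 *   - plain MSI (or msix_count == 1): always vmd->irqs[0]
	 *   - MSI-X: least-loaded entry among vmd->irqs[1..msix_count - 1]
	 * A chatty MSI device therefore no longer adds latency to MSI-X
	 * devices that would otherwise share its vector.
	 */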
Acked-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
---
arch/x86/pci/vmd.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/x86/pci/vmd.c b/arch/x86/pci/vmd.c
index a651eb0..e88b417 100644
--- a/arch/x86/pci/vmd.c
+++ b/arch/x86/pci/vmd.c
@@ -169,11 +169,14 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
  * XXX: We can be even smarter selecting the best IRQ once we solve the
  * affinity problem.
  */
-static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd)
+static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
 {
-	int i, best = 0;
+	int i, best = 1;
 	unsigned long flags;
 
+	if (!desc->msi_attrib.is_msix || vmd->msix_count == 1)
+		return &vmd->irqs[0];
+
 	raw_spin_lock_irqsave(&list_lock, flags);
 	for (i = 1; i < vmd->msix_count; i++)
 		if (vmd->irqs[i].count < vmd->irqs[best].count)
@@ -188,14 +191,15 @@ static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
 			unsigned int virq, irq_hw_number_t hwirq,
 			msi_alloc_info_t *arg)
 {
-	struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(arg->desc)->bus);
+	struct msi_desc *desc = arg->desc;
+	struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(desc)->bus);
 	struct vmd_irq *vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
 
 	if (!vmdirq)
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&vmdirq->node);
-	vmdirq->irq = vmd_next_irq(vmd);
+	vmdirq->irq = vmd_next_irq(vmd, desc);
 	vmdirq->virq = virq;
 
 	irq_domain_set_info(domain, virq, vmdirq->irq->vmd_vector, info->chip,
--
1.8.3.1
* Re: [PATCH 0/3] VMD fixes
From: Bjorn Helgaas @ 2016-06-20 19:51 UTC
To: Jon Derrick; +Cc: keith.busch, linux-pci
On Mon, Jun 20, 2016 at 09:39:50AM -0600, Jon Derrick wrote:
> Hi Bjorn,
> This set has a few miscellaneous fixes for VMD, addressing issues that
> fell out of various testing configurations.
>
> Jon Derrick (1):
> vmd: Use lock save/restore in interrupt enable path
>
> Keith Busch (2):
> vmd: Use x86_vector_domain as parent domain
> vmd: Separate MSI and MSI-x vector sharing
>
> arch/x86/pci/vmd.c | 34 +++++++++++++++++++++-------------
> 1 file changed, 21 insertions(+), 13 deletions(-)
Applied to pci/host-vmd, thanks!