From: Thomas Gleixner <tglx@linutronix.de>
To: Jacob Pan <jacob.jun.pan@linux.intel.com>,
Dimitri Sivanich <sivanich@hpe.com>
Cc: David Woodhouse <dwmw2@infradead.org>,
Lu Baolu <baolu.lu@linux.intel.com>,
Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
Steve Wahl <steve.wahl@hpe.com>,
Russ Anderson <russ.anderson@hpe.com>,
jacob.jun.pan@linux.intel.com
Subject: Re: [PATCH] Allocate DMAR fault interrupts locally
Date: Sun, 24 Mar 2024 22:05:35 +0100 [thread overview]
Message-ID: <87msqnfihs.ffs@tglx> (raw)
In-Reply-To: <20240321151357.1d18127f@jacob-builder>
Jacob!
On Thu, Mar 21 2024 at 15:13, Jacob Pan wrote:
> On Mon, 11 Mar 2024 15:38:59 -0500, Dimitri Sivanich <sivanich@hpe.com>
> wrote:
>> So the code seems to have been designed based on the assumption that it
>> will be run on an already active (though not necessarily fully onlined?)
>> cpu. To make this work, any code based on that assumption would need to
>> be fixed. Otherwise, a different approach is needed.
>
> This may not be pretty but since DMAR fault is for unrecoverable faults,
> they are rare and infrequent, should never happen on a healthy system. Can
> we share one BSP vector for all DMARs? i.e. let dmar_fault() handler search
> for the offending DMAR for fault reasons.
It might not be pretty at first glance, but it has a certain charm.

You are right. If DMAR faults happen, then there is an issue which is
going to affect the machine badly anyway, so the extra search through
the iommu list is not necessarily horrible, and it does not matter at
all whether the access is local or not.
But there are two interrupts involved. The DMAR one and the SVM one.
And all of that DMAR/SVM code is built around the assumption that
everything can be set up at boot time on the BSP.
The DMAR case definitely is solvable by sharing the interrupt, see the
uncompiled patch below. I still need to wrap my brain around the SVM
part, but feel free to beat me to it or preferably explain to me that
it's not necessary at all :)
Thanks,
tglx
---
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1182,7 +1182,6 @@ static void free_iommu(struct intel_iomm
 			iommu->pr_irq = 0;
 		}
 		free_irq(iommu->irq, iommu);
-		dmar_free_hwirq(iommu->irq);
 		iommu->irq = 0;
 	}
 
@@ -1956,17 +1955,21 @@ void dmar_msi_mask(struct irq_data *data
 	raw_spin_unlock_irqrestore(&iommu->register_lock, flag);
 }
 
-void dmar_msi_write(int irq, struct msi_msg *msg)
+static void dmar_iommu_msi_write(struct intel_iommu *iommu, int irq, struct msi_msg *msg)
 {
-	struct intel_iommu *iommu = irq_get_handler_data(irq);
 	int reg = dmar_msi_reg(iommu, irq);
-	unsigned long flag;
+	unsigned long flags;
 
-	raw_spin_lock_irqsave(&iommu->register_lock, flag);
+	raw_spin_lock_irqsave(&iommu->register_lock, flags);
 	writel(msg->data, iommu->reg + reg + 4);
 	writel(msg->address_lo, iommu->reg + reg + 8);
 	writel(msg->address_hi, iommu->reg + reg + 12);
-	raw_spin_unlock_irqrestore(&iommu->register_lock, flag);
+	raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
+}
+
+void dmar_msi_write(int irq, struct msi_msg *msg)
+{
+	dmar_iommu_msi_write(irq_get_handler_data(irq), irq, msg);
 }
 
 void dmar_msi_read(int irq, struct msi_msg *msg)
@@ -2100,23 +2103,37 @@ irqreturn_t dmar_fault(int irq, void *de
 
 int dmar_set_interrupt(struct intel_iommu *iommu)
 {
-	int irq, ret;
+	static int dmar_irq;
+	int ret;
 
-	/*
-	 * Check if the fault interrupt is already initialized.
-	 */
+	/* Don't initialize it twice for a given iommu */
 	if (iommu->irq)
 		return 0;
 
-	irq = dmar_alloc_hwirq(iommu->seq_id, iommu->node, iommu);
-	if (irq > 0) {
-		iommu->irq = irq;
+	/*
+	 * There is one shared interrupt for all IOMMUs to prevent vector
+	 * exhaustion.
+	 */
+	if (!dmar_irq) {
+		int irq = dmar_alloc_hwirq(iommu->seq_id, iommu->node, iommu);
+
+		if (irq <= 0) {
+			pr_err("No free IRQ vectors\n");
+			return -EINVAL;
+		}
+		dmar_irq = irq;
+		iommu->irq = dmar_irq;
 	} else {
-		pr_err("No free IRQ vectors\n");
-		return -EINVAL;
+		struct msi_msg msg;
+
+		iommu->irq = dmar_irq;
+		/*
+		 * Get the MSI message from the shared interrupt and write
+		 * it to the iommu MSI registers.
+		 */
+		dmar_msi_read(dmar_irq, &msg);
+		dmar_iommu_msi_write(iommu, dmar_irq, &msg);
 	}
 
-	ret = request_irq(irq, dmar_fault, IRQF_NO_THREAD, iommu->name, iommu);
+	ret = request_irq(dmar_irq, dmar_fault, IRQF_NO_THREAD | IRQF_SHARED, iommu->name, iommu);
 	if (ret)
 		pr_err("Can't request irq\n");
 	return ret;