From: "Derrick, Jonathan" <jonathan.derrick@intel.com>
To: "lorenzo.pieralisi@arm.com" <lorenzo.pieralisi@arm.com>
Cc: "linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"helgaas@kernel.org" <helgaas@kernel.org>,
"jhp@endlessos.org" <jhp@endlessos.org>
Subject: Re: [PATCH v4] PCI: vmd: Offset Client VMD MSI-X vectors
Date: Sat, 21 Nov 2020 00:52:10 +0000
Message-ID: <e7946ec9ac1a425dc8ccccd506770ba89a48a98a.camel@intel.com>
In-Reply-To: <20201102222223.92978-1-jonathan.derrick@intel.com>
Hi Lorenzo,
Please don't forget this one.
Thanks
On Mon, 2020-11-02 at 15:22 -0700, Jon Derrick wrote:
> Client VMD platforms have a software-triggered MSI-X vector 0 that will
> not forward hardware-remapped MSIs from the sub-device domain. This
> breaks VMD platforms that place AHCI behind VMD with a single MSI-X
> vector remapped to VMD vector 0. Add a VMD MSI-X vector offset for
> these platforms.
>
> Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
> Tested-by: Jian-Hong Pan <jhp@endlessos.org>
> ---
> v3->v4: Rebase for v5.10
>
> drivers/pci/controller/vmd.c | 37 +++++++++++++++++++++++++-----------
> 1 file changed, 26 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index f375c21ceeb1..c31e4d5cb146 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -53,6 +53,12 @@ enum vmd_features {
> * vendor-specific capability space
> */
> VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP = (1 << 2),
> +
> + /*
> + * Device may use MSI-X vector 0 for software triggering, in which
> + * case it will not be used for MSI remapping
> + */
> + VMD_FEAT_OFFSET_FIRST_VECTOR = (1 << 3),
> };
>
> /*
> @@ -104,6 +110,7 @@ struct vmd_dev {
> struct irq_domain *irq_domain;
> struct pci_bus *bus;
> u8 busn_start;
> + u8 first_vec;
> };
>
> static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
> @@ -199,11 +206,11 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
> */
> static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
> {
> - int i, best = 1;
> unsigned long flags;
> + int i, best;
>
> - if (vmd->msix_count == 1)
> - return &vmd->irqs[0];
> + if (vmd->msix_count == 1 + vmd->first_vec)
> + return &vmd->irqs[vmd->first_vec];
>
> /*
> * White list for fast-interrupt handlers. All others will share the
> @@ -213,11 +220,12 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
> case PCI_CLASS_STORAGE_EXPRESS:
> break;
> default:
> - return &vmd->irqs[0];
> + return &vmd->irqs[vmd->first_vec];
> }
>
> raw_spin_lock_irqsave(&list_lock, flags);
> - for (i = 1; i < vmd->msix_count; i++)
> + best = vmd->first_vec + 1;
> + for (i = best; i < vmd->msix_count; i++)
> if (vmd->irqs[i].count < vmd->irqs[best].count)
> best = i;
> vmd->irqs[best].count++;
> @@ -550,8 +558,8 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
> if (vmd->msix_count < 0)
> return -ENODEV;
>
> - vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count,
> - PCI_IRQ_MSIX);
> + vmd->msix_count = pci_alloc_irq_vectors(dev, vmd->first_vec + 1,
> + vmd->msix_count, PCI_IRQ_MSIX);
> if (vmd->msix_count < 0)
> return vmd->msix_count;
>
> @@ -719,6 +727,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>
> static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
> {
> + unsigned long features = (unsigned long) id->driver_data;
> struct vmd_dev *vmd;
> int err;
>
> @@ -743,13 +752,16 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
> dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)))
> return -ENODEV;
>
> + if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
> + vmd->first_vec = 1;
> +
> err = vmd_alloc_irqs(vmd);
> if (err)
> return err;
>
> spin_lock_init(&vmd->cfg_lock);
> pci_set_drvdata(dev, vmd);
> - err = vmd_enable_domain(vmd, (unsigned long) id->driver_data);
> + err = vmd_enable_domain(vmd, features);
> if (err)
> return err;
>
> @@ -818,13 +830,16 @@ static const struct pci_device_id vmd_ids[] = {
> VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f),
> .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
> - VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + VMD_FEAT_HAS_BUS_RESTRICTIONS |
> + VMD_FEAT_OFFSET_FIRST_VECTOR,},
> {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c3d),
> .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
> - VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + VMD_FEAT_HAS_BUS_RESTRICTIONS |
> + VMD_FEAT_OFFSET_FIRST_VECTOR,},
> {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
> .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
> - VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + VMD_FEAT_HAS_BUS_RESTRICTIONS |
> + VMD_FEAT_OFFSET_FIRST_VECTOR,},
> {0,}
> };
> MODULE_DEVICE_TABLE(pci, vmd_ids);
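
As an aside for anyone following the change, below is a minimal, self-contained
userspace sketch (not the driver code itself) of what the first_vec offset does
to vector assignment: child-device interrupts are spread only across vectors
first_vec + 1 .. msix_count - 1, leaving vector 0 free for software triggering
on the client parts. The helper name pick_vector and the fixed MSIX_COUNT are
illustrative assumptions, and the PCI_CLASS_STORAGE_EXPRESS fast-path whitelist
in vmd_next_irq() is omitted for brevity.

/*
 * Minimal userspace sketch of least-used vector selection with the
 * first_vec offset applied, mirroring vmd_next_irq() after the patch
 * (minus the fast-interrupt whitelist).
 */
#include <stdio.h>

#define MSIX_COUNT 8			/* illustrative vector count */

static unsigned int counts[MSIX_COUNT];	/* per-vector allocation counts */

/* Pick the least-used vector at or above first_vec + 1; fall back to
 * first_vec itself when it is the only usable vector. */
static int pick_vector(unsigned int first_vec)
{
	int i, best;

	if (MSIX_COUNT == 1 + first_vec)
		return first_vec;

	best = first_vec + 1;
	for (i = best; i < MSIX_COUNT; i++)
		if (counts[i] < counts[best])
			best = i;
	counts[best]++;
	return best;
}

int main(void)
{
	unsigned int first_vec = 1;	/* as if VMD_FEAT_OFFSET_FIRST_VECTOR were set */
	int i;

	for (i = 0; i < 10; i++)
		printf("child IRQ %d -> VMD vector %d\n", i, pick_vector(first_vec));
	return 0;
}

Run as-is, the ten simulated child IRQs land only on vectors 2 through 7;
vector 0 is never handed out, which is the point of the offset for the
AHCI-behind-VMD client configurations.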