From: Nirmal Patel <nirmal.patel@linux.intel.com>
To: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: linux-pci@vger.kernel.org, paul.m.stillwell.jr@intel.com
Subject: Re: [PATCH v2] PCI: vmd: Clear PCI_INTERRUPT_LINE for VMD sub-devices
Date: Thu, 12 Sep 2024 08:31:00 -0700
Message-ID: <20240912083100.000069bf@linux.intel.com>
In-Reply-To: <20240912143657.sgigcoj2lkedwfu4@thinkpad>
On Thu, 12 Sep 2024 20:06:57 +0530
Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> wrote:
> On Thu, Aug 22, 2024 at 11:30:10AM -0700, Nirmal Patel wrote:
> > On Thu, 22 Aug 2024 15:18:06 +0530
> > Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> wrote:
> >
> > > On Tue, Aug 20, 2024 at 03:32:13PM -0700, Nirmal Patel wrote:
> > > > VMD does not support INTx for devices downstream from a VMD
> > > > endpoint, so initialize PCI_INTERRUPT_LINE to 0 for all NVMe
> > > > devices under VMD to ensure other applications don't try to set
> > > > up INTx for them.
> > > >
> > > > Signed-off-by: Nirmal Patel <nirmal.patel@linux.intel.com>
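For context, applications such as SPDK decide whether INTx is in use by
reading PCI_INTERRUPT_LINE from config space. A minimal userspace sketch
of that kind of check (the sysfs path and the exact policy are
illustrative assumptions on my part, not SPDK code):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char line;
	/* illustrative BDF; VMD sub-devices appear in a separate segment */
	int fd = open("/sys/bus/pci/devices/10000:01:00.0/config", O_RDONLY);

	if (fd < 0)
		return 1;
	/* PCI_INTERRUPT_LINE is the byte at config space offset 0x3c */
	if (pread(fd, &line, 1, 0x3c) == 1)
		printf("INTERRUPT_LINE = 0x%02x (%s)\n", line,
		       line ? "driver may try INTx" : "no INTx routed");
	close(fd);
	return 0;
}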
> > >
> > > I shared a diff to put it in pci_assign_irq() and you said that
> > > you were going to test it [1]. I don't see a reply to that and
> > > now you came up with another approach.
> > >
> > > What happened in between?
> >
> > Apologies, I did perform the tests and the patch worked fine.
> > However, I saw that a lot of bridge devices had the register set to
> > 0xFF, and I didn't want to alter them.
>
> You should've either replied to my comment or mentioned it in the
> changelog.
>
> > Also, pci_assign_irq() would still set the
> > interrupt line register to 0 with or without VMD. Since I didn't
> > want to introduce issues for non-VMD setups, I decided to keep the
> > change limited to VMD only.
> >
>
> Sorry, no. The SPDK usecase is not specific to VMD, and neither is
> the issue. So this should be fixed in the PCI core as I proposed.
> What if another bridge also wants to do the same?
Okay. Should I clear the register for every device that doesn't have a
map_irq set up, as you mentioned in your suggested patch, or keep it
limited to NVMe (devices with the storage class code)?
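
For reference, a minimal sketch of how I read your pci_assign_irq()
suggestion (the placement is my assumption, not your actual diff),
clearing the register when the host bridge has no map_irq callback:

/* drivers/pci/setup-irq.c, abbreviated; only the early-return path shown */
void pci_assign_irq(struct pci_dev *dev)
{
	struct pci_host_bridge *hbrg = pci_find_host_bridge(dev->bus);

	if (!hbrg->map_irq) {
		pci_dbg(dev, "runtime IRQ mapping not provided by arch\n");
		/*
		 * INTx can never be delivered for this device; clear
		 * PCI_INTERRUPT_LINE so userspace (e.g. SPDK) doesn't
		 * mistake a stale firmware value for a routed INTx.
		 */
		pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 0);
		return;
	}

	/* ... existing pin lookup, swizzling and map_irq() call ... */
}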
-nirmal
>
> - Mani
>
> > -Nirmal
> > >
> > > - Mani
> > >
> > > [1]
> > > https://lore.kernel.org/linux-pci/20240801115756.0000272e@linux.intel.com
> > >
> > > > ---
> > > > v2->v1: Move the change from fixup.c to vmd.c
> > > > ---
> > > >  drivers/pci/controller/vmd.c | 13 +++++++++++++
> > > >  1 file changed, 13 insertions(+)
> > > >
> > > > diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> > > > index a726de0af011..2e9b99969b81 100644
> > > > --- a/drivers/pci/controller/vmd.c
> > > > +++ b/drivers/pci/controller/vmd.c
> > > > @@ -778,6 +778,18 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
> > > >  	return 0;
> > > >  }
> > > >
> > > > +/*
> > > > + * Some applications like SPDK read PCI_INTERRUPT_LINE to decide
> > > > + * whether INTx is enabled or not. Since VMD doesn't support INTx,
> > > > + * write 0 to all NVMe devices under VMD.
> > > > + */
> > > > +static int vmd_clr_int_line_reg(struct pci_dev *dev, void *userdata)
> > > > +{
> > > > +	if (dev->class == PCI_CLASS_STORAGE_EXPRESS)
> > > > +		pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 0);
> > > > +	return 0;
> > > > +}
> > > > +
> > > >  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> > > >  {
> > > >  	struct pci_sysdata *sd = &vmd->sysdata;
> > > > @@ -932,6 +944,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> > > >  	pci_scan_child_bus(vmd->bus);
> > > >  	vmd_domain_reset(vmd);
> > > > +	pci_walk_bus(vmd->bus, vmd_clr_int_line_reg, &features);
> > > >
> > > >  	/* When Intel VMD is enabled, the OS does not discover the Root Ports
> > > >  	 * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
> > > > --
> > > > 2.39.1
> > > >
> > > >
> > >
> >
>