linux-pci.vger.kernel.org archive mirror
From: Bjorn Helgaas <helgaas@kernel.org>
To: "Veal, Bryan E." <bryan.e.veal@intel.com>
Cc: Keith Busch <keith.busch@intel.com>,
	LKML <linux-kernel@vger.kernel.org>,
	x86@kernel.org, linux-pci@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Jon Derrick <jonathan.derrick@intel.com>
Subject: Re: [PATCHv8 0/5] Driver for new "VMD" device
Date: Fri, 15 Jan 2016 15:49:02 -0600	[thread overview]
Message-ID: <20160115214902.GA10272@localhost>
In-Reply-To: <20160115193103.GA2249@intel.com>

On Fri, Jan 15, 2016 at 11:31:03AM -0800, Veal, Bryan E. wrote:
> On Fri, Jan 15, 2016 at 12:19:38PM -0600, Bjorn Helgaas wrote:
> > I also have a more substantive question about the flags setup.  I
> > think you should not clear IORESOURCE_MEM_64.  The intent of
> > IORESOURCE_MEM_64 is to describe the *capability* of a BAR, not its
> > contents.  But I assume you cleared it for a reason.  vmd->resources[n]
> > are not BARs, so the PCI core won't assign resources to them like it
> > does for BAR, so we shouldn't care about IORESOURCE_MEM_64 for that
> > reason.  Is there some other reason IORESOURCE_MEM_64 makes a
> > difference there?
> 
> I did this to fix an issue in pre-RFC code.

Even though you found this issue before posting the RFC code, I assume
the issue is still relevant in the current code, and you still want to
clear IORESOURCE_MEM_64, right?

> The flag is subtly restrictive in one specific scenario: spec-compliant
> PCIe ports lack the ability to specify a 64-bit, non-prefetchable range.

Right; I think this is just a consequence of PCIe ports being PCI
bridges, and bridges having:

  - optional 32-bit I/O window
  - required 32-bit non-prefetchable memory window
  - optional prefetchable memory window (either 32-bit or 64-bit)

If we have a device with a 64-bit non-prefetchable BAR, we can assign
a 64-bit address if the device is on a root bus and the host bridge
has a 64-bit non-prefetchable window.  Otherwise, the device is below
a P2P bridge and we have to assign space from the 32-bit
non-prefetchable window.

So far this is all standard PCI stuff, not VMD- or even PCIe-specific.

> IORESOURCE_MEM_64 directs the PCI subsystem to put the address into the
> 64-bit *prefetchable* range. 

This is where I get confused.  IORESOURCE_MEM_64 *should* mean "the
hardware register associated with this resource can accommodate a
64-bit value."  If we're using IORESOURCE_MEM_64 to decide whether to
use a prefetchable vs. a non-prefetchable window, that sounds broken.

Can you point me to the relevant code, and maybe give an example?  I'm
pretty sure the code doesn't completely match the spec, and maybe this
is a case where we have to set the flags non-intuitively to get the
desired result.

> Below the port, the "prefetchable" property
> *is* restrictive: the addresses can't be used for non-prefetchable BARs.
> 
> Thus, in the specific case where a 64-bit non-prefetchable VMD BAR happens
> to contain a 32-bit address, removing the IORESOURCE_MEM_64 flag allows
> the address resource to be used for *any* non-prefetchable BARs (32-bit or
> 64-bit) downstream.

If I understand correctly, these VMD BARs (VMD_MEMBAR1 and
VMD_MEMBAR2) effectively become the host bridge windows available for
devices below the VMD.

I infer that if the VMD host bridge window is non-prefetchable and has
IORESOURCE_MEM_64 set, we won't put a 32-bit non-prefetchable BAR in
that window.  That sounds like a bug, but let me be the first to admit
that I don't understand our PCI resource allocation code.

BTW, I forgot to ask about the 0x2000 offset applied to VMD_MEMBAR2.
Can we add a comment about what that offset is doing?

BTW2, is one of these (VMD_MEMBAR1 and VMD_MEMBAR2) prefetchable and
the other non-prefetchable?  If so, a comment would really help.

Bjorn


Thread overview: 32+ messages
2016-01-12 20:18 [PATCHv8 0/5] Driver for new "VMD" device Keith Busch
2016-01-12 20:18 ` [PATCHv8 1/5] msi: Relax msi_domain_alloc() to support parentless MSI irqdomains Keith Busch
2016-01-12 20:18 ` [PATCHv8 2/5] x86/IRQ: Export IRQ domain function for module use Keith Busch
2016-01-12 20:18 ` [PATCHv8 3/5] x86/PCI: Allow PCI domain specific dma ops Keith Busch
2016-01-12 20:18 ` [PATCHv8 4/5] PCI/AER: Use 32 bit int type domains Keith Busch
2016-01-12 20:18 ` [PATCHv8 5/5] x86/PCI: Initial commit for new VMD device driver Keith Busch
2016-01-15 18:19 ` [PATCHv8 0/5] Driver for new "VMD" device Bjorn Helgaas
2016-01-15 19:31   ` Veal, Bryan E.
2016-01-15 21:49     ` Bjorn Helgaas [this message]
2016-01-16 22:19       ` Veal, Bryan E.
2016-01-20 22:01         ` Bjorn Helgaas
2016-02-22 22:10           ` Bjorn Helgaas
2016-02-23 18:24             ` Keith Busch
2016-02-25 14:42               ` Bjorn Helgaas
2016-02-25 14:50                 ` Keith Busch
2016-02-26 15:29                 ` Keith Busch
2016-01-15 19:32   ` Keith Busch
2016-01-15 19:44   ` Thomas Gleixner
2016-01-15 20:03     ` Bjorn Helgaas
2016-01-15 20:14       ` Thomas Gleixner
2016-01-15 19:48   ` Derrick, Jonathan
2016-01-15 19:54     ` Keith Busch
2016-01-15 20:02       ` Jon Derrick
2016-01-15 22:06     ` Bjorn Helgaas
2016-01-19 15:38       ` Keith Busch
2016-01-19 16:02         ` Christoph Hellwig
2016-01-19 16:36           ` Keith Busch
2016-01-19 22:05             ` Veal, Bryan E.
2016-01-20 20:43           ` Bjorn Helgaas
2016-01-26 16:46             ` Christoph Hellwig
2016-01-26 18:23               ` Veal, Bryan E.
2016-01-17 17:58 ` Christoph Hellwig
