Linux on ARM based TI OMAP SoCs
From: Robin Murphy <robin.murphy@arm.com>
To: Kishon Vijay Abraham I <kishon@ti.com>,
	Christoph Hellwig <hch@lst.de>, Rob Herring <robh+dt@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>,
	linux-omap@vger.kernel.org, linux-pci@vger.kernel.org,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
	nsekhar@ti.com
Subject: Re: [PATCH] of: Devices with pci_epf_bus_type require DMA configuration
Date: Mon, 23 Oct 2017 14:05:00 +0100
Message-ID: <037082e6-5abe-daf9-47aa-9586a4aead99@arm.com>
In-Reply-To: <3599d3e2-d496-eac3-7179-2d1795fc059d@ti.com>

On 23/10/17 06:43, Kishon Vijay Abraham I wrote:
> Hi,
> 
> On Wednesday 11 October 2017 10:15 PM, Robin Murphy wrote:
>> On 11/10/17 09:00, Kishon Vijay Abraham I wrote:
>>> pci-epc-core.c invokes of_dma_configure in order to configure
>>> the coherent_dma_mask/dma_mask of endpoint function device. This is
>>> required for dma_alloc_coherent to succeed in pci function driver
>>> (pci-epf-test.c). However after
>>> commit 723288836628bc1c08 ("of: restrict DMA configuration"),
>>> of_dma_configure doesn't configure the coherent_dma_mask/dma_mask
>>> of endpoint function device (since it doesn't have dma-ranges
>>> property), causing dma_alloc_coherent in the pci endpoint function
>>> driver to fail. Fix it by making sure of_dma_configure configures
>>> coherent_dma_mask/dma_mask irrespective of whether the node has
>>> dma-ranges property or not.
>>
>> Frankly, what the endpoint stuff is doing looks wrong anyway. As I
>> understand it, the endpoint functions aren't real devices, just a
>> partitioning of resources - the only piece of hardware actually doing
>> DMA is the EPC itself, which should already have been configured
>> appropriately as a platform device.
> 
> EPF devices use EPC devices which in turn use the actual platform device for
> configuring the hardware. IMO the devices in one layer shouldn't have to
> explicitly use devices in another layer other than via clearly defined APIs.
> Here platform_device is the bottom layer, above which is epc_device, and on
> top is epf_device.

Note that the "sysdev"-type model that I'm implying already has
precedent elsewhere, e.g. in the USB layer. Since you already have
things abstracted behind the pci_epf_{alloc,free}_space() API, this
seems like a simple change to make.

> The idea is that just by doing the initial setup in the framework, the epf
> driver should be able to use APIs like dma_alloc_coherent with its own
> *device* rather than the EPC's "device".

OK, but when would that actually happen? Please correct me if I've got
it wrong, but as best I understand it, the hardware extent of the EPF is
some registers controlling the config space that the EPC exposes to the
other end of the PCI link - the only actual DMA happening will be in
response to PCI traffic in and out of the EPF's BARs, between the EPC
and the memory backing those BAR regions which is already controlled by
your API. Sure, the EPF *driver* is free to access whatever memory it
feels like, but as a software construct that doesn't constitute DMA.

To put it another way, if an endpoint driver *does* call
dma_alloc_coherent(epf_device, ...), what does it then do with the
resulting DMA address?

>> It seems to me that the EPF BAR allocations should just be using the EPC
>> device directly, rather than trying to pretend the EPFs are distinct DMA
>> masters.
>>
>> Furthermore, now that I've looked:
>>
>>> 	dma_addr_t phys_addr;
>>
>> please no :(
>>
>> (I can easily think of more than one system with an EP-capable DWC PCIe
>> block integrated behind an IOMMU)
> 
> hmm.. haven't used IOMMU but won't setting up parent-child relationship between
> EPC and EPF help in the case of IOMMU?

I was merely referring to the characterisation throughout this code that
a DMA address is a physical address; it isn't, for any number of reasons
(IOMMUs are just the most obvious). Fortunately it's only a cosmetic
naming problem; the actual usage looks OK.

Robin.


Thread overview: 7+ messages
2017-10-11  8:00 [PATCH] of: Devices with pci_epf_bus_type require DMA configuration Kishon Vijay Abraham I
2017-10-11 13:07 ` Rob Herring
2017-10-11 16:45   ` Robin Murphy
2017-10-23  5:43       ` Kishon Vijay Abraham I
2017-10-23  6:31         ` Christoph Hellwig
2017-10-23 13:05         ` Robin Murphy [this message]
2017-10-24  5:43             ` Kishon Vijay Abraham I
