* [PATCH v8 1/9] Fix ioport_map() for !CONFIG_GENERIC_IOMAP cases.
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
2014-07-01 18:43 ` [PATCH v8 2/9] pci: Export find_pci_host_bridge() function Liviu Dudau
` (8 subsequent siblings)
9 siblings, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: Device Tree ML, LKML, LAKML
The inline version of ioport_map() that gets used when !CONFIG_GENERIC_IOMAP
is wrong. It returns a mapped (i.e. virtual) address that can start from
zero and completely ignores the PCI_IOBASE and IO_SPACE_LIMIT that most
architectures that use !CONFIG_GENERIC_IOMAP define.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Tanmay Inamdar <tinamdar@apm.com>
---
include/asm-generic/io.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 975e1cc..2e2161b 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -331,7 +331,7 @@ static inline void iounmap(void __iomem *addr)
#ifndef CONFIG_GENERIC_IOMAP
static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
{
- return (void __iomem *) port;
+ return (void __iomem *)(PCI_IOBASE + (port & IO_SPACE_LIMIT));
}
static inline void ioport_unmap(void __iomem *p)
--
2.0.0
^ permalink raw reply related [flat|nested] 92+ messages in thread
* [PATCH v8 2/9] pci: Export find_pci_host_bridge() function.
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
2014-07-01 18:43 ` [PATCH v8 1/9] Fix ioport_map() for !CONFIG_GENERIC_IOMAP cases Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
2014-07-02 18:06 ` Tanmay Inamdar
[not found] ` <1404240214-9804-3-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
2014-07-01 18:43 ` [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function Liviu Dudau
` (7 subsequent siblings)
9 siblings, 2 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: Device Tree ML, LKML, LAKML
This is a useful function and we should make it visible outside the
generic PCI code. Export it as a GPL symbol.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Tested-by: Tanmay Inamdar <tinamdar@apm.com>
---
drivers/pci/host-bridge.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
index 0e5f3c9..36c669e 100644
--- a/drivers/pci/host-bridge.c
+++ b/drivers/pci/host-bridge.c
@@ -16,12 +16,13 @@ static struct pci_bus *find_pci_root_bus(struct pci_bus *bus)
return bus;
}
-static struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
+struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
{
struct pci_bus *root_bus = find_pci_root_bus(bus);
return to_pci_host_bridge(root_bus->bridge);
}
+EXPORT_SYMBOL_GPL(find_pci_host_bridge);
void pci_set_host_bridge_release(struct pci_host_bridge *bridge,
void (*release_fn)(struct pci_host_bridge *),
--
2.0.0
^ permalink raw reply related [flat|nested] 92+ messages in thread
* Re: [PATCH v8 2/9] pci: Export find_pci_host_bridge() function.
2014-07-01 18:43 ` [PATCH v8 2/9] pci: Export find_pci_host_bridge() function Liviu Dudau
@ 2014-07-02 18:06 ` Tanmay Inamdar
2014-07-02 19:12 ` Arnd Bergmann
[not found] ` <1404240214-9804-3-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
1 sibling, 1 reply; 92+ messages in thread
From: Tanmay Inamdar @ 2014-07-02 18:06 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML, patches
Hi,
On Tue, Jul 1, 2014 at 11:43 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> This is a useful function and we should make it visible outside the
> generic PCI code. Export it as a GPL symbol.
>
> Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> ---
> drivers/pci/host-bridge.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
> index 0e5f3c9..36c669e 100644
> --- a/drivers/pci/host-bridge.c
> +++ b/drivers/pci/host-bridge.c
> @@ -16,12 +16,13 @@ static struct pci_bus *find_pci_root_bus(struct pci_bus *bus)
> return bus;
> }
>
> -static struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
> +struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
> {
> struct pci_bus *root_bus = find_pci_root_bus(bus);
>
> return to_pci_host_bridge(root_bus->bridge);
> }
> +EXPORT_SYMBOL_GPL(find_pci_host_bridge);
Is there any specific reason behind making this symbol GPL? I think
the other functions in this file are just EXPORT_SYMBOL. Ultimately
companies which have non-GPL Linux modules (nvidia) will face issues
using this API.
The same applies to 'of_create_pci_host_bridge'.
>
> void pci_set_host_bridge_release(struct pci_host_bridge *bridge,
> void (*release_fn)(struct pci_host_bridge *),
> --
> 2.0.0
>
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 2/9] pci: Export find_pci_host_bridge() function.
2014-07-02 18:06 ` Tanmay Inamdar
@ 2014-07-02 19:12 ` Arnd Bergmann
2014-07-02 20:43 ` Tanmay Inamdar
0 siblings, 1 reply; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-02 19:12 UTC (permalink / raw)
To: Tanmay Inamdar
Cc: Liviu Dudau, linux-pci, Bjorn Helgaas, Catalin Marinas,
Will Deacon, Benjamin Herrenschmidt, linaro-kernel, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML, patches
On Wednesday 02 July 2014 11:06:38 Tanmay Inamdar wrote:
> On Tue, Jul 1, 2014 at 11:43 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> > This is a useful function and we should make it visible outside the
> > generic PCI code. Export it as a GPL symbol.
> >
> > Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> > Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> > ---
> > drivers/pci/host-bridge.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
> > index 0e5f3c9..36c669e 100644
> > --- a/drivers/pci/host-bridge.c
> > +++ b/drivers/pci/host-bridge.c
> > @@ -16,12 +16,13 @@ static struct pci_bus *find_pci_root_bus(struct pci_bus *bus)
> > return bus;
> > }
> >
> > -static struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
> > +struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
> > {
> > struct pci_bus *root_bus = find_pci_root_bus(bus);
> >
> > return to_pci_host_bridge(root_bus->bridge);
> > }
> > +EXPORT_SYMBOL_GPL(find_pci_host_bridge);
>
> Is there any specific reason behind making this symbol GPL? I think
> the other functions in this file are just EXPORT_SYMBOL. Ultimately
> companies which have non gpl Linux modules (nvidia) will face issue
> using this API.
>
> The same applies to 'of_create_pci_host_bridge'.
I think EXPORT_SYMBOL_GPL() is better here. The new symbols are unlikely
to be used by a peripheral device driver, and PCI host controllers are
already restricted by EXPORT_SYMBOL_GPL.
nvidia will certainly not do a PCI host controller driver that is not
upstream or not GPL-compatible.
Arnd
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 2/9] pci: Export find_pci_host_bridge() function.
2014-07-02 19:12 ` Arnd Bergmann
@ 2014-07-02 20:43 ` Tanmay Inamdar
2014-07-03 9:53 ` Liviu Dudau
2014-07-03 10:26 ` Arnd Bergmann
0 siblings, 2 replies; 92+ messages in thread
From: Tanmay Inamdar @ 2014-07-02 20:43 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Liviu Dudau, linux-pci, Bjorn Helgaas, Catalin Marinas,
Will Deacon, Benjamin Herrenschmidt, linaro-kernel, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML, patches
On Wed, Jul 2, 2014 at 12:12 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wednesday 02 July 2014 11:06:38 Tanmay Inamdar wrote:
>> On Tue, Jul 1, 2014 at 11:43 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
>> > This is a useful function and we should make it visible outside the
>> > generic PCI code. Export it as a GPL symbol.
>> >
>> > Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
>> > Tested-by: Tanmay Inamdar <tinamdar@apm.com>
>> > ---
>> > drivers/pci/host-bridge.c | 3 ++-
>> > 1 file changed, 2 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
>> > index 0e5f3c9..36c669e 100644
>> > --- a/drivers/pci/host-bridge.c
>> > +++ b/drivers/pci/host-bridge.c
>> > @@ -16,12 +16,13 @@ static struct pci_bus *find_pci_root_bus(struct pci_bus *bus)
>> > return bus;
>> > }
>> >
>> > -static struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
>> > +struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
>> > {
>> > struct pci_bus *root_bus = find_pci_root_bus(bus);
>> >
>> > return to_pci_host_bridge(root_bus->bridge);
>> > }
>> > +EXPORT_SYMBOL_GPL(find_pci_host_bridge);
>>
>> Is there any specific reason behind making this symbol GPL? I think
>> the other functions in this file are just EXPORT_SYMBOL. Ultimately
>> companies which have non gpl Linux modules (nvidia) will face issue
>> using this API.
>>
>> The same applies to 'of_create_pci_host_bridge'.
>
> I think EXPORT_SYMBOL_GPL() is better here. The new symbols are unlikely
> to be used by a peripheral device driver, and PCI host controllers are
> already restricted by EXPORT_SYMBOL_GPL.
>
You are right as long as the functions are not used directly. But what
if GPL functions are called indirectly? For example, 'pci_domain_nr'
implementation in Liviu's V7 series calls 'find_pci_host_bridge'.
> nvidia will certainly not do a PCI host controller driver that is not
> upstream or not GPL-compatible.
>
> Arnd
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 2/9] pci: Export find_pci_host_bridge() function.
2014-07-02 20:43 ` Tanmay Inamdar
@ 2014-07-03 9:53 ` Liviu Dudau
2014-07-03 10:26 ` Arnd Bergmann
1 sibling, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-03 9:53 UTC (permalink / raw)
To: Tanmay Inamdar
Cc: Arnd Bergmann, linux-pci, Bjorn Helgaas, Catalin Marinas,
Will Deacon, Benjamin Herrenschmidt, linaro-kernel, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML, patches
On Wed, Jul 02, 2014 at 09:43:41PM +0100, Tanmay Inamdar wrote:
> On Wed, Jul 2, 2014 at 12:12 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> > On Wednesday 02 July 2014 11:06:38 Tanmay Inamdar wrote:
> >> On Tue, Jul 1, 2014 at 11:43 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> >> > This is a useful function and we should make it visible outside the
> >> > generic PCI code. Export it as a GPL symbol.
> >> >
> >> > Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> >> > Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> >> > ---
> >> > drivers/pci/host-bridge.c | 3 ++-
> >> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >> >
> >> > diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
> >> > index 0e5f3c9..36c669e 100644
> >> > --- a/drivers/pci/host-bridge.c
> >> > +++ b/drivers/pci/host-bridge.c
> >> > @@ -16,12 +16,13 @@ static struct pci_bus *find_pci_root_bus(struct pci_bus *bus)
> >> > return bus;
> >> > }
> >> >
> >> > -static struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
> >> > +struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
> >> > {
> >> > struct pci_bus *root_bus = find_pci_root_bus(bus);
> >> >
> >> > return to_pci_host_bridge(root_bus->bridge);
> >> > }
> >> > +EXPORT_SYMBOL_GPL(find_pci_host_bridge);
> >>
> >> Is there any specific reason behind making this symbol GPL? I think
> >> the other functions in this file are just EXPORT_SYMBOL. Ultimately
> >> companies which have non gpl Linux modules (nvidia) will face issue
> >> using this API.
> >>
> >> The same applies to 'of_create_pci_host_bridge'.
> >
> > I think EXPORT_SYMBOL_GPL() is better here. The new symbols are unlikely
> > to be used by a peripheral device driver, and PCI host controllers are
> > already restricted by EXPORT_SYMBOL_GPL.
> >
>
> You are right as long as the functions are not used directly. But what
> if GPL functions are called indirectly. For example, 'pci_domain_nr'
> implementation in Liviu's V7 series calls 'find_pci_host_bridge'.
I will not be drawn into the discussion of EXPORT_SYMBOL vs EXPORT_SYMBOL_GPL()
other than to say that I don't understand what is so secret in implementing
a standard. I do not want to support host bridge drivers that are not open
source.
Best regards,
Liviu
>
> > nvidia will certainly not do a PCI host controller driver that is not
> > upstream or not GPL-compatible.
> >
> > Arnd
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 2/9] pci: Export find_pci_host_bridge() function.
2014-07-02 20:43 ` Tanmay Inamdar
2014-07-03 9:53 ` Liviu Dudau
@ 2014-07-03 10:26 ` Arnd Bergmann
1 sibling, 0 replies; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-03 10:26 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Tanmay Inamdar, Sinan Kaya, linaro-kernel, Catalin Marinas,
Device Tree ML, linux-pci, Jingoo Han, Liviu Dudau, LKML,
Will Deacon, Grant Likely, patches, Kukjin Kim,
Suravee Suthikulanit, Benjamin Herrenschmidt, Bjorn Helgaas
On Wednesday 02 July 2014 13:43:41 Tanmay Inamdar wrote:
> On Wed, Jul 2, 2014 at 12:12 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> > I think EXPORT_SYMBOL_GPL() is better here. The new symbols are unlikely
> > to be used by a peripheral device driver, and PCI host controllers are
> > already restricted by EXPORT_SYMBOL_GPL.
> >
>
> You are right as long as the functions are not used directly. But what
> if GPL functions are called indirectly. For example, 'pci_domain_nr'
> implementation in Liviu's V7 series calls 'find_pci_host_bridge'.
Good point. If pci_domain_nr() doesn't require access to an EXPORT_SYMBOL_GPL
symbol, it should not start doing that after this patch.
For of_create_pci_host_bridge() however, I can't think of any reason to use
a legacy EXPORT_SYMBOL.
Arnd
^ permalink raw reply [flat|nested] 92+ messages in thread
[parent not found: <1404240214-9804-3-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>]
* Re: [PATCH v8 2/9] pci: Export find_pci_host_bridge() function.
[not found] ` <1404240214-9804-3-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
@ 2014-07-07 23:27 ` Bjorn Helgaas
[not found] ` <20140707232749.GA22939-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-07 23:27 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 01, 2014 at 07:43:27PM +0100, Liviu Dudau wrote:
> This is a useful function and we should make it visible outside the
> generic PCI code. Export it as a GPL symbol.
>
> Signed-off-by: Liviu Dudau <Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
> Tested-by: Tanmay Inamdar <tinamdar-qTEPVZfXA3Y@public.gmane.org>
> ---
> drivers/pci/host-bridge.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
> index 0e5f3c9..36c669e 100644
> --- a/drivers/pci/host-bridge.c
> +++ b/drivers/pci/host-bridge.c
> @@ -16,12 +16,13 @@ static struct pci_bus *find_pci_root_bus(struct pci_bus *bus)
> return bus;
> }
>
> -static struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
> +struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
> {
> struct pci_bus *root_bus = find_pci_root_bus(bus);
>
> return to_pci_host_bridge(root_bus->bridge);
> }
> +EXPORT_SYMBOL_GPL(find_pci_host_bridge);
There's nothing in this series that uses find_pci_host_bridge(), so
how about we just wait until we have something that needs it?
Also, if/when we export this, I'd prefer a name that starts with "pci_"
as most of the other interfaces do.
> void pci_set_host_bridge_release(struct pci_host_bridge *bridge,
> void (*release_fn)(struct pci_host_bridge *),
> --
> 2.0.0
>
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 92+ messages in thread
* [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
2014-07-01 18:43 ` [PATCH v8 1/9] Fix ioport_map() for !CONFIG_GENERIC_IOMAP cases Liviu Dudau
2014-07-01 18:43 ` [PATCH v8 2/9] pci: Export find_pci_host_bridge() function Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
2014-07-01 19:36 ` Arnd Bergmann
` (3 more replies)
2014-07-01 18:43 ` [PATCH v8 4/9] pci: OF: Fix the conversion of IO ranges into IO resources Liviu Dudau
` (6 subsequent siblings)
9 siblings, 4 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: Device Tree ML, LKML, LAKML
Some architectures do not have a simple view of the PCI I/O space
and instead use a range of CPU addresses that map to bus addresses. For
some architectures these ranges will be expressed by OF bindings
in a device tree file.
Introduce a pci_register_io_range() helper function with a generic
implementation that can be used by such architectures to keep track
of the I/O ranges described by the PCI bindings. If the PCI_IOBASE
macro is not defined, that signals lack of support for PCI and we
return an error.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
---
drivers/of/address.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++
include/linux/of_address.h | 1 +
2 files changed, 62 insertions(+)
diff --git a/drivers/of/address.c b/drivers/of/address.c
index 5edfcb0..1345733 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -5,6 +5,7 @@
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/pci_regs.h>
+#include <linux/slab.h>
#include <linux/string.h>
/* Max address size we deal with */
@@ -601,12 +602,72 @@ const __be32 *of_get_address(struct device_node *dev, int index, u64 *size,
}
EXPORT_SYMBOL(of_get_address);
+struct io_range {
+ struct list_head list;
+ phys_addr_t start;
+ resource_size_t size;
+};
+
+static LIST_HEAD(io_range_list);
+
+/*
+ * Record the PCI IO range (expressed as CPU physical address + size).
+ * Return a negative value if an error has occurred, zero otherwise
+ */
+int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
+{
+#ifdef PCI_IOBASE
+ struct io_range *res;
+ resource_size_t allocated_size = 0;
+
+ /* check if the range hasn't been previously recorded */
+ list_for_each_entry(res, &io_range_list, list) {
+ if (addr >= res->start && addr + size <= res->start + res->size)
+ return 0;
+ allocated_size += res->size;
+ }
+
+ /* range not registered yet, check for available space */
+ if (allocated_size + size - 1 > IO_SPACE_LIMIT)
+ return -E2BIG;
+
+ /* add the range to the list */
+ res = kzalloc(sizeof(*res), GFP_KERNEL);
+ if (!res)
+ return -ENOMEM;
+
+ res->start = addr;
+ res->size = size;
+
+ list_add_tail(&res->list, &io_range_list);
+
+ return 0;
+#else
+ return -EINVAL;
+#endif
+}
+
unsigned long __weak pci_address_to_pio(phys_addr_t address)
{
+#ifdef PCI_IOBASE
+ struct io_range *res;
+ resource_size_t offset = 0;
+
+ list_for_each_entry(res, &io_range_list, list) {
+ if (address >= res->start &&
+ address < res->start + res->size) {
+ return address - res->start + offset;
+ }
+ offset += res->size;
+ }
+
+ return (unsigned long)-1;
+#else
if (address > IO_SPACE_LIMIT)
return (unsigned long)-1;
return (unsigned long) address;
+#endif
}
static int __of_address_to_resource(struct device_node *dev,
diff --git a/include/linux/of_address.h b/include/linux/of_address.h
index c13b878..ac4aac4 100644
--- a/include/linux/of_address.h
+++ b/include/linux/of_address.h
@@ -55,6 +55,7 @@ extern void __iomem *of_iomap(struct device_node *device, int index);
extern const __be32 *of_get_address(struct device_node *dev, int index,
u64 *size, unsigned int *flags);
+extern int pci_register_io_range(phys_addr_t addr, resource_size_t size);
extern unsigned long pci_address_to_pio(phys_addr_t addr);
extern int of_pci_range_parser_init(struct of_pci_range_parser *parser,
--
2.0.0
^ permalink raw reply related [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-01 18:43 ` [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function Liviu Dudau
@ 2014-07-01 19:36 ` Arnd Bergmann
2014-07-01 20:45 ` Liviu Dudau
2014-07-02 11:22 ` Will Deacon
` (2 subsequent siblings)
3 siblings, 1 reply; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-01 19:36 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Liviu Dudau, linux-pci, Bjorn Helgaas, Catalin Marinas,
Will Deacon, Benjamin Herrenschmidt, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, Device Tree ML, LKML
On Tuesday 01 July 2014 19:43:28 Liviu Dudau wrote:
> +/*
> + * Record the PCI IO range (expressed as CPU physical address + size).
> + * Return a negative value if an error has occurred, zero otherwise
> + */
> +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> +{
> +#ifdef PCI_IOBASE
> + struct io_range *res;
> + resource_size_t allocated_size = 0;
> +
> + /* check if the range hasn't been previously recorded */
> + list_for_each_entry(res, &io_range_list, list) {
> + if (addr >= res->start && addr + size <= res->start + res->size)
> + return 0;
> + allocated_size += res->size;
> + }
> +
> + /* range not registered yet, check for available space */
> + if (allocated_size + size - 1 > IO_SPACE_LIMIT)
> + return -E2BIG;
> +
> + /* add the range to the list */
> + res = kzalloc(sizeof(*res), GFP_KERNEL);
> + if (!res)
> + return -ENOMEM;
> +
> + res->start = addr;
> + res->size = size;
> +
> + list_add_tail(&res->list, &io_range_list);
> +
> + return 0;
> +#else
> + return -EINVAL;
> +#endif
> +}
> +
> unsigned long __weak pci_address_to_pio(phys_addr_t address)
> {
> +#ifdef PCI_IOBASE
> + struct io_range *res;
> + resource_size_t offset = 0;
> +
> + list_for_each_entry(res, &io_range_list, list) {
> + if (address >= res->start &&
> + address < res->start + res->size) {
> + return address - res->start + offset;
> + }
> + offset += res->size;
> + }
> +
> + return (unsigned long)-1;
> +#else
> if (address > IO_SPACE_LIMIT)
> return (unsigned long)-1;
>
> return (unsigned long) address;
> +#endif
> }
This still conflicts with the other allocator you have in patch 9
for pci_remap_iospace: nothing guarantees that the mapping is the
same for both.
Also, this is a completely pointless exercise at this moment, because
nobody cares about the result of pci_address_to_pio on architectures
that don't already provide this function. If we ever get a proper
Open Firmware implementation that wants to put hardcoded PCI devices
into DT, we can add an implementation, but for now this seems overkill.
The allocator in pci_register_io_range seems reasonable, why not merge
this function with pci_remap_iospace() as I have asked you multiple
times before? Just make it return the io_offset so the caller can
put that into the PCI host resources.
Arnd
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-01 19:36 ` Arnd Bergmann
@ 2014-07-01 20:45 ` Liviu Dudau
2014-07-02 12:30 ` Arnd Bergmann
0 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 20:45 UTC (permalink / raw)
To: Arnd Bergmann
Cc: linux-arm-kernel, Liviu Dudau, linux-pci, Bjorn Helgaas,
Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
linaro-kernel, Tanmay Inamdar, Grant Likely, Sinan Kaya,
Jingoo Han, Kukjin Kim, Suravee Suthikulanit, Device Tree ML,
LKML
On Tue, Jul 01, 2014 at 09:36:10PM +0200, Arnd Bergmann wrote:
> On Tuesday 01 July 2014 19:43:28 Liviu Dudau wrote:
> > +/*
> > + * Record the PCI IO range (expressed as CPU physical address + size).
> > + * Return a negative value if an error has occurred, zero otherwise
> > + */
> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> > +{
> > +#ifdef PCI_IOBASE
> > + struct io_range *res;
> > + resource_size_t allocated_size = 0;
> > +
> > + /* check if the range hasn't been previously recorded */
> > + list_for_each_entry(res, &io_range_list, list) {
> > + if (addr >= res->start && addr + size <= res->start + res->size)
> > + return 0;
> > + allocated_size += res->size;
> > + }
> > +
> > + /* range not registered yet, check for available space */
> > + if (allocated_size + size - 1 > IO_SPACE_LIMIT)
> > + return -E2BIG;
> > +
> > + /* add the range to the list */
> > + res = kzalloc(sizeof(*res), GFP_KERNEL);
> > + if (!res)
> > + return -ENOMEM;
> > +
> > + res->start = addr;
> > + res->size = size;
> > +
> > + list_add_tail(&res->list, &io_range_list);
> > +
> > + return 0;
> > +#else
> > + return -EINVAL;
> > +#endif
> > +}
> > +
> > unsigned long __weak pci_address_to_pio(phys_addr_t address)
> > {
> > +#ifdef PCI_IOBASE
> > + struct io_range *res;
> > + resource_size_t offset = 0;
> > +
> > + list_for_each_entry(res, &io_range_list, list) {
> > + if (address >= res->start &&
> > + address < res->start + res->size) {
> > + return address - res->start + offset;
> > + }
> > + offset += res->size;
> > + }
> > +
> > + return (unsigned long)-1;
> > +#else
> > if (address > IO_SPACE_LIMIT)
> > return (unsigned long)-1;
> >
> > return (unsigned long) address;
> > +#endif
> > }
>
> This still conflicts with the other allocator you have in patch 9
> for pci_remap_iospace: nothing guarantees that the mapping is the
> same for both.
>
> Also, this is a completely pointless exercise at this moment, because
> nobody cares about the result of pci_address_to_pio on architectures
> that don't already provide this function. If we ever get a proper
> Open Firmware implementation that wants to put hardcoded PCI devices
> into DT, we can add an implementation, but for now this seems overkill.
>
> The allocator in pci_register_io_range seems reasonable, why not merge
> this function with pci_remap_iospace() as I have asked you multiple
> times before? Just make it return the io_offset so the caller can
> put that into the PCI host resources.
Hi Arnd,
While I agree with you that at some moment the allocators were inconsistent
wrt each other, for this version I would respectfully disagree on this.
The allocator in pci_register_io_range() only makes sure that the ranges
are not overlapping, it doesn't do any mapping whatsoever, while
pci_remap_iospace() does only an ioremap_page_range(). The idea is that
you get the offset out of pci_address_to_pio() and apply it to
pci_remap_iospace().
Why do you think there are conflicts?
Best regards,
Liviu
>
> Arnd
--
-------------------
.oooO
( )
\ ( Oooo.
\_) ( )
) /
(_/
One small step
for me ...
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-01 20:45 ` Liviu Dudau
@ 2014-07-02 12:30 ` Arnd Bergmann
2014-07-02 14:23 ` Liviu Dudau
0 siblings, 1 reply; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-02 12:30 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Liviu Dudau, Sinan Kaya, linaro-kernel, Benjamin Herrenschmidt,
Device Tree ML, linux-pci, Jingoo Han, Liviu Dudau, LKML,
Will Deacon, Grant Likely, Kukjin Kim, Tanmay Inamdar,
Suravee Suthikulanit, Catalin Marinas, Bjorn Helgaas
On Tuesday 01 July 2014 21:45:09 Liviu Dudau wrote:
> On Tue, Jul 01, 2014 at 09:36:10PM +0200, Arnd Bergmann wrote:
> > On Tuesday 01 July 2014 19:43:28 Liviu Dudau wrote:
> >
> > This still conflicts with the other allocator you have in patch 9
> > for pci_remap_iospace: nothing guarantees that the mapping is the
> > same for both.
> >
> > Also, this is a completely pointless exercise at this moment, because
> > nobody cares about the result of pci_address_to_pio on architectures
> > that don't already provide this function. If we ever get a proper
> > Open Firmware implementation that wants to put hardcoded PCI devices
> > into DT, we can add an implementation, but for now this seems overkill.
> >
> > The allocator in pci_register_io_range seems reasonable, why not merge
> > this function with pci_remap_iospace() as I have asked you multiple
> > times before? Just make it return the io_offset so the caller can
> > put that into the PCI host resources.
>
> Hi Arnd,
>
> While I agree with you that at some moment the allocators were inconsistent
> wrt each other, for this version I would respectfully disagree on this.
> The allocator in pci_register_io_range() only makes sure that the ranges
> are not overlapping, it doesn't do any mapping whatsoever, while
> pci_remap_iospace() does only an ioremap_page_range(). The idea is that
> you get the offset out of pci_address_to_pio() and apply it to
> pci_remap_iospace().
Ok, got it now, I'm sorry I didn't read this properly at first.
Your solution looks correct to me, just using different
tradeoffs from what I was expecting: you get a working pci_address_to_pio()
function, which is probably never needed, but in turn you need to
keep the state of each host bridge in a global list.
Arnd
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-02 12:30 ` Arnd Bergmann
@ 2014-07-02 14:23 ` Liviu Dudau
2014-07-02 14:58 ` Arnd Bergmann
0 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-02 14:23 UTC (permalink / raw)
To: Arnd Bergmann
Cc: linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
Liviu Dudau, Sinan Kaya, linaro-kernel, Benjamin Herrenschmidt,
Device Tree ML, linux-pci, Jingoo Han, LKML, Will Deacon,
Grant Likely, Kukjin Kim, Tanmay Inamdar, Suravee Suthikulanit,
Catalin Marinas, Bjorn Helgaas
On Wed, Jul 02, 2014 at 01:30:31PM +0100, Arnd Bergmann wrote:
> On Tuesday 01 July 2014 21:45:09 Liviu Dudau wrote:
> > On Tue, Jul 01, 2014 at 09:36:10PM +0200, Arnd Bergmann wrote:
> > > On Tuesday 01 July 2014 19:43:28 Liviu Dudau wrote:
> > >
> > > This still conflicts with the other allocator you have in patch 9
> > > for pci_remap_iospace: nothing guarantees that the mapping is the
> > > same for both.
> > >
> > > Also, this is a completely pointless exercise at this moment, because
> > > nobody cares about the result of pci_address_to_pio on architectures
> > > that don't already provide this function. If we ever get a proper
> > > Open Firmware implementation that wants to put hardcoded PCI devices
> > > into DT, we can add an implementation, but for now this seems overkill.
> > >
> > > The allocator in pci_register_io_range seems reasonable, why not merge
> > > this function with pci_remap_iospace() as I have asked you multiple
> > > times before? Just make it return the io_offset so the caller can
> > > put that into the PCI host resources.
> >
> > Hi Arnd,
> >
> > While I agree with you that at some moment the allocators were inconsistent
> > wrt each other, for this version I would respectfully disagree on this.
> > The allocator in pci_register_io_range() only makes sure that the ranges
> > are not overlapping, it doesn't do any mapping whatsoever, while
> > pci_remap_iospace() does only an ioremap_page_range(). The idea is that
> > you get the offset out of pci_address_to_pio() and apply it to
> > pci_remap_iospace().
>
> Ok, got it now, I'm sorry I didn't read this properly at first.
>
> Your solution looks correct to me, just using different
> tradeoffs to what I was expecting: You get a working pci_address_to_pio()
> function, which is probably never needed, but in turn you need to
> keep the state of each host bridge in a global list.
Just a reminder that with my patchset I *do* start using pci_address_to_pio()
in order to correctly parse the IO ranges from DT.
Best regards,
Liviu
>
> Arnd
>
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-02 14:23 ` Liviu Dudau
@ 2014-07-02 14:58 ` Arnd Bergmann
0 siblings, 0 replies; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-02 14:58 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-arm-kernel@lists.infradead.org, Liviu Dudau, Sinan Kaya,
linaro-kernel, Benjamin Herrenschmidt, Device Tree ML, linux-pci,
Jingoo Han, LKML, Will Deacon, Grant Likely, Kukjin Kim,
Tanmay Inamdar, Suravee Suthikulanit, Catalin Marinas,
Bjorn Helgaas
On Wednesday 02 July 2014 15:23:03 Liviu Dudau wrote:
> >
> > Your solution looks correct to me, just using different
> > tradeoffs to what I was expecting: You get a working pci_address_to_pio()
> > function, which is probably never needed, but in turn you need to
> > keep the state of each host bridge in a global list.
>
> Just a reminder that with my patchset I *do* start using pci_address_to_pio()
> in order to correctly parse the IO ranges from DT.
Yes, what I meant is that it would be easier not to do that. All existing
drivers expect of_pci_range_to_resource() to return the CPU address for
an I/O space register, not the Linux I/O port number that we want to
pass to the PCI core. This is suboptimal because it's not obvious how
it works, but it lets us get away without an extra registration step.
Once all probe functions in PCI host drivers have been changed to use
of_create_pci_host_bridge(), that should not matter any more, because
there is only one place left that calls it and we only have to get it
right once.
Also, when you change of_pci_range_to_resource() in that way, you also have to
audit all callers of that function and ensure they can deal with the new
behavior.
Arnd
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-01 18:43 ` [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function Liviu Dudau
2014-07-01 19:36 ` Arnd Bergmann
@ 2014-07-02 11:22 ` Will Deacon
2014-07-02 16:00 ` Liviu Dudau
2014-07-02 12:38 ` Arnd Bergmann
2014-07-08 0:14 ` Bjorn Helgaas
3 siblings, 1 reply; 92+ messages in thread
From: Will Deacon @ 2014-07-02 11:22 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> Some architectures do not have a simple view of the PCI I/O space
> and instead use a range of CPU addresses that map to bus addresses. For
> some architectures these ranges will be expressed by OF bindings
> in a device tree file.
>
> Introduce a pci_register_io_range() helper function with a generic
> implementation that can be used by such architectures to keep track
> of the I/O ranges described by the PCI bindings. If the PCI_IOBASE
> macro is not defined that signals lack of support for PCI and we
> return an error.
[...]
> +/*
> + * Record the PCI IO range (expressed as CPU physical address + size).
> + * Return a negative value if an error has occurred, zero otherwise
> + */
> +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> +{
> +#ifdef PCI_IOBASE
> + struct io_range *res;
> + resource_size_t allocated_size = 0;
> +
> + /* check if the range hasn't been previously recorded */
> + list_for_each_entry(res, &io_range_list, list) {
> +		if (addr >= res->start && addr + size <= res->start + res->size)
> + return 0;
> + allocated_size += res->size;
> + }
> +
> +	/* range not registered yet, check for available space */
> + if (allocated_size + size - 1 > IO_SPACE_LIMIT)
> + return -E2BIG;
> +
> + /* add the range to the list */
> + res = kzalloc(sizeof(*res), GFP_KERNEL);
> + if (!res)
> + return -ENOMEM;
> +
> + res->start = addr;
> + res->size = size;
> +
> + list_add_tail(&res->list, &io_range_list);
> +
> + return 0;
Hopefully a stupid question, but how is this serialised? I'm just surprised
that adding to and searching a list are sufficient, unless there's a big
lock somewhere.
Will
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-02 11:22 ` Will Deacon
@ 2014-07-02 16:00 ` Liviu Dudau
0 siblings, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-02 16:00 UTC (permalink / raw)
To: Will Deacon
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Wed, Jul 02, 2014 at 12:22:22PM +0100, Will Deacon wrote:
> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> > Some architectures do not have a simple view of the PCI I/O space
> > and instead use a range of CPU addresses that map to bus addresses. For
> > some architectures these ranges will be expressed by OF bindings
> > in a device tree file.
> >
> > Introduce a pci_register_io_range() helper function with a generic
> > implementation that can be used by such architectures to keep track
> > of the I/O ranges described by the PCI bindings. If the PCI_IOBASE
> > macro is not defined that signals lack of support for PCI and we
> > return an error.
>
> [...]
>
> > +/*
> > + * Record the PCI IO range (expressed as CPU physical address + size).
> > + * Return a negative value if an error has occurred, zero otherwise
> > + */
> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> > +{
> > +#ifdef PCI_IOBASE
> > + struct io_range *res;
> > + resource_size_t allocated_size = 0;
> > +
> > + /* check if the range hasn't been previously recorded */
> > + list_for_each_entry(res, &io_range_list, list) {
> > +		if (addr >= res->start && addr + size <= res->start + res->size)
> > + return 0;
> > + allocated_size += res->size;
> > + }
> > +
> > +	/* range not registered yet, check for available space */
> > + if (allocated_size + size - 1 > IO_SPACE_LIMIT)
> > + return -E2BIG;
> > +
> > + /* add the range to the list */
> > + res = kzalloc(sizeof(*res), GFP_KERNEL);
> > + if (!res)
> > + return -ENOMEM;
> > +
> > + res->start = addr;
> > + res->size = size;
> > +
> > + list_add_tail(&res->list, &io_range_list);
> > +
> > + return 0;
>
> Hopefully a stupid question, but how is this serialised? I'm just surprised
> that adding to and searching a list are sufficient, unless there's a big
> lock somewhere.
Sorry, tripped into my own filters!
You are right, there is no serialisation here, will add one.
Best regards,
Liviu
>
> Will
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-01 18:43 ` [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function Liviu Dudau
2014-07-01 19:36 ` Arnd Bergmann
2014-07-02 11:22 ` Will Deacon
@ 2014-07-02 12:38 ` Arnd Bergmann
2014-07-02 13:20 ` Liviu Dudau
2014-07-08 0:14 ` Bjorn Helgaas
3 siblings, 1 reply; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-02 12:38 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
Some more detailed comments now
On Tuesday 01 July 2014 19:43:28 Liviu Dudau wrote:
> +/*
> + * Record the PCI IO range (expressed as CPU physical address + size).
> + * Return a negative value if an error has occurred, zero otherwise
> + */
> +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> +{
> +#ifdef PCI_IOBASE
> + struct io_range *res;
I was confused by the variable naming here: A variable named 'res' is
normally a 'struct resource'. Maybe better call this 'range'.
> + resource_size_t allocated_size = 0;
> +
> + /* check if the range hasn't been previously recorded */
> + list_for_each_entry(res, &io_range_list, list) {
> +		if (addr >= res->start && addr + size <= res->start + res->size)
> + return 0;
> + allocated_size += res->size;
> + }
A spin_lock around the list lookup should be sufficient to get around
the race that Will mentioned.
> +	/* range not registered yet, check for available space */
> + if (allocated_size + size - 1 > IO_SPACE_LIMIT)
> + return -E2BIG;
It might be better to limit the size to 64K if it doesn't fit at first.
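Taken together, the two suggestions above (a lock around the list walk, and packing each host bridge's window into a limited virtual I/O space) can be sketched as a small user-space model. This is an illustration only: `model_register_io_range()`, `model_address_to_pio()` and the fixed array are invented names, and a kernel version would use a spinlock and a `list_head` rather than a pthread mutex and an array.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define IO_SPACE_LIMIT_MODEL 0xffffUL	/* 64K of virtual I/O space */
#define MAX_RANGES 16

struct model_range {
	uint64_t start;		/* CPU physical address of the window */
	uint64_t size;
};

static struct model_range ranges[MAX_RANGES];
static int nr_ranges;
static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns 0 on success (or if already registered), negative on error. */
static int model_register_io_range(uint64_t addr, uint64_t size)
{
	uint64_t allocated = 0;
	int i, err = 0;

	pthread_mutex_lock(&range_lock);
	for (i = 0; i < nr_ranges; i++) {
		/* already covered by an existing registration? */
		if (addr >= ranges[i].start &&
		    addr + size <= ranges[i].start + ranges[i].size)
			goto out;
		allocated += ranges[i].size;
	}
	/* new range: does it still fit in the I/O space? */
	if (allocated + size - 1 > IO_SPACE_LIMIT_MODEL ||
	    nr_ranges == MAX_RANGES) {
		err = -1;
		goto out;
	}
	ranges[nr_ranges].start = addr;
	ranges[nr_ranges].size = size;
	nr_ranges++;
out:
	pthread_mutex_unlock(&range_lock);
	return err;
}

/* Map a CPU physical address to its logical port number, -1 if unmapped. */
static long model_address_to_pio(uint64_t address)
{
	uint64_t offset = 0;
	long ret = -1;
	int i;

	pthread_mutex_lock(&range_lock);
	for (i = 0; i < nr_ranges; i++) {
		if (address >= ranges[i].start &&
		    address < ranges[i].start + ranges[i].size) {
			ret = (long)(address - ranges[i].start + offset);
			break;
		}
		offset += ranges[i].size;
	}
	pthread_mutex_unlock(&range_lock);
	return ret;
}
```

Each registered window simply gets the next free slice of the logical port space, so two 4K windows end up at ports 0x0-0xfff and 0x1000-0x1fff regardless of their CPU physical addresses.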
Arnd
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-02 12:38 ` Arnd Bergmann
@ 2014-07-02 13:20 ` Liviu Dudau
0 siblings, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-02 13:20 UTC (permalink / raw)
To: Arnd Bergmann
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Wed, Jul 02, 2014 at 01:38:04PM +0100, Arnd Bergmann wrote:
> Some more detailed comments now
>
> On Tuesday 01 July 2014 19:43:28 Liviu Dudau wrote:
> > +/*
> > + * Record the PCI IO range (expressed as CPU physical address + size).
> > + * Return a negative value if an error has occurred, zero otherwise
> > + */
> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> > +{
> > +#ifdef PCI_IOBASE
> > + struct io_range *res;
>
> I was confused by the variable naming here: A variable named 'res' is
> normally a 'struct resource'. Maybe better call this 'range'.
>
> > + resource_size_t allocated_size = 0;
> > +
> > + /* check if the range hasn't been previously recorded */
> > + list_for_each_entry(res, &io_range_list, list) {
> > +		if (addr >= res->start && addr + size <= res->start + res->size)
> > + return 0;
> > + allocated_size += res->size;
> > + }
>
> A spin_lock around the list lookup should be sufficient to get around
> the race that Will mentioned.
>
> > +	/* range not registered yet, check for available space */
> > + if (allocated_size + size - 1 > IO_SPACE_LIMIT)
> > + return -E2BIG;
>
> It might be better to limit the size to 64K if it doesn't fit at first.
Thanks Arnd for review. Will update and post a new patch soon if I don't
get any other comments.
Best regards,
Liviu
>
>
> Arnd
>
>
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-01 18:43 ` [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function Liviu Dudau
` (2 preceding siblings ...)
2014-07-02 12:38 ` Arnd Bergmann
@ 2014-07-08 0:14 ` Bjorn Helgaas
2014-07-08 7:00 ` Arnd Bergmann
2014-07-08 10:40 ` Liviu Dudau
3 siblings, 2 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-08 0:14 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> Some architectures do not have a simple view of the PCI I/O space
> and instead use a range of CPU addresses that map to bus addresses. For
> some architectures these ranges will be expressed by OF bindings
> in a device tree file.
>
> Introduce a pci_register_io_range() helper function with a generic
> implementation that can be used by such architectures to keep track
> of the I/O ranges described by the PCI bindings. If the PCI_IOBASE
> macro is not defined that signals lack of support for PCI and we
> return an error.
>
> Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> ---
> drivers/of/address.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++
> include/linux/of_address.h | 1 +
> 2 files changed, 62 insertions(+)
>
> diff --git a/drivers/of/address.c b/drivers/of/address.c
> index 5edfcb0..1345733 100644
> --- a/drivers/of/address.c
> +++ b/drivers/of/address.c
> @@ -5,6 +5,7 @@
> #include <linux/module.h>
> #include <linux/of_address.h>
> #include <linux/pci_regs.h>
> +#include <linux/slab.h>
> #include <linux/string.h>
>
> /* Max address size we deal with */
> @@ -601,12 +602,72 @@ const __be32 *of_get_address(struct device_node *dev, int index, u64 *size,
> }
> EXPORT_SYMBOL(of_get_address);
>
> +struct io_range {
> + struct list_head list;
> + phys_addr_t start;
> + resource_size_t size;
> +};
> +
> +static LIST_HEAD(io_range_list);
> +
> +/*
> + * Record the PCI IO range (expressed as CPU physical address + size).
> + * Return a negative value if an error has occurred, zero otherwise
> + */
> +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
I don't understand the interface here. What's the mapping from CPU
physical address to bus I/O port? For example, I have the following
machine in mind:
HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
HWP0002:00: host bridge window [io 0x0000-0x0fff]
HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
HWP0002:09: host bridge window [io 0x1000000-0x1000fff] (PCI address [0x0-0xfff])
The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00. Drivers use,
e.g., "inb(0)" to access it.
Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00. Drivers use
"inb(0x1000000)" to access it.
pci_register_io_range() seems sort of like it's intended to track the
memory-mapped IO port spaces, e.g., [mem 0xf8010000000-0xf8010000fff].
But I would think you'd want to keep track of at least the base port
number on the PCI bus, too. Or is that why it's weak?
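For illustration, the port-numbering scheme described above can be modelled in a few lines of user-space C. The `bridge_io` structure and the `port_to_phys()` helper are hypothetical names; the addresses come from the two example root bridges:

```c
#include <assert.h>
#include <stdint.h>

struct bridge_io {
	uint64_t phys_base;	/* memory-mapped I/O port space (CPU side) */
	uint64_t port_base;	/* first Linux I/O port number */
	uint64_t size;
};

/* The two example root bridges from the dump above. */
static const struct bridge_io bridges[] = {
	{ 0xf8010000000ULL, 0x0000000, 0x1000 },	/* domain 0000 */
	{ 0xf8110000000ULL, 0x1000000, 0x1000 },	/* domain 0001 */
};

/* CPU physical address that inb(port) would touch, or 0 if unmapped. */
static uint64_t port_to_phys(uint64_t port)
{
	for (unsigned i = 0; i < sizeof(bridges) / sizeof(bridges[0]); i++) {
		const struct bridge_io *b = &bridges[i];

		if (port >= b->port_base && port < b->port_base + b->size)
			return b->phys_base + (port - b->port_base);
	}
	return 0;
}
```

So `inb(0)` lands in the first bridge's memory-mapped window and `inb(0x1000000)` in the second's, each translated by the bridge to bus I/O port 0 of its own domain.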
Here's what these look like in /proc/iomem and /proc/ioports (note that
there are two resource structs for each memory-mapped IO port space: one
IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
driver), and one IORESOURCE_IO for the I/O port space (this becomes the
parent of a region used by a regular device driver):
/proc/iomem:
PCI Bus 0000:00 I/O Ports 00000000-00000fff
PCI Bus 0001:00 I/O Ports 01000000-01000fff
/proc/ioports:
00000000-00000fff : PCI Bus 0000:00
01000000-01000fff : PCI Bus 0001:00
> +{
> +#ifdef PCI_IOBASE
> + struct io_range *res;
> + resource_size_t allocated_size = 0;
> +
> + /* check if the range hasn't been previously recorded */
> + list_for_each_entry(res, &io_range_list, list) {
> +		if (addr >= res->start && addr + size <= res->start + res->size)
> + return 0;
> + allocated_size += res->size;
> + }
> +
> +	/* range not registered yet, check for available space */
> + if (allocated_size + size - 1 > IO_SPACE_LIMIT)
> + return -E2BIG;
> +
> + /* add the range to the list */
> + res = kzalloc(sizeof(*res), GFP_KERNEL);
> + if (!res)
> + return -ENOMEM;
> +
> + res->start = addr;
> + res->size = size;
> +
> + list_add_tail(&res->list, &io_range_list);
> +
> + return 0;
> +#else
> + return -EINVAL;
> +#endif
> +}
> +
> unsigned long __weak pci_address_to_pio(phys_addr_t address)
> {
> +#ifdef PCI_IOBASE
> + struct io_range *res;
> + resource_size_t offset = 0;
> +
> + list_for_each_entry(res, &io_range_list, list) {
> + if (address >= res->start &&
> + address < res->start + res->size) {
> +			return address - res->start + offset;
> + }
> + offset += res->size;
> + }
> +
> + return (unsigned long)-1;
> +#else
> if (address > IO_SPACE_LIMIT)
> return (unsigned long)-1;
>
> return (unsigned long) address;
> +#endif
> }
>
> static int __of_address_to_resource(struct device_node *dev,
> diff --git a/include/linux/of_address.h b/include/linux/of_address.h
> index c13b878..ac4aac4 100644
> --- a/include/linux/of_address.h
> +++ b/include/linux/of_address.h
> @@ -55,6 +55,7 @@ extern void __iomem *of_iomap(struct device_node *device, int index);
> extern const __be32 *of_get_address(struct device_node *dev, int index,
> u64 *size, unsigned int *flags);
>
> +extern int pci_register_io_range(phys_addr_t addr, resource_size_t size);
> extern unsigned long pci_address_to_pio(phys_addr_t addr);
>
> extern int of_pci_range_parser_init(struct of_pci_range_parser *parser,
> --
> 2.0.0
>
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-08 0:14 ` Bjorn Helgaas
@ 2014-07-08 7:00 ` Arnd Bergmann
2014-07-08 21:29 ` Bjorn Helgaas
2014-07-08 10:40 ` Liviu Dudau
1 sibling, 1 reply; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-08 7:00 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: Liviu Dudau, linux-pci, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Tuesday 08 July 2014, Bjorn Helgaas wrote:
> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> > +static LIST_HEAD(io_range_list);
> > +
> > +/*
> > + * Record the PCI IO range (expressed as CPU physical address + size).
> > + * Return a negative value if an error has occurred, zero otherwise
> > + */
> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
>
> I don't understand the interface here. What's the mapping from CPU
> physical address to bus I/O port? For example, I have the following
> machine in mind:
>
> HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
> HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
> HWP0002:00: host bridge window [io 0x0000-0x0fff]
>
> HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
> HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
> HWP0002:09: host bridge window [io 0x1000000-0x1000fff] (PCI address [0x0-0xfff])
>
> The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
> the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00. Drivers use,
> e.g., "inb(0)" to access it.
>
> Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
> bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00. Drivers use
> "inb(0x1000000)" to access it.
I guess you are thinking of the IA64 model here where you keep the virtual
I/O port numbers in a per-bus lookup table that gets accessed for each
inb() call. I've thought about this some more, and I believe there are good
reasons for sticking with the model used on arm32 and powerpc for the
generic OF implementation.
The idea is that there is a single virtual memory range for all I/O port
mappings and we use the MMU to do the translation rather than computing
it manually in the inb() implementation. The main advantage is that all
functions used in device drivers to (potentially) access I/O ports
become trivial this way, which helps for code size and in some cases
(e.g. SoC-internal registers with a low latency) it may even be performance
relevant.
What this scheme gives you is a set of functions that literally do:
/* architecture specific virtual address */
#define PCI_IOBASE (void __iomem *)0xabcd00000000000
static inline u32 inl(unsigned long port)
{
return readl(port + PCI_IOBASE);
}
static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
{
return port + PCI_IOBASE;
}
static inline unsigned int ioread32(void __iomem *p)
{
return readl(p);
}
Since we want this to work on 32-bit machines, the virtual I/O space has
to be rather tightly packed, so Liviu's algorithm just picks the next
available address for each new I/O space.
> pci_register_io_range() seems sort of like it's intended to track the
> memory-mapped IO port spaces, e.g., [mem 0xf8010000000-0xf8010000fff].
> But I would think you'd want to keep track of at least the base port
> number on the PCI bus, too. Or is that why it's weak?
The PCI bus start address only gets factored in when the window is registered
with the PCI core in patch 8/9, where we go over all ranges doing
+ pci_add_resource_offset(resources, res,
+ res->start - range.pci_addr);
With Liviu's patch, this can be done in exactly the same way for both
MMIO and PIO spaces.
> Here's what these look like in /proc/iomem and /proc/ioports (note that
> there are two resource structs for each memory-mapped IO port space: one
> IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
> driver), and one IORESOURCE_IO for the I/O port space (this becomes the
> parent of a region used by a regular device driver):
>
> /proc/iomem:
> PCI Bus 0000:00 I/O Ports 00000000-00000fff
> PCI Bus 0001:00 I/O Ports 01000000-01000fff
>
> /proc/ioports:
> 00000000-00000fff : PCI Bus 0000:00
> 01000000-01000fff : PCI Bus 0001:00
The only difference I'd expect here is that the last line would make it
packed more tightly, so it's instead
/proc/ioports:
00000000-00000fff : PCI Bus 0000:00
00001000-00001fff : PCI Bus 0001:00
In practice we'd probably have 64KB per host controller, and each of them
would be a separate domain. I think we normally don't register the
IORESOURCE_MEM resource, but I agree it's a good idea and we should
always do that.
Arnd
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-08 7:00 ` Arnd Bergmann
@ 2014-07-08 21:29 ` Bjorn Helgaas
2014-07-08 22:45 ` Liviu Dudau
2014-07-09 6:20 ` Arnd Bergmann
0 siblings, 2 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-08 21:29 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Liviu Dudau, linux-pci, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Tue, Jul 8, 2014 at 1:00 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Tuesday 08 July 2014, Bjorn Helgaas wrote:
>> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
>> > +static LIST_HEAD(io_range_list);
>> > +
>> > +/*
>> > + * Record the PCI IO range (expressed as CPU physical address + size).
>> > + * Return a negative value if an error has occurred, zero otherwise
>> > + */
>> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
>>
>> I don't understand the interface here. What's the mapping from CPU
>> physical address to bus I/O port? For example, I have the following
>> machine in mind:
>>
>> HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
>> HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
>> HWP0002:00: host bridge window [io 0x0000-0x0fff]
>>
>> HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
>> HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
>> HWP0002:09: host bridge window [io 0x1000000-0x1000fff] (PCI address [0x0-0xfff])
>>
>> The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
>> the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00. Drivers use,
>> e.g., "inb(0)" to access it.
>>
>> Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
>> bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00. Drivers use
>> "inb(0x1000000)" to access it.
>
> I guess you are thinking of the IA64 model here where you keep the virtual
> I/O port numbers in a per-bus lookup table that gets accessed for each
> inb() call. I've thought about this some more, and I believe there are good
> reasons for sticking with the model used on arm32 and powerpc for the
> generic OF implementation.
>
> The idea is that there is a single virtual memory range for all I/O port
> mappings and we use the MMU to do the translation rather than computing
> it manually in the inb() implementation. The main advantage is that all
> functions used in device drivers to (potentially) access I/O ports
> become trivial this way, which helps for code size and in some cases
> (e.g. SoC-internal registers with a low latency) it may even be performance
> relevant.
My example is from ia64, but I'm not advocating for the lookup table.
The point is that the hardware works similarly (at least for dense ia64
I/O port spaces) in terms of mapping CPU physical addresses to PCI I/O
space.
I think my confusion is because your pci_register_io_range() and
pci_address_to_pio() implementations assume that every io_range starts at
I/O port 0 on PCI (correct me if I'm wrong). I suspect that's why you
don't save the I/O port number in struct io_range.
Maybe that assumption is guaranteed by OF, but it doesn't hold for ACPI;
ACPI can describe several I/O port apertures for a single bridge, each
associated with a different CPU physical memory region.
If my speculation here is correct, a comment to the effect that each
io_range corresponds to a PCI I/O space range that starts at 0 might be
enough.
If you did add a PCI I/O port number argument to pci_register_io_range(),
we might be able to make an ACPI-based implementation of it. But I guess
that could be done if/when anybody ever wants to do that.
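A rough user-space sketch of what recording the bus-side port base might look like. The three-argument `register_io_range()` and the `pci_addr` field are speculative, this is not what the v8 patch implements:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_IO_RANGES 8

/* Extended record: remember where the window starts on the PCI bus too. */
struct io_range {
	uint64_t pci_addr;	/* first bus I/O port covered by the window */
	uint64_t start;		/* CPU physical address of the window */
	uint64_t size;
};

static struct io_range io_ranges[MAX_IO_RANGES];
static int nr_io_ranges;

static int register_io_range(uint64_t pci_addr, uint64_t addr, uint64_t size)
{
	if (nr_io_ranges == MAX_IO_RANGES)
		return -1;
	io_ranges[nr_io_ranges++] = (struct io_range){ pci_addr, addr, size };
	return 0;
}

/*
 * CPU physical address -> Linux port number. Windows are still packed one
 * after another in the logical port space.
 */
static long address_to_pio(uint64_t address)
{
	uint64_t offset = 0;

	for (int i = 0; i < nr_io_ranges; i++) {
		const struct io_range *r = &io_ranges[i];

		if (address >= r->start && address < r->start + r->size)
			return (long)(offset + (address - r->start));
		offset += r->size;
	}
	return -1;
}

/* Bus port -> Linux port, now that pci_addr is no longer assumed to be 0. */
static long pci_port_to_pio(int range, uint64_t bus_port)
{
	uint64_t offset = 0;

	for (int i = 0; i < range; i++)
		offset += io_ranges[i].size;
	return (long)(offset + (bus_port - io_ranges[range].pci_addr));
}
```

With the bus-side base recorded, a bridge whose window covers bus ports 0x100-0x1ff can be translated correctly instead of being treated as if it started at bus port 0.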
>> Here's what these look like in /proc/iomem and /proc/ioports (note that
>> there are two resource structs for each memory-mapped IO port space: one
>> IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
>> driver), and one IORESOURCE_IO for the I/O port space (this becomes the
>> parent of a region used by a regular device driver):
>>
>> /proc/iomem:
>> PCI Bus 0000:00 I/O Ports 00000000-00000fff
>> PCI Bus 0001:00 I/O Ports 01000000-01000fff
Oops, I forgot the actual physical memory addresses here, but you got
the idea anyway. It should have been something like this:
/proc/iomem:
f8010000000-f8010000fff PCI Bus 0000:00 I/O Ports 00000000-00000fff
f8110000000-f8110000fff PCI Bus 0001:00 I/O Ports 01000000-01000fff
Bjorn
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-08 21:29 ` Bjorn Helgaas
@ 2014-07-08 22:45 ` Liviu Dudau
[not found] ` <20140708224529.GB4980-2JSQmVVBSi7ZROr8t4l/smS4ubULX0JqMm0uRHvK7Nw@public.gmane.org>
2014-07-09 6:20 ` Arnd Bergmann
1 sibling, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-08 22:45 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: Arnd Bergmann, linux-pci, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Tue, Jul 08, 2014 at 10:29:51PM +0100, Bjorn Helgaas wrote:
> On Tue, Jul 8, 2014 at 1:00 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> > On Tuesday 08 July 2014, Bjorn Helgaas wrote:
> >> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> >> > +static LIST_HEAD(io_range_list);
> >> > +
> >> > +/*
> >> > + * Record the PCI IO range (expressed as CPU physical address + size).
> >> > + * Return a negative value if an error has occurred, zero otherwise
> >> > + */
> >> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> >>
> >> I don't understand the interface here. What's the mapping from CPU
> >> physical address to bus I/O port? For example, I have the following
> >> machine in mind:
> >>
> >> HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
> >> HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
> >> HWP0002:00: host bridge window [io 0x0000-0x0fff]
> >>
> >> HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
> >> HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
> >> HWP0002:09: host bridge window [io 0x1000000-0x1000fff] (PCI address [0x0-0xfff])
> >>
> >> The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
> >> the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00. Drivers use,
> >> e.g., "inb(0)" to access it.
> >>
> >> Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
> >> bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00. Drivers use
> >> "inb(0x1000000)" to access it.
> >
> > I guess you are thinking of the IA64 model here where you keep the virtual
> > I/O port numbers in a per-bus lookup table that gets accessed for each
> > inb() call. I've thought about this some more, and I believe there are good
> > reasons for sticking with the model used on arm32 and powerpc for the
> > generic OF implementation.
> >
> > The idea is that there is a single virtual memory range for all I/O port
> > mappings and we use the MMU to do the translation rather than computing
> > it manually in the inb() implementation. The main advantage is that all
> > functions used in device drivers to (potentially) access I/O ports
> > become trivial this way, which helps for code size and in some cases
> > (e.g. SoC-internal registers with a low latency) it may even be performance
> > relevant.
>
> My example is from ia64, but I'm not advocating for the lookup table.
> The point is that the hardware works similarly (at least for dense ia64
> I/O port spaces) in terms of mapping CPU physical addresses to PCI I/O
> space.
>
> I think my confusion is because your pci_register_io_range() and
> pci_address_to_pio() implementations assume that every io_range starts at
> I/O port 0 on PCI (correct me if I'm wrong). I suspect that's why you
> don't save the I/O port number in struct io_range.
>
> Maybe that assumption is guaranteed by OF, but it doesn't hold for ACPI;
> ACPI can describe several I/O port apertures for a single bridge, each
> associated with a different CPU physical memory region.
That is actually a good catch, I've completely missed the fact that
io_range->pci_addr could be non-zero.
>
> If my speculation here is correct, a comment to the effect that each
> io_range corresponds to a PCI I/O space range that starts at 0 might be
> enough.
>
> If you did add a PCI I/O port number argument to pci_register_io_range(),
> we might be able to make an ACPI-based implementation of it. But I guess
> that could be done if/when anybody ever wants to do that.
No, I think you are right: the PCI I/O port number needs to be recorded. I
need to add that to pci_register_io_range().
>
> >> Here's what these look like in /proc/iomem and /proc/ioports (note that
> >> there are two resource structs for each memory-mapped IO port space: one
> >> IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
> >> driver), and one IORESOURCE_IO for the I/O port space (this becomes the
> >> parent of a region used by a regular device driver):
> >>
> >> /proc/iomem:
> >> PCI Bus 0000:00 I/O Ports 00000000-00000fff
> >> PCI Bus 0001:00 I/O Ports 01000000-01000fff
>
> Oops, I forgot the actual physical memory addresses here, but you got
> the idea anyway. It should have been something like this:
>
> /proc/iomem:
> f8010000000-f8010000fff PCI Bus 0000:00 I/O Ports 00000000-00000fff
> f8110000000-f8110000fff PCI Bus 0001:00 I/O Ports 01000000-01000fff
>
> Bjorn
>
Thanks for being thorough with your review.
Best regards,
Liviu
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-08 21:29 ` Bjorn Helgaas
2014-07-08 22:45 ` Liviu Dudau
@ 2014-07-09 6:20 ` Arnd Bergmann
2014-07-09 9:14 ` Liviu Dudau
2014-07-09 15:21 ` Bjorn Helgaas
1 sibling, 2 replies; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-09 6:20 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: Liviu Dudau, linux-pci, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Tuesday 08 July 2014, Bjorn Helgaas wrote:
> On Tue, Jul 8, 2014 at 1:00 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> > On Tuesday 08 July 2014, Bjorn Helgaas wrote:
> >> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> >> > +static LIST_HEAD(io_range_list);
> >> > +
> >> > +/*
> >> > + * Record the PCI IO range (expressed as CPU physical address + size).
> >> > + * Return a negative value if an error has occurred, zero otherwise
> >> > + */
> >> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> >>
> >> I don't understand the interface here. What's the mapping from CPU
> >> physical address to bus I/O port? For example, I have the following
> >> machine in mind:
> >>
> >> HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
> >> HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
> >> HWP0002:00: host bridge window [io 0x0000-0x0fff]
> >>
> >> HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
> >> HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
> >> HWP0002:09: host bridge window [io 0x1000000-0x1000fff] (PCI address [0x0-0xfff])
> >>
> >> The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
> >> the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00. Drivers use,
> >> e.g., "inb(0)" to access it.
> >>
> >> Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
> >> bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00. Drivers use
> >> "inb(0x1000000)" to access it.
> >
> > I guess you are thinking of the IA64 model here where you keep the virtual
> > I/O port numbers in a per-bus lookup table that gets accessed for each
> > inb() call. I've thought about this some more, and I believe there are good
> > reasons for sticking with the model used on arm32 and powerpc for the
> > generic OF implementation.
> >
> > The idea is that there is a single virtual memory range for all I/O port
> > mappings and we use the MMU to do the translation rather than computing
> > it manually in the inb() implementation. The main advantage is that all
> > functions used in device drivers to (potentially) access I/O ports
> > become trivial this way, which helps for code size and in some cases
> > (e.g. SoC-internal registers with a low latency) it may even be performance
> > relevant.
>
> My example is from ia64, but I'm not advocating for the lookup table.
> The point is that the hardware works similarly (at least for dense ia64
> I/O port spaces) in terms of mapping CPU physical addresses to PCI I/O
> space.
>
> I think my confusion is because your pci_register_io_range() and
> pci_address_to_pio() implementations assume that every io_range starts at
> I/O port 0 on PCI (correct me if I'm wrong). I suspect that's why you
> don't save the I/O port number in struct io_range.
I think you are just misreading the code, but I agree it's hard to
understand and I made the same mistake in my initial reply to the
first version.
pci_register_io_range and pci_address_to_pio only worry about the mapping
between CPU physical and Linux I/O addresses; they do not care which PCI
port numbers are behind that. The mapping between PCI port numbers and
Linux port numbers is done correctly in patch 8/9 in the
pci_host_bridge_of_get_ranges() function.
> Maybe that assumption is guaranteed by OF, but it doesn't hold for ACPI;
> ACPI can describe several I/O port apertures for a single bridge, each
> associated with a different CPU physical memory region.
DT can have the same, although the common case is that each PCI host
bridge has 64KB of I/O ports starting at address 0. Most driver writers
get it wrong for the case where it starts at a different address, so
I really want to have a generic implementation that gets it right.
> If my speculation here is correct, a comment to the effect that each
> io_range corresponds to a PCI I/O space range that starts at 0 might be
> enough.
>
> If you did add a PCI I/O port number argument to pci_register_io_range(),
> we might be able to make an ACPI-based implementation of it. But I guess
> that could be done if/when anybody ever wants to do that.
I think we shouldn't worry about it before we actually need it. As far as
I understand, the only user of that code (unless someone wants to convert
ia64) would be ARM64 with ACPI, but that uses the SBSA hardware model that
recommends having no I/O space at all.
Arnd
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-09 6:20 ` Arnd Bergmann
@ 2014-07-09 9:14 ` Liviu Dudau
2014-07-09 15:21 ` Bjorn Helgaas
1 sibling, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-09 9:14 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Bjorn Helgaas, linux-pci, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Wed, Jul 09, 2014 at 07:20:49AM +0100, Arnd Bergmann wrote:
> On Tuesday 08 July 2014, Bjorn Helgaas wrote:
> > On Tue, Jul 8, 2014 at 1:00 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> > > On Tuesday 08 July 2014, Bjorn Helgaas wrote:
> > >> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> > >> > +static LIST_HEAD(io_range_list);
> > >> > +
> > >> > +/*
> > >> > + * Record the PCI IO range (expressed as CPU physical address + size).
> > >> > + * Return a negative value if an error has occurred, zero otherwise
> > >> > + */
> > >> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> > >>
> > >> I don't understand the interface here. What's the mapping from CPU
> > >> physical address to bus I/O port? For example, I have the following
> > >> machine in mind:
> > >>
> > >> HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
> > >> HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
> > >> HWP0002:00: host bridge window [io 0x0000-0x0fff]
> > >>
> > >> HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
> > >> HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
> > >> HWP0002:09: host bridge window [io 0x1000000-0x1000fff] (PCI address [0x0-0xfff])
> > >>
> > >> The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
> > >> the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00. Drivers use,
> > >> e.g., "inb(0)" to access it.
> > >>
> > >> Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
> > >> bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00. Drivers use
> > >> "inb(0x1000000)" to access it.
> > >
> > > I guess you are thinking of the IA64 model here where you keep the virtual
> > > I/O port numbers in a per-bus lookup table that gets accessed for each
> > > inb() call. I've thought about this some more, and I believe there are good
> > > reasons for sticking with the model used on arm32 and powerpc for the
> > > generic OF implementation.
> > >
> > > The idea is that there is a single virtual memory range for all I/O port
> > > mappings and we use the MMU to do the translation rather than computing
> > > it manually in the inb() implementation. The main advantage is that all
> > > functions used in device drivers to (potentially) access I/O ports
> > > become trivial this way, which helps for code size and in some cases
> > > (e.g. SoC-internal registers with a low latency) it may even be performance
> > > relevant.
> >
> > My example is from ia64, but I'm not advocating for the lookup table.
> > The point is that the hardware works similarly (at least for dense ia64
> > I/O port spaces) in terms of mapping CPU physical addresses to PCI I/O
> > space.
> >
> > I think my confusion is because your pci_register_io_range() and
> > pci_address_to_pio() implementations assume that every io_range starts at
> > I/O port 0 on PCI (correct me if I'm wrong). I suspect that's why you
> > don't save the I/O port number in struct io_range.
>
> I think you are just misreading the code, but I agree it's hard to
> understand and I made the same mistake in my initial reply to the
> first version.
I am willing to make the code easier to understand and validate. Proof that
things are not that easy to check is that I also got confused last night
without having all the code in front of me. Any suggestions?
Best regards,
Liviu
>
> pci_register_io_range and pci_address_to_pio only worry about the mapping
> between CPU physical and Linux I/O addresses; they do not care which PCI
> port numbers are behind that. The mapping between PCI port numbers and
> Linux port numbers is done correctly in patch 8/9 in the
> pci_host_bridge_of_get_ranges() function.
>
> > Maybe that assumption is guaranteed by OF, but it doesn't hold for ACPI;
> > ACPI can describe several I/O port apertures for a single bridge, each
> > associated with a different CPU physical memory region.
>
> DT can have the same, although the common case is that each PCI host
> bridge has 64KB of I/O ports starting at address 0. Most driver writers
> get it wrong for the case where it starts at a different address, so
> I really want to have a generic implementation that gets it right.
>
> > If my speculation here is correct, a comment to the effect that each
> > io_range corresponds to a PCI I/O space range that starts at 0 might be
> > enough.
> >
> > If you did add a PCI I/O port number argument to pci_register_io_range(),
> > we might be able to make an ACPI-based implementation of it. But I guess
> > that could be done if/when anybody ever wants to do that.
>
> I think we shouldn't worry about it before we actually need it. As far as
> I understand, the only user of that code (unless someone wants to convert
> ia64) would be ARM64 with ACPI, but that uses the SBSA hardware model that
> recommends having no I/O space at all.
>
> Arnd
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-09 6:20 ` Arnd Bergmann
2014-07-09 9:14 ` Liviu Dudau
@ 2014-07-09 15:21 ` Bjorn Helgaas
1 sibling, 0 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-09 15:21 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Liviu Dudau, linux-pci, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Wed, Jul 9, 2014 at 12:20 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Tuesday 08 July 2014, Bjorn Helgaas wrote:
>> I think my confusion is because your pci_register_io_range() and
>> pci_address_to_pio() implementations assume that every io_range starts at
>> I/O port 0 on PCI (correct me if I'm wrong). I suspect that's why you
>> don't save the I/O port number in struct io_range.
>
> I think you are just misreading the code, but I agree it's hard to
> understand and I made the same mistake in my initial reply to the
> first version.
>
> pci_register_io_range and pci_address_to_pio only worry about the mapping
> between CPU physical and Linux I/O addresses; they do not care which PCI
> port numbers are behind that. The mapping between PCI port numbers and
> Linux port numbers is done correctly in patch 8/9 in the
> pci_host_bridge_of_get_ranges() function.
Ah, I see now. Thanks for explaining this again (I see you explained
it earlier; I just didn't understand it). Now that I see it, it *is*
very slick to handle both MMIO and PIO spaces the same way.
Bjorn
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-08 0:14 ` Bjorn Helgaas
2014-07-08 7:00 ` Arnd Bergmann
@ 2014-07-08 10:40 ` Liviu Dudau
2014-07-08 14:14 ` Arnd Bergmann
1 sibling, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-08 10:40 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 08, 2014 at 01:14:18AM +0100, Bjorn Helgaas wrote:
> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
> > Some architectures do not have a simple view of the PCI I/O space
> > and instead use a range of CPU addresses that map to bus addresses. For
> > some architectures these ranges will be expressed by OF bindings
> > in a device tree file.
> >
> > Introduce a pci_register_io_range() helper function with a generic
> > implementation that can be used by such architectures to keep track
> > of the I/O ranges described by the PCI bindings. If the PCI_IOBASE
> > macro is not defined that signals lack of support for PCI and we
> > return an error.
> >
> > Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> > ---
> > drivers/of/address.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++
> > include/linux/of_address.h | 1 +
> > 2 files changed, 62 insertions(+)
> >
> > diff --git a/drivers/of/address.c b/drivers/of/address.c
> > index 5edfcb0..1345733 100644
> > --- a/drivers/of/address.c
> > +++ b/drivers/of/address.c
> > @@ -5,6 +5,7 @@
> > #include <linux/module.h>
> > #include <linux/of_address.h>
> > #include <linux/pci_regs.h>
> > +#include <linux/slab.h>
> > #include <linux/string.h>
> >
> > /* Max address size we deal with */
> > @@ -601,12 +602,72 @@ const __be32 *of_get_address(struct device_node *dev, int index, u64 *size,
> > }
> > EXPORT_SYMBOL(of_get_address);
> >
> > +struct io_range {
> > + struct list_head list;
> > + phys_addr_t start;
> > + resource_size_t size;
> > +};
> > +
> > +static LIST_HEAD(io_range_list);
> > +
> > +/*
> > + * Record the PCI IO range (expressed as CPU physical address + size).
> > + * Return a negative value if an error has occurred, zero otherwise
> > + */
> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
>
> I don't understand the interface here. What's the mapping from CPU
> physical address to bus I/O port? For example, I have the following
> machine in mind:
>
> HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
> HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
> HWP0002:00: host bridge window [io 0x0000-0x0fff]
>
> HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
> HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
> HWP0002:09: host bridge window [io 0x1000000-0x1000fff] (PCI address [0x0-0xfff])
>
> The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
> the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00. Drivers use,
> e.g., "inb(0)" to access it.
>
> Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
> bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00. Drivers use
> "inb(0x1000000)" to access it.
>
> pci_register_io_range() seems sort of like it's intended to track the
> memory-mapped IO port spaces, e.g., [mem 0xf8010000000-0xf8010000fff].
> But I would think you'd want to keep track of at least the base port
> number on the PCI bus, too. Or is that why it's weak?
It's weak in case the default implementation doesn't fit someone's requirements.
And yes, it is trying to track the memory-mapped IO port spaces. When
calling pci_address_to_pio() - which takes the CPU address - it will
return the port number (0x0000 - 0x0fff and 0x1000000 - 0x1000fff respectively).
pci_address_to_pio() uses the list built by calling pci_register_io_range()
to calculate the correct offsets (although in this case it would move your
second host bridge io ports to [io 0x1000 - 0x1fff] as it tries not to leave
gaps in the reservations).
>
> Here's what these look like in /proc/iomem and /proc/ioports (note that
> there are two resource structs for each memory-mapped IO port space: one
> IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
> driver), and one IORESOURCE_IO for the I/O port space (this becomes the
> parent of a region used by a regular device driver):
>
> /proc/iomem:
> PCI Bus 0000:00 I/O Ports 00000000-00000fff
> PCI Bus 0001:00 I/O Ports 01000000-01000fff
>
> /proc/ioports:
> 00000000-00000fff : PCI Bus 0000:00
> 01000000-01000fff : PCI Bus 0001:00
OK, I have a question that might be obvious to you but I have missed the answer
so far: how does the IORESOURCE_MEM area get created? Is it the host bridge
driver's job to do it? Is it something that the framework should do when it
notices that the IORESOURCE_IO is memory-mapped?
Many thanks,
Liviu
>
> > +{
> > +#ifdef PCI_IOBASE
> > + struct io_range *res;
> > + resource_size_t allocated_size = 0;
> > +
> > + /* check if the range hasn't been previously recorded */
> > + list_for_each_entry(res, &io_range_list, list) {
> > + if (addr >= res->start && addr + size <= res->start + res->size)
> > + return 0;
> > + allocated_size += res->size;
> > + }
> > +
> > + /* range not registered yet, check for available space */
> > + if (allocated_size + size - 1 > IO_SPACE_LIMIT)
> > + return -E2BIG;
> > +
> > + /* add the range to the list */
> > + res = kzalloc(sizeof(*res), GFP_KERNEL);
> > + if (!res)
> > + return -ENOMEM;
> > +
> > + res->start = addr;
> > + res->size = size;
> > +
> > + list_add_tail(&res->list, &io_range_list);
> > +
> > + return 0;
> > +#else
> > + return -EINVAL;
> > +#endif
> > +}
> > +
> > unsigned long __weak pci_address_to_pio(phys_addr_t address)
> > {
> > +#ifdef PCI_IOBASE
> > + struct io_range *res;
> > + resource_size_t offset = 0;
> > +
> > + list_for_each_entry(res, &io_range_list, list) {
> > + if (address >= res->start &&
> > + address < res->start + res->size) {
> > + return address - res->start + offset;
> > + }
> > + offset += res->size;
> > + }
> > +
> > + return (unsigned long)-1;
> > +#else
> > if (address > IO_SPACE_LIMIT)
> > return (unsigned long)-1;
> >
> > return (unsigned long) address;
> > +#endif
> > }
> >
> > static int __of_address_to_resource(struct device_node *dev,
> > diff --git a/include/linux/of_address.h b/include/linux/of_address.h
> > index c13b878..ac4aac4 100644
> > --- a/include/linux/of_address.h
> > +++ b/include/linux/of_address.h
> > @@ -55,6 +55,7 @@ extern void __iomem *of_iomap(struct device_node *device, int index);
> > extern const __be32 *of_get_address(struct device_node *dev, int index,
> > u64 *size, unsigned int *flags);
> >
> > +extern int pci_register_io_range(phys_addr_t addr, resource_size_t size);
> > extern unsigned long pci_address_to_pio(phys_addr_t addr);
> >
> > extern int of_pci_range_parser_init(struct of_pci_range_parser *parser,
> > --
> > 2.0.0
> >
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-08 10:40 ` Liviu Dudau
@ 2014-07-08 14:14 ` Arnd Bergmann
2014-07-09 8:59 ` Liviu Dudau
0 siblings, 1 reply; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-08 14:14 UTC (permalink / raw)
To: Liviu Dudau
Cc: Bjorn Helgaas, linux-pci, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Tuesday 08 July 2014, Liviu Dudau wrote:
> > Here's what these look like in /proc/iomem and /proc/ioports (note that
> > there are two resource structs for each memory-mapped IO port space: one
> > IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
> > driver), and one IORESOURCE_IO for the I/O port space (this becomes the
> > parent of a region used by a regular device driver):
> >
> > /proc/iomem:
> > PCI Bus 0000:00 I/O Ports 00000000-00000fff
> > PCI Bus 0001:00 I/O Ports 01000000-01000fff
> >
> > /proc/ioports:
> > 00000000-00000fff : PCI Bus 0000:00
> > 01000000-01000fff : PCI Bus 0001:00
>
> > OK, I have a question that might be obvious to you but I have missed the answer
> > so far: how does the IORESOURCE_MEM area get created? Is it the host bridge
> > driver's job to do it? Is it something that the framework should do when it
> > notices that the IORESOURCE_IO is memory-mapped?
The host bridge driver should either register the IORESOURCE_MEM resource
itself from its probe or setup function, or it should get registered behind
the covers in drivers using of_create_pci_host_bridge().
Your new pci_host_bridge_of_get_ranges already loops over all the
resources, so it would be a good place to put that.
Arnd
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function.
2014-07-08 14:14 ` Arnd Bergmann
@ 2014-07-09 8:59 ` Liviu Dudau
0 siblings, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-09 8:59 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Bjorn Helgaas, linux-pci, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Tue, Jul 08, 2014 at 03:14:17PM +0100, Arnd Bergmann wrote:
> On Tuesday 08 July 2014, Liviu Dudau wrote:
> > > Here's what these look like in /proc/iomem and /proc/ioports (note that
> > > there are two resource structs for each memory-mapped IO port space: one
> > > IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
> > > driver), and one IORESOURCE_IO for the I/O port space (this becomes the
> > > parent of a region used by a regular device driver):
> > >
> > > /proc/iomem:
> > > PCI Bus 0000:00 I/O Ports 00000000-00000fff
> > > PCI Bus 0001:00 I/O Ports 01000000-01000fff
> > >
> > > /proc/ioports:
> > > 00000000-00000fff : PCI Bus 0000:00
> > > 01000000-01000fff : PCI Bus 0001:00
> >
> > OK, I have a question that might be obvious to you but I have missed the answer
> > so far: how does the IORESOURCE_MEM area get created? Is it the host bridge
> > driver's job to do it? Is it something that the framework should do when it
> > notices that the IORESOURCE_IO is memory-mapped?
>
> The host bridge driver should either register the IORESOURCE_MEM resource
> itself from its probe or setup function, or it should get registered behind
> the covers in drivers using of_create_pci_host_bridge().
>
> Your new pci_host_bridge_of_get_ranges already loops over all the
> resources, so it would be a good place to put that.
OK, so it is not something that I've missed, just something that x86-64 does and
my version doesn't yet.
Thanks for confirming that.
Liviu
>
> Arnd
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread
* [PATCH v8 4/9] pci: OF: Fix the conversion of IO ranges into IO resources.
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
` (2 preceding siblings ...)
2014-07-01 18:43 ` [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper function Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
[not found] ` <1404240214-9804-5-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
2014-07-01 18:43 ` [PATCH v8 5/9] pci: Create pci_host_bridge before its associated bus in pci_create_root_bus Liviu Dudau
` (5 subsequent siblings)
9 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: Device Tree ML, LKML, LAKML
The ranges property for a host bridge controller in DT describes
the mapping between the PCI bus address and the CPU physical address.
The resources framework, however, expects that the IO resources start
at a pseudo "port" address 0 (zero) and have a maximum size of IO_SPACE_LIMIT.
The conversion from PCI ranges to resources failed to take that into account.
In the process, move the function into drivers/of/address.c, as it now
depends on the pci_address_to_pio() code, and make it return an error code.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Tested-by: Tanmay Inamdar <tinamdar@apm.com>
---
drivers/of/address.c | 47 ++++++++++++++++++++++++++++++++++++++++++++++
include/linux/of_address.h | 13 ++-----------
2 files changed, 49 insertions(+), 11 deletions(-)
diff --git a/drivers/of/address.c b/drivers/of/address.c
index 1345733..cbbaed2 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -872,3 +872,50 @@ bool of_dma_is_coherent(struct device_node *np)
return false;
}
EXPORT_SYMBOL_GPL(of_dma_is_coherent);
+
+/*
+ * of_pci_range_to_resource - Create a resource from an of_pci_range
+ * @range: the PCI range that describes the resource
+ * @np: device node where the range belongs to
+ * @res: pointer to a valid resource that will be updated to
+ * reflect the values contained in the range.
+ *
+ * Returns a negative error code if the range cannot be converted to a resource.
+ *
+ * Note that if the range is an IO range, the resource will be converted
+ * using pci_address_to_pio(), which can fail if it is called too early or
+ * if the range cannot be matched to any host bridge IO space (our case here).
+ * To guard against that, we try to register the IO range first.
+ * If that fails, we know that pci_address_to_pio() will fail too.
+ */
+int of_pci_range_to_resource(struct of_pci_range *range,
+ struct device_node *np, struct resource *res)
+{
+ int err;
+ res->flags = range->flags;
+ res->parent = res->child = res->sibling = NULL;
+ res->name = np->full_name;
+
+ if (res->flags & IORESOURCE_IO) {
+ unsigned long port = -1;
+ err = pci_register_io_range(range->cpu_addr, range->size);
+ if (err)
+ goto invalid_range;
+ port = pci_address_to_pio(range->cpu_addr);
+ if (port == (unsigned long)-1) {
+ err = -EINVAL;
+ goto invalid_range;
+ }
+ res->start = port;
+ } else {
+ res->start = range->cpu_addr;
+ }
+ res->end = res->start + range->size - 1;
+ return 0;
+
+invalid_range:
+ res->start = (resource_size_t)OF_BAD_ADDR;
+ res->end = (resource_size_t)OF_BAD_ADDR;
+ return err;
+}
+
diff --git a/include/linux/of_address.h b/include/linux/of_address.h
index ac4aac4..33c0420 100644
--- a/include/linux/of_address.h
+++ b/include/linux/of_address.h
@@ -23,17 +23,8 @@ struct of_pci_range {
#define for_each_of_pci_range(parser, range) \
for (; of_pci_range_parser_one(parser, range);)
-static inline void of_pci_range_to_resource(struct of_pci_range *range,
- struct device_node *np,
- struct resource *res)
-{
- res->flags = range->flags;
- res->start = range->cpu_addr;
- res->end = range->cpu_addr + range->size - 1;
- res->parent = res->child = res->sibling = NULL;
- res->name = np->full_name;
-}
-
+extern int of_pci_range_to_resource(struct of_pci_range *range,
+ struct device_node *np, struct resource *res);
/* Translate a DMA address from device space to CPU space */
extern u64 of_translate_dma_address(struct device_node *dev,
const __be32 *in_addr);
--
2.0.0
^ permalink raw reply related [flat|nested] 92+ messages in thread
* [PATCH v8 5/9] pci: Create pci_host_bridge before its associated bus in pci_create_root_bus.
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
` (3 preceding siblings ...)
2014-07-01 18:43 ` [PATCH v8 4/9] pci: OF: Fix the conversion of IO ranges into IO resources Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
2014-07-01 18:43 ` [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge Liviu Dudau
` (4 subsequent siblings)
9 siblings, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: Device Tree ML, LKML, LAKML
Before commit 7b5436635800, the pci_host_bridge was created before the root bus.
As that commit added a needless dependency on the bus for pci_alloc_host_bridge(),
the creation order was changed for no good reason. Revert the order of
creation, as we are going to depend on the pci_host_bridge structure to retrieve
the domain number of the root bus.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Acked-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Tanmay Inamdar <tinamdar@apm.com>
---
drivers/pci/probe.c | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index e3cf8a2..2c92662 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -515,7 +515,7 @@ static void pci_release_host_bridge_dev(struct device *dev)
kfree(bridge);
}
-static struct pci_host_bridge *pci_alloc_host_bridge(struct pci_bus *b)
+static struct pci_host_bridge *pci_alloc_host_bridge(void)
{
struct pci_host_bridge *bridge;
@@ -524,7 +524,6 @@ static struct pci_host_bridge *pci_alloc_host_bridge(struct pci_bus *b)
return NULL;
INIT_LIST_HEAD(&bridge->windows);
- bridge->bus = b;
return bridge;
}
@@ -1761,9 +1760,16 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
char bus_addr[64];
char *fmt;
+ bridge = pci_alloc_host_bridge();
+ if (!bridge)
+ return NULL;
+
+ bridge->dev.parent = parent;
+ bridge->dev.release = pci_release_host_bridge_dev;
+
b = pci_alloc_bus();
if (!b)
- return NULL;
+ goto err_out;
b->sysdata = sysdata;
b->ops = ops;
@@ -1772,26 +1778,19 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
if (b2) {
/* If we already got to this bus through a different bridge, ignore it */
dev_dbg(&b2->dev, "bus already known\n");
- goto err_out;
+ goto err_bus_out;
}
- bridge = pci_alloc_host_bridge(b);
- if (!bridge)
- goto err_out;
-
- bridge->dev.parent = parent;
- bridge->dev.release = pci_release_host_bridge_dev;
+ bridge->bus = b;
dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(b), bus);
error = pcibios_root_bridge_prepare(bridge);
- if (error) {
- kfree(bridge);
+ if (error)
goto err_out;
- }
error = device_register(&bridge->dev);
if (error) {
put_device(&bridge->dev);
- goto err_out;
+ goto err_bus_out;
}
b->bridge = get_device(&bridge->dev);
device_enable_async_suspend(b->bridge);
@@ -1848,8 +1847,10 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
class_dev_reg_err:
put_device(&bridge->dev);
device_unregister(&bridge->dev);
-err_out:
+err_bus_out:
kfree(b);
+err_out:
+ kfree(bridge);
return NULL;
}
--
2.0.0
^ permalink raw reply related [flat|nested] 92+ messages in thread
* [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
` (4 preceding siblings ...)
2014-07-01 18:43 ` [PATCH v8 5/9] pci: Create pci_host_bridge before its associated bus in pci_create_root_bus Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
2014-07-08 0:59 ` Bjorn Helgaas
2014-07-01 18:43 ` [PATCH v8 7/9] pci: of: Parse and map the IRQ when adding the PCI device Liviu Dudau
` (3 subsequent siblings)
9 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: Device Tree ML, LKML, LAKML
Make it easier to discover the domain number of a bus by storing
the number in pci_host_bridge for the root bus. Several architectures
have their own way of storing this information, so it makes sense
to try to unify the code. While at this, add a new function that
creates a root bus in a given domain and make pci_create_root_bus()
a wrapper around this function.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Tested-by: Tanmay Inamdar <tinamdar@apm.com>
---
drivers/pci/probe.c | 41 +++++++++++++++++++++++++++++++++--------
include/linux/pci.h | 4 ++++
2 files changed, 37 insertions(+), 8 deletions(-)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 2c92662..abf5e82 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -1748,8 +1748,9 @@ void __weak pcibios_remove_bus(struct pci_bus *bus)
{
}
-struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
- struct pci_ops *ops, void *sysdata, struct list_head *resources)
+struct pci_bus *pci_create_root_bus_in_domain(struct device *parent,
+ int domain, int bus, struct pci_ops *ops, void *sysdata,
+ struct list_head *resources)
{
int error;
struct pci_host_bridge *bridge;
@@ -1762,27 +1763,31 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
bridge = pci_alloc_host_bridge();
if (!bridge)
- return NULL;
+ return ERR_PTR(-ENOMEM);
bridge->dev.parent = parent;
bridge->dev.release = pci_release_host_bridge_dev;
+ bridge->domain_nr = domain;
b = pci_alloc_bus();
- if (!b)
+ if (!b) {
+ error = -ENOMEM;
goto err_out;
+ }
b->sysdata = sysdata;
b->ops = ops;
b->number = b->busn_res.start = bus;
- b2 = pci_find_bus(pci_domain_nr(b), bus);
+ b2 = pci_find_bus(bridge->domain_nr, bus);
if (b2) {
/* If we already got to this bus through a different bridge, ignore it */
dev_dbg(&b2->dev, "bus already known\n");
+ error = -EEXIST;
goto err_bus_out;
}
bridge->bus = b;
- dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(b), bus);
+ dev_set_name(&bridge->dev, "pci%04x:%02x", bridge->domain_nr, bus);
error = pcibios_root_bridge_prepare(bridge);
if (error)
goto err_out;
@@ -1801,7 +1806,7 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
b->dev.class = &pcibus_class;
b->dev.parent = b->bridge;
- dev_set_name(&b->dev, "%04x:%02x", pci_domain_nr(b), bus);
+ dev_set_name(&b->dev, "%04x:%02x", bridge->domain_nr, bus);
error = device_register(&b->dev);
if (error)
goto class_dev_reg_err;
@@ -1851,7 +1856,27 @@ err_bus_out:
kfree(b);
err_out:
kfree(bridge);
- return NULL;
+ return ERR_PTR(error);
+}
+
+struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
+ struct pci_ops *ops, void *sysdata, struct list_head *resources)
+{
+ int domain_nr;
+ struct pci_bus *b = pci_alloc_bus();
+ if (!b)
+ return NULL;
+
+ b->sysdata = sysdata;
+ domain_nr = pci_domain_nr(b);
+ kfree(b);
+
+ b = pci_create_root_bus_in_domain(parent, domain_nr, bus,
+ ops, sysdata, resources);
+ if (IS_ERR(b))
+ return NULL;
+
+ return b;
}
int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int bus_max)
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 466bcd1..7e7b939 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -401,6 +401,7 @@ struct pci_host_bridge_window {
struct pci_host_bridge {
struct device dev;
struct pci_bus *bus; /* root bus */
+ int domain_nr;
struct list_head windows; /* pci_host_bridge_windows */
void (*release_fn)(struct pci_host_bridge *);
void *release_data;
@@ -769,6 +770,9 @@ struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, void *sysdata);
struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
struct pci_ops *ops, void *sysdata,
struct list_head *resources);
+struct pci_bus *pci_create_root_bus_in_domain(struct device *parent,
+ int domain, int bus, struct pci_ops *ops,
+ void *sysdata, struct list_head *resources);
int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int busmax);
int pci_bus_update_busn_res_end(struct pci_bus *b, int busmax);
void pci_bus_release_busn_res(struct pci_bus *b);
--
2.0.0
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-01 18:43 ` [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge Liviu Dudau
@ 2014-07-08 0:59 ` Bjorn Helgaas
2014-07-08 10:46 ` Liviu Dudau
0 siblings, 1 reply; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-08 0:59 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 01, 2014 at 07:43:31PM +0100, Liviu Dudau wrote:
> Make it easier to discover the domain number of a bus by storing
> the number in pci_host_bridge for the root bus. Several architectures
> have their own way of storing this information, so it makes sense
> to try to unify the code. While at this, add a new function that
> creates a root bus in a given domain and make pci_create_root_bus()
> a wrapper around this function.
"While at this" is always a good clue that maybe something should be
split into a separate patch :) This is a very good example, since it
adds a new interface that deserves its own changelog.
> Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> ---
> drivers/pci/probe.c | 41 +++++++++++++++++++++++++++++++++--------
> include/linux/pci.h | 4 ++++
> 2 files changed, 37 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> index 2c92662..abf5e82 100644
> --- a/drivers/pci/probe.c
> +++ b/drivers/pci/probe.c
> @@ -1748,8 +1748,9 @@ void __weak pcibios_remove_bus(struct pci_bus *bus)
> {
> }
>
> -struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
> - struct pci_ops *ops, void *sysdata, struct list_head *resources)
> +struct pci_bus *pci_create_root_bus_in_domain(struct device *parent,
> + int domain, int bus, struct pci_ops *ops, void *sysdata,
> + struct list_head *resources)
I don't think we should do it this way; this makes it possible to have a
host bridge where "bridge->domain_nr != pci_domain_nr(bridge->bus)".
I wonder if it would help to make a weak pci_domain_nr() function that
returns "bridge->domain_nr". Then each arch could individually drop its
pci_domain_nr() definition as it was converted, e.g., something like this:
- Convert every arch pci_domain_nr() from a #define to a non-inline
function
- Add bridge.domain_nr, initialized from pci_domain_nr()
- Add a weak generic pci_domain_nr() that returns bridge.domain_nr
- Add a way to create a host bridge in a specified domain, so we can
initialize bridge.domain_nr without using pci_domain_nr()
- Convert each arch to use the new creation mechanism and drop its
pci_domain_nr() implementation
> {
> int error;
> struct pci_host_bridge *bridge;
> @@ -1762,27 +1763,31 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
>
> bridge = pci_alloc_host_bridge();
> if (!bridge)
> - return NULL;
> + return ERR_PTR(-ENOMEM);
>
> bridge->dev.parent = parent;
> bridge->dev.release = pci_release_host_bridge_dev;
> + bridge->domain_nr = domain;
>
> b = pci_alloc_bus();
> - if (!b)
> + if (!b) {
> + error = -ENOMEM;
> goto err_out;
> + }
>
> b->sysdata = sysdata;
> b->ops = ops;
> b->number = b->busn_res.start = bus;
> - b2 = pci_find_bus(pci_domain_nr(b), bus);
> + b2 = pci_find_bus(bridge->domain_nr, bus);
> if (b2) {
> /* If we already got to this bus through a different bridge, ignore it */
> dev_dbg(&b2->dev, "bus already known\n");
> + error = -EEXIST;
> goto err_bus_out;
> }
>
> bridge->bus = b;
> - dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(b), bus);
> + dev_set_name(&bridge->dev, "pci%04x:%02x", bridge->domain_nr, bus);
> error = pcibios_root_bridge_prepare(bridge);
> if (error)
> goto err_out;
> @@ -1801,7 +1806,7 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
>
> b->dev.class = &pcibus_class;
> b->dev.parent = b->bridge;
> - dev_set_name(&b->dev, "%04x:%02x", pci_domain_nr(b), bus);
> + dev_set_name(&b->dev, "%04x:%02x", bridge->domain_nr, bus);
> error = device_register(&b->dev);
> if (error)
> goto class_dev_reg_err;
> @@ -1851,7 +1856,27 @@ err_bus_out:
> kfree(b);
> err_out:
> kfree(bridge);
> - return NULL;
> + return ERR_PTR(error);
> +}
> +
> +struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
> + struct pci_ops *ops, void *sysdata, struct list_head *resources)
> +{
> + int domain_nr;
> + struct pci_bus *b = pci_alloc_bus();
> + if (!b)
> + return NULL;
> +
> + b->sysdata = sysdata;
> + domain_nr = pci_domain_nr(b);
> + kfree(b);
> +
> + b = pci_create_root_bus_in_domain(parent, domain_nr, bus,
> + ops, sysdata, resources);
> + if (IS_ERR(b))
> + return NULL;
> +
> + return b;
> }
>
> int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int bus_max)
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 466bcd1..7e7b939 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -401,6 +401,7 @@ struct pci_host_bridge_window {
> struct pci_host_bridge {
> struct device dev;
> struct pci_bus *bus; /* root bus */
> + int domain_nr;
> struct list_head windows; /* pci_host_bridge_windows */
> void (*release_fn)(struct pci_host_bridge *);
> void *release_data;
> @@ -769,6 +770,9 @@ struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, void *sysdata);
> struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
> struct pci_ops *ops, void *sysdata,
> struct list_head *resources);
> +struct pci_bus *pci_create_root_bus_in_domain(struct device *parent,
> + int domain, int bus, struct pci_ops *ops,
> + void *sysdata, struct list_head *resources);
> int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int busmax);
> int pci_bus_update_busn_res_end(struct pci_bus *b, int busmax);
> void pci_bus_release_busn_res(struct pci_bus *b);
> --
> 2.0.0
>
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-08 0:59 ` Bjorn Helgaas
@ 2014-07-08 10:46 ` Liviu Dudau
2014-07-08 18:41 ` Bjorn Helgaas
0 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-08 10:46 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 08, 2014 at 01:59:54AM +0100, Bjorn Helgaas wrote:
> On Tue, Jul 01, 2014 at 07:43:31PM +0100, Liviu Dudau wrote:
> > Make it easier to discover the domain number of a bus by storing
> > the number in pci_host_bridge for the root bus. Several architectures
> > have their own way of storing this information, so it makes sense
> > to try to unify the code. While at this, add a new function that
> > creates a root bus in a given domain and make pci_create_root_bus()
> > a wrapper around this function.
>
> "While at this" is always a good clue that maybe something should be
> split into a separate patch :) This is a very good example, since it
> adds a new interface that deserves its own changelog.
Yes, I'm coming to the same conclusion. :)
>
> > Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> > Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> > ---
> > drivers/pci/probe.c | 41 +++++++++++++++++++++++++++++++++--------
> > include/linux/pci.h | 4 ++++
> > 2 files changed, 37 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> > index 2c92662..abf5e82 100644
> > --- a/drivers/pci/probe.c
> > +++ b/drivers/pci/probe.c
> > @@ -1748,8 +1748,9 @@ void __weak pcibios_remove_bus(struct pci_bus *bus)
> > {
> > }
> >
> > -struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
> > - struct pci_ops *ops, void *sysdata, struct list_head *resources)
> > +struct pci_bus *pci_create_root_bus_in_domain(struct device *parent,
> > + int domain, int bus, struct pci_ops *ops, void *sysdata,
> > + struct list_head *resources)
>
> I don't think we should do it this way; this makes it possible to have a
> host bridge where "bridge->domain_nr != pci_domain_nr(bridge->bus)".
>
> I wonder if it would help to make a weak pci_domain_nr() function that
> returns "bridge->domain_nr". Then each arch could individually drop its
> pci_domain_nr() definition as it was converted, e.g., something like this:
>
> - Convert every arch pci_domain_nr() from a #define to a non-inline
> function
> - Add bridge.domain_nr, initialized from pci_domain_nr()
> - Add a weak generic pci_domain_nr() that returns bridge.domain_nr
> - Add a way to create a host bridge in a specified domain, so we can
> initialize bridge.domain_nr without using pci_domain_nr()
> - Convert each arch to use the new creation mechanism and drop its
> pci_domain_nr() implementation
I will try to propose a patch implementing this.
Best regards,
Liviu
>
> > {
> > int error;
> > struct pci_host_bridge *bridge;
> > @@ -1762,27 +1763,31 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
> >
> > bridge = pci_alloc_host_bridge();
> > if (!bridge)
> > - return NULL;
> > + return ERR_PTR(-ENOMEM);
> >
> > bridge->dev.parent = parent;
> > bridge->dev.release = pci_release_host_bridge_dev;
> > + bridge->domain_nr = domain;
> >
> > b = pci_alloc_bus();
> > - if (!b)
> > + if (!b) {
> > + error = -ENOMEM;
> > goto err_out;
> > + }
> >
> > b->sysdata = sysdata;
> > b->ops = ops;
> > b->number = b->busn_res.start = bus;
> > - b2 = pci_find_bus(pci_domain_nr(b), bus);
> > + b2 = pci_find_bus(bridge->domain_nr, bus);
> > if (b2) {
> > /* If we already got to this bus through a different bridge, ignore it */
> > dev_dbg(&b2->dev, "bus already known\n");
> > + error = -EEXIST;
> > goto err_bus_out;
> > }
> >
> > bridge->bus = b;
> > - dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(b), bus);
> > + dev_set_name(&bridge->dev, "pci%04x:%02x", bridge->domain_nr, bus);
> > error = pcibios_root_bridge_prepare(bridge);
> > if (error)
> > goto err_out;
> > @@ -1801,7 +1806,7 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
> >
> > b->dev.class = &pcibus_class;
> > b->dev.parent = b->bridge;
> > - dev_set_name(&b->dev, "%04x:%02x", pci_domain_nr(b), bus);
> > + dev_set_name(&b->dev, "%04x:%02x", bridge->domain_nr, bus);
> > error = device_register(&b->dev);
> > if (error)
> > goto class_dev_reg_err;
> > @@ -1851,7 +1856,27 @@ err_bus_out:
> > kfree(b);
> > err_out:
> > kfree(bridge);
> > - return NULL;
> > + return ERR_PTR(error);
> > +}
> > +
> > +struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
> > + struct pci_ops *ops, void *sysdata, struct list_head *resources)
> > +{
> > + int domain_nr;
> > + struct pci_bus *b = pci_alloc_bus();
> > + if (!b)
> > + return NULL;
> > +
> > + b->sysdata = sysdata;
> > + domain_nr = pci_domain_nr(b);
> > + kfree(b);
> > +
> > + b = pci_create_root_bus_in_domain(parent, domain_nr, bus,
> > + ops, sysdata, resources);
> > + if (IS_ERR(b))
> > + return NULL;
> > +
> > + return b;
> > }
> >
> > int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int bus_max)
> > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > index 466bcd1..7e7b939 100644
> > --- a/include/linux/pci.h
> > +++ b/include/linux/pci.h
> > @@ -401,6 +401,7 @@ struct pci_host_bridge_window {
> > struct pci_host_bridge {
> > struct device dev;
> > struct pci_bus *bus; /* root bus */
> > + int domain_nr;
> > struct list_head windows; /* pci_host_bridge_windows */
> > void (*release_fn)(struct pci_host_bridge *);
> > void *release_data;
> > @@ -769,6 +770,9 @@ struct pci_bus *pci_scan_bus(int bus, struct pci_ops *ops, void *sysdata);
> > struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
> > struct pci_ops *ops, void *sysdata,
> > struct list_head *resources);
> > +struct pci_bus *pci_create_root_bus_in_domain(struct device *parent,
> > + int domain, int bus, struct pci_ops *ops,
> > + void *sysdata, struct list_head *resources);
> > int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int busmax);
> > int pci_bus_update_busn_res_end(struct pci_bus *b, int busmax);
> > void pci_bus_release_busn_res(struct pci_bus *b);
> > --
> > 2.0.0
> >
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-08 10:46 ` Liviu Dudau
@ 2014-07-08 18:41 ` Bjorn Helgaas
2014-07-08 22:48 ` Liviu Dudau
0 siblings, 1 reply; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-08 18:41 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 8, 2014 at 4:46 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> On Tue, Jul 08, 2014 at 01:59:54AM +0100, Bjorn Helgaas wrote:
>> I wonder if it would help to make a weak pci_domain_nr() function that
>> returns "bridge->domain_nr". Then each arch could individually drop its
>> pci_domain_nr() definition as it was converted, e.g., something like this:
>>
>> - Convert every arch pci_domain_nr() from a #define to a non-inline
>> function
>> - Add bridge.domain_nr, initialized from pci_domain_nr()
>> - Add a weak generic pci_domain_nr() that returns bridge.domain_nr
>> - Add a way to create a host bridge in a specified domain, so we can
>> initialize bridge.domain_nr without using pci_domain_nr()
>> - Convert each arch to use the new creation mechanism and drop its
>> pci_domain_nr() implementation
>
> I will try to propose a patch implementing this.
I think this is more of an extra credit, cleanup sort of thing. I
don't think it advances your primary goal of (I think) getting arm64
PCI support in. So my advice is to not worry about unifying domain
handling until later.
Bjorn
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-08 18:41 ` Bjorn Helgaas
@ 2014-07-08 22:48 ` Liviu Dudau
2014-07-09 15:10 ` Bjorn Helgaas
0 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-08 22:48 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 08, 2014 at 07:41:50PM +0100, Bjorn Helgaas wrote:
> On Tue, Jul 8, 2014 at 4:46 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> > On Tue, Jul 08, 2014 at 01:59:54AM +0100, Bjorn Helgaas wrote:
>
> >> I wonder if it would help to make a weak pci_domain_nr() function that
> >> returns "bridge->domain_nr". Then each arch could individually drop its
> >> pci_domain_nr() definition as it was converted, e.g., something like this:
> >>
> >> - Convert every arch pci_domain_nr() from a #define to a non-inline
> >> function
> >> - Add bridge.domain_nr, initialized from pci_domain_nr()
> >> - Add a weak generic pci_domain_nr() that returns bridge.domain_nr
> >> - Add a way to create a host bridge in a specified domain, so we can
> >> initialize bridge.domain_nr without using pci_domain_nr()
> >> - Convert each arch to use the new creation mechanism and drop its
> >> pci_domain_nr() implementation
> >
> > I will try to propose a patch implementing this.
>
> I think this is more of an extra credit, cleanup sort of thing. I
> don't think it advances your primary goal of (I think) getting arm64
> PCI support in. So my advice is to not worry about unifying domain
> handling until later.
Getting arm64 supported *is* my main goal. But like you have stated in your
review of v7, you wanted to see another architecture converted as a guarantee
of "genericity" (for lack of a better word) for my patches. The one architecture
I've set my eyes on is microblaze, and that one uses pci_scan_root_bus()
rather than pci_create_root_bus() so I don't have any opportunity to pass the
domain number or any additional info (like the sysdata pointer that we were
talking about) to the pci_host_bridge structure unless I do this cleanup.
Best regards,
Liviu
>
> Bjorn
>
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-08 22:48 ` Liviu Dudau
@ 2014-07-09 15:10 ` Bjorn Helgaas
2014-07-10 9:47 ` Liviu Dudau
0 siblings, 1 reply; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-09 15:10 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 8, 2014 at 4:48 PM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> On Tue, Jul 08, 2014 at 07:41:50PM +0100, Bjorn Helgaas wrote:
>> On Tue, Jul 8, 2014 at 4:46 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
>> > On Tue, Jul 08, 2014 at 01:59:54AM +0100, Bjorn Helgaas wrote:
>>
>> >> I wonder if it would help to make a weak pci_domain_nr() function that
>> >> returns "bridge->domain_nr". Then each arch could individually drop its
>> >> pci_domain_nr() definition as it was converted, e.g., something like this:
>> >>
>> >> - Convert every arch pci_domain_nr() from a #define to a non-inline
>> >> function
>> >> - Add bridge.domain_nr, initialized from pci_domain_nr()
>> >> - Add a weak generic pci_domain_nr() that returns bridge.domain_nr
>> >> - Add a way to create a host bridge in a specified domain, so we can
>> >> initialize bridge.domain_nr without using pci_domain_nr()
>> >> - Convert each arch to use the new creation mechanism and drop its
>> >> pci_domain_nr() implementation
>> >
>> > I will try to propose a patch implementing this.
>>
>> I think this is more of an extra credit, cleanup sort of thing. I
>> don't think it advances your primary goal of (I think) getting arm64
>> PCI support in. So my advice is to not worry about unifying domain
>> handling until later.
>
> Getting arm64 supported *is* my main goal. But like you have stated in your
> review of v7, you wanted to see another architecture converted as a guarantee
> of "genericity" (for lack of a better word) for my patches. The one architecture
> I've set my eyes on is microblaze, and that one uses pci_scan_root_bus()
> rather than pci_create_root_bus() so I don't have any opportunity to pass the
> domain number or any additional info (like the sysdata pointer that we were
> talking about) to the pci_host_bridge structure unless I do this cleanup.
I think maybe I was too harsh about that, or maybe we had different
ideas about what "conversion" involved. My comment was in response to
"pci: Introduce pci_register_io_range() helper function", and I don't
remember why I was concerned about that; it's not even in drivers/pci,
and it doesn't have an obvious connection to putting the domain number
in struct pci_host_bridge.
The thing I'm more concerned about is adding new PCI interfaces, e.g.,
pci_create_root_bus_in_domain(), that are only used by one
architecture. Then it's hard to be sure that it's going to be useful
for other arches. If you can add arm64 using the existing PCI
interfaces, I don't see any problem with that.
Bjorn
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-09 15:10 ` Bjorn Helgaas
@ 2014-07-10 9:47 ` Liviu Dudau
2014-07-10 22:36 ` Bjorn Helgaas
0 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-10 9:47 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Wed, Jul 09, 2014 at 04:10:04PM +0100, Bjorn Helgaas wrote:
> On Tue, Jul 8, 2014 at 4:48 PM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> > On Tue, Jul 08, 2014 at 07:41:50PM +0100, Bjorn Helgaas wrote:
> >> On Tue, Jul 8, 2014 at 4:46 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> >> > On Tue, Jul 08, 2014 at 01:59:54AM +0100, Bjorn Helgaas wrote:
> >>
> >> >> I wonder if it would help to make a weak pci_domain_nr() function that
> >> >> returns "bridge->domain_nr". Then each arch could individually drop its
> >> >> pci_domain_nr() definition as it was converted, e.g., something like this:
> >> >>
> >> >> - Convert every arch pci_domain_nr() from a #define to a non-inline
> >> >> function
> >> >> - Add bridge.domain_nr, initialized from pci_domain_nr()
> >> >> - Add a weak generic pci_domain_nr() that returns bridge.domain_nr
> >> >> - Add a way to create a host bridge in a specified domain, so we can
> >> >> initialize bridge.domain_nr without using pci_domain_nr()
> >> >> - Convert each arch to use the new creation mechanism and drop its
> >> >> pci_domain_nr() implementation
> >> >
> >> > I will try to propose a patch implementing this.
> >>
> >> I think this is more of an extra credit, cleanup sort of thing. I
> >> don't think it advances your primary goal of (I think) getting arm64
> >> PCI support in. So my advice is to not worry about unifying domain
> >> handling until later.
> >
> > Getting arm64 supported *is* my main goal. But like you have stated in your
> > review of v7, you wanted to see another architecture converted as a guarantee
> > of "genericity" (for lack of a better word) for my patches. The one architecture
> > I've set my eyes on is microblaze, and that one uses pci_scan_root_bus()
> > rather than pci_create_root_bus() so I don't have any opportunity to pass the
> > domain number or any additional info (like the sysdata pointer that we were
> > talking about) to the pci_host_bridge structure unless I do this cleanup.
>
> I think maybe I was too harsh about that, or maybe we had different
> ideas about what "conversion" involved. My comment was in response to
> "pci: Introduce pci_register_io_range() helper function", and I don't
> remember why I was concerned about that; it's not even in drivers/pci,
> and it doesn't have an obvious connection to putting the domain number
> in struct pci_host_bridge.
Well, to be honest I did move some of the code (as mentioned in the Changelog) from
drivers/pci into drivers/of. It makes more sense to be in OF, as it mostly concerns
architectures that use it.
>
> The thing I'm more concerned about is adding new PCI interfaces, e.g.,
> pci_create_root_bus_in_domain(), that are only used by one
> architecture. Then it's hard to be sure that it's going to be useful
> for other arches. If you can add arm64 using the existing PCI
> interfaces, I don't see any problem with that.
(No blame here or reproaches, I'm just restating the situation:) I (mostly) did try
that in my v7 series but it also got NAK-ed by Arnd and Catalin as it had too much
arm64 specific code in there.
I don't see a way out of adding new PCI interfaces if we want to have support in
the PCI framework for unifying existing architectures. Of course, there is the painful
alternative of changing the existing APIs and fixing arches in one go, but like you've
said is going to be messy. I don't think I (or the people and companies wanting PCIe
on arm64) should cop out and pick a quick fix that adds sysdata structure into arm64
just to avoid new APIs, as this is not going to help anyone in long term. What I can
do is to create a set of parallel APIs for pci_{scan,create}_root_bus() that take
a pci_host_bridge pointer and start converting architectures one by one to that API
while deprecating the existing one. That way we can add arm64 easily as it would be
the first architecture to use new code without breaking things *and* we provide a
migration path.
Best regards,
Liviu
>
> Bjorn
>
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-10 9:47 ` Liviu Dudau
@ 2014-07-10 22:36 ` Bjorn Helgaas
[not found] ` <CAErSpo5ZNCY31NztT0cLFbRVsBZqMsZ0LbTFNOTQCZUU3F7qJA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-07-11 14:11 ` Catalin Marinas
0 siblings, 2 replies; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-10 22:36 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Thu, Jul 10, 2014 at 3:47 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> I don't see a way out of adding new PCI interfaces if we want to have support in
> the PCI framework for unifying existing architectures. Of course, there is the painful
> alternative of changing the existing APIs and fixing arches in one go, but like you've
> said is going to be messy. I don't think I (or the people and companies wanting PCIe
> on arm64) should cop out and pick a quick fix that adds sysdata structure into arm64
> just to avoid new APIs, as this is not going to help anyone in long term. What I can
> do is to create a set of parallel APIs for pci_{scan,create}_root_bus() that take
> a pci_host_bridge pointer and start converting architectures one by one to that API
> while deprecating the existing one. That way we can add arm64 easily as it would be
> the first architecture to use new code without breaking things *and* we provide a
> migration path.
A lot of the v7 discussion was about pci_register_io_range(). I
apologize, because I think I really derailed things there and it was
unwarranted. Arnd was right that migrating other arches should be a
separate effort. I *think* I was probably thinking about the proposal
of adding pci_create_root_bus_in_domain(), and my reservations about
that got transferred to the pci_register_io_range() discussion. In
any case, I'm completely fine with pci_register_io_range() now.
Most of the rest of the v7 discussion was about "Introduce a domain
number for pci_host_bridge." I think we should add arm64 using the
existing pci_scan_root_bus() and keep the domain number in the arm64
sysdata structure like every other arch does. Isn't that feasible?
We can worry about domain unification later.
I haven't followed closely enough to know what other objections people had.
Bjorn
^ permalink raw reply [flat|nested] 92+ messages in thread
[parent not found: <CAErSpo5ZNCY31NztT0cLFbRVsBZqMsZ0LbTFNOTQCZUU3F7qJA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>]
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
[not found] ` <CAErSpo5ZNCY31NztT0cLFbRVsBZqMsZ0LbTFNOTQCZUU3F7qJA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-07-11 9:30 ` Liviu Dudau
0 siblings, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-11 9:30 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Thu, Jul 10, 2014 at 11:36:10PM +0100, Bjorn Helgaas wrote:
> On Thu, Jul 10, 2014 at 3:47 AM, Liviu Dudau <Liviu.Dudau-5wv7dgnIgG8@public.gmane.org> wrote:
>
> > I don't see a way out of adding new PCI interfaces if we want to have support in
> > the PCI framework for unifying existing architectures. Of course, there is the painful
> > alternative of changing the existing APIs and fixing arches in one go, but like you've
> > said is going to be messy. I don't think I (or the people and companies wanting PCIe
> > on arm64) should cop out and pick a quick fix that adds sysdata structure into arm64
> > just to avoid new APIs, as this is not going to help anyone in long term. What I can
> > do is to create a set of parallel APIs for pci_{scan,create}_root_bus() that take
> > a pci_host_bridge pointer and start converting architectures one by one to that API
> > while deprecating the existing one. That way we can add arm64 easily as it would be
> > the first architecture to use new code without breaking things *and* we provide a
> > migration path.
>
> A lot of the v7 discussion was about pci_register_io_range(). I
> apologize, because I think I really derailed things there and it was
> unwarranted. Arnd was right that migrating other arches should be a
> separate effort. I *think* I was probably thinking about the proposal
> of adding pci_create_root_bus_in_domain(), and my reservations about
> that got transferred to the pci_register_io_range() discussion. In
> any case, I'm completely fine with pci_register_io_range() now.
>
> Most of the rest of the v7 discussion was about "Introduce a domain
> number for pci_host_bridge." I think we should add arm64 using the
> existing pci_scan_root_bus() and keep the domain number in the arm64
> sysdata structure like every other arch does. Isn't that feasible?
> We can worry about domain unification later.
Thanks!
I'm really not that keen to add sysdata support in the arch code, as it
requires initialisation code that I have tried to eliminate. What I'm
going to suggest for my v9 is a parallel set of APIs that arm64 will
be the first to use, leaving the existing pci_{scan,create}_root_bus()
functions unchanged; the conversion process can then migrate arches to the new API.
Best regards,
Liviu
>
> I haven't followed closely enough to know what other objections people had.
>
> Bjorn
>
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-10 22:36 ` Bjorn Helgaas
[not found] ` <CAErSpo5ZNCY31NztT0cLFbRVsBZqMsZ0LbTFNOTQCZUU3F7qJA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-07-11 14:11 ` Catalin Marinas
2014-07-11 15:08 ` Liviu Dudau
2014-07-11 17:02 ` Bjorn Helgaas
1 sibling, 2 replies; 92+ messages in thread
From: Catalin Marinas @ 2014-07-11 14:11 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: Liviu Dudau, linux-pci, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Thu, Jul 10, 2014 at 11:36:10PM +0100, Bjorn Helgaas wrote:
> Most of the rest of the v7 discussion was about "Introduce a domain
> number for pci_host_bridge." I think we should add arm64 using the
> existing pci_scan_root_bus() and keep the domain number in the arm64
> sysdata structure like every other arch does. Isn't that feasible?
> We can worry about domain unification later.
I think that's what we were trying to avoid, adding an arm64-specific
pci_sys_data structure (and arm64-specific API). IIUC, avoiding this
would allow the host controller drivers to use the sysdata pointer for
their own private data structures.
Also since you can specify the domain number via DT (and in Liviu's
v8 patches read by of_create_pci_host_bridge), I think it would make
sense to have it stored in some generic data structures (e.g.
pci_host_bridge) rather than in an arm64 private sysdata.
(Liviu is thinking of an alternative API but maybe he could briefly
describe it here before posting a new series)
--
Catalin
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-11 14:11 ` Catalin Marinas
@ 2014-07-11 15:08 ` Liviu Dudau
2014-07-11 16:09 ` Catalin Marinas
2014-07-11 17:02 ` Bjorn Helgaas
1 sibling, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-11 15:08 UTC (permalink / raw)
To: Catalin Marinas
Cc: Bjorn Helgaas, linux-pci, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Fri, Jul 11, 2014 at 03:11:16PM +0100, Catalin Marinas wrote:
> On Thu, Jul 10, 2014 at 11:36:10PM +0100, Bjorn Helgaas wrote:
> > Most of the rest of the v7 discussion was about "Introduce a domain
> > number for pci_host_bridge." I think we should add arm64 using the
> > existing pci_scan_root_bus() and keep the domain number in the arm64
> > sysdata structure like every other arch does. Isn't that feasible?
> > We can worry about domain unification later.
>
> I think that's what we were trying to avoid, adding an arm64-specific
> pci_sys_data structure (and arm64-specific API). IIUC, avoiding this
> would allow the host controller drivers to use the sysdata pointer for
> their own private data structures.
>
> Also since you can specify the domain number via DT (and in Liviu's
> v8 patches read by of_create_pci_host_bridge), I think it would make
> sense to have it stored in some generic data structures (e.g.
> pci_host_bridge) rather than in an arm64 private sysdata.
>
> (Liviu is thinking of an alternative API but maybe he could briefly
> describe it here before posting a new series)
>
> --
> Catalin
My plan is to keep the domain number in the pci_host_bridge and split
the creation of the pci_host_bridge out of the pci_create_root_bus().
The new function (tentatively called pci_create_new_root_bus()) will
no longer call pci_alloc_host_bridge() but will accept a
pci_host_bridge as a parameter, allowing callers to set the domain_nr
ahead of the root bus creation.
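In code terms, the proposed split might look roughly like this (a sketch
only; pci_create_new_root_bus() was a tentative name and the final
argument list was still open at this point):

```c
/* Sketch of the proposed split -- hypothetical, not merged code.
 * The caller allocates and initialises the bridge (including
 * domain_nr) before the root bus is created from it. */
bridge = pci_alloc_host_bridge(parent, &res);
if (!bridge)
	return ERR_PTR(-ENOMEM);
bridge->domain_nr = domain;

root_bus = pci_create_new_root_bus(bridge, busno, ops, host_data);
```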
Best regards,
Liviu
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-11 15:08 ` Liviu Dudau
@ 2014-07-11 16:09 ` Catalin Marinas
0 siblings, 0 replies; 92+ messages in thread
From: Catalin Marinas @ 2014-07-11 16:09 UTC (permalink / raw)
To: Liviu Dudau
Cc: Bjorn Helgaas, linux-pci, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Fri, Jul 11, 2014 at 04:08:23PM +0100, Liviu Dudau wrote:
> On Fri, Jul 11, 2014 at 03:11:16PM +0100, Catalin Marinas wrote:
> > On Thu, Jul 10, 2014 at 11:36:10PM +0100, Bjorn Helgaas wrote:
> > > Most of the rest of the v7 discussion was about "Introduce a domain
> > > number for pci_host_bridge." I think we should add arm64 using the
> > > existing pci_scan_root_bus() and keep the domain number in the arm64
> > > sysdata structure like every other arch does. Isn't that feasible?
> > > We can worry about domain unification later.
> >
> > I think that's what we were trying to avoid, adding an arm64-specific
> > pci_sys_data structure (and arm64-specific API). IIUC, avoiding this
> > would allow the host controller drivers to use the sysdata pointer for
> > their own private data structures.
> >
> > Also since you can specify the domain number via DT (and in Liviu's
> > v8 patches read by of_create_pci_host_bridge), I think it would make
> > sense to have it stored in some generic data structures (e.g.
> > pci_host_bridge) rather than in an arm64 private sysdata.
> >
> > (Liviu is thinking of an alternative API but maybe he could briefly
> > describe it here before posting a new series)
>
> My plan is to keep the domain number in the pci_host_bridge and split
> the creation of the pci_host_bridge out of the pci_create_root_bus().
Wouldn't it make more sense to add domain_nr to the pci_bus structure
(well, only needed for the root bus)? It would also simplify
pci_domain_nr(), which only takes a pci_bus parameter.
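The simplification being suggested can be demonstrated standalone (a
sketch with a minimal stand-in for the kernel's struct pci_bus, which
did not carry a domain_nr field at the time):

```c
#include <assert.h>

/* Minimal stand-in for the kernel structure under discussion.
 * domain_nr here is hypothetical: today the domain number lives in
 * arch-specific sysdata reached through the bus. */
struct pci_bus {
	int domain_nr;
};

/* With the domain stored in pci_bus itself, pci_domain_nr() needs no
 * arch-specific sysdata lookup at all. */
static inline int pci_domain_nr(struct pci_bus *bus)
{
	return bus->domain_nr;
}
```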
> The new function (tentatively called pci_create_new_root_bus()) will
> no longer call pci_alloc_host_bridge() but will accept it as a
> parameter, allowing one to be able to set the domain_nr ahead of the
> root bus creation.
If we place domain_nr in pci_bus, this split wouldn't help but we still
need your original pci_create_root_bus_in_domain(). Are there other uses
of your proposal above?
Yet another alternative is to ignore PCI domains altogether (domain 0
always).
--
Catalin
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-11 14:11 ` Catalin Marinas
2014-07-11 15:08 ` Liviu Dudau
@ 2014-07-11 17:02 ` Bjorn Helgaas
2014-07-11 18:02 ` Catalin Marinas
1 sibling, 1 reply; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-11 17:02 UTC (permalink / raw)
To: Catalin Marinas
Cc: Liviu Dudau, linux-pci, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Fri, Jul 11, 2014 at 8:11 AM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> On Thu, Jul 10, 2014 at 11:36:10PM +0100, Bjorn Helgaas wrote:
>> Most of the rest of the v7 discussion was about "Introduce a domain
>> number for pci_host_bridge." I think we should add arm64 using the
>> existing pci_scan_root_bus() and keep the domain number in the arm64
>> sysdata structure like every other arch does. Isn't that feasible?
>> We can worry about domain unification later.
>
> I think that's what we were trying to avoid, adding an arm64-specific
> pci_sys_data structure (and arm64-specific API). IIUC, avoiding this
> would allow the host controller drivers to use the sysdata pointer for
> their own private data structures.
>
> Also since you can specify the domain number via DT (and in Liviu's
> v8 patches read by of_create_pci_host_bridge), I think it would make
> sense to have it stored in some generic data structures (e.g.
> pci_host_bridge) rather than in an arm64 private sysdata.
It would definitely be nice to keep the domain in a generic data
structure rather than an arm64-specific one. But every other arch
keeps it in an arch-specific structure today, and I think following
that existing pattern is the quickest way forward.
But you mentioned arm64-specific API, too. What do you have in mind
there? I know there will be arm64 implementations of various
pcibios_*() functions (just like for every other arch), but it sounds
like there might be something else?
Bjorn
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge.
2014-07-11 17:02 ` Bjorn Helgaas
@ 2014-07-11 18:02 ` Catalin Marinas
[not found] ` <20140711180151.GH16321-5wv7dgnIgG8@public.gmane.org>
0 siblings, 1 reply; 92+ messages in thread
From: Catalin Marinas @ 2014-07-11 18:02 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: Liviu Dudau, linux-pci, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Fri, Jul 11, 2014 at 06:02:56PM +0100, Bjorn Helgaas wrote:
> On Fri, Jul 11, 2014 at 8:11 AM, Catalin Marinas
> <catalin.marinas@arm.com> wrote:
> > On Thu, Jul 10, 2014 at 11:36:10PM +0100, Bjorn Helgaas wrote:
> >> Most of the rest of the v7 discussion was about "Introduce a domain
> >> number for pci_host_bridge." I think we should add arm64 using the
> >> existing pci_scan_root_bus() and keep the domain number in the arm64
> >> sysdata structure like every other arch does. Isn't that feasible?
> >> We can worry about domain unification later.
> >
> > I think that's what we were trying to avoid, adding an arm64-specific
> > pci_sys_data structure (and arm64-specific API). IIUC, avoiding this
> > would allow the host controller drivers to use the sysdata pointer for
> > their own private data structures.
> >
> > Also since you can specify the domain number via DT (and in Liviu's
> > v8 patches read by of_create_pci_host_bridge), I think it would make
> > sense to have it stored in some generic data structures (e.g.
> > pci_host_bridge) rather than in an arm64 private sysdata.
>
> It would definitely be nice to keep the domain in a generic data
> structure rather than an arm64-specific one. But every other arch
> keeps it in an arch-specific structure today, and I think following
> that existing pattern is the quickest way forward.
In this case we end up with an arm64-specific struct pci_sys_data and I
assume some API that takes care of this data structure to populate the
domain nr.
In Liviu's implementation, of_create_pci_host_bridge() is called by the
host controller driver directly and reads the domain_nr from the DT. It
also gets a void *host_data which it stores as sysdata in the pci_bus
structure (but that's specific to the host controller driver rather than
arm64). Since sysdata is opaque to of_create_pci_host_bridge(), it
cannot set the domain_nr.
If we go for arm64 sysdata, the host controller driver would have to
call some arm64 pci* function (maybe with its own private data as
argument) which would have to parse the DT, assign the domain nr and
eventually call pci_create_root_bus(). But it means that a significant
part of of_create_pci_host_bridge() would be moved under arch/arm64 (an
alternative is for each host driver to implement its own
of_create_pci_host_bridge()).
In addition, host drivers would retrieve their private data from the
arm64 specific sysdata structure and we were trying to make host drivers
depend only on generic data structures and API rather than arch
specific.
> But you mentioned arm64-specific API, too. What do you have in mind
> there? I know there will be arm64 implementations of various
> pcibios_*() functions (just like for every other arch), but it sounds
> like there might be something else?
Not much left which is arm64 specific, just some callbacks:
http://linux-arm.org/git?p=linux-ld.git;a=commitdiff;h=82ebbce34506676528b9a7ae8f8fbc84b6b6248e
AFAICT, there isn't anything that host drivers need to call directly.
Basically we want to decouple the PCI host driver model from the arch
specific data structures/API.
--
Catalin
^ permalink raw reply [flat|nested] 92+ messages in thread
* [PATCH v8 7/9] pci: of: Parse and map the IRQ when adding the PCI device.
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
` (5 preceding siblings ...)
2014-07-01 18:43 ` [PATCH v8 6/9] pci: Introduce a domain number for pci_host_bridge Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
2014-07-02 11:17 ` Will Deacon
2014-07-01 18:43 ` [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree Liviu Dudau
` (2 subsequent siblings)
9 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: Device Tree ML, LKML, LAKML
Enhance the default implementation of pcibios_add_device() to
parse and map the IRQ of the device if a DT binding is available.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
---
drivers/pci/pci.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 63a54a3..8e65dc3 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -17,6 +17,7 @@
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/log2.h>
+#include <linux/of_pci.h>
#include <linux/pci-aspm.h>
#include <linux/pm_wakeup.h>
#include <linux/interrupt.h>
@@ -1453,6 +1454,9 @@ EXPORT_SYMBOL(pcim_pin_device);
*/
int __weak pcibios_add_device(struct pci_dev *dev)
{
+#ifdef CONFIG_OF
+ dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+#endif
return 0;
}
--
2.0.0
^ permalink raw reply related [flat|nested] 92+ messages in thread
* Re: [PATCH v8 7/9] pci: of: Parse and map the IRQ when adding the PCI device.
2014-07-01 18:43 ` [PATCH v8 7/9] pci: of: Parse and map the IRQ when adding the PCI device Liviu Dudau
@ 2014-07-02 11:17 ` Will Deacon
[not found] ` <20140702111748.GJ18731-5wv7dgnIgG8@public.gmane.org>
0 siblings, 1 reply; 92+ messages in thread
From: Will Deacon @ 2014-07-02 11:17 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 01, 2014 at 07:43:32PM +0100, Liviu Dudau wrote:
> Enhance the default implementation of pcibios_add_device() to
> parse and map the IRQ of the device if a DT binding is available.
>
> Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> ---
> drivers/pci/pci.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 63a54a3..8e65dc3 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -17,6 +17,7 @@
> #include <linux/spinlock.h>
> #include <linux/string.h>
> #include <linux/log2.h>
> +#include <linux/of_pci.h>
> #include <linux/pci-aspm.h>
> #include <linux/pm_wakeup.h>
> #include <linux/interrupt.h>
> @@ -1453,6 +1454,9 @@ EXPORT_SYMBOL(pcim_pin_device);
> */
> int __weak pcibios_add_device(struct pci_dev *dev)
> {
> +#ifdef CONFIG_OF
> + dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
> +#endif
You could avoid the #ifdef by only assigning dev->irq if
of_irq_parse_and_map_pci returned something > 0.
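The suggestion can be shown standalone (a sketch with stand-in types;
the stub mimics the !CONFIG_OF behaviour, which I believe returns 0 for
"no mapping found"):

```c
#include <assert.h>

/* Minimal stand-in so the pattern can be compiled on its own; in the
 * kernel this is struct pci_dev. */
struct pci_dev {
	unsigned int irq;
};

/* Hypothetical stand-in for of_irq_parse_and_map_pci(): returns 0
 * when no DT mapping exists, > 0 on success. */
static int parse_and_map_irq(const struct pci_dev *dev)
{
	(void)dev;
	return 0; /* simulate "no mapping found" */
}

/* The suggested shape: assign dev->irq only on success, so no #ifdef
 * is needed around the call. */
static int pcibios_add_device_sketch(struct pci_dev *dev)
{
	int irq = parse_and_map_irq(dev);

	if (irq > 0)
		dev->irq = irq;
	return 0;
}
```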
Will
^ permalink raw reply [flat|nested] 92+ messages in thread
* [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
` (6 preceding siblings ...)
2014-07-01 18:43 ` [PATCH v8 7/9] pci: of: Parse and map the IRQ when adding the PCI device Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
[not found] ` <1404240214-9804-9-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
` (3 more replies)
2014-07-01 18:43 ` [PATCH v8 9/9] pci: Remap I/O bus resources into CPU space with pci_remap_iospace() Liviu Dudau
[not found] ` <1404240214-9804-1-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
9 siblings, 4 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: Device Tree ML, LKML, LAKML
Several platforms use a rather generic version of parsing
the device tree to find the host bridge ranges. Move the common code
into the generic PCI code and use it to create a pci_host_bridge
structure that can be used by arch code.
Based on early attempts by Andrew Murray to unify the code.
Used powerpc and microblaze PCI code as starting point.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Tested-by: Tanmay Inamdar <tinamdar@apm.com>
---
drivers/of/of_pci.c | 135 ++++++++++++++++++++++++++++++++++++++++++++++
drivers/pci/host-bridge.c | 18 +++++++
include/linux/of_pci.h | 10 ++++
include/linux/pci.h | 8 +++
4 files changed, 171 insertions(+)
diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
index 8481996..55d8320 100644
--- a/drivers/of/of_pci.c
+++ b/drivers/of/of_pci.c
@@ -89,6 +89,141 @@ int of_pci_parse_bus_range(struct device_node *node, struct resource *res)
}
EXPORT_SYMBOL_GPL(of_pci_parse_bus_range);
+/**
+ * pci_host_bridge_of_get_ranges - Parse PCI host bridge resources from DT
+ * @dev: device node of the host bridge having the range property
+ * @resources: list where the range of resources will be added after DT parsing
+ * @io_base: pointer to a variable that will contain the physical address for
+ * the start of the I/O range.
+ *
+ * It is the caller's job to free the @resources list if an error is returned.
+ *
+ * This function will parse the "ranges" property of a PCI host bridge device
+ * node and setup the resource mapping based on its content. It is expected
+ * that the property conforms with the Power ePAPR document.
+ *
+ * Each architecture is then offered the chance to apply its own
+ * filtering of pci_host_bridge_windows based on its own restrictions by
+ * calling pcibios_fixup_bridge_ranges(). The filtered list of windows
+ * can then be used when creating a pci_host_bridge structure.
+ */
+static int pci_host_bridge_of_get_ranges(struct device_node *dev,
+ struct list_head *resources, resource_size_t *io_base)
+{
+ struct resource *res;
+ struct of_pci_range range;
+ struct of_pci_range_parser parser;
+ int err;
+
+ pr_info("PCI host bridge %s ranges:\n", dev->full_name);
+
+ /* Check for ranges property */
+ err = of_pci_range_parser_init(&parser, dev);
+ if (err)
+ return err;
+
+ pr_debug("Parsing ranges property...\n");
+ for_each_of_pci_range(&parser, &range) {
+ /* Read next ranges element */
+ pr_debug("pci_space: 0x%08x pci_addr:0x%016llx cpu_addr:0x%016llx size:0x%016llx\n",
+ range.pci_space, range.pci_addr, range.cpu_addr, range.size);
+
+ /*
+ * If we failed translation or got a zero-sized region
+ * then skip this range
+ */
+ if (range.cpu_addr == OF_BAD_ADDR || range.size == 0)
+ continue;
+
+ res = kzalloc(sizeof(struct resource), GFP_KERNEL);
+ if (!res)
+ return -ENOMEM;
+
+ err = of_pci_range_to_resource(&range, dev, res);
+ if (err)
+ return err;
+
+ if (resource_type(res) == IORESOURCE_IO)
+ *io_base = range.cpu_addr;
+
+ pci_add_resource_offset(resources, res,
+ res->start - range.pci_addr);
+ }
+
+ /* Apply architecture specific fixups for the ranges */
+ return pcibios_fixup_bridge_ranges(resources);
+}
+
+static atomic_t domain_nr = ATOMIC_INIT(-1);
+
+/**
+ * of_create_pci_host_bridge - Create a PCI host bridge structure using
+ * information passed in the DT.
+ * @parent: device owning this host bridge
+ * @ops: pci_ops associated with the host controller
+ * @host_data: opaque data structure used by the host controller.
+ *
+ * Returns a pointer to the newly created pci_host_bridge structure, or
+ * an ERR_PTR()-encoded error if the call failed.
+ *
+ * This function will try to obtain the host bridge domain number by
+ * using of_alias_get_id() call with "pci-domain" as a stem. If that
+ * fails, a local allocator will be used that will put each host bridge
+ * in a new domain.
+ */
+struct pci_host_bridge *
+of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops, void *host_data)
+{
+ int err, domain, busno;
+ struct resource *bus_range;
+ struct pci_bus *root_bus;
+ struct pci_host_bridge *bridge;
+ resource_size_t io_base;
+ LIST_HEAD(res);
+
+ bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
+ if (!bus_range)
+ return ERR_PTR(-ENOMEM);
+
+ domain = of_alias_get_id(parent->of_node, "pci-domain");
+ if (domain == -ENODEV)
+ domain = atomic_inc_return(&domain_nr);
+
+ err = of_pci_parse_bus_range(parent->of_node, bus_range);
+ if (err) {
+ dev_info(parent, "No bus range for %s, using default [0-255]\n",
+ parent->of_node->full_name);
+ bus_range->start = 0;
+ bus_range->end = 255;
+ bus_range->flags = IORESOURCE_BUS;
+ }
+ busno = bus_range->start;
+ pci_add_resource(&res, bus_range);
+
+ /* now parse the rest of host bridge bus ranges */
+ err = pci_host_bridge_of_get_ranges(parent->of_node, &res, &io_base);
+ if (err)
+ goto err_create;
+
+ /* then create the root bus */
+ root_bus = pci_create_root_bus_in_domain(parent, domain, busno,
+ ops, host_data, &res);
+ if (IS_ERR(root_bus)) {
+ err = PTR_ERR(root_bus);
+ goto err_create;
+ }
+
+ bridge = to_pci_host_bridge(root_bus->bridge);
+ bridge->io_base = io_base;
+
+ return bridge;
+
+err_create:
+ pci_free_resource_list(&res);
+ return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(of_create_pci_host_bridge);
+
#ifdef CONFIG_PCI_MSI
static LIST_HEAD(of_pci_msi_chip_list);
diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
index 36c669e..cfee5d1 100644
--- a/drivers/pci/host-bridge.c
+++ b/drivers/pci/host-bridge.c
@@ -5,6 +5,9 @@
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_pci.h>
+#include <linux/slab.h>
#include "pci.h"
@@ -83,3 +86,18 @@ void pcibios_bus_to_resource(struct pci_bus *bus, struct resource *res,
res->end = region->end + offset;
}
EXPORT_SYMBOL(pcibios_bus_to_resource);
+
+/**
+ * Simple version of the platform specific code for filtering the list
+ * of resources obtained from the ranges declaration in DT.
+ *
+ * Platforms can override this function in order to impose stronger
+ * constraints onto the list of resources that a host bridge can use.
+ * The filtered list will then be used to create a root bus and associate
+ * it with the host bridge.
+ *
+ */
+int __weak pcibios_fixup_bridge_ranges(struct list_head *resources)
+{
+ return 0;
+}
diff --git a/include/linux/of_pci.h b/include/linux/of_pci.h
index dde3a4a..71e36d0 100644
--- a/include/linux/of_pci.h
+++ b/include/linux/of_pci.h
@@ -15,6 +15,9 @@ struct device_node *of_pci_find_child_device(struct device_node *parent,
int of_pci_get_devfn(struct device_node *np);
int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
+struct pci_host_bridge *of_create_pci_host_bridge(struct device *parent,
+ struct pci_ops *ops, void *host_data);
+
#else
static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
{
@@ -43,6 +46,13 @@ of_pci_parse_bus_range(struct device_node *node, struct resource *res)
{
return -EINVAL;
}
+
+static inline struct pci_host_bridge *
+of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops,
+ void *host_data)
+{
+ return NULL;
+}
#endif
#if defined(CONFIG_OF) && defined(CONFIG_PCI_MSI)
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 7e7b939..556dc5f 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -402,6 +402,7 @@ struct pci_host_bridge {
struct device dev;
struct pci_bus *bus; /* root bus */
int domain_nr;
+ resource_size_t io_base; /* physical address for the start of I/O area */
struct list_head windows; /* pci_host_bridge_windows */
void (*release_fn)(struct pci_host_bridge *);
void *release_data;
@@ -1809,8 +1810,15 @@ static inline void pci_set_of_node(struct pci_dev *dev) { }
static inline void pci_release_of_node(struct pci_dev *dev) { }
static inline void pci_set_bus_of_node(struct pci_bus *bus) { }
static inline void pci_release_bus_of_node(struct pci_bus *bus) { }
+
#endif /* CONFIG_OF */
+/* Used by architecture code to apply any quirks to the list of
+ * pci_host_bridge resource ranges before they are being used
+ * by of_create_pci_host_bridge()
+ */
+extern int pcibios_fixup_bridge_ranges(struct list_head *resources);
+
#ifdef CONFIG_EEH
static inline struct eeh_dev *pci_dev_to_eeh_dev(struct pci_dev *pdev)
{
--
2.0.0
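For context, a host controller driver would be expected to consume the
new helper along these lines (a hypothetical driver sketch;
myhost_probe, myhost_port and myhost_pci_ops are placeholder names, not
part of the patch):

```c
static int myhost_probe(struct platform_device *pdev)
{
	struct myhost_port *pp;		/* driver-private data */
	struct pci_host_bridge *bridge;

	pp = devm_kzalloc(&pdev->dev, sizeof(*pp), GFP_KERNEL);
	if (!pp)
		return -ENOMEM;

	/* sysdata stays driver-private; domain_nr comes from the
	 * "pci-domain" alias or the built-in allocator */
	bridge = of_create_pci_host_bridge(&pdev->dev, &myhost_pci_ops, pp);
	if (IS_ERR(bridge))
		return PTR_ERR(bridge);

	return 0;
}
```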
^ permalink raw reply related [flat|nested] 92+ messages in thread
[parent not found: <1404240214-9804-9-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>]
* [RESEND] [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
[not found] ` <1404240214-9804-9-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
@ 2014-07-01 20:50 ` Liviu Dudau
[not found] ` <20140701205050.GB19569-hOhETlTuV5niMG9XS5x8Mg@public.gmane.org>
0 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 20:50 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
Several platforms use a rather generic version of parsing
the device tree to find the host bridge ranges. Move the common code
into the generic PCI code and use it to create a pci_host_bridge
structure that can be used by arch code.
Based on early attempts by Andrew Murray to unify the code.
Used powerpc and microblaze PCI code as starting point.
Signed-off-by: Liviu Dudau <Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
Tested-by: Tanmay Inamdar <tinamdar-qTEPVZfXA3Y@public.gmane.org>
---
drivers/of/of_pci.c | 136 ++++++++++++++++++++++++++++++++++++++++++++++
drivers/pci/host-bridge.c | 15 +++++
include/linux/of_pci.h | 10 ++++
include/linux/pci.h | 8 +++
4 files changed, 169 insertions(+)
diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
index 8481996..e81402a 100644
--- a/drivers/of/of_pci.c
+++ b/drivers/of/of_pci.c
@@ -1,6 +1,7 @@
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/of.h>
+#include <linux/of_address.h>
#include <linux/of_pci.h>
static inline int __of_pci_pci_compare(struct device_node *node,
@@ -89,6 +90,141 @@ int of_pci_parse_bus_range(struct device_node *node, struct resource *res)
}
EXPORT_SYMBOL_GPL(of_pci_parse_bus_range);
+/**
+ * pci_host_bridge_of_get_ranges - Parse PCI host bridge resources from DT
+ * @dev: device node of the host bridge having the range property
+ * @resources: list where the range of resources will be added after DT parsing
+ * @io_base: pointer to a variable that will contain the physical address for
+ * the start of the I/O range.
+ *
+ * It is the caller's job to free the @resources list if an error is returned.
+ *
+ * This function will parse the "ranges" property of a PCI host bridge device
+ * node and setup the resource mapping based on its content. It is expected
+ * that the property conforms with the Power ePAPR document.
+ *
+ * Each architecture is then offered the chance to apply its own
+ * filtering of pci_host_bridge_windows based on its own restrictions by
+ * calling pcibios_fixup_bridge_ranges(). The filtered list of windows
+ * can then be used when creating a pci_host_bridge structure.
+ */
+static int pci_host_bridge_of_get_ranges(struct device_node *dev,
+ struct list_head *resources, resource_size_t *io_base)
+{
+ struct resource *res;
+ struct of_pci_range range;
+ struct of_pci_range_parser parser;
+ int err;
+
+ pr_info("PCI host bridge %s ranges:\n", dev->full_name);
+
+ /* Check for ranges property */
+ err = of_pci_range_parser_init(&parser, dev);
+ if (err)
+ return err;
+
+ pr_debug("Parsing ranges property...\n");
+ for_each_of_pci_range(&parser, &range) {
+ /* Read next ranges element */
+ pr_debug("pci_space: 0x%08x pci_addr:0x%016llx cpu_addr:0x%016llx size:0x%016llx\n",
+ range.pci_space, range.pci_addr, range.cpu_addr, range.size);
+
+ /*
+ * If we failed translation or got a zero-sized region
+ * then skip this range
+ */
+ if (range.cpu_addr == OF_BAD_ADDR || range.size == 0)
+ continue;
+
+ res = kzalloc(sizeof(struct resource), GFP_KERNEL);
+ if (!res)
+ return -ENOMEM;
+
+ err = of_pci_range_to_resource(&range, dev, res);
+ if (err)
+ return err;
+
+ if (resource_type(res) == IORESOURCE_IO)
+ *io_base = range.cpu_addr;
+
+ pci_add_resource_offset(resources, res,
+ res->start - range.pci_addr);
+ }
+
+ /* Apply architecture specific fixups for the ranges */
+ return pcibios_fixup_bridge_ranges(resources);
+}
+
+static atomic_t domain_nr = ATOMIC_INIT(-1);
+
+/**
+ * of_create_pci_host_bridge - Create a PCI host bridge structure using
+ * information passed in the DT.
+ * @parent: device owning this host bridge
+ * @ops: pci_ops associated with the host controller
+ * @host_data: opaque data structure used by the host controller.
+ *
+ * Returns a pointer to the newly created pci_host_bridge structure, or
+ * an ERR_PTR()-encoded error code if the call failed.
+ *
+ * This function will try to obtain the host bridge domain number by
+ * using of_alias_get_id() call with "pci-domain" as a stem. If that
+ * fails, a local allocator will be used that will put each host bridge
+ * in a new domain.
+ */
+struct pci_host_bridge *
+of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops, void *host_data)
+{
+ int err, domain, busno;
+ struct resource *bus_range;
+ struct pci_bus *root_bus;
+ struct pci_host_bridge *bridge;
+ resource_size_t io_base = 0;
+ LIST_HEAD(res);
+
+ bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
+ if (!bus_range)
+ return ERR_PTR(-ENOMEM);
+
+ domain = of_alias_get_id(parent->of_node, "pci-domain");
+ if (domain == -ENODEV)
+ domain = atomic_inc_return(&domain_nr);
+
+ err = of_pci_parse_bus_range(parent->of_node, bus_range);
+ if (err) {
+ dev_info(parent, "No bus range for %s, using default [0-255]\n",
+ parent->of_node->full_name);
+ bus_range->start = 0;
+ bus_range->end = 255;
+ bus_range->flags = IORESOURCE_BUS;
+ }
+ busno = bus_range->start;
+ pci_add_resource(&res, bus_range);
+
+ /* now parse the rest of host bridge bus ranges */
+ err = pci_host_bridge_of_get_ranges(parent->of_node, &res, &io_base);
+ if (err)
+ goto err_create;
+
+ /* then create the root bus */
+ root_bus = pci_create_root_bus_in_domain(parent, domain, busno,
+ ops, host_data, &res);
+ if (IS_ERR(root_bus)) {
+ err = PTR_ERR(root_bus);
+ goto err_create;
+ }
+
+ bridge = to_pci_host_bridge(root_bus->bridge);
+ bridge->io_base = io_base;
+
+ return bridge;
+
+err_create:
+ pci_free_resource_list(&res);
+ return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(of_create_pci_host_bridge);
+
#ifdef CONFIG_PCI_MSI
static LIST_HEAD(of_pci_msi_chip_list);
diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
index 36c669e..54ceafd 100644
--- a/drivers/pci/host-bridge.c
+++ b/drivers/pci/host-bridge.c
@@ -83,3 +83,18 @@ void pcibios_bus_to_resource(struct pci_bus *bus, struct resource *res,
res->end = region->end + offset;
}
EXPORT_SYMBOL(pcibios_bus_to_resource);
+
+/**
+ * Simple version of the platform-specific code for filtering the list
+ * of resources obtained from the "ranges" declaration in DT.
+ *
+ * Platforms can override this function in order to impose stronger
+ * constraints on the list of resources that a host bridge can use.
+ * The filtered list will then be used to create a root bus and associate
+ * it with the host bridge.
+ *
+ */
+int __weak pcibios_fixup_bridge_ranges(struct list_head *resources)
+{
+ return 0;
+}
diff --git a/include/linux/of_pci.h b/include/linux/of_pci.h
index dde3a4a..71e36d0 100644
--- a/include/linux/of_pci.h
+++ b/include/linux/of_pci.h
@@ -15,6 +15,9 @@ struct device_node *of_pci_find_child_device(struct device_node *parent,
int of_pci_get_devfn(struct device_node *np);
int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
+struct pci_host_bridge *of_create_pci_host_bridge(struct device *parent,
+ struct pci_ops *ops, void *host_data);
+
#else
static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
{
@@ -43,6 +46,13 @@ of_pci_parse_bus_range(struct device_node *node, struct resource *res)
{
return -EINVAL;
}
+
+static inline struct pci_host_bridge *
+of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops,
+ void *host_data)
+{
+ return NULL;
+}
#endif
#if defined(CONFIG_OF) && defined(CONFIG_PCI_MSI)
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 7e7b939..556dc5f 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -402,6 +402,7 @@ struct pci_host_bridge {
struct device dev;
struct pci_bus *bus; /* root bus */
int domain_nr;
+ resource_size_t io_base; /* physical address for the start of I/O area */
struct list_head windows; /* pci_host_bridge_windows */
void (*release_fn)(struct pci_host_bridge *);
void *release_data;
@@ -1809,8 +1810,15 @@ static inline void pci_set_of_node(struct pci_dev *dev) { }
static inline void pci_release_of_node(struct pci_dev *dev) { }
static inline void pci_set_bus_of_node(struct pci_bus *bus) { }
static inline void pci_release_bus_of_node(struct pci_bus *bus) { }
+
#endif /* CONFIG_OF */
+/* Used by architecture code to apply any quirks to the list of
+ * pci_host_bridge resource ranges before they are used
+ * by of_create_pci_host_bridge()
+ */
+extern int pcibios_fixup_bridge_ranges(struct list_head *resources);
+
#ifdef CONFIG_EEH
static inline struct eeh_dev *pci_dev_to_eeh_dev(struct pci_dev *pdev)
{
--
2.0.0
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply related [flat|nested] 92+ messages in thread
* Re: [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
2014-07-01 18:43 ` [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree Liviu Dudau
[not found] ` <1404240214-9804-9-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
@ 2014-07-02 11:22 ` Will Deacon
[not found] ` <20140702112230.GL18731-5wv7dgnIgG8@public.gmane.org>
2014-07-08 1:01 ` Bjorn Helgaas
2014-07-11 7:43 ` Jingoo Han
3 siblings, 1 reply; 92+ messages in thread
From: Will Deacon @ 2014-07-02 11:22 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
Hi Liviu,
On Tue, Jul 01, 2014 at 07:43:33PM +0100, Liviu Dudau wrote:
> Several platforms use a rather generic version of parsing
> the device tree to find the host bridge ranges. Move the common code
> into the generic PCI code and use it to create a pci_host_bridge
> structure that can be used by arch code.
>
> Based on early attempts by Andrew Murray to unify the code.
> Used powerpc and microblaze PCI code as starting point.
I just had a quick look at this to see how it differs from the parsing in
pci-host-generic.c and there a few small differences worth discussing.
> +static int pci_host_bridge_of_get_ranges(struct device_node *dev,
> + struct list_head *resources, resource_size_t *io_base)
> +{
> + struct resource *res;
> + struct of_pci_range range;
> + struct of_pci_range_parser parser;
> + int err;
> +
> + pr_info("PCI host bridge %s ranges:\n", dev->full_name);
> +
> + /* Check for ranges property */
> + err = of_pci_range_parser_init(&parser, dev);
> + if (err)
> + return err;
> +
> + pr_debug("Parsing ranges property...\n");
> + for_each_of_pci_range(&parser, &range) {
> + /* Read next ranges element */
> + pr_debug("pci_space: 0x%08x pci_addr:0x%016llx cpu_addr:0x%016llx size:0x%016llx\n",
> + range.pci_space, range.pci_addr, range.cpu_addr, range.size);
> +
> + /*
> + * If we failed translation or got a zero-sized region
> + * then skip this range
> + */
> + if (range.cpu_addr == OF_BAD_ADDR || range.size == 0)
> + continue;
> +
> + res = kzalloc(sizeof(struct resource), GFP_KERNEL);
> + if (!res)
> + return -ENOMEM;
> +
> + err = of_pci_range_to_resource(&range, dev, res);
> + if (err)
> + return err;
> +
> + if (resource_type(res) == IORESOURCE_IO)
> + *io_base = range.cpu_addr;
> +
> + pci_add_resource_offset(resources, res,
> + res->start - range.pci_addr);
Where do you request_resource before adding it?
> + }
> +
> + /* Apply architecture specific fixups for the ranges */
> + return pcibios_fixup_bridge_ranges(resources);
I currently mandate at least one non-prefetchable resource in the
device-tree. Should I simply drop this restriction, or do I have to somehow
hook this into the pcibios callback?
> +}
> +
> +static atomic_t domain_nr = ATOMIC_INIT(-1);
> +
> +/**
> + * of_create_pci_host_bridge - Create a PCI host bridge structure using
> + * information passed in the DT.
> + * @parent: device owning this host bridge
> + * @ops: pci_ops associated with the host controller
> + * @host_data: opaque data structure used by the host controller.
> + *
> + * returns a pointer to the newly created pci_host_bridge structure, or
> + * NULL if the call failed.
> + *
> + * This function will try to obtain the host bridge domain number by
> + * using of_alias_get_id() call with "pci-domain" as a stem. If that
> + * fails, a local allocator will be used that will put each host bridge
> + * in a new domain.
> + */
> +struct pci_host_bridge *
> +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops, void *host_data)
> +{
> + int err, domain, busno;
> + struct resource *bus_range;
> + struct pci_bus *root_bus;
> + struct pci_host_bridge *bridge;
> + resource_size_t io_base;
> + LIST_HEAD(res);
> +
> + bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
> + if (!bus_range)
> + return ERR_PTR(-ENOMEM);
> +
> + domain = of_alias_get_id(parent->of_node, "pci-domain");
> + if (domain == -ENODEV)
> + domain = atomic_inc_return(&domain_nr);
> +
> + err = of_pci_parse_bus_range(parent->of_node, bus_range);
> + if (err) {
> + dev_info(parent, "No bus range for %s, using default [0-255]\n",
> + parent->of_node->full_name);
> + bus_range->start = 0;
> + bus_range->end = 255;
> + bus_range->flags = IORESOURCE_BUS;
What about bus_range->name?
> + }
> + busno = bus_range->start;
> + pci_add_resource(&res, bus_range);
I currently truncate the bus range to fit inside the Configuration Space
window I have (in the reg property). How can I continue to do that with this
patch?
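(For illustration, the truncation I do today amounts to something like the sketch below; `truncate_bus_range` and `bus_max` are hypothetical names, with `bus_max` assumed to be derived from the size of the config-space window in "reg" — e.g. window_size / 64K for CAM:)

```c
#include <assert.h>

/* Hypothetical sketch: clamp the parsed bus resource so it covers no
 * more buses than the config-space window can address. */
struct bus_range { int start, end; };

static void truncate_bus_range(struct bus_range *r, int bus_max)
{
    if (r->end > bus_max)
        r->end = bus_max;       /* shrink to fit the window */
    if (r->start > r->end)
        r->start = r->end;      /* collapse a now-empty range */
}
```

With the default [0-255] range and a window covering 16 buses, this leaves [0-15].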
Will
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
2014-07-01 18:43 ` [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree Liviu Dudau
[not found] ` <1404240214-9804-9-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
2014-07-02 11:22 ` Will Deacon
@ 2014-07-08 1:01 ` Bjorn Helgaas
2014-07-08 10:29 ` Liviu Dudau
2014-07-11 7:43 ` Jingoo Han
3 siblings, 1 reply; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-08 1:01 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 01, 2014 at 07:43:33PM +0100, Liviu Dudau wrote:
> Several platforms use a rather generic version of parsing
> the device tree to find the host bridge ranges. Move the common code
> into the generic PCI code and use it to create a pci_host_bridge
> structure that can be used by arch code.
>
> Based on early attempts by Andrew Murray to unify the code.
> Used powerpc and microblaze PCI code as starting point.
>
> Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> ---
> drivers/of/of_pci.c | 135 ++++++++++++++++++++++++++++++++++++++++++++++
> drivers/pci/host-bridge.c | 18 +++++++
> include/linux/of_pci.h | 10 ++++
> include/linux/pci.h | 8 +++
> 4 files changed, 171 insertions(+)
>
> diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
> index 8481996..55d8320 100644
> --- a/drivers/of/of_pci.c
> +++ b/drivers/of/of_pci.c
> @@ -89,6 +89,141 @@ int of_pci_parse_bus_range(struct device_node *node, struct resource *res)
> }
> EXPORT_SYMBOL_GPL(of_pci_parse_bus_range);
>
> +/**
> + * pci_host_bridge_of_get_ranges - Parse PCI host bridge resources from DT
> + * @dev: device node of the host bridge having the range property
> + * @resources: list where the range of resources will be added after DT parsing
> + * @io_base: pointer to a variable that will contain the physical address for
> + * the start of the I/O range.
> + *
> + * It is the callers job to free the @resources list if an error is returned.
> + *
> + * This function will parse the "ranges" property of a PCI host bridge device
> + * node and setup the resource mapping based on its content. It is expected
> + * that the property conforms with the Power ePAPR document.
> + *
> + * Each architecture is then offered the chance of applying their own
> + * filtering of pci_host_bridge_windows based on their own restrictions by
> + * calling pcibios_fixup_bridge_ranges(). The filtered list of windows
> + * can then be used when creating a pci_host_bridge structure.
> + */
> +static int pci_host_bridge_of_get_ranges(struct device_node *dev,
> + struct list_head *resources, resource_size_t *io_base)
> +{
> + struct resource *res;
> + struct of_pci_range range;
> + struct of_pci_range_parser parser;
> + int err;
> +
> + pr_info("PCI host bridge %s ranges:\n", dev->full_name);
> +
> + /* Check for ranges property */
> + err = of_pci_range_parser_init(&parser, dev);
> + if (err)
> + return err;
> +
> + pr_debug("Parsing ranges property...\n");
> + for_each_of_pci_range(&parser, &range) {
> + /* Read next ranges element */
> + pr_debug("pci_space: 0x%08x pci_addr:0x%016llx cpu_addr:0x%016llx size:0x%016llx\n",
> + range.pci_space, range.pci_addr, range.cpu_addr, range.size);
If you're not trying to match other printk formats, you could try to match
the %pR format used elsewhere, e.g., "%#010llx-%#010llx" with
range.cpu_addr, range.cpu_addr + range.size - 1.
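(For illustration, the suggested format renders the range as an inclusive start-end pair; this is a hypothetical userspace helper standing in for the pr_debug call, not kernel code:)

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Render a CPU address range with "%#010llx-%#010llx", similar to what
 * the kernel's %pR produces for a struct resource. */
static int format_range(char *buf, size_t len,
                        unsigned long long cpu_addr,
                        unsigned long long size)
{
    return snprintf(buf, len, "%#010llx-%#010llx",
                    cpu_addr, cpu_addr + size - 1);
}
```

For example, a 64KB window at 0x40000000 renders as "0x40000000-0x4000ffff".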
> +
> + /*
> + * If we failed translation or got a zero-sized region
> + * then skip this range
> + */
> + if (range.cpu_addr == OF_BAD_ADDR || range.size == 0)
> + continue;
> +
> + res = kzalloc(sizeof(struct resource), GFP_KERNEL);
> + if (!res)
> + return -ENOMEM;
> +
> + err = of_pci_range_to_resource(&range, dev, res);
> + if (err)
> + return err;
> +
> + if (resource_type(res) == IORESOURCE_IO)
> + *io_base = range.cpu_addr;
> +
> + pci_add_resource_offset(resources, res,
> + res->start - range.pci_addr);
> + }
> +
> + /* Apply architecture specific fixups for the ranges */
> + return pcibios_fixup_bridge_ranges(resources);
> +}
> +
> +static atomic_t domain_nr = ATOMIC_INIT(-1);
> +
> +/**
> + * of_create_pci_host_bridge - Create a PCI host bridge structure using
> + * information passed in the DT.
> + * @parent: device owning this host bridge
> + * @ops: pci_ops associated with the host controller
> + * @host_data: opaque data structure used by the host controller.
> + *
> + * returns a pointer to the newly created pci_host_bridge structure, or
> + * NULL if the call failed.
> + *
> + * This function will try to obtain the host bridge domain number by
> + * using of_alias_get_id() call with "pci-domain" as a stem. If that
> + * fails, a local allocator will be used that will put each host bridge
> + * in a new domain.
> + */
> +struct pci_host_bridge *
> +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops, void *host_data)
> +{
> + int err, domain, busno;
> + struct resource *bus_range;
> + struct pci_bus *root_bus;
> + struct pci_host_bridge *bridge;
> + resource_size_t io_base;
> + LIST_HEAD(res);
> +
> + bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
> + if (!bus_range)
> + return ERR_PTR(-ENOMEM);
> +
> + domain = of_alias_get_id(parent->of_node, "pci-domain");
> + if (domain == -ENODEV)
> + domain = atomic_inc_return(&domain_nr);
> +
> + err = of_pci_parse_bus_range(parent->of_node, bus_range);
> + if (err) {
> + dev_info(parent, "No bus range for %s, using default [0-255]\n",
> + parent->of_node->full_name);
> + bus_range->start = 0;
> + bus_range->end = 255;
> + bus_range->flags = IORESOURCE_BUS;
If you put the dev_info() down here, you can print &bus_range with %pR.
> + }
> + busno = bus_range->start;
> + pci_add_resource(&res, bus_range);
> +
> + /* now parse the rest of host bridge bus ranges */
> + err = pci_host_bridge_of_get_ranges(parent->of_node, &res, &io_base);
> + if (err)
> + goto err_create;
> +
> + /* then create the root bus */
> + root_bus = pci_create_root_bus_in_domain(parent, domain, busno,
> + ops, host_data, &res);
> + if (IS_ERR(root_bus)) {
> + err = PTR_ERR(root_bus);
> + goto err_create;
> + }
> +
> + bridge = to_pci_host_bridge(root_bus->bridge);
> + bridge->io_base = io_base;
> +
> + return bridge;
> +
> +err_create:
> + pci_free_resource_list(&res);
> + return ERR_PTR(err);
> +}
> +EXPORT_SYMBOL_GPL(of_create_pci_host_bridge);
> +
> #ifdef CONFIG_PCI_MSI
>
> static LIST_HEAD(of_pci_msi_chip_list);
> diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
> index 36c669e..cfee5d1 100644
> --- a/drivers/pci/host-bridge.c
> +++ b/drivers/pci/host-bridge.c
> @@ -5,6 +5,9 @@
> #include <linux/kernel.h>
> #include <linux/pci.h>
> #include <linux/module.h>
> +#include <linux/of_address.h>
> +#include <linux/of_pci.h>
> +#include <linux/slab.h>
>
> #include "pci.h"
>
> @@ -83,3 +86,18 @@ void pcibios_bus_to_resource(struct pci_bus *bus, struct resource *res,
> res->end = region->end + offset;
> }
> EXPORT_SYMBOL(pcibios_bus_to_resource);
> +
> +/**
> + * Simple version of the platform specific code for filtering the list
> + * of resources obtained from the ranges declaration in DT.
> + *
> + * Platforms can override this function in order to impose stronger
> + * constraints onto the list of resources that a host bridge can use.
> + * The filtered list will then be used to create a root bus and associate
> + * it with the host bridge.
> + *
> + */
> +int __weak pcibios_fixup_bridge_ranges(struct list_head *resources)
> +{
> + return 0;
> +}
I'd wait to add this until there's a platform that needs to implement it.
Splitting it out will make this patch that much smaller and easier to
understand.
> diff --git a/include/linux/of_pci.h b/include/linux/of_pci.h
> index dde3a4a..71e36d0 100644
> --- a/include/linux/of_pci.h
> +++ b/include/linux/of_pci.h
> @@ -15,6 +15,9 @@ struct device_node *of_pci_find_child_device(struct device_node *parent,
> int of_pci_get_devfn(struct device_node *np);
> int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
> int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
> +struct pci_host_bridge *of_create_pci_host_bridge(struct device *parent,
> + struct pci_ops *ops, void *host_data);
> +
> #else
> static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
> {
> @@ -43,6 +46,13 @@ of_pci_parse_bus_range(struct device_node *node, struct resource *res)
> {
> return -EINVAL;
> }
> +
> +static inline struct pci_host_bridge *
> +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops,
> + void *host_data)
> +{
> + return NULL;
> +}
> #endif
>
> #if defined(CONFIG_OF) && defined(CONFIG_PCI_MSI)
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 7e7b939..556dc5f 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -402,6 +402,7 @@ struct pci_host_bridge {
> struct device dev;
> struct pci_bus *bus; /* root bus */
> int domain_nr;
> + resource_size_t io_base; /* physical address for the start of I/O area */
I don't see where this is used yet.
As far as I know, there's nothing that prevents a host bridge from having
several I/O port apertures (or several memory-mapped I/O port spaces).
> struct list_head windows; /* pci_host_bridge_windows */
> void (*release_fn)(struct pci_host_bridge *);
> void *release_data;
> @@ -1809,8 +1810,15 @@ static inline void pci_set_of_node(struct pci_dev *dev) { }
> static inline void pci_release_of_node(struct pci_dev *dev) { }
> static inline void pci_set_bus_of_node(struct pci_bus *bus) { }
> static inline void pci_release_bus_of_node(struct pci_bus *bus) { }
> +
> #endif /* CONFIG_OF */
>
> +/* Used by architecture code to apply any quirks to the list of
> + * pci_host_bridge resource ranges before they are being used
> + * by of_create_pci_host_bridge()
> + */
> +extern int pcibios_fixup_bridge_ranges(struct list_head *resources);
> +
> #ifdef CONFIG_EEH
> static inline struct eeh_dev *pci_dev_to_eeh_dev(struct pci_dev *pdev)
> {
> --
> 2.0.0
>
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
2014-07-08 1:01 ` Bjorn Helgaas
@ 2014-07-08 10:29 ` Liviu Dudau
2014-07-08 21:33 ` Bjorn Helgaas
0 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-08 10:29 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 08, 2014 at 02:01:04AM +0100, Bjorn Helgaas wrote:
> On Tue, Jul 01, 2014 at 07:43:33PM +0100, Liviu Dudau wrote:
> > Several platforms use a rather generic version of parsing
> > the device tree to find the host bridge ranges. Move the common code
> > into the generic PCI code and use it to create a pci_host_bridge
> > structure that can be used by arch code.
> >
> > Based on early attempts by Andrew Murray to unify the code.
> > Used powerpc and microblaze PCI code as starting point.
> >
> > Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> > Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> > ---
> > drivers/of/of_pci.c | 135 ++++++++++++++++++++++++++++++++++++++++++++++
> > drivers/pci/host-bridge.c | 18 +++++++
> > include/linux/of_pci.h | 10 ++++
> > include/linux/pci.h | 8 +++
> > 4 files changed, 171 insertions(+)
> >
> > diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
> > index 8481996..55d8320 100644
> > --- a/drivers/of/of_pci.c
> > +++ b/drivers/of/of_pci.c
> > @@ -89,6 +89,141 @@ int of_pci_parse_bus_range(struct device_node *node, struct resource *res)
> > }
> > EXPORT_SYMBOL_GPL(of_pci_parse_bus_range);
> >
> > +/**
> > + * pci_host_bridge_of_get_ranges - Parse PCI host bridge resources from DT
> > + * @dev: device node of the host bridge having the range property
> > + * @resources: list where the range of resources will be added after DT parsing
> > + * @io_base: pointer to a variable that will contain the physical address for
> > + * the start of the I/O range.
> > + *
> > + * It is the callers job to free the @resources list if an error is returned.
> > + *
> > + * This function will parse the "ranges" property of a PCI host bridge device
> > + * node and setup the resource mapping based on its content. It is expected
> > + * that the property conforms with the Power ePAPR document.
> > + *
> > + * Each architecture is then offered the chance of applying their own
> > + * filtering of pci_host_bridge_windows based on their own restrictions by
> > + * calling pcibios_fixup_bridge_ranges(). The filtered list of windows
> > + * can then be used when creating a pci_host_bridge structure.
> > + */
> > +static int pci_host_bridge_of_get_ranges(struct device_node *dev,
> > + struct list_head *resources, resource_size_t *io_base)
> > +{
> > + struct resource *res;
> > + struct of_pci_range range;
> > + struct of_pci_range_parser parser;
> > + int err;
> > +
> > + pr_info("PCI host bridge %s ranges:\n", dev->full_name);
> > +
> > + /* Check for ranges property */
> > + err = of_pci_range_parser_init(&parser, dev);
> > + if (err)
> > + return err;
> > +
> > + pr_debug("Parsing ranges property...\n");
> > + for_each_of_pci_range(&parser, &range) {
> > + /* Read next ranges element */
> > + pr_debug("pci_space: 0x%08x pci_addr:0x%016llx cpu_addr:0x%016llx size:0x%016llx\n",
> > + range.pci_space, range.pci_addr, range.cpu_addr, range.size);
>
> If you're not trying to match other printk formats, you could try to match
> the %pR format used elsewhere, e.g., "%#010llx-%#010llx" with
> range.cpu_addr, range.cpu_addr + range.size - 1.
Yes, I'm not a big fan of the ugly output it generates, but the output closely
matches the ranges definition in the device tree file, so it is easy to validate
that you are parsing the right entry. I am happy to change it to shorten the
CPU range message.
>
> > +
> > + /*
> > + * If we failed translation or got a zero-sized region
> > + * then skip this range
> > + */
> > + if (range.cpu_addr == OF_BAD_ADDR || range.size == 0)
> > + continue;
> > +
> > + res = kzalloc(sizeof(struct resource), GFP_KERNEL);
> > + if (!res)
> > + return -ENOMEM;
> > +
> > + err = of_pci_range_to_resource(&range, dev, res);
> > + if (err)
> > + return err;
> > +
> > + if (resource_type(res) == IORESOURCE_IO)
> > + *io_base = range.cpu_addr;
> > +
> > + pci_add_resource_offset(resources, res,
> > + res->start - range.pci_addr);
> > + }
> > +
> > + /* Apply architecture specific fixups for the ranges */
> > + return pcibios_fixup_bridge_ranges(resources);
> > +}
> > +
> > +static atomic_t domain_nr = ATOMIC_INIT(-1);
> > +
> > +/**
> > + * of_create_pci_host_bridge - Create a PCI host bridge structure using
> > + * information passed in the DT.
> > + * @parent: device owning this host bridge
> > + * @ops: pci_ops associated with the host controller
> > + * @host_data: opaque data structure used by the host controller.
> > + *
> > + * returns a pointer to the newly created pci_host_bridge structure, or
> > + * NULL if the call failed.
> > + *
> > + * This function will try to obtain the host bridge domain number by
> > + * using of_alias_get_id() call with "pci-domain" as a stem. If that
> > + * fails, a local allocator will be used that will put each host bridge
> > + * in a new domain.
> > + */
> > +struct pci_host_bridge *
> > +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops, void *host_data)
> > +{
> > + int err, domain, busno;
> > + struct resource *bus_range;
> > + struct pci_bus *root_bus;
> > + struct pci_host_bridge *bridge;
> > + resource_size_t io_base;
> > + LIST_HEAD(res);
> > +
> > + bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
> > + if (!bus_range)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + domain = of_alias_get_id(parent->of_node, "pci-domain");
> > + if (domain == -ENODEV)
> > + domain = atomic_inc_return(&domain_nr);
> > +
> > + err = of_pci_parse_bus_range(parent->of_node, bus_range);
> > + if (err) {
> > + dev_info(parent, "No bus range for %s, using default [0-255]\n",
> > + parent->of_node->full_name);
> > + bus_range->start = 0;
> > + bus_range->end = 255;
> > + bus_range->flags = IORESOURCE_BUS;
>
> If you put the dev_info() down here, you can print &bus_range with %pR.
Sure, will do.
>
> > + }
> > + busno = bus_range->start;
> > + pci_add_resource(&res, bus_range);
> > +
> > + /* now parse the rest of host bridge bus ranges */
> > + err = pci_host_bridge_of_get_ranges(parent->of_node, &res, &io_base);
> > + if (err)
> > + goto err_create;
> > +
> > + /* then create the root bus */
> > + root_bus = pci_create_root_bus_in_domain(parent, domain, busno,
> > + ops, host_data, &res);
> > + if (IS_ERR(root_bus)) {
> > + err = PTR_ERR(root_bus);
> > + goto err_create;
> > + }
> > +
> > + bridge = to_pci_host_bridge(root_bus->bridge);
> > + bridge->io_base = io_base;
> > +
> > + return bridge;
> > +
> > +err_create:
> > + pci_free_resource_list(&res);
> > + return ERR_PTR(err);
> > +}
> > +EXPORT_SYMBOL_GPL(of_create_pci_host_bridge);
> > +
> > #ifdef CONFIG_PCI_MSI
> >
> > static LIST_HEAD(of_pci_msi_chip_list);
> > diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c
> > index 36c669e..cfee5d1 100644
> > --- a/drivers/pci/host-bridge.c
> > +++ b/drivers/pci/host-bridge.c
> > @@ -5,6 +5,9 @@
> > #include <linux/kernel.h>
> > #include <linux/pci.h>
> > #include <linux/module.h>
> > +#include <linux/of_address.h>
> > +#include <linux/of_pci.h>
> > +#include <linux/slab.h>
> >
> > #include "pci.h"
> >
> > @@ -83,3 +86,18 @@ void pcibios_bus_to_resource(struct pci_bus *bus, struct resource *res,
> > res->end = region->end + offset;
> > }
> > EXPORT_SYMBOL(pcibios_bus_to_resource);
> > +
> > +/**
> > + * Simple version of the platform specific code for filtering the list
> > + * of resources obtained from the ranges declaration in DT.
> > + *
> > + * Platforms can override this function in order to impose stronger
> > + * constraints onto the list of resources that a host bridge can use.
> > + * The filtered list will then be used to create a root bus and associate
> > + * it with the host bridge.
> > + *
> > + */
> > +int __weak pcibios_fixup_bridge_ranges(struct list_head *resources)
> > +{
> > + return 0;
> > +}
>
> I'd wait to add this until there's a platform that needs to implement it.
> Splitting it out will make this patch that much smaller and easier to
> understand.
I need this, as it is the default implementation (i.e. do nothing); otherwise
the link phase will give errors.
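(The mechanism relied on here is the linker's weak-symbol resolution; a minimal userspace sketch of the same idea, not the kernel code itself:)

```c
#include <assert.h>

/* The generic code provides a do-nothing definition marked weak, so
 * every architecture links even without its own version. An arch that
 * defines a strong pcibios_fixup_bridge_ranges() in another object
 * file overrides this one at link time. */
struct list_head;                       /* opaque here, as in the kernel */

int __attribute__((weak))
pcibios_fixup_bridge_ranges(struct list_head *resources)
{
    (void)resources;                    /* nothing to filter by default */
    return 0;
}
```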
>
> > diff --git a/include/linux/of_pci.h b/include/linux/of_pci.h
> > index dde3a4a..71e36d0 100644
> > --- a/include/linux/of_pci.h
> > +++ b/include/linux/of_pci.h
> > @@ -15,6 +15,9 @@ struct device_node *of_pci_find_child_device(struct device_node *parent,
> > int of_pci_get_devfn(struct device_node *np);
> > int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
> > int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
> > +struct pci_host_bridge *of_create_pci_host_bridge(struct device *parent,
> > + struct pci_ops *ops, void *host_data);
> > +
> > #else
> > static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
> > {
> > @@ -43,6 +46,13 @@ of_pci_parse_bus_range(struct device_node *node, struct resource *res)
> > {
> > return -EINVAL;
> > }
> > +
> > +static inline struct pci_host_bridge *
> > +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops,
> > + void *host_data)
> > +{
> > + return NULL;
> > +}
> > #endif
> >
> > #if defined(CONFIG_OF) && defined(CONFIG_PCI_MSI)
> > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > index 7e7b939..556dc5f 100644
> > --- a/include/linux/pci.h
> > +++ b/include/linux/pci.h
> > @@ -402,6 +402,7 @@ struct pci_host_bridge {
> > struct device dev;
> > struct pci_bus *bus; /* root bus */
> > int domain_nr;
> > + resource_size_t io_base; /* physical address for the start of I/O area */
>
> I don't see where this is used yet.
It's used in pci_host_bridge_of_get_ranges() (earlier in this patch).
>
> As far as I know, there's nothing that prevents a host bridge from having
> several I/O port apertures (or several memory-mapped I/O port spaces).
pci_register_io_range() will give a different offset for each aperture.
I just need to make sure I don't overwrite io_base when parsing multiple IO
ranges.
Thanks for reviewing this,
Liviu
>
> > struct list_head windows; /* pci_host_bridge_windows */
> > void (*release_fn)(struct pci_host_bridge *);
> > void *release_data;
> > @@ -1809,8 +1810,15 @@ static inline void pci_set_of_node(struct pci_dev *dev) { }
> > static inline void pci_release_of_node(struct pci_dev *dev) { }
> > static inline void pci_set_bus_of_node(struct pci_bus *bus) { }
> > static inline void pci_release_bus_of_node(struct pci_bus *bus) { }
> > +
> > #endif /* CONFIG_OF */
> >
> > +/* Used by architecture code to apply any quirks to the list of
> > + * pci_host_bridge resource ranges before they are being used
> > + * by of_create_pci_host_bridge()
> > + */
> > +extern int pcibios_fixup_bridge_ranges(struct list_head *resources);
> > +
> > #ifdef CONFIG_EEH
> > static inline struct eeh_dev *pci_dev_to_eeh_dev(struct pci_dev *pdev)
> > {
> > --
> > 2.0.0
> >
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
2014-07-08 10:29 ` Liviu Dudau
@ 2014-07-08 21:33 ` Bjorn Helgaas
2014-07-08 22:27 ` Liviu Dudau
0 siblings, 1 reply; 92+ messages in thread
From: Bjorn Helgaas @ 2014-07-08 21:33 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 08, 2014 at 11:29:40AM +0100, Liviu Dudau wrote:
> On Tue, Jul 08, 2014 at 02:01:04AM +0100, Bjorn Helgaas wrote:
> > On Tue, Jul 01, 2014 at 07:43:33PM +0100, Liviu Dudau wrote:
> > > ...
> > > + for_each_of_pci_range(&parser, &range) {
> > > + /* Read next ranges element */
> > > + pr_debug("pci_space: 0x%08x pci_addr:0x%016llx cpu_addr:0x%016llx size:0x%016llx\n",
> > > + range.pci_space, range.pci_addr, range.cpu_addr, range.size);
> >
> > If you're not trying to match other printk formats, you could try to match
> > the %pR format used elsewhere, e.g., "%#010llx-%#010llx" with
> > range.cpu_addr, range.cpu_addr + range.size - 1.
>
> Yes, I'm not a big fan of the ugly output it generates, but it closely matches the ranges
> definition in the device tree file, so it is easy to validate that you are parsing the right
> entry. I am happy to change it to shorten the cpu range message.
If it already matches other device tree stuff, that's perfect. I'm not
familiar with that.
> > > +int __weak pcibios_fixup_bridge_ranges(struct list_head *resources)
> > > +{
> > > + return 0;
> > > +}
> >
> > I'd wait to add this until there's a platform that needs to implement it.
> > Splitting it out will make this patch that much smaller and easier to
> > understand.
>
> I need this because it is the default (i.e. do-nothing) implementation. Without it, the
> link phase will give errors.
I meant that you could remove the default implementation *and* the call of
it, since it currently does nothing.
> > > diff --git a/include/linux/of_pci.h b/include/linux/of_pci.h
> > > index dde3a4a..71e36d0 100644
> > > --- a/include/linux/of_pci.h
> > > +++ b/include/linux/of_pci.h
> > > @@ -15,6 +15,9 @@ struct device_node *of_pci_find_child_device(struct device_node *parent,
> > > int of_pci_get_devfn(struct device_node *np);
> > > int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
> > > int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
> > > +struct pci_host_bridge *of_create_pci_host_bridge(struct device *parent,
> > > + struct pci_ops *ops, void *host_data);
> > > +
> > > #else
> > > static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
> > > {
> > > @@ -43,6 +46,13 @@ of_pci_parse_bus_range(struct device_node *node, struct resource *res)
> > > {
> > > return -EINVAL;
> > > }
> > > +
> > > +static inline struct pci_host_bridge *
> > > +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops,
> > > + void *host_data)
> > > +{
> > > + return NULL;
> > > +}
> > > #endif
> > >
> > > #if defined(CONFIG_OF) && defined(CONFIG_PCI_MSI)
> > > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > > index 7e7b939..556dc5f 100644
> > > --- a/include/linux/pci.h
> > > +++ b/include/linux/pci.h
> > > @@ -402,6 +402,7 @@ struct pci_host_bridge {
> > > struct device dev;
> > > struct pci_bus *bus; /* root bus */
> > > int domain_nr;
> > > + resource_size_t io_base; /* physical address for the start of I/O area */
> >
> > I don't see where this is used yet.
>
> It's used in pci_host_bridge_of_get_ranges() (earlier in this patch).
of_create_pci_host_bridge() fills in bridge->io_base, but I don't see
anything that ever *reads* bridge->io_base.
> > As far as I know, there's nothing that prevents a host bridge from having
> > several I/O port apertures (or several memory-mapped I/O port spaces).
>
> pci_register_io_range() will give a different offset for each aperture.
> I just need to make sure I don't overwrite io_base when parsing multiple IO
> ranges.
Let's continue this in the other thread where I'm trying to understand the
I/O apertures. Obviously I'm still missing something if you can indeed
have multiple I/O apertures per bridge (because then only one of them can
start at I/O address 0 on PCI).
Bjorn
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
2014-07-08 21:33 ` Bjorn Helgaas
@ 2014-07-08 22:27 ` Liviu Dudau
[not found] ` <20140708222738.GA4980-2JSQmVVBSi7ZROr8t4l/smS4ubULX0JqMm0uRHvK7Nw@public.gmane.org>
0 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-08 22:27 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: linux-pci, Catalin Marinas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 08, 2014 at 10:33:05PM +0100, Bjorn Helgaas wrote:
> On Tue, Jul 08, 2014 at 11:29:40AM +0100, Liviu Dudau wrote:
> > On Tue, Jul 08, 2014 at 02:01:04AM +0100, Bjorn Helgaas wrote:
> > > On Tue, Jul 01, 2014 at 07:43:33PM +0100, Liviu Dudau wrote:
> > > > ...
> > > > + for_each_of_pci_range(&parser, &range) {
> > > > + /* Read next ranges element */
> > > > + pr_debug("pci_space: 0x%08x pci_addr:0x%016llx cpu_addr:0x%016llx size:0x%016llx\n",
> > > > + range.pci_space, range.pci_addr, range.cpu_addr, range.size);
> > >
> > > If you're not trying to match other printk formats, you could try to match
> > > the %pR format used elsewhere, e.g., "%#010llx-%#010llx" with
> > > range.cpu_addr, range.cpu_addr + range.size - 1.
> >
> > Yes, I'm not a big fan of the ugly output it generates, but it closely matches the ranges
> > definition in the device tree file, so it is easy to validate that you are parsing the right
> > entry. I am happy to change it to shorten the cpu range message.
>
> If it already matches other device tree stuff, that's perfect. I'm not
> familiar with that.
>
> > > > +int __weak pcibios_fixup_bridge_ranges(struct list_head *resources)
> > > > +{
> > > > + return 0;
> > > > +}
> > >
> > > I'd wait to add this until there's a platform that needs to implement it.
> > > Splitting it out will make this patch that much smaller and easier to
> > > understand.
> >
> > I need this because it is the default (i.e. do-nothing) implementation. Without it, the
> > link phase will give errors.
>
> I meant that you could remove the default implementation *and* the call of
> it, since it currently does nothing.
True. But it looks like the conversion of Will's pci-host-generic.c driver will make use of this.
>
> > > > diff --git a/include/linux/of_pci.h b/include/linux/of_pci.h
> > > > index dde3a4a..71e36d0 100644
> > > > --- a/include/linux/of_pci.h
> > > > +++ b/include/linux/of_pci.h
> > > > @@ -15,6 +15,9 @@ struct device_node *of_pci_find_child_device(struct device_node *parent,
> > > > int of_pci_get_devfn(struct device_node *np);
> > > > int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
> > > > int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
> > > > +struct pci_host_bridge *of_create_pci_host_bridge(struct device *parent,
> > > > + struct pci_ops *ops, void *host_data);
> > > > +
> > > > #else
> > > > static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
> > > > {
> > > > @@ -43,6 +46,13 @@ of_pci_parse_bus_range(struct device_node *node, struct resource *res)
> > > > {
> > > > return -EINVAL;
> > > > }
> > > > +
> > > > +static inline struct pci_host_bridge *
> > > > +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops,
> > > > + void *host_data)
> > > > +{
> > > > + return NULL;
> > > > +}
> > > > #endif
> > > >
> > > > #if defined(CONFIG_OF) && defined(CONFIG_PCI_MSI)
> > > > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > > > index 7e7b939..556dc5f 100644
> > > > --- a/include/linux/pci.h
> > > > +++ b/include/linux/pci.h
> > > > @@ -402,6 +402,7 @@ struct pci_host_bridge {
> > > > struct device dev;
> > > > struct pci_bus *bus; /* root bus */
> > > > int domain_nr;
> > > > + resource_size_t io_base; /* physical address for the start of I/O area */
> > >
> > > I don't see where this is used yet.
> >
> > It's used in pci_host_bridge_of_get_ranges() (earlier in this patch).
>
> of_create_pci_host_bridge() fills in bridge->io_base, but I don't see
> anything that ever *reads* bridge->io_base.
Ah, understood. It is used by the host bridge drivers to set their ATR registers to the
correct CPU address values.
>
> > > As far as I know, there's nothing that prevents a host bridge from having
> > > several I/O port apertures (or several memory-mapped I/O port spaces).
> >
> > pci_register_io_range() will give a different offset for each aperture.
> > I just need to make sure I don't overwrite io_base when parsing multiple IO
> > ranges.
>
> Let's continue this in the other thread where I'm trying to understand the
> I/O apertures. Obviously I'm still missing something if you can indeed
> have multiple I/O apertures per bridge (because then only one of them can
> start at I/O address 0 on PCI).
Sure.
Thanks,
Liviu
>
> Bjorn
>
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
2014-07-01 18:43 ` [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree Liviu Dudau
` (2 preceding siblings ...)
2014-07-08 1:01 ` Bjorn Helgaas
@ 2014-07-11 7:43 ` Jingoo Han
2014-07-11 9:08 ` Liviu Dudau
3 siblings, 1 reply; 92+ messages in thread
From: Jingoo Han @ 2014-07-11 7:43 UTC (permalink / raw)
To: 'Liviu Dudau'
Cc: 'linux-pci', 'Bjorn Helgaas',
'Catalin Marinas', 'Will Deacon',
'Benjamin Herrenschmidt', 'Arnd Bergmann',
'linaro-kernel', 'Tanmay Inamdar',
'Grant Likely', 'Sinan Kaya',
'Kukjin Kim', 'Suravee Suthikulanit',
'LKML', 'Device Tree ML', 'LAKML',
'Jingoo Han'
On Wednesday, July 02, 2014 3:44 AM, Liviu Dudau wrote:
>
> Several platforms use a rather generic version of parsing
> the device tree to find the host bridge ranges. Move the common code
> into the generic PCI code and use it to create a pci_host_bridge
> structure that can be used by arch code.
>
> Based on early attempts by Andrew Murray to unify the code.
> Used powerpc and microblaze PCI code as starting point.
>
> Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> ---
> drivers/of/of_pci.c | 135 ++++++++++++++++++++++++++++++++++++++++++++++
> drivers/pci/host-bridge.c | 18 +++++++
> include/linux/of_pci.h | 10 ++++
> include/linux/pci.h | 8 +++
> 4 files changed, 171 insertions(+)
>
> diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
> index 8481996..55d8320 100644
> --- a/drivers/of/of_pci.c
> +++ b/drivers/of/of_pci.c
[.....]
> +struct pci_host_bridge *
> +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops, void *host_data)
> +{
> + int err, domain, busno;
> + struct resource *bus_range;
> + struct pci_bus *root_bus;
> + struct pci_host_bridge *bridge;
> + resource_size_t io_base;
> + LIST_HEAD(res);
> +
> + bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
> + if (!bus_range)
> + return ERR_PTR(-ENOMEM);
> +
> + domain = of_alias_get_id(parent->of_node, "pci-domain");
> + if (domain == -ENODEV)
> + domain = atomic_inc_return(&domain_nr);
> +
> + err = of_pci_parse_bus_range(parent->of_node, bus_range);
> + if (err) {
> + dev_info(parent, "No bus range for %s, using default [0-255]\n",
> + parent->of_node->full_name);
> + bus_range->start = 0;
> + bus_range->end = 255;
> + bus_range->flags = IORESOURCE_BUS;
> + }
> + busno = bus_range->start;
> + pci_add_resource(&res, bus_range);
> +
> + /* now parse the rest of host bridge bus ranges */
> + err = pci_host_bridge_of_get_ranges(parent->of_node, &res, &io_base);
> + if (err)
> + goto err_create;
> +
> + /* then create the root bus */
> + root_bus = pci_create_root_bus_in_domain(parent, domain, busno,
> + ops, host_data, &res);
> + if (IS_ERR(root_bus)) {
> + err = PTR_ERR(root_bus);
> + goto err_create;
> + }
> +
> + bridge = to_pci_host_bridge(root_bus->bridge);
> + bridge->io_base = io_base;
Hi Liviu Dudau,
Would you fix the following warning?
drivers/of/of_pci.c: In function 'of_create_pci_host_bridge':
drivers/of/of_pci.c:218:18: warning: 'io_base' may be used uninitialized in this function [-Wmaybe-uninitialized]
bridge->io_base = io_base;
Best regards,
Jingoo Han
[.....]
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree
2014-07-11 7:43 ` Jingoo Han
@ 2014-07-11 9:08 ` Liviu Dudau
0 siblings, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-11 9:08 UTC (permalink / raw)
To: Jingoo Han
Cc: 'linux-pci', 'Bjorn Helgaas', Catalin Marinas,
Will Deacon, 'Benjamin Herrenschmidt',
'Arnd Bergmann', 'linaro-kernel',
'Tanmay Inamdar', 'Grant Likely',
'Sinan Kaya', 'Kukjin Kim',
'Suravee Suthikulanit', 'LKML',
'Device Tree ML', 'LAKML'
On Fri, Jul 11, 2014 at 08:43:21AM +0100, Jingoo Han wrote:
> On Wednesday, July 02, 2014 3:44 AM, Liviu Dudau wrote:
> >
> > Several platforms use a rather generic version of parsing
> > the device tree to find the host bridge ranges. Move the common code
> > into the generic PCI code and use it to create a pci_host_bridge
> > structure that can be used by arch code.
> >
> > Based on early attempts by Andrew Murray to unify the code.
> > Used powerpc and microblaze PCI code as starting point.
> >
> > Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
> > Tested-by: Tanmay Inamdar <tinamdar@apm.com>
> > ---
> > drivers/of/of_pci.c | 135 ++++++++++++++++++++++++++++++++++++++++++++++
> > drivers/pci/host-bridge.c | 18 +++++++
> > include/linux/of_pci.h | 10 ++++
> > include/linux/pci.h | 8 +++
> > 4 files changed, 171 insertions(+)
> >
> > diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
> > index 8481996..55d8320 100644
> > --- a/drivers/of/of_pci.c
> > +++ b/drivers/of/of_pci.c
>
> [.....]
>
> > +struct pci_host_bridge *
> > +of_create_pci_host_bridge(struct device *parent, struct pci_ops *ops, void *host_data)
> > +{
> > + int err, domain, busno;
> > + struct resource *bus_range;
> > + struct pci_bus *root_bus;
> > + struct pci_host_bridge *bridge;
> > + resource_size_t io_base;
> > + LIST_HEAD(res);
> > +
> > + bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
> > + if (!bus_range)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + domain = of_alias_get_id(parent->of_node, "pci-domain");
> > + if (domain == -ENODEV)
> > + domain = atomic_inc_return(&domain_nr);
> > +
> > + err = of_pci_parse_bus_range(parent->of_node, bus_range);
> > + if (err) {
> > + dev_info(parent, "No bus range for %s, using default [0-255]\n",
> > + parent->of_node->full_name);
> > + bus_range->start = 0;
> > + bus_range->end = 255;
> > + bus_range->flags = IORESOURCE_BUS;
> > + }
> > + busno = bus_range->start;
> > + pci_add_resource(&res, bus_range);
> > +
> > + /* now parse the rest of host bridge bus ranges */
> > + err = pci_host_bridge_of_get_ranges(parent->of_node, &res, &io_base);
> > + if (err)
> > + goto err_create;
> > +
> > + /* then create the root bus */
> > + root_bus = pci_create_root_bus_in_domain(parent, domain, busno,
> > + ops, host_data, &res);
> > + if (IS_ERR(root_bus)) {
> > + err = PTR_ERR(root_bus);
> > + goto err_create;
> > + }
> > +
> > + bridge = to_pci_host_bridge(root_bus->bridge);
> > + bridge->io_base = io_base;
>
> Hi Liviu Dudau,
>
> Would you fix the following warning?
>
> drivers/of/of_pci.c: In function 'of_create_pci_host_bridge':
> drivers/of/of_pci.c:218:18: warning: 'io_base' may be used uninitialized in this function [-Wmaybe-uninitialized]
> bridge->io_base = io_base;
Yes, I have a simple fix which is to set the initial value to zero when declaring the variable.
Best regards,
Liviu
>
> Best regards,
> Jingoo Han
>
> [.....]
>
>
^ permalink raw reply [flat|nested] 92+ messages in thread
* [PATCH v8 9/9] pci: Remap I/O bus resources into CPU space with pci_remap_iospace()
2014-07-01 18:43 [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Liviu Dudau
` (7 preceding siblings ...)
2014-07-01 18:43 ` [PATCH v8 8/9] pci: Add support for creating a generic host_bridge from device tree Liviu Dudau
@ 2014-07-01 18:43 ` Liviu Dudau
2014-07-14 16:54 ` Catalin Marinas
[not found] ` <1404240214-9804-1-git-send-email-Liviu.Dudau-5wv7dgnIgG8@public.gmane.org>
9 siblings, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-01 18:43 UTC (permalink / raw)
To: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit
Cc: LKML, Device Tree ML, LAKML
Introduce a default implementation for remapping PCI bus I/O resources
onto the CPU address space. Architectures with special needs may
provide their own version, but most should be able to use this one.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
---
drivers/pci/pci.c | 33 +++++++++++++++++++++++++++++++++
include/linux/pci.h | 3 +++
2 files changed, 36 insertions(+)
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 8e65dc3..a90df97 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -2708,6 +2708,39 @@ int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name)
}
EXPORT_SYMBOL(pci_request_regions_exclusive);
+/**
+ * pci_remap_iospace - Remap the memory mapped I/O space
+ * @res: Resource describing the I/O space
+ * @phys_addr: physical address where the range will be mapped.
+ *
+ * Remap the memory mapped I/O space described by the @res
+ * into the CPU physical address space. Only architectures
+ * that have memory mapped IO defined (and hence PCI_IOBASE)
+ * should call this function.
+ */
+int __weak pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
+{
+ int err = -ENODEV;
+
+#ifdef PCI_IOBASE
+ if (!(res->flags & IORESOURCE_IO))
+ return -EINVAL;
+
+ if (res->end > IO_SPACE_LIMIT)
+ return -EINVAL;
+
+ err = ioremap_page_range(res->start + (unsigned long)PCI_IOBASE,
+ res->end + 1 + (unsigned long)PCI_IOBASE,
+ phys_addr, __pgprot(PROT_DEVICE_nGnRE));
+#else
+ /* this architecture does not have memory mapped I/O space,
+ so this function should never be called */
+ WARN_ON(1);
+#endif
+
+ return err;
+}
+
static void __pci_set_master(struct pci_dev *dev, bool enable)
{
u16 old_cmd, cmd;
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 556dc5f..65fb1fc 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1100,6 +1100,9 @@ int __must_check pci_bus_alloc_resource(struct pci_bus *bus,
resource_size_t),
void *alignf_data);
+
+int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);
+
static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
{
struct pci_bus_region region;
--
2.0.0
^ permalink raw reply related [flat|nested] 92+ messages in thread
* Re: [PATCH v8 9/9] pci: Remap I/O bus resources into CPU space with pci_remap_iospace()
2014-07-01 18:43 ` [PATCH v8 9/9] pci: Remap I/O bus resources into CPU space with pci_remap_iospace() Liviu Dudau
@ 2014-07-14 16:54 ` Catalin Marinas
2014-07-14 16:56 ` Liviu Dudau
2014-07-14 18:15 ` Arnd Bergmann
0 siblings, 2 replies; 92+ messages in thread
From: Catalin Marinas @ 2014-07-14 16:54 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Bjorn Helgaas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Tue, Jul 01, 2014 at 07:43:34PM +0100, Liviu Dudau wrote:
> Introduce a default implementation for remapping PCI bus I/O resources
> onto the CPU address space. Architectures with special needs may
> provide their own version, but most should be able to use this one.
[...]
> +/**
> + * pci_remap_iospace - Remap the memory mapped I/O space
> + * @res: Resource describing the I/O space
> + * @phys_addr: physical address where the range will be mapped.
> + *
> + * Remap the memory mapped I/O space described by the @res
> + * into the CPU physical address space. Only architectures
> + * that have memory mapped IO defined (and hence PCI_IOBASE)
> + * should call this function.
> + */
> +int __weak pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
> +{
> + int err = -ENODEV;
> +
> +#ifdef PCI_IOBASE
> + if (!(res->flags & IORESOURCE_IO))
> + return -EINVAL;
> +
> + if (res->end > IO_SPACE_LIMIT)
> + return -EINVAL;
> +
> + err = ioremap_page_range(res->start + (unsigned long)PCI_IOBASE,
> + res->end + 1 + (unsigned long)PCI_IOBASE,
> + phys_addr, __pgprot(PROT_DEVICE_nGnRE));
Except that PROT_DEVICE_nGnRE is arm64 only. I think that's a function
that should remain arch specific.
--
Catalin
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 9/9] pci: Remap I/O bus resources into CPU space with pci_remap_iospace()
2014-07-14 16:54 ` Catalin Marinas
@ 2014-07-14 16:56 ` Liviu Dudau
2014-07-14 18:15 ` Arnd Bergmann
1 sibling, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-14 16:56 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-pci, Bjorn Helgaas, Will Deacon, Benjamin Herrenschmidt,
Arnd Bergmann, linaro-kernel, Tanmay Inamdar, Grant Likely,
Sinan Kaya, Jingoo Han, Kukjin Kim, Suravee Suthikulanit, LKML,
Device Tree ML, LAKML
On Mon, Jul 14, 2014 at 05:54:43PM +0100, Catalin Marinas wrote:
> On Tue, Jul 01, 2014 at 07:43:34PM +0100, Liviu Dudau wrote:
> > Introduce a default implementation for remapping PCI bus I/O resources
> > onto the CPU address space. Architectures with special needs may
> > provide their own version, but most should be able to use this one.
> [...]
> > +/**
> > + * pci_remap_iospace - Remap the memory mapped I/O space
> > + * @res: Resource describing the I/O space
> > + * @phys_addr: physical address where the range will be mapped.
> > + *
> > + * Remap the memory mapped I/O space described by the @res
> > + * into the CPU physical address space. Only architectures
> > + * that have memory mapped IO defined (and hence PCI_IOBASE)
> > + * should call this function.
> > + */
> > +int __weak pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
> > +{
> > + int err = -ENODEV;
> > +
> > +#ifdef PCI_IOBASE
> > + if (!(res->flags & IORESOURCE_IO))
> > + return -EINVAL;
> > +
> > + if (res->end > IO_SPACE_LIMIT)
> > + return -EINVAL;
> > +
> > + err = ioremap_page_range(res->start + (unsigned long)PCI_IOBASE,
> > + res->end + 1 + (unsigned long)PCI_IOBASE,
> > + phys_addr, __pgprot(PROT_DEVICE_nGnRE));
>
> Except that PROT_DEVICE_nGnRE is arm64 only. I think that's a function
> that should remain arch specific.
Yes, I was following Arnd's suggestion and lost track of the fact that
PROT_DEVICE_nGnRE is arm64 specific.
Best regards,
Liviu
>
> --
> Catalin
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 9/9] pci: Remap I/O bus resources into CPU space with pci_remap_iospace()
2014-07-14 16:54 ` Catalin Marinas
2014-07-14 16:56 ` Liviu Dudau
@ 2014-07-14 18:15 ` Arnd Bergmann
2014-07-15 0:14 ` Liviu Dudau
2014-07-15 9:09 ` Catalin Marinas
1 sibling, 2 replies; 92+ messages in thread
From: Arnd Bergmann @ 2014-07-14 18:15 UTC (permalink / raw)
To: Catalin Marinas
Cc: Sinan Kaya, linaro-kernel, Device Tree ML, linux-pci, Jingoo Han,
Liviu Dudau, LKML, Will Deacon, Grant Likely, Kukjin Kim,
Tanmay Inamdar, Suravee Suthikulanit, Benjamin Herrenschmidt,
Bjorn Helgaas, LAKML
On Monday 14 July 2014 17:54:43 Catalin Marinas wrote:
> On Tue, Jul 01, 2014 at 07:43:34PM +0100, Liviu Dudau wrote:
> > Introduce a default implementation for remapping PCI bus I/O resources
> > onto the CPU address space. Architectures with special needs may
> > provide their own version, but most should be able to use this one.
> [...]
> > +/**
> > + * pci_remap_iospace - Remap the memory mapped I/O space
> > + * @res: Resource describing the I/O space
> > + * @phys_addr: physical address where the range will be mapped.
> > + *
> > + * Remap the memory mapped I/O space described by the @res
> > + * into the CPU physical address space. Only architectures
> > + * that have memory mapped IO defined (and hence PCI_IOBASE)
> > + * should call this function.
> > + */
> > +int __weak pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
> > +{
> > + int err = -ENODEV;
> > +
> > +#ifdef PCI_IOBASE
> > + if (!(res->flags & IORESOURCE_IO))
> > + return -EINVAL;
> > +
> > + if (res->end > IO_SPACE_LIMIT)
> > + return -EINVAL;
> > +
> > + err = ioremap_page_range(res->start + (unsigned long)PCI_IOBASE,
> > + res->end + 1 + (unsigned long)PCI_IOBASE,
> > + phys_addr, __pgprot(PROT_DEVICE_nGnRE));
>
> Except that PROT_DEVICE_nGnRE is arm64 only. I think that's a function
> that should remain arch specific.
>
How about #defining a macro with the correct pgprot value in asm/pci.h
or asm/pgtable.h?
We can provide a default for that in another architecture independent
location.
Arnd
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 9/9] pci: Remap I/O bus resources into CPU space with pci_remap_iospace()
2014-07-14 18:15 ` Arnd Bergmann
@ 2014-07-15 0:14 ` Liviu Dudau
2014-07-15 9:09 ` Catalin Marinas
1 sibling, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-15 0:14 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Catalin Marinas, Liviu Dudau, linux-pci, Bjorn Helgaas,
Will Deacon, Benjamin Herrenschmidt, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Mon, Jul 14, 2014 at 08:15:48PM +0200, Arnd Bergmann wrote:
> On Monday 14 July 2014 17:54:43 Catalin Marinas wrote:
> > On Tue, Jul 01, 2014 at 07:43:34PM +0100, Liviu Dudau wrote:
> > > Introduce a default implementation for remapping PCI bus I/O resources
> > > onto the CPU address space. Architectures with special needs may
> > > provide their own version, but most should be able to use this one.
> > [...]
> > > +/**
> > > + * pci_remap_iospace - Remap the memory mapped I/O space
> > > + * @res: Resource describing the I/O space
> > > + * @phys_addr: physical address where the range will be mapped.
> > > + *
> > > + * Remap the memory mapped I/O space described by the @res
> > > + * into the CPU physical address space. Only architectures
> > > + * that have memory mapped IO defined (and hence PCI_IOBASE)
> > > + * should call this function.
> > > + */
> > > +int __weak pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
> > > +{
> > > + int err = -ENODEV;
> > > +
> > > +#ifdef PCI_IOBASE
> > > + if (!(res->flags & IORESOURCE_IO))
> > > + return -EINVAL;
> > > +
> > > + if (res->end > IO_SPACE_LIMIT)
> > > + return -EINVAL;
> > > +
> > > + err = ioremap_page_range(res->start + (unsigned long)PCI_IOBASE,
> > > + res->end + 1 + (unsigned long)PCI_IOBASE,
> > > + phys_addr, __pgprot(PROT_DEVICE_nGnRE));
> >
> > Except that PROT_DEVICE_nGnRE is arm64 only. I think that's a function
> > that should remain arch specific.
> >
>
> How about #defining a macro with the correct pgprot value in asm/pci.h
> or asm/pgtable.h?
> We can provide a default for that in another architecture independent
> location.
I was discussing the same thing with Catalin today. It is the most reasonable
approach, as the host bridge driver that calls this function should not need to
be aware of the architecture-specific flags used here.
Best regards,
Liviu
>
> Arnd
> --
> To unsubscribe from this list: send the line "unsubscribe linux-pci" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
--
-------------------
.oooO
( )
\ ( Oooo.
\_) ( )
) /
(_/
One small step
for me ...
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 9/9] pci: Remap I/O bus resources into CPU space with pci_remap_iospace()
2014-07-14 18:15 ` Arnd Bergmann
2014-07-15 0:14 ` Liviu Dudau
@ 2014-07-15 9:09 ` Catalin Marinas
1 sibling, 0 replies; 92+ messages in thread
From: Catalin Marinas @ 2014-07-15 9:09 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Liviu Dudau, linux-pci, Bjorn Helgaas, Will Deacon,
Benjamin Herrenschmidt, linaro-kernel, Tanmay Inamdar,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, LKML, Device Tree ML, LAKML
On Mon, Jul 14, 2014 at 07:15:48PM +0100, Arnd Bergmann wrote:
> On Monday 14 July 2014 17:54:43 Catalin Marinas wrote:
> > On Tue, Jul 01, 2014 at 07:43:34PM +0100, Liviu Dudau wrote:
> > > Introduce a default implementation for remapping PCI bus I/O resources
> > > onto the CPU address space. Architectures with special needs may
> > > provide their own version, but most should be able to use this one.
> > [...]
> > > +/**
> > > + * pci_remap_iospace - Remap the memory mapped I/O space
> > > + * @res: Resource describing the I/O space
> > > + * @phys_addr: physical address where the range will be mapped.
> > > + *
> > > + * Remap the memory mapped I/O space described by the @res
> > > + * into the CPU physical address space. Only architectures
> > > + * that have memory mapped IO defined (and hence PCI_IOBASE)
> > > + * should call this function.
> > > + */
> > > +int __weak pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
> > > +{
> > > + int err = -ENODEV;
> > > +
> > > +#ifdef PCI_IOBASE
> > > + if (!(res->flags & IORESOURCE_IO))
> > > + return -EINVAL;
> > > +
> > > + if (res->end > IO_SPACE_LIMIT)
> > > + return -EINVAL;
> > > +
> > > + err = ioremap_page_range(res->start + (unsigned long)PCI_IOBASE,
> > > + res->end + 1 + (unsigned long)PCI_IOBASE,
> > > + phys_addr, __pgprot(PROT_DEVICE_nGnRE));
> >
> > Except that PROT_DEVICE_nGnRE is arm64 only. I think that's a function
> > that should remain arch specific.
> >
>
> How about #defining a macro with the correct pgprot value in asm/pci.h
> or asm/pgtable.h?
> We can provide a default for that in another architecture independent
> location.
That should work. We already have pgprot_noncached/writecombine in
asm-generic/pgtable.h which all architectures seem to include. The
default can be pgprot_noncached and we would invoke it as
pgprot_device(PAGE_KERNEL).
--
Catalin
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 92+ messages in thread
[parent not found: <1404240214-9804-1-git-send-email-Liviu.Dudau@arm.com>]
* Re: [PATCH v8 0/9] Support for creating generic PCI host bridges from DT
[not found] ` <1404240214-9804-1-git-send-email-Liviu.Dudau@arm.com>
@ 2014-07-06 15:23 ` Rob Herring
2014-07-07 11:12 ` Liviu Dudau
2014-07-08 17:18 ` Liviu Dudau
0 siblings, 2 replies; 92+ messages in thread
From: Rob Herring @ 2014-07-06 15:23 UTC (permalink / raw)
To: Liviu Dudau
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, Device Tree ML, LKML, LAKML
On Tue, Jul 1, 2014 at 1:43 PM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> This is my resurrected attempt at adding support for generic PCI host
> bridge controllers that make use of device tree information to
> configure themselves. I've tagged it as v8 although the patches
> have now been reshuffled in order to ease adoption so referring to
> the older versions might be a bit of a hoop jumping exercise.
>
> Changes from v7:
> - Reordered the patches so that fixes and non-controversial patches
> from v7 can be accepted more easily. If agreed I can split the
> series again into patches that can be upstreamed easily and ones
> that still need discussion.
> - Moved the of_create_host_bridge() function to drivers/of/of_pci.c
> to better reflect its target use.
> - Added the function to remap the bus I/O resources that used to be
> provided in my arm64 patch series and (re)named it pci_remap_iospace()
> - Removed error code checking from parsing and mapping of IRQ from DT
> in recognition that some PCI devices will not have legacy IRQ mappings.
>
> v7 thread here with all the historic information: https://lkml.org/lkml/2014/3/14/279
Can you publish a branch for this series please.
Rob
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 0/9] Support for creating generic PCI host bridges from DT
2014-07-06 15:23 ` [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Rob Herring
@ 2014-07-07 11:12 ` Liviu Dudau
2014-07-08 17:18 ` Liviu Dudau
1 sibling, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-07 11:12 UTC (permalink / raw)
To: Rob Herring
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, Device Tree ML, LKML, LAKML
On Sun, Jul 06, 2014 at 04:23:43PM +0100, Rob Herring wrote:
> On Tue, Jul 1, 2014 at 1:43 PM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> > This is my resurrected attempt at adding support for generic PCI host
> > bridge controllers that make use of device tree information to
> > configure themselves. I've tagged it as v8 although the patches
> > have now been reshuffled in order to ease adoption so referring to
> > the older versions might be a bit of a hoop jumping exercise.
> >
> > Changes from v7:
> > - Reordered the patches so that fixes and non-controversial patches
> > from v7 can be accepted more easily. If agreed I can split the
> > series again into patches that can be upstreamed easily and ones
> > that still need discussion.
> > - Moved the of_create_host_bridge() function to drivers/of/of_pci.c
> > to better reflect its target use.
> > - Added the function to remap the bus I/O resources that used to be
> > provided in my arm64 patch series and (re)named it pci_remap_iospace()
> > - Removed error code checking from parsing and mapping of IRQ from DT
> > in recognition that some PCI devices will not have legacy IRQ mappings.
> >
> > v7 thread here with all the historic information: https://lkml.org/lkml/2014/3/14/279
>
> Can you publish a branch for this series please.
Do you want a branch that has the series as published (+ the one obvious missed
header include), or a branch that has the comments rolled in but is not
published yet, as I'm waiting on answers from Bjorn regarding domain
number handling?
Best regards,
Liviu
>
> Rob
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 0/9] Support for creating generic PCI host bridges from DT
2014-07-06 15:23 ` [PATCH v8 0/9] Support for creating generic PCI host bridges from DT Rob Herring
2014-07-07 11:12 ` Liviu Dudau
@ 2014-07-08 17:18 ` Liviu Dudau
2014-07-11 0:44 ` Tanmay Inamdar
1 sibling, 1 reply; 92+ messages in thread
From: Liviu Dudau @ 2014-07-08 17:18 UTC (permalink / raw)
To: Rob Herring
Cc: linux-pci, Bjorn Helgaas, Catalin Marinas, Will Deacon,
Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Tanmay Inamdar, Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, Device Tree ML, LKML, LAKML
On Sun, Jul 06, 2014 at 04:23:43PM +0100, Rob Herring wrote:
> On Tue, Jul 1, 2014 at 1:43 PM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> > This is my resurrected attempt at adding support for generic PCI host
> > bridge controllers that make use of device tree information to
> > configure themselves. I've tagged it as v8 although the patches
> > have now been reshuffled in order to ease adoption so referring to
> > the older versions might be a bit of a hoop jumping exercise.
> >
> > Changes from v7:
> > - Reordered the patches so that fixes and non-controversial patches
> > from v7 can be accepted more easily. If agreed I can split the
> > series again into patches that can be upstreamed easily and ones
> > that still need discussion.
> > - Moved the of_create_host_bridge() function to drivers/of/of_pci.c
> > to better reflect its target use.
> > - Added the function to remap the bus I/O resources that used to be
> > provided in my arm64 patch series and (re)named it pci_remap_iospace()
> > - Removed error code checking from parsing and mapping of IRQ from DT
> > in recognition that some PCI devices will not have legacy IRQ mappings.
> >
> > v7 thread here with all the historic information: https://lkml.org/lkml/2014/3/14/279
>
> Can you publish a branch for this series please.
>
> Rob
>
Hi Rob,
I have pushed a branch that matches my v8 patchset +1 obvious missing header include
here: http://www.linux-arm.org/git?p=linux-ld.git;a=shortlog;h=refs/heads/for-upstream/pci_v8
Best regards,
Liviu
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 0/9] Support for creating generic PCI host bridges from DT
2014-07-08 17:18 ` Liviu Dudau
@ 2014-07-11 0:44 ` Tanmay Inamdar
2014-07-11 7:33 ` Jingoo Han
0 siblings, 1 reply; 92+ messages in thread
From: Tanmay Inamdar @ 2014-07-11 0:44 UTC (permalink / raw)
To: Liviu Dudau
Cc: Rob Herring, linux-pci, Bjorn Helgaas, Catalin Marinas,
Will Deacon, Benjamin Herrenschmidt, Arnd Bergmann, linaro-kernel,
Grant Likely, Sinan Kaya, Jingoo Han, Kukjin Kim,
Suravee Suthikulanit, Device Tree ML, LKML, LAKML
Hi,
On Tue, Jul 8, 2014 at 10:18 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> On Sun, Jul 06, 2014 at 04:23:43PM +0100, Rob Herring wrote:
>> On Tue, Jul 1, 2014 at 1:43 PM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
>> > This is my resurrected attempt at adding support for generic PCI host
>> > bridge controllers that make use of device tree information to
>> > configure themselves. I've tagged it as v8 although the patches
>> > have now been reshuffled in order to ease adoption so referring to
>> > the older versions might be a bit of a hoop jumping exercise.
>> >
>> > Changes from v7:
>> > - Reordered the patches so that fixes and non-controversial patches
>> > from v7 can be accepted more easily. If agreed I can split the
>> > series again into patches that can be upstreamed easily and ones
>> > that still need discussion.
>> > - Moved the of_create_host_bridge() function to drivers/of/of_pci.c
>> > to better reflect its target use.
>> > - Added the function to remap the bus I/O resources that used to be
>> > provided in my arm64 patch series and (re)named it pci_remap_iospace()
>> > - Removed error code checking from parsing and mapping of IRQ from DT
>> > in recognition that some PCI devices will not have legacy IRQ mappings.
>> >
>> > v7 thread here with all the historic information: https://lkml.org/lkml/2014/3/14/279
>>
>> Can you publish a branch for this series please.
>>
>> Rob
>>
>
> Hi Rob,
>
> I have pushed a branch that matches my v8 patchset +1 obvious missing header include
> here: http://www.linux-arm.org/git?p=linux-ld.git;a=shortlog;h=refs/heads/for-upstream/pci_v8
>
I was still getting the following compilation errors after applying the arm64
PCI headers. Please let me know if I am missing something.
linux-git/drivers/of/of_pci.c: In function ‘pci_host_bridge_of_get_ranges’:
linux-git/drivers/of/of_pci.c:114:22: error: storage size of ‘range’ isn’t known
struct of_pci_range range;
^
linux-git/drivers/of/of_pci.c:115:29: error: storage size of ‘parser’
isn’t known
struct of_pci_range_parser parser;
^
linux-git/drivers/of/of_pci.c:121:2: error: implicit declaration of
function ‘of_pci_range_parser_init’
[-Werror=implicit-function-declaration]
err = of_pci_range_parser_init(&parser, dev);
Below patch fixes the errors.
diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
index 55d8320..da88dac 100644
--- a/drivers/of/of_pci.c
+++ b/drivers/of/of_pci.c
@@ -2,6 +2,7 @@
#include <linux/export.h>
#include <linux/of.h>
#include <linux/of_pci.h>
+#include <linux/of_address.h>
static inline int __of_pci_pci_compare(struct device_node *node,
unsigned int data)
> Best regards,
> Liviu
>
>
> --
> ====================
> | I would like to |
> | fix the world, |
> | but they're not |
> | giving me the |
> \ source code! /
> ---------------
> ¯\_(ツ)_/¯
>
^ permalink raw reply related [flat|nested] 92+ messages in thread
* Re: [PATCH v8 0/9] Support for creating generic PCI host bridges from DT
2014-07-11 0:44 ` Tanmay Inamdar
@ 2014-07-11 7:33 ` Jingoo Han
2014-07-11 9:11 ` Liviu Dudau
0 siblings, 1 reply; 92+ messages in thread
From: Jingoo Han @ 2014-07-11 7:33 UTC (permalink / raw)
To: 'Tanmay Inamdar', 'Liviu Dudau'
Cc: 'Rob Herring', 'linux-pci',
'Bjorn Helgaas', 'Catalin Marinas',
'Will Deacon', 'Benjamin Herrenschmidt',
'Arnd Bergmann', 'linaro-kernel',
'Grant Likely', 'Sinan Kaya',
'Kukjin Kim', 'Suravee Suthikulanit',
'Device Tree ML', 'LKML', 'LAKML',
'Jingoo Han'
On Friday, July 11, 2014 9:44 AM, Tanmay Inamdar wrote:
> On Tue, Jul 8, 2014 at 10:18 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> > On Sun, Jul 06, 2014 at 04:23:43PM +0100, Rob Herring wrote:
> >> On Tue, Jul 1, 2014 at 1:43 PM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> >> > This is my resurrected attempt at adding support for generic PCI host
> >> > bridge controllers that make use of device tree information to
> >> > configure themselves. I've tagged it as v8 although the patches
> >> > have now been reshuffled in order to ease adoption so referring to
> >> > the older versions might be a bit of a hoop jumping exercise.
> >> >
> >> > Changes from v7:
> >> > - Reordered the patches so that fixes and non-controversial patches
> >> > from v7 can be accepted more easily. If agreed I can split the
> >> > series again into patches that can be upstreamed easily and ones
> >> > that still need discussion.
> >> > - Moved the of_create_host_bridge() function to drivers/of/of_pci.c
> >> > to better reflect its target use.
> >> > - Added the function to remap the bus I/O resources that used to be
> >> > provided in my arm64 patch series and (re)named it pci_remap_iospace()
> >> > - Removed error code checking from parsing and mapping of IRQ from DT
> >> > in recognition that some PCI devices will not have legacy IRQ mappings.
> >> >
> >> > v7 thread here with all the historic information: https://lkml.org/lkml/2014/3/14/279
> >>
> >> Can you publish a branch for this series please.
> >>
> >> Rob
> >>
> >
> > Hi Rob,
> >
> > I have pushed a branch that matches my v8 patchset +1 obvious missing header include
> > here: http://www.linux-arm.org/git?p=linux-ld.git;a=shortlog;h=refs/heads/for-upstream/pci_v8
> >
>
> I was still getting the following compilation errors after applying the arm64
> PCI headers. Please let me know if I am missing something.
>
> linux-git/drivers/of/of_pci.c: In function ‘pci_host_bridge_of_get_ranges’:
> linux-git/drivers/of/of_pci.c:114:22: error: storage size of ‘range’ isn’t known
> struct of_pci_range range;
> ^
> linux-git/drivers/of/of_pci.c:115:29: error: storage size of ‘parser’
> isn’t known
> struct of_pci_range_parser parser;
> ^
> linux-git/drivers/of/of_pci.c:121:2: error: implicit declaration of
> function ‘of_pci_range_parser_init’
> [-Werror=implicit-function-declaration]
> err = of_pci_range_parser_init(&parser, dev);
>
>
> Below patch fixes the errors.
>
> diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
> index 55d8320..da88dac 100644
> --- a/drivers/of/of_pci.c
> +++ b/drivers/of/of_pci.c
> @@ -2,6 +2,7 @@
> #include <linux/export.h>
> #include <linux/of.h>
> #include <linux/of_pci.h>
> +#include <linux/of_address.h>
Yes, right. I also found the build errors mentioned above.
"of_address.h" should be included in order to fix the build errors.
However, for readability, the following would be better:
#include <linux/of.h>
+#include <linux/of_address.h>
#include <linux/of_pci.h>
Best regards,
Jingoo Han
>
> static inline int __of_pci_pci_compare(struct device_node *node,
> unsigned int data)
>
>
> > Best regards,
> > Liviu
> >
> >
> > --
> > ====================
> > | I would like to |
> > | fix the world, |
> > | but they're not |
> > | giving me the |
> > \ source code! /
> > ---------------
> > ¯\_(ツ)_/¯
> >
^ permalink raw reply [flat|nested] 92+ messages in thread
* Re: [PATCH v8 0/9] Support for creating generic PCI host bridges from DT
2014-07-11 7:33 ` Jingoo Han
@ 2014-07-11 9:11 ` Liviu Dudau
0 siblings, 0 replies; 92+ messages in thread
From: Liviu Dudau @ 2014-07-11 9:11 UTC (permalink / raw)
To: Jingoo Han
Cc: 'Tanmay Inamdar', 'Rob Herring',
'linux-pci', 'Bjorn Helgaas', Catalin Marinas,
Will Deacon, 'Benjamin Herrenschmidt',
'Arnd Bergmann', 'linaro-kernel',
'Grant Likely', 'Sinan Kaya',
'Kukjin Kim', 'Suravee Suthikulanit',
'Device Tree ML', 'LKML', 'LAKML'
On Fri, Jul 11, 2014 at 08:33:23AM +0100, Jingoo Han wrote:
> On Friday, July 11, 2014 9:44 AM, Tanmay Inamdar wrote:
> > On Tue, Jul 8, 2014 at 10:18 AM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> > > On Sun, Jul 06, 2014 at 04:23:43PM +0100, Rob Herring wrote:
> > >> On Tue, Jul 1, 2014 at 1:43 PM, Liviu Dudau <Liviu.Dudau@arm.com> wrote:
> > >> > This is my resurrected attempt at adding support for generic PCI host
> > >> > bridge controllers that make use of device tree information to
> > >> > configure themselves. I've tagged it as v8 although the patches
> > >> > have now been reshuffled in order to ease adoption so referring to
> > >> > the older versions might be a bit of a hoop jumping exercise.
> > >> >
> > >> > Changes from v7:
> > >> > - Reordered the patches so that fixes and non-controversial patches
> > >> > from v7 can be accepted more easily. If agreed I can split the
> > >> > series again into patches that can be upstreamed easily and ones
> > >> > that still need discussion.
> > >> > - Moved the of_create_host_bridge() function to drivers/of/of_pci.c
> > >> > to better reflect its target use.
> > >> > - Added the function to remap the bus I/O resources that used to be
> > >> > provided in my arm64 patch series and (re)named it pci_remap_iospace()
> > >> > - Removed error code checking from parsing and mapping of IRQ from DT
> > >> > in recognition that some PCI devices will not have legacy IRQ mappings.
> > >> >
> > >> > v7 thread here with all the historic information: https://lkml.org/lkml/2014/3/14/279
> > >>
> > >> Can you publish a branch for this series please.
> > >>
> > >> Rob
> > >>
> > >
> > > Hi Rob,
> > >
> > > I have pushed a branch that matches my v8 patchset +1 obvious missing header include
> > > here: http://www.linux-arm.org/git?p=linux-ld.git;a=shortlog;h=refs/heads/for-upstream/pci_v8
> > >
> >
> > I was still getting the following compilation errors after applying the arm64
> > PCI headers. Please let me know if I am missing something.
> >
> > linux-git/drivers/of/of_pci.c: In function ‘pci_host_bridge_of_get_ranges’:
> > linux-git/drivers/of/of_pci.c:114:22: error: storage size of ‘range’ isn’t known
> > struct of_pci_range range;
> > ^
> > linux-git/drivers/of/of_pci.c:115:29: error: storage size of ‘parser’
> > isn’t known
> > struct of_pci_range_parser parser;
> > ^
> > linux-git/drivers/of/of_pci.c:121:2: error: implicit declaration of
> > function ‘of_pci_range_parser_init’
> > [-Werror=implicit-function-declaration]
> > err = of_pci_range_parser_init(&parser, dev);
> >
> >
> > Below patch fixes the errors.
> >
> > diff --git a/drivers/of/of_pci.c b/drivers/of/of_pci.c
> > index 55d8320..da88dac 100644
> > --- a/drivers/of/of_pci.c
> > +++ b/drivers/of/of_pci.c
> > @@ -2,6 +2,7 @@
> > #include <linux/export.h>
> > #include <linux/of.h>
> > #include <linux/of_pci.h>
> > +#include <linux/of_address.h>
>
> Yes, right. I also found the build errors as above mentioned.
> "of_address.h" should be included, in order to fix the build errors.
> However, for readability, the following would be better.
>
> #include <linux/of.h>
> +#include <linux/of_address.h>
> #include <linux/of_pci.h>
>
> Best regards,
> Jingoo Han
Thanks, guys! Like I said, it was not my day when I submitted the v8 series.
I cherry-picked the patches into a clean branch but forgot to be thorough
and was compiling in the old working directory.
Sorry,
Liviu
>
> >
> > static inline int __of_pci_pci_compare(struct device_node *node,
> > unsigned int data)
> >
> >
> > > Best regards,
> > > Liviu
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯
^ permalink raw reply [flat|nested] 92+ messages in thread