devicetree.vger.kernel.org archive mirror
* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
       [not found]   ` <CABGGisxq+hf91R18pnZ=VZ9f99GssWWPhpPCjNAROJmKg5-udA@mail.gmail.com>
@ 2018-08-07 14:54     ` Georgi Djakov
  2018-08-20 15:32       ` [PATCH " Maxime Ripard
  0 siblings, 1 reply; 9+ messages in thread
From: Georgi Djakov @ 2018-08-07 14:54 UTC (permalink / raw)
  To: Rob Herring, maxime.ripard
  Cc: linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki, Rob Herring,
	Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm,
	devicetree

Hi Rob,

On 08/03/2018 12:02 AM, Rob Herring wrote:
> On Tue, Jul 31, 2018 at 10:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> This binding is intended to represent the interconnect hardware present
>> in some of the modern SoCs. Currently it consists only of a binding for
>> the interconnect hardware devices (provider).
> 
> If you want the bindings reviewed, then you need to send them to the
> DT list. CC'ing me is pointless, I get CC'ed too many things to read.

Oops, ok!

> The consumer and producer binding should be a single patch. One is not
> useful without the other.

The reason for splitting them is that they can be reviewed separately.
Also, we could rely on platform data instead of using DT and the consumer
binding. However, I will do as you suggest.

> There is also a patch series from Maxime Ripard that's addressing the
> same general area. See "dt-bindings: Add a dma-parent property". We
> don't need multiple ways of describing the device-to-memory
> paths, so you all had better work out a common solution.

Looks like this fits exactly into the interconnect API concept. I see
the MBUS as an interconnect provider and the display/camera as consumers
that report their bandwidth needs. I am also planning to add support for
priority.
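
For example, something along these lines might work - a rough sketch,
where the node names, compatible string, addresses and port IDs are
made up just to illustrate the idea:

    mbus: interconnect@1c01000 {
        compatible = "allwinner,sun5i-a13-mbus";
        reg = <0x01c01000 0x1000>;
        #interconnect-cells = <1>;
    };

    display@1e00000 {
        /* ... */
        /* request the path from the display master port to DRAM */
        interconnects = <&mbus 0 &mbus 1>;
    };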

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-07 14:54     ` Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings Georgi Djakov
@ 2018-08-20 15:32       ` Maxime Ripard
  2018-08-24 14:51         ` Georgi Djakov
  0 siblings, 1 reply; 9+ messages in thread
From: Maxime Ripard @ 2018-08-20 15:32 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rob Herring, linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki,
	Rob Herring, Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm

Hi Georgi,

On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> > There is also a patch series from Maxime Ripard that's addressing the
> > same general area. See "dt-bindings: Add a dma-parent property". We
> > don't need multiple ways of describing the device-to-memory
> > paths, so you all had better work out a common solution.
> 
> Looks like this fits exactly into the interconnect API concept. I see
> the MBUS as an interconnect provider and the display/camera as consumers
> that report their bandwidth needs. I am also planning to add support for
> priority.

Thanks for working on this. After looking at your series, the one thing
I'm a bit uncertain about (and the most important one to us) is how we
would be able to tell through which interconnect the DMA transfers are done.

This is important to us since our topology is actually quite simple, as
you've seen, but the RAM is not mapped at the same address on that bus
as on the CPU's, so we need to apply an offset to each buffer being DMA'd.
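
To make this more concrete, here is roughly the kind of translation we
need to be able to express (the node and the addresses below are only
illustrative): the DRAM starts at 0x40000000 for the CPUs, but at 0x0
for the masters behind the MBUS:

    mbus: interconnect@1c01000 {
        compatible = "allwinner,sun5i-a13-mbus";
        reg = <0x01c01000 0x1000>;
        /* bus address 0x0 maps to a 512 MiB window at CPU address 0x40000000 */
        dma-ranges = <0x00000000 0x40000000 0x20000000>;
        #interconnect-cells = <1>;
    };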

Maxime

-- 
Maxime Ripard, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-20 15:32       ` [PATCH " Maxime Ripard
@ 2018-08-24 14:51         ` Georgi Djakov
  2018-08-24 15:35           ` Rob Herring
  2018-08-27 15:08           ` Maxime Ripard
  0 siblings, 2 replies; 9+ messages in thread
From: Georgi Djakov @ 2018-08-24 14:51 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Rob Herring, linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki,
	Rob Herring, Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm

Hi Maxime,

On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> Hi Georgi,
> 
> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>> There is also a patch series from Maxime Ripard that's addressing the
>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>> don't need multiple ways of describing the device-to-memory
>>> paths, so you all had better work out a common solution.
>>
>> Looks like this fits exactly into the interconnect API concept. I see
>> the MBUS as an interconnect provider and the display/camera as consumers
>> that report their bandwidth needs. I am also planning to add support for
>> priority.
> 
> Thanks for working on this. After looking at your series, the one thing
> I'm a bit uncertain about (and the most important one to us) is how we
> would be able to tell through which interconnect the DMA transfers are done.
> 
> This is important to us since our topology is actually quite simple, as
> you've seen, but the RAM is not mapped at the same address on that bus
> as on the CPU's, so we need to apply an offset to each buffer being DMA'd.

Ok, I see - your problem is not about bandwidth scaling but about the
driver using different memory ranges to access the same location. So
this is not really the same problem. Also, the interconnect bindings
describe a path and endpoints. However, I am open to any ideas.

Thanks,
Georgi

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-24 14:51         ` Georgi Djakov
@ 2018-08-24 15:35           ` Rob Herring
  2018-08-27 15:11             ` Maxime Ripard
  2018-08-27 15:08           ` Maxime Ripard
  1 sibling, 1 reply; 9+ messages in thread
From: Rob Herring @ 2018-08-24 15:35 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Maxime Ripard, open list:THERMAL, Greg Kroah-Hartman,
	Rafael J. Wysocki, Michael Turquette, Kevin Hilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, Amit Kucheria,
	seansw, daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Arnd Bergmann, linux-kernel@vger.kernel.org

On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Hi Maxime,
>
> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > Hi Georgi,
> >
> > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>> There is also a patch series from Maxime Ripard that's addressing the
> >>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>> don't need multiple ways of describing the device-to-memory
> >>> paths, so you all had better work out a common solution.
> >>
> >> Looks like this fits exactly into the interconnect API concept. I see
> >> the MBUS as an interconnect provider and the display/camera as consumers
> >> that report their bandwidth needs. I am also planning to add support for
> >> priority.
> >
> > Thanks for working on this. After looking at your series, the one thing
> > I'm a bit uncertain about (and the most important one to us) is how we
> > would be able to tell through which interconnect the DMA transfers are done.
> >
> > This is important to us since our topology is actually quite simple, as
> > you've seen, but the RAM is not mapped at the same address on that bus
> > as on the CPU's, so we need to apply an offset to each buffer being DMA'd.
>
> Ok, I see - your problem is not about bandwidth scaling but about the
> driver using different memory ranges to access the same location. So
> this is not really the same problem. Also, the interconnect bindings
> describe a path and endpoints. However, I am open to any ideas.

It may be different things you need, but both are related to the path
between a bus master and memory. We can't have each 'problem'
described in a different way. Well, we could as long as each platform
has different problems, but that's unlikely.

It could turn out that the only commonality is property naming
convention, but that's still better than 2 independent solutions.

I know you each want to just fix your issues, but the fact that DT
doesn't model the DMA side of the bus structure has been an issue at
least since the start of DT on ARM. Either we should address this in a
flexible way or we can just continue to manage without. So I'm not
inclined to take something that only addresses one SoC family.

Rob

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-24 14:51         ` Georgi Djakov
  2018-08-24 15:35           ` Rob Herring
@ 2018-08-27 15:08           ` Maxime Ripard
  2018-08-29 12:31             ` Georgi Djakov
  1 sibling, 1 reply; 9+ messages in thread
From: Maxime Ripard @ 2018-08-27 15:08 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rob Herring, linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki,
	Rob Herring, Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm

Hi!

On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
> Hi Maxime,
> 
> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > Hi Georgi,
> > 
> > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>> There is also a patch series from Maxime Ripard that's addressing the
> >>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>> don't need multiple ways of describing the device-to-memory
> >>> paths, so you all had better work out a common solution.
> >>
> >> Looks like this fits exactly into the interconnect API concept. I see
> >> the MBUS as an interconnect provider and the display/camera as consumers
> >> that report their bandwidth needs. I am also planning to add support for
> >> priority.
> > 
> > Thanks for working on this. After looking at your series, the one thing
> > I'm a bit uncertain about (and the most important one to us) is how we
> > would be able to tell through which interconnect the DMA transfers are done.
> > 
> > This is important to us since our topology is actually quite simple, as
> > you've seen, but the RAM is not mapped at the same address on that bus
> > as on the CPU's, so we need to apply an offset to each buffer being DMA'd.
> 
> Ok, I see - your problem is not about bandwidth scaling but about the
> driver using different memory ranges to access the same location.

Well, it turns out that the problem we are bitten by at the moment is
the memory range one, but the controller it goes through also provides
bandwidth scaling, priorities and so on, so it's not too far off.

> So this is not really the same problem. Also, the interconnect
> bindings describe a path and endpoints. However, I am open to any
> ideas.

It describes a path and endpoints, but it can describe multiple paths
for the same device, right? If so, we'd need to provide additional
information to distinguish which path is used for DMA.

Maxime

-- 
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-24 15:35           ` Rob Herring
@ 2018-08-27 15:11             ` Maxime Ripard
  2018-08-29 12:33               ` Georgi Djakov
  0 siblings, 1 reply; 9+ messages in thread
From: Maxime Ripard @ 2018-08-27 15:11 UTC (permalink / raw)
  To: Rob Herring
  Cc: Georgi Djakov, open list:THERMAL, Greg Kroah-Hartman,
	Rafael J. Wysocki, Michael Turquette, Kevin Hilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, Amit Kucheria,
	seansw, daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Arnd Bergmann, linux-kernel@vger.kernel.org, A

On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >
> > Hi Maxime,
> >
> > On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > > Hi Georgi,
> > >
> > > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> > >>> There is also a patch series from Maxime Ripard that's addressing the
> > >>> same general area. See "dt-bindings: Add a dma-parent property". We
> > >>> don't need multiple ways of describing the device-to-memory
> > >>> paths, so you all had better work out a common solution.
> > >>
> > >> Looks like this fits exactly into the interconnect API concept. I see
> > >> the MBUS as an interconnect provider and the display/camera as consumers
> > >> that report their bandwidth needs. I am also planning to add support for
> > >> priority.
> > >
> > > Thanks for working on this. After looking at your series, the one thing
> > > I'm a bit uncertain about (and the most important one to us) is how we
> > > would be able to tell through which interconnect the DMA transfers are done.
> > >
> > > This is important to us since our topology is actually quite simple, as
> > > you've seen, but the RAM is not mapped at the same address on that bus
> > > as on the CPU's, so we need to apply an offset to each buffer being DMA'd.
> >
> > Ok, I see - your problem is not about bandwidth scaling but about the
> > driver using different memory ranges to access the same location. So
> > this is not really the same problem. Also, the interconnect bindings
> > describe a path and endpoints. However, I am open to any ideas.
> 
> It may be different things you need, but both are related to the path
> between a bus master and memory. We can't have each 'problem'
> described in a different way. Well, we could as long as each platform
> has different problems, but that's unlikely.
> 
> It could turn out that the only commonality is property naming
> convention, but that's still better than 2 independent solutions.

Yeah, I really don't think the two issues are unrelated. Can we maybe
have a particular interconnect-names value to mark the interconnect
being used to perform DMA?

> I know you each want to just fix your issues, but the fact that DT
> doesn't model the DMA side of the bus structure has been an issue at
> least since the start of DT on ARM. Either we should address this in a
> flexible way or we can just continue to manage without. So I'm not
> inclined to take something that only addresses one SoC family.

I'd really like to have it addressed. We're getting bitten by this, and
the hacks we have don't work well anymore.

Maxime

-- 
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-27 15:08           ` Maxime Ripard
@ 2018-08-29 12:31             ` Georgi Djakov
  0 siblings, 0 replies; 9+ messages in thread
From: Georgi Djakov @ 2018-08-29 12:31 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Rob Herring, linux-pm, Greg Kroah-Hartman, Rafael J. Wysocki,
	Rob Herring, Mike Turquette, khilman, Vincent Guittot, skannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, evgreen,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	Linux Kernel Mailing List, linux-arm-kernel, linux-arm-msm

Hi Maxime,

On 08/27/2018 06:08 PM, Maxime Ripard wrote:
> Hi!
> 
> On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
>> Hi Maxime,
>>
>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>> Hi Georgi,
>>>
>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>> There is also a patch series from Maxime Ripard that's addressing the
>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>>>> don't need multiple ways of describing the device-to-memory
>>>>> paths, so you all had better work out a common solution.
>>>>
>>>> Looks like this fits exactly into the interconnect API concept. I see
>>>> the MBUS as an interconnect provider and the display/camera as consumers
>>>> that report their bandwidth needs. I am also planning to add support for
>>>> priority.
>>>
>>> Thanks for working on this. After looking at your series, the one thing
>>> I'm a bit uncertain about (and the most important one to us) is how we
>>> would be able to tell through which interconnect the DMA transfers are done.
>>>
>>> This is important to us since our topology is actually quite simple, as
>>> you've seen, but the RAM is not mapped at the same address on that bus
>>> as on the CPU's, so we need to apply an offset to each buffer being DMA'd.
>>
>> Ok, I see - your problem is not about bandwidth scaling but about the
>> driver using different memory ranges to access the same location.
> 
> Well, it turns out that the problem we are bitten by at the moment is
> the memory range one, but the controller it goes through also provides
> bandwidth scaling, priorities and so on, so it's not too far off.

Thanks for the clarification. Alright, so this will fit nicely into the
model as a provider. I agree that we should try to use the same binding
to describe a path from a master to memory in DT.

>> So this is not really the same problem. Also, the interconnect
>> bindings describe a path and endpoints. However, I am open to any
>> ideas.
> 
> It describes a path and endpoints, but it can describe multiple paths
> for the same device, right? If so, we'd need to provide additional
> information to distinguish which path is used for DMA.

Sure, multiple paths are supported.
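
For instance, a consumer could list several of them (the provider
phandle and port IDs here are hypothetical):

    interconnects = <&noc 0 &noc 1>, <&noc 0 &noc 2>;
    interconnect-names = "mem", "cfg";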

BR,
Georgi

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-27 15:11             ` Maxime Ripard
@ 2018-08-29 12:33               ` Georgi Djakov
  2018-08-30  7:47                 ` Maxime Ripard
  0 siblings, 1 reply; 9+ messages in thread
From: Georgi Djakov @ 2018-08-29 12:33 UTC (permalink / raw)
  To: Maxime Ripard, Rob Herring
  Cc: open list:THERMAL, Greg Kroah-Hartman, Rafael J. Wysocki,
	Michael Turquette, Kevin Hilman, Vincent Guittot, Saravana Kannan,
	Bjorn Andersson, Amit Kucheria, seansw, daidavid1, Evan Green,
	Mark Rutland, Lorenzo Pieralisi, Alexandre Bailon, Arnd Bergmann,
	linux-kernel@vger.kernel.org, ARM/FREESCALE 

Hi Rob and Maxime,

On 08/27/2018 06:11 PM, Maxime Ripard wrote:
> On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
>> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>
>>> Hi Maxime,
>>>
>>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>>> Hi Georgi,
>>>>
>>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>>> There is also a patch series from Maxime Ripard that's addressing the
>>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>>>>> don't need multiple ways of describing the device-to-memory
>>>>>> paths, so you all had better work out a common solution.
>>>>>
>>>>> Looks like this fits exactly into the interconnect API concept. I see
>>>>> the MBUS as an interconnect provider and the display/camera as consumers
>>>>> that report their bandwidth needs. I am also planning to add support for
>>>>> priority.
>>>>
>>>> Thanks for working on this. After looking at your series, the one thing
>>>> I'm a bit uncertain about (and the most important one to us) is how we
>>>> would be able to tell through which interconnect the DMA transfers are done.
>>>>
>>>> This is important to us since our topology is actually quite simple, as
>>>> you've seen, but the RAM is not mapped at the same address on that bus
>>>> as on the CPU's, so we need to apply an offset to each buffer being DMA'd.
>>>
>>> Ok, I see - your problem is not about bandwidth scaling but about the
>>> driver using different memory ranges to access the same location. So
>>> this is not really the same problem. Also, the interconnect bindings
>>> describe a path and endpoints. However, I am open to any ideas.
>>
>> It may be different things you need, but both are related to the path
>> between a bus master and memory. We can't have each 'problem'
>> described in a different way. Well, we could as long as each platform
>> has different problems, but that's unlikely.
>>
>> It could turn out that the only commonality is property naming
>> convention, but that's still better than 2 independent solutions.
> 
> Yeah, I really don't think the two issues are unrelated. Can we maybe
> have a particular interconnect-names value to mark the interconnect
> being used to perform DMA?

We can call one of the paths "dma" and use it to perform DMA for the
current device. I don't see a problem with this. The name of the path is
descriptive and makes sense. And by doing so we avoid adding more DT
properties, which would be another option.

This also makes me think that it might be a good idea to have a standard
name for the path to memory, as I expect some people would call it "mem",
others "ddr", etc.

Thanks,
Georgi

>> I know you each want to just fix your issues, but the fact that DT
>> doesn't model the DMA side of the bus structure has been an issue at
>> least since the start of DT on ARM. Either we should address this in a
>> flexible way or we can just continue to manage without. So I'm not
>> inclined to take something that only addresses one SoC family.
> 
> I'd really like to have it addressed. We're getting bitten by this, and
> the hacks we have don't work well anymore.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
  2018-08-29 12:33               ` Georgi Djakov
@ 2018-08-30  7:47                 ` Maxime Ripard
  0 siblings, 0 replies; 9+ messages in thread
From: Maxime Ripard @ 2018-08-30  7:47 UTC (permalink / raw)
  To: Georgi Djakov
  Cc: Rob Herring, open list:THERMAL, Greg Kroah-Hartman,
	Rafael J. Wysocki, Michael Turquette, Kevin Hilman,
	Vincent Guittot, Saravana Kannan, Bjorn Andersson, Amit Kucheria,
	seansw, daidavid1, Evan Green, Mark Rutland, Lorenzo Pieralisi,
	Alexandre Bailon, Arnd Bergmann, linux-kernel@vger.kernel.org,
	ARM/FREESCAL

Hi,

On Wed, Aug 29, 2018 at 03:33:29PM +0300, Georgi Djakov wrote:
> On 08/27/2018 06:11 PM, Maxime Ripard wrote:
> > On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
> >> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>
> >>> Hi Maxime,
> >>>
> >>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> >>>> Hi Georgi,
> >>>>
> >>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>>>>> There is also a patch series from Maxime Ripard that's addressing the
> >>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>>>>> don't need multiple ways of describing the device-to-memory
> >>>>>> paths, so you all had better work out a common solution.
> >>>>>
> >>>>> Looks like this fits exactly into the interconnect API concept. I see
> >>>>> the MBUS as an interconnect provider and the display/camera as consumers
> >>>>> that report their bandwidth needs. I am also planning to add support for
> >>>>> priority.
> >>>>
> >>>> Thanks for working on this. After looking at your series, the one thing
> >>>> I'm a bit uncertain about (and the most important one to us) is how we
> >>>> would be able to tell through which interconnect the DMA transfers are done.
> >>>>
> >>>> This is important to us since our topology is actually quite simple, as
> >>>> you've seen, but the RAM is not mapped at the same address on that bus
> >>>> as on the CPU's, so we need to apply an offset to each buffer being DMA'd.
> >>>
> >>> Ok, I see - your problem is not about bandwidth scaling but about the
> >>> driver using different memory ranges to access the same location. So
> >>> this is not really the same problem. Also, the interconnect bindings
> >>> describe a path and endpoints. However, I am open to any ideas.
> >>
> >> It may be different things you need, but both are related to the path
> >> between a bus master and memory. We can't have each 'problem'
> >> described in a different way. Well, we could as long as each platform
> >> has different problems, but that's unlikely.
> >>
> >> It could turn out that the only commonality is property naming
> >> convention, but that's still better than 2 independent solutions.
> > 
> > Yeah, I really don't think the two issues are unrelated. Can we maybe
> > have a particular interconnect-names value to mark the interconnect
> > being used to perform DMA?
> 
> We can call one of the paths "dma" and use it to perform DMA for the
> current device. I don't see a problem with this. The name of the path is
> descriptive and makes sense. And by doing so we avoid adding more DT
> properties, which would be another option.

That works for me. If Rob is fine with it too, I'll send an updated
version of my series based on yours.

Thanks!
Maxime

-- 
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2018-08-30  7:47 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20180731161340.13000-1-georgi.djakov@linaro.org>
     [not found] ` <20180731161340.13000-3-georgi.djakov@linaro.org>
     [not found]   ` <CABGGisxq+hf91R18pnZ=VZ9f99GssWWPhpPCjNAROJmKg5-udA@mail.gmail.com>
2018-08-07 14:54     ` Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings Georgi Djakov
2018-08-20 15:32       ` [PATCH " Maxime Ripard
2018-08-24 14:51         ` Georgi Djakov
2018-08-24 15:35           ` Rob Herring
2018-08-27 15:11             ` Maxime Ripard
2018-08-29 12:33               ` Georgi Djakov
2018-08-30  7:47                 ` Maxime Ripard
2018-08-27 15:08           ` Maxime Ripard
2018-08-29 12:31             ` Georgi Djakov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).