From: Georgi Djakov
Subject: Re: [PATCH v7 2/8] dt-bindings: Introduce interconnect provider bindings
Date: Wed, 29 Aug 2018 15:31:16 +0300
Message-ID: <75f1d8f8-84e8-e621-b91d-84b4d15edfa1@linaro.org>
References: <20180731161340.13000-1-georgi.djakov@linaro.org>
 <20180731161340.13000-3-georgi.djakov@linaro.org>
 <20180820153207.xx5outviph7ec76p@flea>
 <672e6c6c-222f-5e7f-5d0c-acc8da68b1ab@linaro.org>
 <20180827150836.shl7einpuvuw42p7@flea>
In-Reply-To: <20180827150836.shl7einpuvuw42p7@flea>
To: Maxime Ripard
Cc: Rob Herring, linux-pm@vger.kernel.org, Greg Kroah-Hartman,
 "Rafael J. Wysocki", Mike Turquette, khilman@baylibre.com,
 Vincent Guittot, skannan@codeaurora.org, Bjorn Andersson,
 Amit Kucheria, seansw@qti.qualcomm.com, daidavid1@codeaurora.org,
 evgreen@chromium.org, Mark Rutland, Lorenzo Pieralisi,
 Alexandre Bailon, Arnd Bergmann, Linux Kernel Mailing List,
 linux-arm-kernel, linux-arm-msm@vger.ke
List-Id: devicetree@vger.kernel.org

Hi Maxime,

On 08/27/2018 06:08 PM, Maxime Ripard wrote:
> Hi!
>
> On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
>> Hi Maxime,
>>
>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>> Hi Georgi,
>>>
>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>> There is also a patch series from Maxime Ripard that's addressing
>>>>> the same general area. See "dt-bindings: Add a dma-parent
>>>>> property". We don't need multiple ways of describing the
>>>>> device-to-memory paths, so you all had better work out a common
>>>>> solution.
>>>>
>>>> Looks like this fits exactly into the interconnect API concept. I
>>>> see MBUS as an interconnect provider and display/camera as
>>>> consumers that report their bandwidth needs. I am also planning
>>>> to add support for priority.
>>>
>>> Thanks for working on this. After looking at your series, the one
>>> thing I'm a bit uncertain about (and the most important one to us)
>>> is how we would be able to tell through which interconnect the DMA
>>> is done.
>>>
>>> This is important to us since our topology is actually quite
>>> simple, as you've seen, but the RAM is not mapped at the same
>>> address on that bus as on the CPU side, so we need to apply an
>>> offset to each buffer being DMA'd.
>>
>> Ok, I see - your problem is not about bandwidth scaling but about
>> the driver using different memory ranges to access the same
>> location.
>
> Well, it turns out that the problem we are bitten by at the moment is
> the memory range one, but the controller it goes through also
> provides bandwidth scaling, priorities and so on, so it's not too far
> off.

Thanks for the clarification. Alright, so this will fit nicely into the
model as a provider. I agree that we should try to use the same binding
to describe a path from a master to memory in DT.

>> So this is not really the same thing - your problem is different.
>> Also, the interconnect bindings describe a path and endpoints.
>> However, I am open to any ideas.
>
> It's describing a path and endpoints, but it can describe multiple of
> them for the same device, right? If so, we'd need to provide
> additional information to distinguish which path is used for DMA.

Sure, multiple paths are supported.

BR,
Georgi
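
P.S. For reference, a consumer with multiple paths would look roughly
like the sketch below with the consumer binding proposed in this
series. All node names, compatible strings, unit addresses and endpoint
IDs here are made up for illustration only:

	/* interconnect provider, e.g. the NoC or MBUS controller */
	noc: interconnect@1000000 {
		compatible = "vendor,example-noc";
		reg = <0x1000000 0x1000>;
		#interconnect-cells = <1>;
	};

	/* consumer requesting two paths through the provider */
	video@2000000 {
		compatible = "vendor,example-video";
		reg = <0x2000000 0x1000>;
		/* each path is a <source, destination> endpoint pair */
		interconnects = <&noc 32 &noc 512>,	/* video -> DDR */
				<&noc 1 &noc 33>;	/* CPU -> video */
		interconnect-names = "video-mem", "cpu-cfg";
	};

The driver would then look up each path by name and report its
bandwidth needs on it, and any extra per-path information (such as
which path is used for DMA) could be expressed in a similar way.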