From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail.kernel.org ([198.145.29.99]:45586 "EHLO mail.kernel.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1727297AbeLESMM (ORCPT );
	Wed, 5 Dec 2018 13:12:12 -0500
Date: Wed, 5 Dec 2018 23:42:02 +0530
From: Vinod Koul
To: Andy Shevchenko
Cc: Viresh Kumar , dmaengine@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH v2 1/7] dmaengine: dw: Add missed multi-block support for iDMA 32-bit
Message-ID: <20181205181202.GV2847@vkoul-mobl>
References: <20181205162818.45112-1-andriy.shevchenko@linux.intel.com>
 <20181205162818.45112-2-andriy.shevchenko@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20181205162818.45112-2-andriy.shevchenko@linux.intel.com>
Sender: stable-owner@vger.kernel.org
List-ID: 

On 05-12-18, 18:28, Andy Shevchenko wrote:
> Intel integrated DMA 32-bit supports multi-block transfers.
> Add the missed setting to the platform data.
> 
> Fixes: f7c799e950f9 ("dmaengine: dw: we do support Merrifield SoC in PCI mode")
> Signed-off-by: Andy Shevchenko
> Cc: stable@vger.kernel.org

How is this stable material? It would improve performance by using
multi-block transfers, but given that this is used for slow peripherals,
do you really see a user impact?

> ---
>  drivers/dma/dw/pci.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/dma/dw/pci.c b/drivers/dma/dw/pci.c
> index 7778ed705a1a..313ba10c6224 100644
> --- a/drivers/dma/dw/pci.c
> +++ b/drivers/dma/dw/pci.c
> @@ -25,6 +25,7 @@ static struct dw_dma_platform_data mrfld_pdata = {
>  	.block_size = 131071,
>  	.nr_masters = 1,
>  	.data_width = {4},
> +	.multi_block = {1, 1, 1, 1, 1, 1, 1, 1},
>  };
>  
>  static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
> -- 
> 2.19.2

-- 
~Vinod