From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 23 Mar 2015 12:07:27 -0700
From: Maxime Ripard
To: Ludovic Desroches
Cc: Vinod Koul, dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org, Laurent Pinchart
Subject: Re: [PATCH RFC 1/2] dmaengine: Introduce scheduled DMA framework
Message-ID: <20150323190727.GD15676@lukather>
References: <1426966927-30833-1-git-send-email-maxime.ripard@free-electrons.com> <1426966927-30833-2-git-send-email-maxime.ripard@free-electrons.com> <20150323132513.GF32259@odux.rfo.atmel.com>
In-Reply-To: <20150323132513.GF32259@odux.rfo.atmel.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Ludo,

On Mon, Mar 23, 2015 at 02:25:13PM +0100, Ludovic Desroches wrote:
> On Sat, Mar 21, 2015 at 12:42:06PM -0700, Maxime Ripard wrote:
> > This framework aims at easing the development of dmaengine drivers by
> > providing generic implementations of the functions usually required by
> > dmaengine, while abstracting away most of the logic required.
>
> For sure it will ease writing new dmaengine drivers! Moreover, adopting
> this framework will convert the driver to use virtual channels, won't it?
Yes, it relies on virt-dma.

> > It is very relevant for controllers that have more requests than
> > channels, where you need to have some scheduling that is usually very
> > bug prone, and shouldn't be implemented in each and every driver.
> >
> > This introduces a new set of callbacks for drivers to implement the
> > device specific behaviour. These new sets of callbacks aim at providing
> > both how to handle channel related operations (start the transfer of a
> > new descriptor, pause, resume, etc.) and the LLI related operations
> > (iterator and various accessors).
> >
> > So far, the only transfer types supported are memcpy and slave
> > transfers, but it should eventually support new transfer types as new
> > drivers get converted.
> >
> > Signed-off-by: Maxime Ripard
> > ---
> >  drivers/dma/Kconfig         |   4 +
> >  drivers/dma/Makefile        |   1 +
> >  drivers/dma/scheduled-dma.c | 571 ++++++++++++++++++++++++++++++++++++++++
> >  drivers/dma/scheduled-dma.h | 140 +++++++++++
> >  4 files changed, 716 insertions(+)
> >  create mode 100644 drivers/dma/scheduled-dma.c
> >  create mode 100644 drivers/dma/scheduled-dma.h
> >
> > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> > index a874b6ec6650..032bf5fcd58b 100644
> > --- a/drivers/dma/Kconfig
> > +++ b/drivers/dma/Kconfig
> > @@ -406,6 +406,7 @@ config DMA_SUN6I
> >  	depends on RESET_CONTROLLER
> >  	select DMA_ENGINE
> >  	select DMA_VIRTUAL_CHANNELS
> > +	select DMA_SCHEDULED
> >  	help
> >  	  Support for the DMA engine first found in Allwinner A31 SoCs.
> >
> > @@ -431,6 +432,9 @@ config DMA_ENGINE
> >  config DMA_VIRTUAL_CHANNELS
> >  	tristate
> >
> > +config DMA_SCHEDULED
> > +	bool
> > +
>
> I think it should select DMA_VIRTUAL_CHANNELS.

Yep, indeed.
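Concretely, with that fix folded in, the entry would become something like this (sketch only, untested):

```kconfig
config DMA_SCHEDULED
	bool
	select DMA_VIRTUAL_CHANNELS
```

That way DMA_SUN6I (and any other user) only needs to select DMA_SCHEDULED and gets virt-dma pulled in transitively.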
> > config DMA_ACPI
> >  	def_bool y
> >  	depends on ACPI
> > diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> > index f915f61ec574..1db31814c749 100644
> > --- a/drivers/dma/Makefile
> > +++ b/drivers/dma/Makefile
> > @@ -5,6 +5,7 @@ obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
> >  obj-$(CONFIG_DMA_VIRTUAL_CHANNELS) += virt-dma.o
> >  obj-$(CONFIG_DMA_ACPI) += acpi-dma.o
> >  obj-$(CONFIG_DMA_OF) += of-dma.o
> > +obj-$(CONFIG_DMA_SCHEDULED) += scheduled-dma.o
> >
> >  obj-$(CONFIG_INTEL_MID_DMAC) += intel_mid_dma.o
> >  obj-$(CONFIG_DMATEST) += dmatest.o
> > diff --git a/drivers/dma/scheduled-dma.c b/drivers/dma/scheduled-dma.c
> > new file mode 100644
> > index 000000000000..482d04f2ccbc
> > --- /dev/null
> > +++ b/drivers/dma/scheduled-dma.c
> > @@ -0,0 +1,571 @@
> > +/*
> > + * Copyright (C) 2015 Maxime Ripard
> > + * Maxime Ripard
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + */
> > +
> > +#include
> > +#include
> > +#include
> > +#include
> > +
> > +#include "scheduled-dma.h"
> > +#include "virt-dma.h"
> > +
> > +static struct sdma_request *sdma_pop_queued_transfer(struct sdma *sdma,
> > +						     struct sdma_channel *schan)
> > +{
> > +	struct sdma_request *sreq = NULL;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&sdma->lock, flags);
> > +
> > +	/* No requests are awaiting an available channel */
> > +	if (list_empty(&sdma->pend_reqs))
> > +		goto out;
> > +
> > +	/*
> > +	 * If we don't have any validate_request callback, any request
> > +	 * can be dispatched to any channel.
> > +	 *
> > +	 * Remove the first entry and return it.
> > +	 */
> > +	if (!sdma->ops->validate_request) {
> > +		sreq = list_first_entry(&sdma->pend_reqs,
> > +					struct sdma_request, node);
> > +		list_del_init(&sreq->node);
> > +		goto out;
> > +	}
> > +
> > +	list_for_each_entry(sreq, &sdma->pend_reqs, node) {
> > +		/*
> > +		 * Ask the driver to validate that the request can
> > +		 * happen on the channel.
> > +		 */
> > +		if (sdma->ops->validate_request(schan, sreq)) {
> > +			list_del_init(&sreq->node);
> > +			goto out;
> > +		}
> > +
> > +		/* Otherwise, just keep looping */
> > +	}
> > +
> > +out:
> > +	spin_unlock_irqrestore(&sdma->lock, flags);
> > +
> > +	return sreq;
> > +}
> > +
> > +/*
> > + * We need a function to push a descriptor to a pchan
> > + *
> > + * Normal flow:
> > + * - Elect a pchan (framework)
> > + * - Push a descriptor to the pchan (driver)
> > + * - idle....
> > + * - interrupt (driver)
> > + * - Transfer complete, notify the framework (driver)
> > + * - New transfer in the queue?
> > + *   + If so, off we go again
> > + *   + If not, return from the interrupt, the pchan is released
> > + */
> > +
> > +struct sdma_desc *sdma_report(struct sdma *sdma,
> > +			      struct sdma_channel *schan,
> > +			      enum sdma_report_status status)
> > +{
> > +	struct sdma_desc *sdesc = NULL;
> > +	struct virt_dma_desc *vdesc;
> > +	struct sdma_request *sreq;
> > +
> > +	switch (status) {
> > +	case SDMA_REPORT_TRANSFER:
> > +		/*
> > +		 * We're done with the current transfer that was in this
> > +		 * physical channel.
> > +		 */
> > +		vchan_cookie_complete(&schan->desc->vdesc);
> > +
> > +		/*
> > +		 * Now, try to see if there's any queued transfer
> > +		 * awaiting an available channel.
> > +		 *
> > +		 * If not, just bail out, and mark the pchan as
> > +		 * available.
> > +		 *
> > +		 * If there's such a transfer, validate that the
> > +		 * driver can handle it, and ask it to do the
> > +		 * transfer.
> > +		 */
> > +		sreq = sdma_pop_queued_transfer(sdma, schan);
> > +		if (!sreq) {
> > +			list_add_tail(&schan->node, &sdma->avail_chans);
> > +			return NULL;
> > +		}
> > +
> > +		/* Mark the request as assigned to a particular channel */
> > +		sreq->chan = schan;
> > +
> > +		/* Retrieve the next transfer descriptor */
> > +		vdesc = vchan_next_desc(&sreq->vchan);
> > +		schan->desc = sdesc = to_sdma_desc(&vdesc->tx);
> > +
> > +		break;
> > +
> > +	default:
> > +		break;
> > +	}
> > +
> > +	return sdesc;
> > +}
> > +EXPORT_SYMBOL_GPL(sdma_report);
> > +
> > +static enum dma_status sdma_tx_status(struct dma_chan *chan,
> > +				      dma_cookie_t cookie,
> > +				      struct dma_tx_state *state)
> > +{
> > +	struct sdma_request *sreq = to_sdma_request(chan);
> > +	struct sdma *sdma = to_sdma(chan->device);
> > +	struct sdma_channel *schan = sreq->chan;
> > +	struct virt_dma_desc *vd;
> > +	struct sdma_desc *desc;
> > +	enum dma_status ret;
> > +	unsigned long flags;
> > +	size_t bytes = 0;
> > +	void *lli;
> > +
> > +	spin_lock_irqsave(&sreq->vchan.lock, flags);
> > +
> > +	ret = dma_cookie_status(chan, cookie, state);
> > +	if (ret == DMA_COMPLETE)
> > +		goto out;
> > +
> > +	vd = vchan_find_desc(&sreq->vchan, cookie);
> > +
> > +	if (vd) {
> > +		desc = to_sdma_desc(&vd->tx);
> > +		lli = desc->v_lli;
> > +		while (true) {
> > +			bytes += sdma->ops->lli_size(lli);
> > +
> > +			if (!sdma->ops->lli_has_next(lli))
> > +				break;
> > +
> > +			lli = sdma->ops->lli_next(lli);
> > +		}
> > +	} else if (chan) {
> > +		bytes = sdma->ops->channel_residue(schan);
> > +	}
> > +
> > +	dma_set_residue(state, bytes);
> > +
> > +out:
> > +	spin_unlock_irqrestore(&sreq->vchan.lock, flags);
> > +
> > +	return ret;
> > +}
> > +
> > +static int sdma_config(struct dma_chan *chan,
> > +		       struct dma_slave_config *config)
> > +{
> > +	struct sdma_request *sreq = to_sdma_request(chan);
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&sreq->vchan.lock, flags);
> > +	memcpy(&sreq->cfg, config, sizeof(*config));
> > +	spin_unlock_irqrestore(&sreq->vchan.lock, flags);
> > +
> > +	return 0;
> > +}
> > +
> > +static int sdma_pause(struct dma_chan *chan)
> > +{
> > +	struct sdma_request *sreq = to_sdma_request(chan);
> > +	struct sdma_channel *schan = sreq->chan;
> > +	struct sdma *sdma = to_sdma(chan->device);
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&sreq->vchan.lock, flags);
> > +
> > +	/*
> > +	 * If the request is currently scheduled on a channel, just
> > +	 * pause the channel.
> > +	 *
> > +	 * If not, remove the request from the pending list.
> > +	 */
> > +	if (schan) {
> > +		sdma->ops->channel_pause(schan);
> > +	} else {
> > +		spin_lock(&sdma->lock);
> > +		list_del_init(&sreq->node);
> > +		spin_unlock(&sdma->lock);
> > +	}
> > +
> > +	spin_unlock_irqrestore(&sreq->vchan.lock, flags);
> > +
> > +	return 0;
> > +}
> > +
> > +static int sdma_resume(struct dma_chan *chan)
> > +{
> > +	struct sdma_request *sreq = to_sdma_request(chan);
> > +	struct sdma_channel *schan = sreq->chan;
> > +	struct sdma *sdma = to_sdma(chan->device);
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&sreq->vchan.lock, flags);
> > +
> > +	/*
> > +	 * If the request is currently scheduled on a channel, just
> > +	 * resume the channel.
> > +	 *
> > +	 * If not, add the request back to the pending list.
> > +	 */
>
> Is such a case possible?

Yes, it's possible: if you have more requests than channels, and all the
channels are currently busy with other transfers, we can end up with a
request for which issue_pending has been called but which is not
scheduled on any channel, for an unknown amount of time. In such a case,
you might very well end up with clients calling pause/resume on these
requests.

> The request should be already in the pending list, isn't it?

Not really, it would have been removed by sdma_pause.

> By the way, I may have missed something but where do you add a
> request to the pending list?
> I have seen it only in the resume case.

It should be in issue_pending and pause. If not, it's a bug :)

Thanks!
Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com
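P.S. To make the bookkeeping above concrete, here is a minimal, single-threaded sketch of the intended issue_pending/pause/resume behaviour. This is illustrative only, NOT the framework code: it uses a plain singly-linked list and no locking, whereas the real code uses struct list_head under spinlocks, and the names merely mirror the patch.

```c
#include <assert.h>
#include <stddef.h>

struct sdma_request {
	struct sdma_request *next;	/* link in the pending list */
	void *chan;			/* non-NULL once scheduled on a pchan */
};

struct sdma {
	struct sdma_request *pend_reqs;	/* requests awaiting a channel */
};

/* issue_pending: queue the request until a physical channel frees up */
static void sdma_issue_pending(struct sdma *sdma, struct sdma_request *sreq)
{
	sreq->next = sdma->pend_reqs;
	sdma->pend_reqs = sreq;
}

/* pause: if the request isn't scheduled yet, drop it from the list */
static void sdma_pause(struct sdma *sdma, struct sdma_request *sreq)
{
	struct sdma_request **pp;

	if (sreq->chan)
		return;	/* real driver would pause the pchan instead */

	for (pp = &sdma->pend_reqs; *pp; pp = &(*pp)->next) {
		if (*pp == sreq) {
			*pp = sreq->next;
			sreq->next = NULL;
			return;
		}
	}
}

/* resume: if the request isn't scheduled, put it back on the list */
static void sdma_resume(struct sdma *sdma, struct sdma_request *sreq)
{
	if (sreq->chan)
		return;	/* real driver would resume the pchan instead */

	sdma_issue_pending(sdma, sreq);
}
```

The point of the sketch: a paused-but-unscheduled request simply disappears from the pending list, so resume has to re-queue it, which is why it cannot already be on the list.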