From: Vinod Koul <vkoul@kernel.org>
To: Peng Ma <peng.ma@nxp.com>
Cc: dan.j.williams@intel.com, leoyang.li@nxp.com,
linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org
Subject: Re: [V4 2/2] dmaengine: fsl-dpaa2-qdma: Add NXP dpaa2 qDMA controller driver for Layerscape SoCs
Date: Mon, 24 Jun 2019 22:15:56 +0530
Message-ID: <20190624164556.GD2962@vkoul-mobl>
In-Reply-To: <20190613101341.21169-2-peng.ma@nxp.com>
On 13-06-19, 10:13, Peng Ma wrote:
> DPPA2(Data Path Acceleration Architecture 2) qDMA
> supports channel virtualization by allowing DMA
typo: "virtualization"
> jobs to be enqueued into different frame queues.
> Core can initiate a DMA transaction by preparing a frame
> descriptor(FD) for each DMA job and enqueuing this job to
> a frame queue. through a hardware portal. The qDMA
^^^
why the full stop in the middle of the sentence?
> +static struct dpaa2_qdma_comp *
> +dpaa2_qdma_request_desc(struct dpaa2_qdma_chan *dpaa2_chan)
> +{
> + struct dpaa2_qdma_comp *comp_temp = NULL;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&dpaa2_chan->queue_lock, flags);
> + if (list_empty(&dpaa2_chan->comp_free)) {
> + spin_unlock_irqrestore(&dpaa2_chan->queue_lock, flags);
> + comp_temp = kzalloc(sizeof(*comp_temp), GFP_NOWAIT);
> + if (!comp_temp)
> + goto err;
> + comp_temp->fd_virt_addr =
> + dma_pool_alloc(dpaa2_chan->fd_pool, GFP_NOWAIT,
> + &comp_temp->fd_bus_addr);
> + if (!comp_temp->fd_virt_addr)
> + goto err_comp;
> +
> + comp_temp->fl_virt_addr =
> + dma_pool_alloc(dpaa2_chan->fl_pool, GFP_NOWAIT,
> + &comp_temp->fl_bus_addr);
> + if (!comp_temp->fl_virt_addr)
> + goto err_fd_virt;
> +
> + comp_temp->desc_virt_addr =
> + dma_pool_alloc(dpaa2_chan->sdd_pool, GFP_NOWAIT,
> + &comp_temp->desc_bus_addr);
> + if (!comp_temp->desc_virt_addr)
> + goto err_fl_virt;
> +
> + comp_temp->qchan = dpaa2_chan;
> + return comp_temp;
> + }
> +
> + comp_temp = list_first_entry(&dpaa2_chan->comp_free,
> + struct dpaa2_qdma_comp, list);
> + list_del(&comp_temp->list);
> + spin_unlock_irqrestore(&dpaa2_chan->queue_lock, flags);
> +
> + comp_temp->qchan = dpaa2_chan;
> +
> + return comp_temp;
> +
> +err_fl_virt:
no error logs? how will you know what went wrong when an allocation fails?
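To illustrate the point, a minimal userspace sketch of such an unwind path (the names fd/fl/desc stand in for the driver's fd_virt_addr/fl_virt_addr/desc_virt_addr pool entries; in the real driver the logs would be dev_err() and the frees dma_pool_free(), not malloc/free):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct comp {
	void *fd;	/* stand-in for fd_virt_addr */
	void *fl;	/* stand-in for fl_virt_addr */
	void *desc;	/* stand-in for desc_virt_addr */
};

static struct comp *request_desc(void)
{
	struct comp *c = calloc(1, sizeof(*c));

	if (!c)
		goto err;

	c->fd = malloc(64);
	if (!c->fd) {
		fprintf(stderr, "request_desc: fd alloc failed\n");
		goto err_comp;
	}

	c->fl = malloc(64);
	if (!c->fl) {
		fprintf(stderr, "request_desc: fl alloc failed\n");
		goto err_fd;
	}

	c->desc = malloc(64);
	if (!c->desc) {
		fprintf(stderr, "request_desc: desc alloc failed\n");
		goto err_fl;
	}

	return c;

	/* each label frees everything allocated before its failure point */
err_fl:
	free(c->fl);
err_fd:
	free(c->fd);
err_comp:
	free(c);
err:
	return NULL;
}
```

Each failure point logs what failed, then falls through the labels so earlier allocations are released in reverse order.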
> +static enum
> +dma_status dpaa2_qdma_tx_status(struct dma_chan *chan,
> + dma_cookie_t cookie,
> + struct dma_tx_state *txstate)
> +{
> + return dma_cookie_status(chan, cookie, txstate);
why not set dma_cookie_status directly as this callback?
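The suggestion: when a callback merely forwards its arguments, assign the helper directly instead of wrapping it. dma_cookie_status() already has the device_tx_status signature, so in the driver this would simply be `dma_dev->device_tx_status = dma_cookie_status;`. A compilable userspace analogue of the pattern (all names here are hypothetical stand-ins):

```c
#include <assert.h>

typedef int (*status_cb)(int cookie);

/* stands in for the generic helper, e.g. dma_cookie_status() */
static int generic_status(int cookie)
{
	return cookie >= 0 ? 0 : -1;
}

/* needless wrapper, same shape as dpaa2_qdma_tx_status() above */
static int wrapped_status(int cookie)
{
	return generic_status(cookie);
}

struct ops {
	status_cb tx_status;
};

/* the direct assignment and the wrapper behave identically,
 * so the wrapper function can be deleted outright */
static const struct ops ops_direct  = { .tx_status = generic_status };
static const struct ops ops_wrapped = { .tx_status = wrapped_status };
```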
> +static int __cold dpaa2_qdma_setup(struct fsl_mc_device *ls_dev)
> +{
> + struct dpaa2_qdma_priv_per_prio *ppriv;
> + struct device *dev = &ls_dev->dev;
> + struct dpaa2_qdma_priv *priv;
> + u8 prio_def = DPDMAI_PRIO_NUM;
> + int err = -EINVAL;
> + int i;
> +
> + priv = dev_get_drvdata(dev);
> +
> + priv->dev = dev;
> + priv->dpqdma_id = ls_dev->obj_desc.id;
> +
> + /* Get the handle for the DPDMAI this interface is associate with */
> + err = dpdmai_open(priv->mc_io, 0, priv->dpqdma_id, &ls_dev->mc_handle);
> + if (err) {
> + dev_err(dev, "dpdmai_open() failed\n");
> + return err;
> + }
> + dev_info(dev, "Opened dpdmai object successfully\n");
this is noise in the kernel log, consider debug level (dev_dbg)
> +static int __cold dpaa2_dpdmai_bind(struct dpaa2_qdma_priv *priv)
> +{
> + int err;
> + int i, num;
> + struct device *dev = priv->dev;
> + struct dpaa2_qdma_priv_per_prio *ppriv;
> + struct dpdmai_rx_queue_cfg rx_queue_cfg;
> + struct fsl_mc_device *ls_dev = to_fsl_mc_device(dev);
the declaration order is the reverse of that used in the other functions, please stick to one style!
--
~Vinod