From: hs.liao@mediatek.com (Horng-Shyang Liao)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v20 2/4] mailbox: mediatek: Add Mediatek CMDQ driver
Date: Mon, 6 Feb 2017 13:37:56 +0800	[thread overview]
Message-ID: <1486359476.11424.33.camel@mtksdaap41> (raw)
In-Reply-To: <CABb+yY13VaFpYz6EExvAXLpAqEWDJ4R50WG0DYbXO6iCESbA5A@mail.gmail.com>

Hi Jassi,

On Wed, 2017-02-01 at 10:52 +0530, Jassi Brar wrote:
> On Thu, Jan 26, 2017 at 2:07 PM, Horng-Shyang Liao <hs.liao@mediatek.com> wrote:
> > Hi Jassi,
> >
> > On Thu, 2017-01-26 at 10:08 +0530, Jassi Brar wrote:
> >> On Wed, Jan 4, 2017 at 8:36 AM, HS Liao <hs.liao@mediatek.com> wrote:
> >>
> >> > diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
> >> > new file mode 100644
> >> > index 0000000..747bcd3
> >> > --- /dev/null
> >> > +++ b/drivers/mailbox/mtk-cmdq-mailbox.c
> >>
> >> ...
> >>
> >> > +static void cmdq_task_exec(struct cmdq_pkt *pkt, struct cmdq_thread *thread)
> >> > +{
> >> > +       struct cmdq *cmdq;
> >> > +       struct cmdq_task *task;
> >> > +       unsigned long curr_pa, end_pa;
> >> > +
> >> > +       cmdq = dev_get_drvdata(thread->chan->mbox->dev);
> >> > +
> >> > +       /* Client should not flush new tasks if suspended. */
> >> > +       WARN_ON(cmdq->suspended);
> >> > +
> >> > +       task = kzalloc(sizeof(*task), GFP_ATOMIC);
> >> > +       task->cmdq = cmdq;
> >> > +       INIT_LIST_HEAD(&task->list_entry);
> >> > +       task->pa_base = dma_map_single(cmdq->mbox.dev, pkt->va_base,
> >> > +                                      pkt->cmd_buf_size, DMA_TO_DEVICE);
> >> >
> >> You seem to parse the requests and responses, that should ideally be
> >> done in client driver.
> >> Also, we are here in atomic context, can you move it in client driver
> >> (before the spin_lock)?
> >> Maybe by adding a new 'pa_base' member as well in 'cmdq_pkt'.
> >
> > will do

I agree with moving dma_map_single out of the spin_lock.

However, mailbox clients cannot map virtual memory to the mailbox
controller's device for DMA. In our previous discussion, we decided to
stop including mailbox_controller.h from clients to restrict their
capabilities.

Please take a look at the following link, from 2016/9/22 to 2016/9/30,
about mailbox_controller.h.
https://patchwork.kernel.org/patch/9312953/

Is there any better place to do dma_map_single?
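To make the question concrete, here is a rough sketch of what moving
the mapping into the client helper could look like, outside any
spinlock. It assumes cmdq_pkt gains a pa_base member and that the
helper is allowed to dereference chan->mbox->dev to reach the
controller's struct device, which is exactly the restriction in
question, so treat it as illustrative only:

```c
/* Illustrative sketch only: map the command buffer in the client
 * helper before flushing, outside any spinlock.  Assumes cmdq_pkt
 * gains a pa_base member and that the helper may use
 * chan->mbox->dev, which is the open question above.
 */
static int cmdq_pkt_flush_async(struct mbox_chan *chan, struct cmdq_pkt *pkt)
{
	struct device *dev = chan->mbox->dev;	/* controller's device */
	dma_addr_t pa;

	pa = dma_map_single(dev, pkt->va_base, pkt->cmd_buf_size,
			    DMA_TO_DEVICE);
	if (dma_mapping_error(dev, pa))
		return -ENOMEM;

	pkt->pa_base = pa;	/* controller reads this under its lock */
	return mbox_send_message(chan, pkt);
}
```

With something like this, cmdq_task_exec could drop its own
dma_map_single call and simply use pkt->pa_base in atomic context.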

> >> ....
> >> > +
> >> > +       cmdq->mbox.num_chans = CMDQ_THR_MAX_COUNT;
> >> > +       cmdq->mbox.ops = &cmdq_mbox_chan_ops;
> >> > +       cmdq->mbox.of_xlate = cmdq_xlate;
> >> > +
> >> > +       /* make use of TXDONE_BY_ACK */
> >> > +       cmdq->mbox.txdone_irq = false;
> >> > +       cmdq->mbox.txdone_poll = false;
> >> > +
> >> > +       for (i = 0; i < ARRAY_SIZE(cmdq->thread); i++) {
> >> >
> >> You mean  i < CMDQ_THR_MAX_COUNT
> >
> > will do
> >
> >> > +               cmdq->thread[i].base = cmdq->base + CMDQ_THR_BASE +
> >> > +                               CMDQ_THR_SIZE * i;
> >> > +               INIT_LIST_HEAD(&cmdq->thread[i].task_busy_list);
> >> >
> >> You seem to queue mailbox requests in this controller driver? why not
> >> use the mailbox api for that?
> >>
> >> > +               init_timer(&cmdq->thread[i].timeout);
> >> > +               cmdq->thread[i].timeout.function = cmdq_thread_handle_timeout;
> >> > +               cmdq->thread[i].timeout.data = (unsigned long)&cmdq->thread[i];
> >> >
> >> Here again... you seem to ignore the polling mechanism provided by the
> >> mailbox api, and implement your own.
> >
> > The queue is used to record the tasks which are flushed into CMDQ
> > hardware (GCE). We are handling time critical tasks, so we have to
> > queue them in GCE rather than a software queue (e.g. mailbox buffer).
> > Let me use display as an example. Many display tasks are flushed into
> > CMDQ to wait next vsync event. When vsync event is triggered by display
> > hardware, GCE needs to process all flushed tasks "within vblank" to
> > prevent garbage on screen. This is all done by GCE (without CPU)
> > to fulfill the time critical requirement. After GCE finishes its work,
> > it will generate interrupts, and then CMDQ driver will let clients know
> > which tasks are done.
> >
> Does the GCE provide any 'lock' to prevent modifying (by adding tasks
> to) the GCE h/w buffer when it is processing it at vsync?  Otherwise

The CPU suspends GCE while adding a task (cmdq_thread_suspend) and
resumes GCE once the task has been added (cmdq_thread_resume).
If GCE is processing task(s) at vsync while the CPU wants to add a new
task, the CPU detects this situation (via cmdq_thread_is_in_wfe),
resumes GCE immediately, and then appends the following task(s), which
wait for the next vsync event.
All of the above logic is implemented in cmdq_task_exec.
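Condensed into pseudo-C, the append path looks roughly like this (the
helper names follow the patch, but cmdq_task_append is a hypothetical
placeholder and the busy-list bookkeeping and error handling are
omitted):

```c
/* Condensed sketch of the append logic described above; details are
 * simplified relative to the real cmdq_task_exec.
 */
cmdq_thread_suspend(cmdq, thread);		/* stop GCE briefly */

if (cmdq_thread_is_in_wfe(thread)) {
	/* GCE is parked on a wait-for-event (e.g. waiting for vsync):
	 * it is safe to extend the command buffer now; the new task
	 * runs after the next event fires. */
	cmdq_task_append(thread, task);
	cmdq_thread_resume(cmdq, thread);
} else {
	/* GCE is actively processing (e.g. inside vblank): resume it
	 * immediately so the running tasks are not delayed, then
	 * append the new task to wait for the next vsync event. */
	cmdq_thread_resume(cmdq, thread);
	cmdq_task_append(thread, task);
}
```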

> there maybe race/error. If there is such a 'lock' flag/irq, that could
> help here. However, you are supposed to know your h/w better, so I
> will accept this implementation assuming it can't be done any better.
> 
> Please address other comments and resubmit.
> 
> Thanks

Once we figure out a better solution for the dma_map_single issue, I
will resubmit a new version.

Thanks,
HS

