From: Bjorn Andersson <bjorn.andersson@linaro.org>
To: Mukesh Ojha <quic_mojha@quicinc.com>
Cc: linux-remoteproc@vger.kernel.org, lkml <linux-kernel@vger.kernel.org>
Subject: Re: Query on moving Recovery remoteproc work to a separate wq instead of system freezable wq
Date: Mon, 17 Jan 2022 16:20:38 -0600	[thread overview]
Message-ID: <YeXrtuQglDwhNvLm@builder.lan> (raw)
In-Reply-To: <ea64436c-3d9b-9ac1-d4e8-38f15142a764@quicinc.com>

On Mon 17 Jan 09:09 CST 2022, Mukesh Ojha wrote:

> Hi,
> 
> There could be a situation where there is too much load (from tasks affined

As in "it's theoretically possible" or "we run into this issue all the
time"?

> to a particular core) on a core, such that the rproc recovery thread
> will not get a chance to run for no reason other than the load. If we
> make this queue unbound, then this work can run on any core.
> 
> Kindly let me know if I can post a proper patch for this, like the one below.
> 
> --- a/drivers/remoteproc/remoteproc_core.c
> +++ b/drivers/remoteproc/remoteproc_core.c
> @@ -59,6 +59,7 @@ static int rproc_release_carveout(struct rproc *rproc,
> 
>  /* Unique indices for remoteproc devices */
>  static DEFINE_IDA(rproc_dev_index);
> +static struct workqueue_struct *rproc_recovery_wq;
> 
>  static const char * const rproc_crash_names[] = {
>         [RPROC_MMUFAULT]        = "mmufault",
> @@ -2487,7 +2488,7 @@ void rproc_report_crash(struct rproc *rproc, enum rproc_crash_type type)
>                 rproc->name, rproc_crash_to_string(type));
> 
>         /* Have a worker handle the error; ensure system is not suspended */
> -       queue_work(system_freezable_wq, &rproc->crash_handler);
> +       queue_work(rproc_recovery_wq, &rproc->crash_handler);
>  }
>  EXPORT_SYMBOL(rproc_report_crash);
> 
> @@ -2532,6 +2533,12 @@ static void __exit rproc_exit_panic(void)
> 
>  static int __init remoteproc_init(void)
>  {
> +       rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq", WQ_UNBOUND |
> +                               WQ_HIGHPRI | WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);

Afaict this is not only a separate work queue, but a high priority, "cpu
intensive" work queue. Does that really represent the urgency of getting
the recovery under way?
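
(Note that, per Documentation/core-api/workqueue.rst, WQ_CPU_INTENSIVE
is meaningless for an unbound workqueue, and WQ_HIGHPRI selects a
higher-priority worker pool.) If the concern is only that a worker
bound to an overloaded CPU never gets to run, a plainer allocation
would seem to suffice; an untested sketch:

	/*
	 * WQ_UNBOUND lets the scheduler place the worker on any
	 * allowed CPU, so recovery is not stuck behind load pinned to
	 * one core.  WQ_FREEZABLE preserves the "don't run while the
	 * system is suspending" behavior of system_freezable_wq.
	 */
	rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq",
					    WQ_UNBOUND | WQ_FREEZABLE, 0);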

Regards,
Bjorn

> +       if (!rproc_recovery_wq) {
> +               pr_err("creation of rproc_recovery_wq failed\n");
> +       }
> +
> 
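
The allocation failure also needs handling beyond the pr_err(): either
fail remoteproc_init() with -ENOMEM, or (a sketch of my own, not
something proposed in this thread) fall back to system_freezable_wq at
the call site, so a failed allocation just degrades to the current
behavior:

	/* Have a worker handle the error; ensure system is not suspended */
	queue_work(rproc_recovery_wq ?: system_freezable_wq,
		   &rproc->crash_handler);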
> Thanks,
> Mukesh

Thread overview: 3+ messages
2022-01-17 15:09 Query on moving Recovery remoteproc work to a separate wq instead of system freezable wq Mukesh Ojha
2022-01-17 22:20 ` Bjorn Andersson [this message]
2022-01-18  9:27   ` Mukesh Ojha
