From: Tejun Heo <tj@kernel.org>
To: Marc Kleine-Budde <mkl@pengutronix.de>
Cc: "Sebastian Andrzej Siewior" <bigeasy@linutronix.de>,
"Rasmus Villemoes" <linux@rasmusvillemoes.dk>,
"Peter Hurley" <peter@hurleysoftware.com>,
"Lai Jiangshan" <jiangshanlai@gmail.com>,
"Esben Haabendal" <esben@geanix.com>,
"Steven Walter" <stevenrwalter@gmail.com>,
linux-kernel@vger.kernel.org,
"Oleksij Rempel" <o.rempel@pengutronix.de>,
"Pengutronix Kernel Team" <kernel@pengutronix.de>,
"André Pribil" <a.pribil@beck-ipc.com>,
"Jiri Slaby" <jirislaby@kernel.org>,
linux-rt-users@vger.kernel.org
Subject: Re: [RFC PATCH 0/2] RT scheduling policies for workqueues
Date: Mon, 28 Mar 2022 07:39:25 -1000 [thread overview]
Message-ID: <YkHyzcfiyjLfIVOo@slm.duckdns.org> (raw)
In-Reply-To: <20220328100927.5ax34nea7sp7jdsy@pengutronix.de>
Hello,
On Mon, Mar 28, 2022 at 12:09:27PM +0200, Marc Kleine-Budde wrote:
> > Having a kthread per "low-latency" tty instance is something I would
> > prefer. The kwork corner is an anonymous worker instance and probably
> > does more harm than good. Especially if it is a knob for everyone which
> > is used for the wrong reasons and manages to be harmful in the end.
> > With a special kthread for a particular tty, the thread can be assigned
> > with the desired priority within the system and ttyS1 can be
> > distinguished from ttyS0 (and so on). This turned out to be useful in a
> > few setups over the years.
>
> +1
>
> The networking subsystem has gone a similar way with NAPI: NAPI
> handling can be switched from softirq to a dedicated kernel thread on
> a per-interface basis.
I wonder whether it'd be useful to provide a set of wrappers which make
switching between workqueue and kthread_worker easy. Semantics-wise,
they're already mostly aligned, and it shouldn't be too difficult to e.g.
have an unbound workqueue be backed by a dedicated kthread_worker instead
of the shared pool depending on a flag, or even to allow switching
dynamically.
Thanks.
--
tejun
Thread overview: 12+ messages
2022-03-23 14:55 [RFC PATCH 0/2] RT scheduling policies for workqueues Rasmus Villemoes
2022-03-23 14:55 ` [RFC PATCH 1/2] workqueue: allow use of realtime scheduling policies Rasmus Villemoes
2022-03-23 14:56 ` [RFC PATCH 2/2] workqueue: update sysfs handlers, allow setting RT policies Rasmus Villemoes
2022-03-28 10:05 ` [RFC PATCH 0/2] RT scheduling policies for workqueues Sebastian Andrzej Siewior
2022-03-28 10:09 ` Marc Kleine-Budde
2022-03-28 17:39 ` Tejun Heo [this message]
2022-03-28 18:07 ` Marc Kleine-Budde
2022-03-29 6:30 ` Sebastian Andrzej Siewior
2022-03-29 8:33 ` Rasmus Villemoes
2022-03-29 16:57 ` Tejun Heo
2022-04-01 9:21 ` Sebastian Andrzej Siewior
2022-04-06 10:00 ` Rasmus Villemoes