From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Oleg Nesterov <oleg@redhat.com>, Tejun Heo <tj@kernel.org>,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Steven Rostedt <rostedt@goodmis.org>,
Josh Triplett <josh@joshtriplett.org>,
Thomas Gleixner <tglx@linutronix.de>,
Linus Torvalds <torvalds@linux-foundation.org>,
Jiri Kosina <jkosina@suse.cz>, Borislav Petkov <bp@suse.de>,
Michal Hocko <mhocko@suse.cz>,
linux-mm@kvack.org, Vlastimil Babka <vbabka@suse.cz>,
live-patching@vger.kernel.org, linux-api@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [RFC v2 00/18] kthread: Use kthread worker API more widely
Date: Fri, 2 Oct 2015 06:59:18 -0700
Message-ID: <20151002135918.GN4043@linux.vnet.ibm.com>
In-Reply-To: <20151002120014.GG9603@pathway.suse.cz>
On Fri, Oct 02, 2015 at 02:00:14PM +0200, Petr Mladek wrote:
> On Thu 2015-10-01 10:00:53, Paul E. McKenney wrote:
> > On Thu, Oct 01, 2015 at 05:59:43PM +0200, Petr Mladek wrote:
> > > On Tue 2015-09-29 22:08:33, Paul E. McKenney wrote:
> > > > On Mon, Sep 21, 2015 at 03:03:41PM +0200, Petr Mladek wrote:
> > > > > My intention is to make it easier to manipulate kthreads. This RFC tries
> > > > > to use the kthread worker API. It is based on comments from the
> > > > > first attempt. See https://lkml.org/lkml/2015/7/28/648 and
> > > > > the list of changes below.
> > > > >
> > If the point of these patches was simply to test your API, and if you are
> > not looking to get them upstream, we are OK.
>
> I would like to eventually convert all kthreads to an API that
> better defines the kthread workflow. It need not be this one,
> though. I am still looking for a good API that will be acceptable.[*]
>
> One of the reasons that I played with the RCU, khugepaged, and ring
> buffer kthreads is that they are maintained by core developers.
> I hope that this will help to reach a better consensus.
>
>
> > If you want them upstream, you need to explain to me why the patches
> > help something.
>
> As I said, the RCU kthreads do not show a big win because they ignore
> the freezer, are never parked, never stop, and do not handle signals.
> But the change would allow live patching them, because they leave
> the main function at a safe point.
>
> The ring buffer benchmark is a much better example. The conversion
> reduced the main function of the consumer kthread to two lines.
> It removed some error-prone code that modified the task state,
> called the scheduler, and handled kthread_should_stop(). IMHO,
> the workflow is better and safer now.
>
> I am going to prepare and send more examples where the change makes
> the workflow easier.
>
>
> > And also how the patches avoid breaking things.
>
> I do my best to preserve the original functionality. If we decide to
> use the kthread worker API, my first attempt is much safer, see
> https://lkml.org/lkml/2015/7/28/650. It basically replaces the
> top-level for cycle with one self-queuing work. Going back to
> the cycle takes a few more instructions, but they define a common
> safe point that is maintained in a single location for all
> kthread workers.
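> 
> For illustration, a minimal sketch of such a self-queuing work (not
> the actual patch; the function and worker names are made up):
> 
> 	static void my_main_func(struct kthread_work *work)
> 	{
> 		/* one iteration of the original top-level for cycle */
> 		do_one_iteration();
> 
> 		/* requeue ourselves; the worker loop is the safe point */
> 		queue_kthread_work(&my_worker, work);
> 	}
> 
> Between two iterations, the kthread returns to kthread_worker_fn(),
> which is the common safe point mentioned above.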
>
>
> [*] I have played with two APIs so far. Both define a safe point
> for freezing, parking, stopping, signal handling, and live patching.
> Also, some non-trivial logic of the main cycle is maintained
> in a single location.
>
> Here are some details:
>
> 1. iterant API
> --------------
>
> It allows defining three callbacks that are called as follows:
> 
> 	init();
> 	while (!stop)
> 		func();
> 	destroy();
>
> See also https://lkml.org/lkml/2015/6/5/556.
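> 
> A hypothetical use might look like this (the struct and function
> names are illustrative only, not taken from the actual proposal):
> 
> 	static struct kthread_iterant my_iter = {
> 		.init	 = my_init,	/* once, before the loop  */
> 		.func	 = my_func,	/* repeatedly, until stop */
> 		.destroy = my_destroy,	/* once, after the loop   */
> 	};
> 
> 	task = kthread_iterant_run(&my_iter, "my_kthread");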
>
> Advantages:
> + simple and clear workflow
> + simple use
> + simple conversion from the current kthreads API
>
> Disadvantages:
> + problematic handling of sleeping between events
> + completely new API
>
>
> 2. kthread worker API
> ---------------------
>
> It is similar to workqueues. The difference is that the works have
> a dedicated kthread, so we can better control its resources,
> e.g. priority, scheduling policy, ...
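> 
> For reference, a minimal sketch with the current in-tree kthread
> worker API; error handling is omitted and my_work_fn() is just
> a placeholder:
> 
> 	struct kthread_worker worker;
> 	struct kthread_work work;
> 
> 	init_kthread_worker(&worker);
> 	task = kthread_run(kthread_worker_fn, &worker, "my_worker");
> 	/* the dedicated kthread can be tuned here, e.g. via
> 	 * sched_setscheduler(task, ...) */
> 
> 	init_kthread_work(&work, my_work_fn);
> 	queue_kthread_work(&worker, &work);
> 
> 	flush_kthread_worker(&worker);
> 	kthread_stop(task);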
>
> Advantages:
> + already in use
> + design proven to work (workqueues)
> + natural way to wait for work in the common code (worker),
>   using event-driven works and delayed works
> + easy to convert to/from workqueues API
>
> Disadvantages:
> + more code needed to define, initialize, and queue works
> + more complicated conversion from the current API
>   if we want to do it cleanly (event driven)
> + might need more synchronization in some cases[**]
>
> Questionable:
> + event driven vs. procedural programming style
> + allows a finer-grained split of the functionality into
>   separate units (works) that can be queued as needed
>
>
> [**] wake_up() is a nop for an empty waitqueue, but queuing a work
> on a non-existent worker might cause a crash. Well, this is
> usually already synchronized.
>
>
> Any thoughts or preferences are highly appreciated.
For the RCU grace-period kthreads, I am not seeing the advantage of
either API over the current approach.
Thanx, Paul