Date: Fri, 2 Oct 2015 06:59:18 -0700
From: "Paul E. McKenney"
To: Petr Mladek
Cc: Andrew Morton, Oleg Nesterov, Tejun Heo, Ingo Molnar, Peter Zijlstra,
 Steven Rostedt, Josh Triplett, Thomas Gleixner, Linus Torvalds,
 Jiri Kosina, Borislav Petkov, Michal Hocko, linux-mm@kvack.org,
 Vlastimil Babka, live-patching@vger.kernel.org, linux-api@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: Re: [RFC v2 00/18] kthread: Use kthread worker API more widely
Message-ID: <20151002135918.GN4043@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <1442840639-6963-1-git-send-email-pmladek@suse.com>
 <20150930050833.GA4412@linux.vnet.ibm.com>
 <20151001155943.GE9603@pathway.suse.cz>
 <20151001170053.GH4043@linux.vnet.ibm.com>
 <20151002120014.GG9603@pathway.suse.cz>
In-Reply-To: <20151002120014.GG9603@pathway.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 02, 2015 at 02:00:14PM +0200, Petr Mladek wrote:
> On Thu 2015-10-01 10:00:53, Paul E. McKenney wrote:
> > On Thu, Oct 01, 2015 at 05:59:43PM +0200, Petr Mladek wrote:
> > > On Tue 2015-09-29 22:08:33, Paul E. McKenney wrote:
> > > > On Mon, Sep 21, 2015 at 03:03:41PM +0200, Petr Mladek wrote:
> > > > > My intention is to make it easier to manipulate kthreads. This RFC tries
> > > > > to use the kthread worker API. It is based on comments from the
> > > > > first attempt. See https://lkml.org/lkml/2015/7/28/648 and
> > > > > the list of changes below.
> > > > >
> > If the point of these patches was simply to test your API, and if you are
> > not looking to get them upstream, we are OK.
>
> I would like to eventually transform all kthreads into an API that
> will better define the kthread workflow. It need not be this one,
> though. I am still looking for a good API that will be acceptable[*].
>
> One of the reasons that I played with the RCU, khugepaged, and ring buffer
> kthreads is that they are maintained by core developers. I hope that
> it will help to get better consensus.
>
> > If you want them upstream, you need to explain to me why the patches
> > help something.
>
> As I said, the RCU kthreads do not show a big win because they ignore
> the freezer, are not parked, never stop, and do not handle signals. But
> the change will allow live patching them because they leave the main
> function in a safe place.
>
> The ring buffer benchmark is a much better example. It reduced
> the main function of the consumer kthread to two lines.
> It removed some error-prone code that modified the task state,
> called the scheduler, and handled kthread_should_stop(). IMHO, the
> workflow is better and safer now.
>
> I am going to prepare and send more examples where the change makes
> the workflow easier.
>
> > And also how the patches avoid breaking things.
>
> I do my best to keep the original functionality. If we decide to use
> the kthread worker API, my first attempt is much safer; see
> https://lkml.org/lkml/2015/7/28/650. It basically replaces the
> top-level for loop with one self-queuing work.
> There are some more
> instructions to go back to the cycle, but they define a common
> safe point that will be maintained in a single location for
> all kthread workers.
>
>
> [*] I have played with two APIs so far. Both define a safe point
> for freezing, parking, stopping, signal handling, and live patching.
> Also, some non-trivial logic of the main cycle is maintained
> in a single location.
>
> Here are some details:
>
> 1. iterant API
> --------------
>
> It allows defining three callbacks that are called the following
> way:
>
>       init();
>       while (!stop)
>               func();
>       destroy();
>
> See also https://lkml.org/lkml/2015/6/5/556.
>
> Advantages:
>       + simple and clear workflow
>       + simple use
>       + simple conversion from the current kthreads API
>
> Disadvantages:
>       + problematic solution for sleeping between events
>       + completely new API
>
>
> 2. kthread worker API
> ---------------------
>
> It is similar to workqueues. The difference is that the works
> have a dedicated kthread, so we can better control the resources,
> e.g. priority, scheduling policy, ...
>
> Advantages:
>       + already in use
>       + design proven to work (workqueues)
>       + natural way to wait for work in the common code (worker)
>         using event-driven works and delayed works
>       + easy to convert to/from the workqueues API
>
> Disadvantages:
>       + more code needed to define, initialize, and queue works
>       + more complicated conversion from the current API
>         if we want to do it in a clean way (event-driven)
>       + might need more synchronization in some cases[**]
>
> Questionable:
>       + event-driven vs. procedural programming style
>       + allows a finer-grained split of the functionality into
>         separate units (works) that might be queued
>         as needed
>
>
> [**] wake_up() is a nop for an empty waitqueue, but queuing a work
> into a non-existing worker might cause a crash. Well, this is
> usually already synchronized.
>
>
> Any thoughts or preferences are highly appreciated.
For the RCU grace-period kthreads, I am not seeing the advantage of
either API over the current approach.

							Thanx, Paul