From: "Paul E. McKenney"
Subject: Re: [PATCH 5/8] blkcg: make sure blkg_lookup() returns %NULL if @q is bypassing
Date: Mon, 16 Apr 2012 15:41:32 -0700
Message-ID: <20120416224131.GJ2448@linux.vnet.ibm.com>
References: <1334273380-30233-1-git-send-email-tj@kernel.org>
 <1334273380-30233-6-git-send-email-tj@kernel.org>
 <20120413160053.GE26383@redhat.com>
 <20120413170334.GB12233@google.com>
 <20120413172336.GF26383@redhat.com>
In-Reply-To: <20120413172336.GF26383-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Reply-To: paulmck-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
To: Vivek Goyal
Cc: axboe-tSWWG44O7X1aa/9Udqfwiw@public.gmane.org,
 ctalbott-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
 rni-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
 containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 Tejun Heo,
 cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

On Fri, Apr 13, 2012 at 01:23:36PM -0400, Vivek Goyal wrote:
> On Fri, Apr 13, 2012 at 10:03:34AM -0700, Tejun Heo wrote:
> > Hey,
> > 
> > On Fri, Apr 13, 2012 at 12:00:53PM -0400, Vivek Goyal wrote:
> > > On Thu, Apr 12, 2012 at 04:29:37PM -0700, Tejun Heo wrote:
> > > 
> > > [..]
> > > >  * In bypass mode, only the dispatch FIFO queue of @q is used. This
> > > >  * function makes @q enter bypass mode and drains all requests which were
> > > >  * throttled or issued before. On return, it's guaranteed that no request
> > > > - * is being throttled or has ELVPRIV set.
> > > > + * is being throttled or has ELVPRIV set and blk_queue_bypass() is %true
> > > > + * inside queue or RCU read lock.
> > > >  */
> > > >  void blk_queue_bypass_start(struct request_queue *q)
> > > >  {
> > > > @@ -426,6 +427,7 @@ void blk_queue_bypass_start(struct request_queue *q)
> > > >  	spin_unlock_irq(q->queue_lock);
> > > > 
> > > >  	blk_drain_queue(q, false);
> > > > +	synchronize_rcu();
> > > 
> > > I guess this synchronize_rcu() needs some comments here to make it clear
> > > what it meant for. IIUC, you are protecting against policy data (stats
> > > update) which happen under rcu in throttling code? You want to make sure
> > > all these updaters are done before you go ahead with
> > > activation/deactivation of a policy.
> > > 
> > > Well, I am wondering if CFQ is policy being activated/deactivated why
> > > should we try to drain other policie's requests. Can't one continue
> > > to work without draining all the throttled requests. We probably just
> > > need to make sure new groups are not created.
> > 
> > So, I think synchronization rules like this are something which the
> > core should define. cfq may not use it but the sync rules should
> > still be the same for all policies. In this case, what the core
> > provides is "blk_queue_bypass() is guaranteed to be seen as %true
> > inside RCU read lock section once this function returns", which in
> > turn will guarantee that RCU read-lock protected blkg_lookup() is
> > guaranteed to fail once the function returns. This property makes RCU
> > protected blkg_lookup() safe against queue bypassing, which is what we
> > want.
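For reference, the guarantee Tejun describes is the usual RCU "update,
then wait for pre-existing readers" idiom.  The sketch below is
simplified and is not the actual block-layer code (the field and helper
names only approximate the real ones), but it shows why any
blkg_lookup() whose rcu_read_lock() section begins after
blk_queue_bypass_start() has returned must observe the bypass state:

	/* Writer side, cf. blk_queue_bypass_start(); simplified sketch. */
	spin_lock_irq(q->queue_lock);
	q->bypass_depth++;		/* blk_queue_bypass(q) becomes true */
	queue_flag_set(QUEUE_FLAG_BYPASS, q);
	spin_unlock_irq(q->queue_lock);

	blk_drain_queue(q, false);
	synchronize_rcu();		/* wait out readers that might still
					   see the pre-bypass state */

	/* Reader side, cf. blkg_lookup(); runs under rcu_read_lock(). */
	rcu_read_lock();
	if (blk_queue_bypass(q))
		blkg = NULL;		/* guaranteed once bypass_start() returns */
	else
		blkg = __blkg_lookup(blkcg, q);
	rcu_read_unlock();
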
> I think now synchronize_rcu() has become part of cfq_init_queue()
> effectively and that will slow down boot. In the past I had to remove
> it.

One alternative approach is to use synchronize_rcu_expedited().

							Thanx, Paul
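
P.S.  For concreteness, the substitution would be the one-liner below
in blk_queue_bypass_start() (illustration only, not a tested patch).
synchronize_rcu_expedited() still waits for a full grace period, it
just forces that grace period to complete quickly at the cost of
disturbing the other CPUs, which should be acceptable in a slow path
such as queue/policy initialization:

 	blk_drain_queue(q, false);
-	synchronize_rcu();
+	synchronize_rcu_expedited();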