Date: Mon, 17 May 2010 14:06:06 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: "Michael S. Tsirkin"
Cc: Peter Zijlstra, linux-kernel@vger.kernel.org, mingo@elte.hu,
	laijs@cn.fujitsu.com, dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@polymtl.ca, josh@joshtriplett.org, dvhltc@us.ibm.com,
	niv@us.ibm.com, tglx@linutronix.de, rostedt@goodmis.org,
	Valdis.Kletnieks@vt.edu, dhowells@redhat.com, eric.dumazet@gmail.com,
	Arnd Bergmann
Subject: Re: [PATCH RFC tip/core/rcu 23/23] vhost: add __rcu annotations

On Mon, May 17, 2010 at 11:33:49PM +0300, Michael S. Tsirkin wrote:
> On Thu, May 13, 2010 at 08:23:40AM -0700, Paul E. McKenney wrote:
> > On Thu, May 13, 2010 at 03:07:23PM +0200, Peter Zijlstra wrote:
> > > On Wed, 2010-05-12 at 16:00 -0700, Paul E. McKenney wrote:
> > > > Any thoughts?  One approach would be to create a separate lockdep
> > > > class for vhost workqueue state, similar to the approach used to
> > > > instrument rcu_read_lock() and friends.
> > >
> > > workqueue_struct::lockdep_map, it's held while executing worklets.
> > >
> > > lock_is_held(&vhost_workqueue->lockdep_map) should do as you want.
> >
> > Thank you, Peter!!!
> >
> > 							Thanx, Paul
>
> vhost in fact does flush_work rather than flush_workqueue, so while
> for now everything runs from vhost_workqueue, in theory nothing would
> break if we used some other workqueue, or even a combination thereof.
>
> I guess when/if this happens, we could start by converting to _raw
> and then devise a solution.

If there are only a small, finite number of workqueues involved, we can
easily do something like:

#ifdef CONFIG_PROVE_RCU

/* Returns non-zero if called from any of the vhost workqueues. */
int in_vhost_workqueue(void)
{
	return in_workqueue_context(vhost_workqueue) ||
	       in_workqueue_context(vhost_other_workqueue) ||
	       in_workqueue_context(yet_another_vhost_workqueue);
}

#endif /* #ifdef CONFIG_PROVE_RCU */

Seem reasonable?

> By the way, what would be really nice is if we had a way to trap when
> an rcu-protected pointer is freed without a flush while some reader is
> running.  The current annotation does not allow this, does it?

Right now it does not, but I wonder whether something like Thomas's and
Mathieu's debugobjects work could be brought to bear on this problem.
This would need to be implemented in vhost itself, as synchronize_rcu()
has no way to know what memory it is flushing, nor does flush_work().

							Thanx, Paul
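
For reference, here is a minimal sketch of the in_workqueue_context()
helper that Paul's snippet above assumes, built directly on Peter's
lock_is_held() suggestion.  The workqueue core acquires wq->lockdep_map
around each work item's callback, so a helper like this would need to
live in kernel/workqueue.c, where struct workqueue_struct is defined;
treat the body as illustrative under those assumptions, not as the
final form:

#ifdef CONFIG_LOCKDEP
/*
 * Return non-zero if the current task is executing a work item
 * queued on @wq.  Relies on the lockdep_map that the workqueue
 * core holds around each work item's callback, so this check is
 * meaningful only when lockdep is enabled.
 */
int in_workqueue_context(struct workqueue_struct *wq)
{
	return lock_is_held(&wq->lockdep_map);
}
#endif /* CONFIG_LOCKDEP */

A vhost reader could then be annotated as, for example,
rcu_dereference_check(vq->private_data, in_vhost_workqueue()), with
in_vhost_workqueue() defined as in Paul's snippet; lockdep would then
complain if the pointer were dereferenced outside both an RCU read-side
critical section and vhost workqueue context.  (The vq->private_data
usage here is only an illustration, not the exact annotation from the
patch.)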