public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
@ 2016-02-16  5:57 Boqun Feng
  2016-02-16  5:57 ` [RFC v2 1/6] lockdep: Add helper functions of irq_context Boqun Feng
                   ` (6 more replies)
  0 siblings, 7 replies; 21+ messages in thread
From: Boqun Feng @ 2016-02-16  5:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
	Boqun Feng

Hi all,

This is the v2 for RCU_LOCKED_ACCESS.

Link of v1: http://article.gmane.org/gmane.linux.kernel/2143674

Changes since v1:

*	Define the newly introduced irq_context helpers as macros when
	lockdep doesn't trace irq_context, because these helpers are
	trivial in that case and defining them as macros avoids some
	"defined but not used" warnings.

*	Introduce a new macro to statically define a lock_class_key for
	RCU, in order to work around a compiler "bug" with brace-enclosed
	initializer lists. (Paul McKenney)


As a characteristic of RCU, read-side critical sections have a very
loose connection with rcu_dereference()s: you can only be sure that an
rcu_dereference() might be called in some read-side critical section,
but once the code gets complex, you may not be sure exactly which one.
This can also be a problem for other locking mechanisms, where the
critical sections protecting data and the data accesses they protect
are not clearly correlated.

In this series, we introduce the LOCKED_ACCESS framework and, based on
it, implement the RCU_LOCKED_ACCESS functionality to give us a clear
hint: which rcu_dereference() happens in which RCU read-side critical
section.

After this series is applied, and if CONFIG_RCU_LOCKED_ACCESS=y, the
proc file /proc/locked_access/rcu will show all the relationships
collected so far between rcu_read_lock() and friends and
rcu_dereference*().

Snippets of /proc/locked_access/rcu are as follows:

...(this rcu_dereference() happens after one rcu_read_lock())
...
ACQCHAIN 0xfdbf0c6aeea, 1 locks, irq_context 0:
  LOCK at [<ffffffff812b1115>] get_proc_task_net+0x5/0x140
    ACCESS TYPE 1 at kernel/pid.c:441
...
...(this rcu_dereference() happens after three rcu_read_lock())
...
ACQCHAIN 0xfe042af3bbfb2605, 3 locks, irq_context 0:
  LOCK at [<ffffffff81094b47>] SyS_kill+0x97/0x2a0
    LOCK at [<ffffffff8109286f>] kill_pid_info+0x1f/0x140
      LOCK at [<ffffffff81092605>] group_send_sig_info+0x5/0x130
        ACCESS TYPE 1 at kernel/signal.c:695
...


This patchset is based on v4.5-rc2 and consists of 6 patches (patches
2-5 are the implementation of LOCKED_ACCESS):

1.	Introduce helper functions for irq_context.

2.	Introduce locked access class and acqchain.

3.	Maintain the keys of acqchains.

4.	Introduce the entry point of LOCKED_ACCESS.

5.	Add a proc interface for locked access classes.

6.	Enable LOCKED_ACCESS for RCU.

Tested by 0day, and I also did a simple test on x86: built and booted a
kernel with CONFIG_RCU_LOCKED_ACCESS=y and CONFIG_PROVE_LOCKING=y, then
ran several workloads (kernel building, git cloning, dbench);
/proc/locked_access/rcu was able to collect the relationships between
~300 RCU read-side critical sections and ~500 rcu_dereference*() calls.

Looking forward to any suggestions, comments and questions ;-)

Regards,
Boqun

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [RFC v2 1/6] lockdep: Add helper functions of irq_context
  2016-02-16  5:57 [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
@ 2016-02-16  5:57 ` Boqun Feng
  2016-02-17  0:17   ` Paul E. McKenney
  2016-04-23 12:53   ` [tip:locking/urgent] locking/lockdep: Fix ->irq_context calculation tip-bot for Boqun Feng
  2016-02-16  5:57 ` [RFC v2 2/6] lockdep: LOCKED_ACCESS: Introduce locked access class and acqchain Boqun Feng
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 21+ messages in thread
From: Boqun Feng @ 2016-02-16  5:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
	Boqun Feng

Add two helper functions for irq_context:

task_irq_context(): returns the encoded irq_context of the task; the
return value is encoded in the same way as held_lock::irq_context.
Always returns 0 if !(CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING).

is_same_irq_context(): compares whether two irq_context values are the
same. Always returns true if !(CONFIG_TRACE_IRQFLAGS &&
CONFIG_PROVE_LOCKING).

These two functions are added for future use of irq_context by
LOCKED_ACCESS: LOCKED_ACCESS needs the irq_context value of a task when
a data access happens rather than when a lock is acquired, so it needs
a way other than reading the value from held_lock::irq_context.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 60ace56..f36df37 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2839,6 +2839,16 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
 	return 1;
 }
 
+static unsigned int task_irq_context(struct task_struct *task)
+{
+	return 2 * (task->hardirq_context ? 1 : 0) + task->softirq_context;
+}
+
+static bool is_same_irq_context(unsigned int ctx1, unsigned int ctx2)
+{
+	return ctx1 == ctx2;
+}
+
 static int separate_irq_context(struct task_struct *curr,
 		struct held_lock *hlock)
 {
@@ -2847,8 +2857,7 @@ static int separate_irq_context(struct task_struct *curr,
 	/*
 	 * Keep track of points where we cross into an interrupt context:
 	 */
-	hlock->irq_context = 2*(curr->hardirq_context ? 1 : 0) +
-				curr->softirq_context;
+	hlock->irq_context = task_irq_context(curr);
 	if (depth) {
 		struct held_lock *prev_hlock;
 
@@ -2880,6 +2889,9 @@ static inline int mark_irqflags(struct task_struct *curr,
 	return 1;
 }
 
+#define task_irq_context(task)			0
+#define is_same_irq_context(ctx1, ctx2)		true
+
 static inline int separate_irq_context(struct task_struct *curr,
 		struct held_lock *hlock)
 {
-- 
2.7.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC v2 2/6] lockdep: LOCKED_ACCESS: Introduce locked access class and acqchain
  2016-02-16  5:57 [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
  2016-02-16  5:57 ` [RFC v2 1/6] lockdep: Add helper functions of irq_context Boqun Feng
@ 2016-02-16  5:57 ` Boqun Feng
  2016-02-16  5:57 ` [RFC v2 3/6] lockdep: LOCKED_ACCESS: Maintain the keys of acqchains Boqun Feng
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 21+ messages in thread
From: Boqun Feng @ 2016-02-16  5:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
	Boqun Feng

Similar to the concept of a lock class, a locked access class is a
group of locks together with the data accesses those locks protect.  A
locked access class also contains the structures for allocation and
lookup of acqchains and accesses.

The address of a locked access class is used as its key.  We tag a
group of locks with a locked access class by setting the content of the
group's lock_class_key to the key (i.e. the address) of the locked
access class; this trick works because, AFAIK, lockdep only uses the
address of a lock_class_key. However, although this doesn't change the
size of a lock_class_key, it does change the alignment requirement.

Tagging a lock_class_key with a locked access class is step #1 in using
LOCKED_ACCESS for a locking mechanism.

Similar to the concept of a lock class chain, an acqchain (short for
acquire chain) is a chain of lock acquisitions. The difference between
a lock class chain and an acqchain is that a lock class chain is keyed
by the hash sum of the class keys, whereas an acqchain is keyed by the
hash sum of the acquire IPs. We introduce acqchains here because
LOCKED_ACCESS cares more about the locations of critical sections.

This patch defines the structures for these concepts, along with their
manipulation operations. The config option CONFIG_LOCKED_ACCESS is
also introduced in this patch.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/lockdep.h            |  17 ++++
 kernel/locking/lockdep.c           | 156 +++++++++++++++++++++++++++++++++++++
 kernel/locking/lockdep_internals.h |  74 ++++++++++++++++++
 lib/Kconfig.debug                  |  12 +++
 4 files changed, 259 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index c57e424..140c1c3 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -51,9 +51,26 @@ struct lockdep_subclass_key {
 	char __one_byte;
 } __attribute__ ((__packed__));
 
+#ifdef CONFIG_LOCKED_ACCESS
+struct locked_access_class;
+
+struct lock_class_key {
+	union {
+		struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES];
+		/*
+		 * Use the content of the lock_class_key to store locked access
+		 * class, as lockdep only use the address of a lock_class_key.
+		 * However, please note by doing so, we will change the align
+		 * requirement of lock_class_key
+		 */
+		struct locked_access_class *laclass;
+	};
+};
+#else
 struct lock_class_key {
 	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
 };
+#endif
 
 extern struct lock_class_key __lockdep_no_validate__;
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index f36df37..fdb1b8c 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4310,3 +4310,159 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 	dump_stack();
 }
 EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
+
+#ifdef CONFIG_LOCKED_ACCESS
+static void locked_access_class_init(struct locked_access_class *laclass)
+{
+	int i;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	arch_spin_lock(&laclass->lock);
+
+	if (laclass->initialized) {
+		arch_spin_unlock(&laclass->lock);
+		return;
+	}
+
+	for (i = 0; i < ACQCHAIN_HASH_SIZE; i++)
+		INIT_LIST_HEAD(laclass->acqchain_hashtable + i);
+
+	laclass->nr_acqchains = 0;
+	laclass->nr_acqchain_hlocks = 0;
+	laclass->nr_access_structs = 0;
+
+	/*
+	 * Make sure the members of laclass have been initialized before
+	 * setting ->initialized.
+	 */
+	smp_store_release(&laclass->initialized, 1);
+	arch_spin_unlock(&laclass->lock);
+	local_irq_restore(flags);
+}
+
+static struct acqchain *lookup_acqchain(struct locked_access_class *laclass,
+					struct task_struct *task,
+					u64 acqchain_key)
+{
+	struct list_head *hash_head;
+	struct acqchain *acqchain;
+
+	hash_head = acqchainhashentry(laclass, acqchain_key);
+
+	list_for_each_entry_lockless(acqchain, hash_head, entry) {
+		if (acqchain->chain_key == acqchain_key
+		    && is_same_irq_context(acqchain->irq_context,
+					   task_irq_context(task)))
+			return acqchain;
+	}
+
+	return NULL;
+}
+
+static struct acqchain *add_acqchain(struct locked_access_class *laclass,
+				     struct task_struct *task,
+				     u64 acqchain_key)
+{
+	unsigned long flags;
+	struct acqchain *acqchain = NULL;
+	struct held_lock *hlock, *hlock_curr;
+	struct lockdep_map *instance;
+	int i, j, max_depth;
+	struct list_head *hash_head;
+
+	local_irq_save(flags);
+	arch_spin_lock(&laclass->lock);
+
+	/* Lookup again while holding the lock */
+	acqchain = lookup_acqchain(laclass, task, acqchain_key);
+
+	if (acqchain)
+		goto out;
+
+	if (laclass->nr_acqchains >= MAX_ACQCHAINS)
+		goto out;
+
+	hash_head = acqchainhashentry(laclass, acqchain_key);
+	acqchain = laclass->acqchains + laclass->nr_acqchains;
+	acqchain->chain_key = acqchain_key;
+
+	hlock = task->held_locks + task->lockdep_depth - 1;
+	acqchain->irq_context = task_irq_context(task);
+
+	for (i = task->lockdep_depth - 1; i >= 0; i--) {
+		hlock_curr = task->held_locks + i;
+		if (!is_same_irq_context(hlock_curr->irq_context,
+				acqchain->irq_context))
+			break;
+	}
+
+	i++;
+	max_depth = task->lockdep_depth - i;
+
+	j = 0;
+	if (likely(laclass->nr_acqchain_hlocks + max_depth
+			<= MAX_ACQCHAIN_HLOCKS)) {
+		acqchain->base = laclass->nr_acqchain_hlocks;
+		for (; i < task->lockdep_depth; i++) {
+			hlock_curr = task->held_locks + i;
+			instance = hlock_curr->instance;
+
+			/*
+			 * A lock instance may use its own address as ->key,
+			 * in which case the lock instance doesn't belong to
+			 * a locked access class.
+			 */
+			if (instance != (void *)instance->key &&
+			    instance->key->laclass == laclass) {
+				laclass->acqchain_hlocks[acqchain->base + j]
+						= hlock_curr->acquire_ip;
+				j++;
+			}
+		}
+		laclass->nr_acqchain_hlocks += j;
+	}
+
+	acqchain->depth = j;
+
+	/*
+	 * Make sure stores to the members of acqchain are observed before
+	 * we publish it.
+	 */
+	smp_store_release(&laclass->nr_acqchains, laclass->nr_acqchains + 1);
+	INIT_LIST_HEAD(&acqchain->accesses);
+
+	/*
+	 * Pair with the list_for_each_entry_lockless() in lookup_acqchain()
+	 */
+	list_add_tail_rcu(&acqchain->entry, hash_head);
+out:
+	arch_spin_unlock(&laclass->lock);
+	local_irq_restore(flags);
+	return acqchain;
+}
+
+/*
+ * Lookup the acqchain with a key of @acqchain_key in the hash table of
+ * @laclass, if none exists, add a new one.
+ *
+ * Return the acqchain if one is found or if one is added, otherwise return
+ * NULL.
+ *
+ * Must be called after @laclass is initialized.
+ */
+static struct acqchain *
+lookup_or_add_acqchain(struct locked_access_class *laclass,
+		      struct task_struct *task,
+		      u64 acqchain_key)
+{
+	struct acqchain *acqchain = NULL;
+
+	acqchain = lookup_acqchain(laclass, task, acqchain_key);
+	if (!acqchain)
+		acqchain = add_acqchain(laclass, task, acqchain_key);
+
+	return acqchain;
+}
+
+#endif /* CONFIG_LOCKED_ACCESS */
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 51c4b24..5e2e133 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -168,3 +168,77 @@ DECLARE_PER_CPU(struct lockdep_stats, lockdep_stats);
 # define debug_atomic_dec(ptr)		do { } while (0)
 # define debug_atomic_read(ptr)		0
 #endif
+
+#ifdef CONFIG_LOCKED_ACCESS
+/*
+ * A chain of lock acquisitions, keyed by the hash sum of all the
+ * instruction positions of lock acquisitions
+ */
+struct acqchain {
+	u8				irq_context;
+	s8				depth;
+	s16				base;
+	/* Entry in hash table */
+	struct list_head		entry;
+	u64				chain_key;
+	/* List of data accesses that happen after this chain */
+	struct list_head		accesses;
+};
+
+#define iterate_acqchain_key(key, ip) \
+	(((key) << MAX_LOCKDEP_KEYS_BITS) ^ \
+	((key) >> (64 - MAX_LOCKDEP_KEYS_BITS)) ^ \
+	(ip))
+
+#define MAX_ACQCHAINS_BITS	16
+#define MAX_ACQCHAINS		(1UL << MAX_ACQCHAINS_BITS)
+#define MAX_ACQCHAIN_HLOCKS	(MAX_ACQCHAINS * 5)
+
+#define ACQCHAIN_HASH_BITS	(MAX_ACQCHAINS_BITS-1)
+#define ACQCHAIN_HASH_SIZE	(1UL << ACQCHAIN_HASH_BITS)
+#define __acqchainhashfn(chain)	hash_long(chain, ACQCHAIN_HASH_BITS)
+#define acqchainhashentry(lad, chain) \
+	(lad->acqchain_hashtable + __acqchainhashfn((chain)))
+
+#define MAX_LOCKED_ACCESS_STRUCTS	(1UL << 16)
+
+/* Records of data accesses in LOCKED_ACCESS */
+struct locked_access_struct {
+	struct list_head		list;
+	struct locked_access_location	*loc;
+	int				type;
+};
+
+/*
+ * A locked_access_class represents a group of critical sections and related
+ * data accesses. A locked access class should only be defined statically,
+ * and the address of a locked_access_class is used as the 'key' of a locked
+ * access class.
+ */
+struct locked_access_class {
+	const char                   *name;
+	/* Hash table of acqchains, for lookup */
+	struct list_head             acqchain_hashtable[ACQCHAIN_HASH_SIZE];
+	/* Storage of acqchains, for allocation */
+	struct acqchain	             acqchains[MAX_ACQCHAINS];
+	long                         nr_acqchains;
+	/* Storage of acquired IPs of acqchains, for allocation */
+	unsigned long                acqchain_hlocks[MAX_ACQCHAIN_HLOCKS];
+	long                         nr_acqchain_hlocks;
+	/* Storage of data accesses, for allocation */
+	struct locked_access_struct  access_structs[MAX_LOCKED_ACCESS_STRUCTS];
+	long                         nr_access_structs;
+	arch_spinlock_t              lock;
+	int                          initialized;
+};
+
+#define INIT_LOCKED_ACCESS_DATA(_name) \
+	{ \
+		.name = #_name, \
+		.lock = __ARCH_SPIN_LOCK_UNLOCKED, \
+		.initialized = 0, \
+		.nr_acqchains = 0, \
+		.nr_acqchain_hlocks = 0,\
+		.nr_access_structs = 0, \
+	}
+#endif /* CONFIG_LOCKED_ACCESS */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ecb9e75..178f288 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1065,6 +1065,18 @@ config LOCK_STAT
 	 CONFIG_LOCK_STAT defines "contended" and "acquired" lock events.
 	 (CONFIG_LOCKDEP defines "acquire" and "release" events.)
 
+config LOCKED_ACCESS
+	bool
+	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	select DEBUG_LOCK_ALLOC
+	default n
+	help
+	  Provide a way to correlate data accesses with critical sections.
+	  LOCKED_ACCESS is designed for users of locking mechanisms to see
+	  which data access happens inside which critical section. This
+	  could help users of a locking mechanism gain more detailed
+	  information about their lock usage.
+
 config DEBUG_LOCKDEP
 	bool "Lock dependency engine debugging"
 	depends on DEBUG_KERNEL && LOCKDEP
-- 
2.7.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC v2 3/6] lockdep: LOCKED_ACCESS: Maintain the keys of acqchains
  2016-02-16  5:57 [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
  2016-02-16  5:57 ` [RFC v2 1/6] lockdep: Add helper functions of irq_context Boqun Feng
  2016-02-16  5:57 ` [RFC v2 2/6] lockdep: LOCKED_ACCESS: Introduce locked access class and acqchain Boqun Feng
@ 2016-02-16  5:57 ` Boqun Feng
  2016-02-16  5:57 ` [RFC v2 4/6] lockdep: LOCKED_ACCESS: Introduce locked_access_point() Boqun Feng
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 21+ messages in thread
From: Boqun Feng @ 2016-02-16  5:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
	Boqun Feng

Add held_lock::prev_acqchain_key and task_struct::curr_acqchain_key to
maintain the keys of acqchains in the same way lockdep maintains the
keys of lock classes.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/lockdep.h  |  3 +++
 include/linux/sched.h    |  3 +++
 kernel/fork.c            |  3 +++
 kernel/locking/lockdep.c | 31 +++++++++++++++++++++++++++++++
 4 files changed, 40 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 140c1c3..2ffe6c3 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -244,6 +244,9 @@ struct held_lock {
 	 * with zero), here we store the previous hash value:
 	 */
 	u64				prev_chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+	u64				prev_acqchain_key;
+#endif
 	unsigned long			acquire_ip;
 	struct lockdep_map		*instance;
 	struct lockdep_map		*nest_lock;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a10494a..c24348e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1643,6 +1643,9 @@ struct task_struct {
 	unsigned int lockdep_recursion;
 	struct held_lock held_locks[MAX_LOCK_DEPTH];
 	gfp_t lockdep_reclaim_gfp;
+#ifdef CONFIG_LOCKED_ACCESS
+	u64 curr_acqchain_key;
+#endif
 #endif
 #ifdef CONFIG_UBSAN
 	unsigned int in_ubsan;
diff --git a/kernel/fork.c b/kernel/fork.c
index 2e391c7..d9fc51b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1404,6 +1404,9 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 	p->lockdep_depth = 0; /* no locks held yet */
 	p->curr_chain_key = 0;
 	p->lockdep_recursion = 0;
+# ifdef CONFIG_LOCKED_ACCESS
+	p->curr_acqchain_key = 0;
+# endif
 #endif
 
 #ifdef CONFIG_DEBUG_MUTEXES
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index fdb1b8c..3ebd00d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3089,6 +3089,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	int chain_head = 0;
 	int class_idx;
 	u64 chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+	u64 acqchain_key;
+#endif
 
 	if (unlikely(!debug_locks))
 		return 0;
@@ -3196,6 +3199,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		return 0;
 
 	chain_key = curr->curr_chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+	acqchain_key = curr->curr_acqchain_key;
+#endif
 	if (!depth) {
 		/*
 		 * How can we have a chain hash when we ain't got no keys?!
@@ -3206,12 +3212,23 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	}
 
 	hlock->prev_chain_key = chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+	hlock->prev_acqchain_key = acqchain_key;
+#endif
 	if (separate_irq_context(curr, hlock)) {
 		chain_key = 0;
+
+#ifdef CONFIG_LOCKED_ACCESS
+		acqchain_key = 0;
+#endif
+
 		chain_head = 1;
 	}
 	chain_key = iterate_chain_key(chain_key, id);
 
+#ifdef CONFIG_LOCKED_ACCESS
+	acqchain_key = iterate_acqchain_key(acqchain_key, ip);
+#endif
 	if (nest_lock && !__lock_is_held(nest_lock))
 		return print_lock_nested_lock_not_held(curr, hlock, ip);
 
@@ -3219,6 +3236,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		return 0;
 
 	curr->curr_chain_key = chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+	curr->curr_acqchain_key = acqchain_key;
+#endif
 	curr->lockdep_depth++;
 	check_chain_key(curr);
 #ifdef CONFIG_DEBUG_LOCKDEP
@@ -3348,6 +3368,9 @@ found_it:
 
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+	curr->curr_acqchain_key = hlock->prev_acqchain_key;
+#endif
 
 	for (; i < depth; i++) {
 		hlock = curr->held_locks + i;
@@ -3438,6 +3461,9 @@ found_it:
 
 	curr->lockdep_depth = i;
 	curr->curr_chain_key = hlock->prev_chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+	curr->curr_acqchain_key = hlock->prev_acqchain_key;
+#endif
 
 	for (i++; i < depth; i++) {
 		hlock = curr->held_locks + i;
@@ -3879,6 +3905,11 @@ void lockdep_reset(void)
 
 	raw_local_irq_save(flags);
 	current->curr_chain_key = 0;
+
+#ifdef CONFIG_LOCKED_ACCESS
+	current->curr_acqchain_key = 0;
+#endif
+
 	current->lockdep_depth = 0;
 	current->lockdep_recursion = 0;
 	memset(current->held_locks, 0, MAX_LOCK_DEPTH*sizeof(struct held_lock));
-- 
2.7.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC v2 4/6] lockdep: LOCKED_ACCESS: Introduce locked_access_point()
  2016-02-16  5:57 [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
                   ` (2 preceding siblings ...)
  2016-02-16  5:57 ` [RFC v2 3/6] lockdep: LOCKED_ACCESS: Maintain the keys of acqchains Boqun Feng
@ 2016-02-16  5:57 ` Boqun Feng
  2016-02-16  5:57 ` [RFC v2 5/6] lockdep: LOCKED_ACCESS: Add proc interface for locked access class Boqun Feng
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 21+ messages in thread
From: Boqun Feng @ 2016-02-16  5:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
	Boqun Feng

locked_access_point() is the entry point of the whole LOCKED_ACCESS
framework: every time locked_access_point() is called, LOCKED_ACCESS
correlates the data access location with the current acqchain.

So putting locked_access_point() at the data accesses you care about is
step #2 in using LOCKED_ACCESS.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/lockdep.h  |  29 ++++++++++++
 kernel/locking/lockdep.c | 121 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 150 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 2ffe6c3..cf7b6c8 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -570,4 +570,33 @@ lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 }
 #endif
 
+#ifdef CONFIG_LOCKED_ACCESS
+struct locked_access_location {
+	/* Filename of the access */
+	const char			*filename;
+	/* Line number of the access */
+	long				lineno;
+};
+
+#define LOCKED_ACCESS_TYPE_READ		1 /* read */
+
+extern void locked_access(struct locked_access_class *laclass,
+			  struct locked_access_location *access,
+			  int type);
+
+/*
+ * Entry point of LOCKED_ACCESS, should be called at every place the data
+ * accesses of the laclass happen.
+ *
+ * @_type must be one of the LOCKED_ACCESS_TYPE_*
+ */
+#define locked_access_point(_laclass, _type) \
+({ \
+	static struct locked_access_location a = { \
+				.filename = __FILE__, \
+				.lineno = __LINE__, \
+	}; \
+	locked_access(_laclass, &a, _type); \
+})
+#endif /* CONFIG_LOCKED_ACCESS */
 #endif /* __LINUX_LOCKDEP_H */
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 3ebd00d..4094757 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4496,4 +4496,125 @@ lookup_or_add_acqchain(struct locked_access_class *laclass,
 	return acqchain;
 }
 
+/*
+ * Lookup the data access at @loc in the ->accesses list of @acqchain.
+ *
+ * Must be called after @laclass is initialized.
+ */
+static int lookup_locked_access(struct acqchain *acqchain,
+				struct locked_access_location *loc)
+{
+	struct locked_access_struct *s;
+
+	list_for_each_entry_lockless(s, &acqchain->accesses, list) {
+		if (s->loc == loc)
+			return 1;
+	}
+	return 0;
+
+}
+
+/*
+ * Add the data access at @loc into the ->accesses list of @acqchain.
+ *
+ * Return 1 if one access is added, otherwise return 0.
+ *
+ * Must be called after @laclass is initialized.
+ */
+static int add_locked_access(struct locked_access_class *laclass,
+			     struct acqchain *acqchain,
+			     struct locked_access_location *loc,
+			     int type)
+{
+	unsigned long flags;
+	struct locked_access_struct *s;
+
+	local_irq_save(flags);
+	arch_spin_lock(&laclass->lock);
+
+	/* Lookup again while holding the lock */
+	if (lookup_locked_access(acqchain, loc)) {
+		arch_spin_unlock(&laclass->lock);
+		local_irq_restore(flags);
+		return 1;
+	}
+
+	if (unlikely(laclass->nr_access_structs >= MAX_LOCKED_ACCESS_STRUCTS)) {
+		arch_spin_unlock(&laclass->lock);
+		local_irq_restore(flags);
+		return 0;
+	}
+
+	s = laclass->access_structs + laclass->nr_access_structs;
+	s->loc = loc;
+	s->type = type;
+	laclass->nr_access_structs++;
+
+	/*
+	 * Pair with the list_for_each_entry_lockless() in
+	 * lookup_locked_access()
+	 */
+	list_add_tail_rcu(&s->list, &acqchain->accesses);
+
+	arch_spin_unlock(&laclass->lock);
+	local_irq_restore(flags);
+	return 1;
+}
+
+/*
+ * Correlate the data access at @loc with @acqchain for @laclass
+ *
+ * Must be called after @laclass is initialized.
+ */
+static int correlate_locked_access(struct locked_access_class *laclass,
+				   struct acqchain *acqchain,
+				   struct locked_access_location *loc,
+				   int type)
+{
+	if (lookup_locked_access(acqchain, loc))
+		return 1;
+
+	return add_locked_access(laclass, acqchain, loc, type);
+}
+
+/*
+ * The implementation of entry point locked_access_point(), called when a data
+ * access belonging to @laclass happens.
+ */
+void locked_access(struct locked_access_class *laclass,
+		   struct locked_access_location *loc,
+		   int type)
+{
+	u64 acqchain_key = current->curr_acqchain_key;
+	struct acqchain *acqchain;
+
+	if (unlikely(!laclass))
+		return;
+
+	/*
+	 * Don't track data accesses for lockdep itself, because we rely
+	 * on the ->held_locks lockdep maintains, and if ->lockdep_recursion is
+	 * not 0, the lock is not being maintained in ->held_locks.
+	 */
+	if (unlikely(current->lockdep_recursion))
+		return;
+
+	/* Data access outside a critical section */
+	if (current->lockdep_depth <= 0)
+		return;
+
+	/*
+	 * We only check whether laclass is initialized in locked_access(),
+	 * because this is the entry point of LOCKED_ACCESS.
+	 */
+	if (unlikely(!smp_load_acquire(&laclass->initialized)))
+		locked_access_class_init(laclass);
+
+	acqchain = lookup_or_add_acqchain(laclass, current, acqchain_key);
+
+	if (acqchain)
+		correlate_locked_access(laclass, acqchain, loc, type);
+}
+EXPORT_SYMBOL(locked_access);
+
 #endif /* CONFIG_LOCKED_ACCESS */
-- 
2.7.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC v2 5/6] lockdep: LOCKED_ACCESS: Add proc interface for locked access class
  2016-02-16  5:57 [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
                   ` (3 preceding siblings ...)
  2016-02-16  5:57 ` [RFC v2 4/6] lockdep: LOCKED_ACCESS: Introduce locked_access_point() Boqun Feng
@ 2016-02-16  5:57 ` Boqun Feng
  2016-02-16  5:57 ` [RFC v2 6/6] RCU: Track rcu_dereference() in RCU read-side critical section Boqun Feng
  2016-02-25 14:32 ` [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Peter Zijlstra
  6 siblings, 0 replies; 21+ messages in thread
From: Boqun Feng @ 2016-02-16  5:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
	Boqun Feng

Provide the proc filesystem interface for LOCKED_ACCESS: for a locked
access class named <name>, there will be a file at
/proc/locked_access/<name> containing all the information LOCKED_ACCESS
has collected so far.

Also add a macro to define a locked access class.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/locking/lockdep_internals.h |  17 +++++
 kernel/locking/lockdep_proc.c      | 127 +++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 5e2e133..cacf869 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -241,4 +241,21 @@ struct locked_access_class {
 		.nr_acqchain_hlocks = 0,\
 		.nr_access_structs = 0, \
 	}
+
+extern int create_laclass_proc(const char *name,
+			   struct locked_access_class *laclass);
+
+#define DEFINE_CREATE_LACLASS_PROC(name) \
+static int __init ___create_##name##_laclass_proc(void) \
+{ \
+	return create_laclass_proc(#name, &name##_laclass); \
+} \
+late_initcall(___create_##name##_laclass_proc)
+
+/* Define a Locked Access Class and create its proc file */
+#define DEFINE_LACLASS(name) \
+	struct locked_access_class name##_laclass = \
+			INIT_LOCKED_ACCESS_DATA(name); \
+	EXPORT_SYMBOL(name##_laclass); \
+	DEFINE_CREATE_LACLASS_PROC(name)
 #endif /* CONFIG_LOCKED_ACCESS */
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index dbb61a3..a278d1b 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -673,6 +673,130 @@ static const struct file_operations proc_lock_stat_operations = {
 };
 #endif /* CONFIG_LOCK_STAT */
 
+#ifdef CONFIG_LOCKED_ACCESS
+#include <linux/slab.h>
+static struct proc_dir_entry *locked_access_dir;
+static int seq_laclass_show(struct seq_file *m, void *v)
+{
+	struct locked_access_class *laclass;
+	loff_t *pos = v;
+	struct acqchain *acqchain;
+	struct locked_access_struct *s;
+	struct locked_access_location *loc;
+	unsigned long acq_ip;
+	int i;
+
+	laclass = (struct locked_access_class *)m->private;
+
+	/* Pair with the smp_store_release() in add_acqchain() */
+	if (*pos >= smp_load_acquire(&laclass->nr_acqchains) || *pos < 0)
+		return SEQ_SKIP;
+
+	acqchain = laclass->acqchains + *pos;
+
+	seq_printf(m, "ACQCHAIN 0x%llx, %d locks, irq_context %x:\n",
+			acqchain->chain_key, acqchain->depth,
+			acqchain->irq_context);
+	for (i = 0; i < acqchain->depth; i++) {
+		acq_ip = laclass->acqchain_hlocks[acqchain->base + i];
+		seq_printf(m, "%*sLOCK at [<%p>] %pSR\n",
+				(i+1) * 2, "", (void *) acq_ip,
+				(void *) acq_ip);
+	}
+
+	/* Pair with the list_add_tail_rcu() in add_locked_access() */
+	list_for_each_entry_lockless(s, &acqchain->accesses, list) {
+		loc = s->loc;
+		seq_printf(m, "%*sACCESS TYPE %x at %s:%ld\n",
+				(i+1) * 2, "", s->type, loc->filename,
+				loc->lineno);
+
+	}
+	return 0;
+}
+
+static void *seq_laclass_start(struct seq_file *m, loff_t *pos)
+{
+	struct locked_access_class *laclass;
+	loff_t *rpos;
+
+	laclass = (struct locked_access_class *)m->private;
+
+	/* Pair with the smp_store_release() in add_acqchain() */
+	if (*pos >= smp_load_acquire(&laclass->nr_acqchains) || *pos < 0)
+		return NULL;
+
+	rpos = kzalloc(sizeof(loff_t), GFP_KERNEL);
+
+	if (!rpos)
+		return NULL;
+
+	*rpos = *pos;
+
+	return rpos;
+}
+
+static void *seq_laclass_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	struct locked_access_class *laclass;
+	loff_t *rpos = v;
+
+	laclass = (struct locked_access_class *)m->private;
+
+	/* Pair with the smp_store_release() in add_acqchain() */
+	if (*pos + 1 < smp_load_acquire(&laclass->nr_acqchains) && *pos >= 0) {
+		*pos = ++*rpos;
+		return rpos;
+	}
+	kfree(rpos);	/* iteration is over; ->stop() will be called with NULL */
+	return NULL;
+}
+
+static void seq_laclass_stop(struct seq_file *m, void *v)
+{
+	kfree(v);
+}
+
+static const struct seq_operations seq_laclass_ops = {
+	.start = seq_laclass_start,
+	.next = seq_laclass_next,
+	.stop = seq_laclass_stop,
+	.show = seq_laclass_show
+};
+
+static int proc_laclass_open(struct inode *inode, struct file *file)
+{
+	struct seq_file *seq;
+	int rc;
+
+	/* Check the private data first, so a failure leaks no seq_file */
+	if (!PDE_DATA(inode))
+		return -ENOENT;
+
+	rc = seq_open(file, &seq_laclass_ops);
+	if (rc != 0)
+		return rc;
+
+	seq = file->private_data;
+	seq->private = PDE_DATA(inode);
+
+	return 0;
+}
+
+static const struct file_operations proc_laclass_operations = {
+	.open		= proc_laclass_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= seq_release
+};
+
+int create_laclass_proc(const char *name, struct locked_access_class *laclass)
+{
+	return proc_create_data(name, S_IRUSR, locked_access_dir,
+			&proc_laclass_operations, laclass) ? 0 : -ENOMEM;
+}
+#endif /* CONFIG_LOCKED_ACCESS */
+
 static int __init lockdep_proc_init(void)
 {
 	proc_create("lockdep", S_IRUSR, NULL, &proc_lockdep_operations);
@@ -688,6 +812,9 @@ static int __init lockdep_proc_init(void)
 		    &proc_lock_stat_operations);
 #endif
 
+#ifdef CONFIG_LOCKED_ACCESS
+	locked_access_dir = proc_mkdir("locked_access", NULL);
+#endif /* CONFIG_LOCKED_ACCESS */
 	return 0;
 }
 
-- 
2.7.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC v2 6/6] RCU: Track rcu_dereference() in RCU read-side critical section
  2016-02-16  5:57 [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
                   ` (4 preceding siblings ...)
  2016-02-16  5:57 ` [RFC v2 5/6] lockdep: LOCKED_ACCESS: Add proc interface for locked access class Boqun Feng
@ 2016-02-16  5:57 ` Boqun Feng
  2016-02-25 14:32 ` [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Peter Zijlstra
  6 siblings, 0 replies; 21+ messages in thread
From: Boqun Feng @ 2016-02-16  5:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
	Boqun Feng

The variables protected by an RCU read-side critical section are
sometimes hard to figure out, especially when the critical section is
long or contains function calls. However, knowing which variables an
RCU read-side critical section protects can save us a lot of time in
code review, bug fixing and performance tuning.

This patch therefore uses the LOCKED_ACCESS framework to collect the
relationships between rcu_dereference*() and rcu_read_lock*() calls by
doing the following:

Step 0: define a locked_access_class for RCU.

Step 1: initialize the ->laclass field of rcu_*_lock_key and
__srcu_key to point at the locked_access_class for RCU.

Step 2: add locked_access_point() in __rcu_dereference_check()

After that, for each rcu_dereference*() call we can figure out not
only which RCU read-side critical section it runs in, but also which
rcu_read_lock*() call began that critical section.

This feature is controlled by a config option RCU_LOCKED_ACCESS.

Also clean up the initialization code of lockdep_maps for different
flavors of RCU a little bit.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/rcupdate.h | 12 ++++++++++++
 include/linux/srcu.h     |  2 +-
 kernel/locking/lockdep.c |  3 +++
 kernel/rcu/update.c      | 25 +++++++++++--------------
 lib/Kconfig.debug        | 12 ++++++++++++
 5 files changed, 39 insertions(+), 15 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 14e6f47..7ca22c1 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -610,6 +610,17 @@ static inline void rcu_preempt_sleep_check(void)
 #define rcu_dereference_sparse(p, space)
 #endif /* #else #ifdef __CHECKER__ */
 
+#ifdef CONFIG_RCU_LOCKED_ACCESS
+extern struct locked_access_class rcu_laclass;
+#define STATIC_DEFINE_RCU_LOCK_CLASS_KEY(name) \
+	static struct lock_class_key name = { .laclass = &rcu_laclass }
+#define rcu_dereference_access() \
+	locked_access_point(&rcu_laclass, LOCKED_ACCESS_TYPE_READ)
+#else /* #ifdef CONFIG_LOCKED_ACCESS */
+#define rcu_dereference_access()
+#define STATIC_DEFINE_RCU_LOCK_CLASS_KEY(name) \
+	static struct lock_class_key name
+#endif /* #else #ifdef CONFIG_LOCKED_ACCESS */
 #define __rcu_access_pointer(p, space) \
 ({ \
 	typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
@@ -622,6 +633,7 @@ static inline void rcu_preempt_sleep_check(void)
 	typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
 	RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
 	rcu_dereference_sparse(p, space); \
+	rcu_dereference_access(); \
 	((typeof(*p) __force __kernel *)(________p1)); \
 })
 #define __rcu_dereference_protected(p, c, space) \
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index f5f80c5..ba4bcbe 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -69,7 +69,7 @@ int __init_srcu_struct(struct srcu_struct *sp, const char *name,
 
 #define init_srcu_struct(sp) \
 ({ \
-	static struct lock_class_key __srcu_key; \
+	STATIC_DEFINE_RCU_LOCK_CLASS_KEY(__srcu_key); \
 	\
 	__init_srcu_struct((sp), #sp, &__srcu_key); \
 })
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 4094757..c6c25c5 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4616,5 +4616,8 @@ void locked_access(struct locked_access_class *laclass,
 		correlate_locked_access(laclass, acqchain, loc, type);
 }
 EXPORT_SYMBOL(locked_access);
+#ifdef CONFIG_RCU_LOCKED_ACCESS
+DEFINE_LACLASS(rcu);
+#endif
 
 #endif /* CONFIG_LOCKED_ACCESS */
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 76b94e1..ecb29aa 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -235,20 +235,17 @@ EXPORT_SYMBOL_GPL(__rcu_read_unlock);
 #endif /* #ifdef CONFIG_PREEMPT_RCU */
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-static struct lock_class_key rcu_lock_key;
-struct lockdep_map rcu_lock_map =
-	STATIC_LOCKDEP_MAP_INIT("rcu_read_lock", &rcu_lock_key);
-EXPORT_SYMBOL_GPL(rcu_lock_map);
-
-static struct lock_class_key rcu_bh_lock_key;
-struct lockdep_map rcu_bh_lock_map =
-	STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_bh", &rcu_bh_lock_key);
-EXPORT_SYMBOL_GPL(rcu_bh_lock_map);
-
-static struct lock_class_key rcu_sched_lock_key;
-struct lockdep_map rcu_sched_lock_map =
-	STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_sched", &rcu_sched_lock_key);
-EXPORT_SYMBOL_GPL(rcu_sched_lock_map);
+
+#define DEFINE_RCU_LOCKDEP_MAP(flavor)					\
+	STATIC_DEFINE_RCU_LOCK_CLASS_KEY(rcu##flavor##_lock_key);	\
+	struct lockdep_map rcu##flavor##_lock_map =			\
+		STATIC_LOCKDEP_MAP_INIT("rcu_read_lock" #flavor,	\
+					&rcu##flavor##_lock_key);	\
+	EXPORT_SYMBOL_GPL(rcu##flavor##_lock_map)
+
+DEFINE_RCU_LOCKDEP_MAP();
+DEFINE_RCU_LOCKDEP_MAP(_bh);
+DEFINE_RCU_LOCKDEP_MAP(_sched);
 
 static struct lock_class_key rcu_callback_key;
 struct lockdep_map rcu_callback_map =
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 178f288..b6003d4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1410,6 +1410,18 @@ config RCU_EQS_DEBUG
 	  Say N here if you need ultimate kernel/user switch latencies
 	  Say Y if you are unsure
 
+config RCU_LOCKED_ACCESS
+	bool "Track data access in RCU read-side critical sections"
+	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	select LOCKED_ACCESS
+	default n
+	help
+	  Track data accesses in RCU read-side critical sections. With
+	  this enabled, one can examine which rcu_dereference() (or one
+	  of its friends) is called in which RCU read-side critical
+	  section, and more precisely, after which rcu_read_lock() (or
+	  one of its friends) it is called.
+
 endmenu # "RCU Debugging"
 
 config DEBUG_BLOCK_EXT_DEVT
-- 
2.7.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [RFC v2 1/6] lockdep: Add helper functions of irq_context
  2016-02-16  5:57 ` [RFC v2 1/6] lockdep: Add helper functions of irq_context Boqun Feng
@ 2016-02-17  0:17   ` Paul E. McKenney
  2016-04-23 12:53   ` [tip:locking/urgent] locking/lockdep: Fix ->irq_context calculation tip-bot for Boqun Feng
  1 sibling, 0 replies; 21+ messages in thread
From: Paul E. McKenney @ 2016-02-17  0:17 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Tue, Feb 16, 2016 at 01:57:40PM +0800, Boqun Feng wrote:
> Add two helper functions of irq_context:
> 
> task_irq_context(): return the encoded irq_context of the task, the
> return value is encoded in the same as ->irq_context of held_lock.
> Always return 0 if !(CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING)
> 
> is_same_irq_context(): compare whether two irq_context are the same.
> Always return true if !(CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING)
> 
> These two functions are added for the future use of irq_context by
> LOCKED_ACCESS, because LOCKED_ACCESS needs to get the irq_context value
> of a task when a data access happens rather than when a lock is
> acquired, so it needs another way than getting the value from
> held_lock::irq_context
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>

Queued again for review and testing.  ;-)

							Thanx, Paul

> ---
>  kernel/locking/lockdep.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 60ace56..f36df37 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -2839,6 +2839,16 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
>  	return 1;
>  }
> 
> +static unsigned int task_irq_context(struct task_struct *task)
> +{
> +	return 2 * (task->hardirq_context ? 1 : 0) + task->softirq_context;
> +}
> +
> +static bool is_same_irq_context(unsigned int ctx1, unsigned int ctx2)
> +{
> +	return ctx1 == ctx2;
> +}
> +
>  static int separate_irq_context(struct task_struct *curr,
>  		struct held_lock *hlock)
>  {
> @@ -2847,8 +2857,7 @@ static int separate_irq_context(struct task_struct *curr,
>  	/*
>  	 * Keep track of points where we cross into an interrupt context:
>  	 */
> -	hlock->irq_context = 2*(curr->hardirq_context ? 1 : 0) +
> -				curr->softirq_context;
> +	hlock->irq_context = task_irq_context(curr);
>  	if (depth) {
>  		struct held_lock *prev_hlock;
> 
> @@ -2880,6 +2889,9 @@ static inline int mark_irqflags(struct task_struct *curr,
>  	return 1;
>  }
> 
> +#define task_irq_context(task)			0
> +#define is_same_irq_context(ctx1, ctx2)		true
> +
>  static inline int separate_irq_context(struct task_struct *curr,
>  		struct held_lock *hlock)
>  {
> -- 
> 2.7.1
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-02-16  5:57 [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
                   ` (5 preceding siblings ...)
  2016-02-16  5:57 ` [RFC v2 6/6] RCU: Track rcu_dereference() in RCU read-side critical section Boqun Feng
@ 2016-02-25 14:32 ` Peter Zijlstra
  2016-02-25 15:37   ` Paul E. McKenney
  6 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-02-25 14:32 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, Ingo Molnar, Paul E. McKenney, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Tue, Feb 16, 2016 at 01:57:39PM +0800, Boqun Feng wrote:
> As a characteristic of RCU, read-side critical sections have a very
> loose connection with rcu_dereference()s, which is you can only be sure
> about an rcu_dereference() might be called in some read-side critical
> section, but if code gets complex, you may not be sure which read-side
> critical section exactly, this might be also an problem for some other
> locking mechanisms, that is the critical sections protecting data and
> the data accesses protected are not clearly correlated.
> 
> In this series, we are introducing LOCKED_ACCESS framework and based on
> which, we implement the RCU_LOCKED_ACCESS functionality to give us a
> clear hint: which rcu_dereference() happens in which RCU read-side
> critical section. 
> 
> After this series applied, and if CONFIG_RCU_LOCKED_ACCESS=y, the proc
> file /proc/locked_access/rcu will show all relationships collected so
> far for rcu_read_lock() and their friends and rcu_dereference*().
> 

But why !? What does this bring us, why do I want to even look at these
patches?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-02-25 14:32 ` [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Peter Zijlstra
@ 2016-02-25 15:37   ` Paul E. McKenney
  2016-02-26  3:06     ` Boqun Feng
  0 siblings, 1 reply; 21+ messages in thread
From: Paul E. McKenney @ 2016-02-25 15:37 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Boqun Feng, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Thu, Feb 25, 2016 at 03:32:43PM +0100, Peter Zijlstra wrote:
> On Tue, Feb 16, 2016 at 01:57:39PM +0800, Boqun Feng wrote:
> > As a characteristic of RCU, read-side critical sections have a very
> > loose connection with rcu_dereference()s, which is you can only be sure
> > about an rcu_dereference() might be called in some read-side critical
> > section, but if code gets complex, you may not be sure which read-side
> > critical section exactly, this might be also an problem for some other
> > locking mechanisms, that is the critical sections protecting data and
> > the data accesses protected are not clearly correlated.
> > 
> > In this series, we are introducing LOCKED_ACCESS framework and based on
> > which, we implement the RCU_LOCKED_ACCESS functionality to give us a
> > clear hint: which rcu_dereference() happens in which RCU read-side
> > critical section. 
> > 
> > After this series applied, and if CONFIG_RCU_LOCKED_ACCESS=y, the proc
> > file /proc/locked_access/rcu will show all relationships collected so
> > far for rcu_read_lock() and their friends and rcu_dereference*().
> 
> But why !? What does this bring us, why do I want to even look at these
> patches?

There were some complaints about the difficulty of figuring out what
was being protected by a given rcu_read_lock() in cases where the
corresponding rcu_dereference() is several function calls down, and
especially in cases where the function calls are via pointers.
These cases show up in a number of places, perhaps most prominently
in networking.

Boqun's patches therefore use lockdep to make an association between
each rcu_dereference() and the rcu_read_lock() protecting it.

Seem reasonable, or were the complaints just a flash in the pan?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-02-25 15:37   ` Paul E. McKenney
@ 2016-02-26  3:06     ` Boqun Feng
  2016-02-26 11:25       ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Boqun Feng @ 2016-02-26  3:06 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

[-- Attachment #1: Type: text/plain, Size: 2846 bytes --]

On Thu, Feb 25, 2016 at 07:37:24AM -0800, Paul E. McKenney wrote:
> On Thu, Feb 25, 2016 at 03:32:43PM +0100, Peter Zijlstra wrote:
> > On Tue, Feb 16, 2016 at 01:57:39PM +0800, Boqun Feng wrote:
> > > As a characteristic of RCU, read-side critical sections have a very
> > > loose connection with rcu_dereference()s, which is you can only be sure
> > > about an rcu_dereference() might be called in some read-side critical
> > > section, but if code gets complex, you may not be sure which read-side
> > > critical section exactly, this might be also an problem for some other
> > > locking mechanisms, that is the critical sections protecting data and
> > > the data accesses protected are not clearly correlated.
> > > 
> > > In this series, we are introducing LOCKED_ACCESS framework and based on
> > > which, we implement the RCU_LOCKED_ACCESS functionality to give us a
> > > clear hint: which rcu_dereference() happens in which RCU read-side
> > > critical section. 
> > > 
> > > After this series applied, and if CONFIG_RCU_LOCKED_ACCESS=y, the proc
> > > file /proc/locked_access/rcu will show all relationships collected so
> > > far for rcu_read_lock() and their friends and rcu_dereference*().
> > 
> > But why !? What does this bring us, why do I want to even look at these
> > patches?
> 
> There were some complaints about the difficulty of figuring out what
> was being protected by a given rcu_read_lock() in cases where the
> corresponding rcu_dereference() is several function calls down, and
> especially in cases where the function calls are via pointers.
> These cases show up in a number of places, perhaps most prominently
> in networking.
> 
> Boqun's patches therefore use lockdep to make an association between
> each rcu_dereference() and the rcu_read_lock() protecting it.
> 

Yes, this patchset aims to provide exactly that information. Actually,
the background of this patchset is a discussion between Ingo and Paul
last year:

http://lkml.kernel.org/g/20150414102505.GA13015@gmail.com

In that discussion, Ingo gave an example of how hard it is to figure
out what an RCU read-side critical section is protecting. Ingo's
proposal for solving this problem was to add an extra parameter to the
RCU-related primitives; however, Paul had concerns about that
approach, because a lot of RCU users' code would need to be modified,
and there were corner cases where the one extra parameter was either
not enough or not necessary.

Therefore I tried to figure out a way to make the association between
rcu_dereference() and rcu_read_lock() automatic, without modifying the
code of RCU users. That's how this patchset came about.

Regards,
Boqun

> Seem reasonable, or were the complaints just a flash in the pan?
> 
> 							Thanx, Paul
> 

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-02-26  3:06     ` Boqun Feng
@ 2016-02-26 11:25       ` Peter Zijlstra
  2016-02-29  1:12         ` Boqun Feng
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-02-26 11:25 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Paul E. McKenney, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Fri, Feb 26, 2016 at 11:06:27AM +0800, Boqun Feng wrote:
> On Thu, Feb 25, 2016 at 07:37:24AM -0800, Paul E. McKenney wrote:
> > On Thu, Feb 25, 2016 at 03:32:43PM +0100, Peter Zijlstra wrote:
> > > On Tue, Feb 16, 2016 at 01:57:39PM +0800, Boqun Feng wrote:
> > > > As a characteristic of RCU, read-side critical sections have a very
> > > > loose connection with rcu_dereference()s, which is you can only be sure
> > > > about an rcu_dereference() might be called in some read-side critical
> > > > section, but if code gets complex, you may not be sure which read-side
> > > > critical section exactly, this might be also an problem for some other
> > > > locking mechanisms, that is the critical sections protecting data and
> > > > the data accesses protected are not clearly correlated.
> > > > 
> > > > In this series, we are introducing LOCKED_ACCESS framework and based on
> > > > which, we implement the RCU_LOCKED_ACCESS functionality to give us a
> > > > clear hint: which rcu_dereference() happens in which RCU read-side
> > > > critical section. 
> > > > 
> > > > After this series applied, and if CONFIG_RCU_LOCKED_ACCESS=y, the proc
> > > > file /proc/locked_access/rcu will show all relationships collected so
> > > > far for rcu_read_lock() and their friends and rcu_dereference*().
> > > 
> > > But why !? What does this bring us, why do I want to even look at these
> > > patches?
> > 
> > There were some complaints about the difficulty of figuring out what
> > was being protected by a given rcu_read_lock() in cases where the
> > corresponding rcu_dereference() is several function calls down, and
> > especially in cases where the function calls are via pointers.
> > These cases show up in a number of places, perhaps most prominently
> > in networking.
> > Boqun's patches therefore use lockdep to make an association between
> > each rcu_dereference() and the rcu_read_lock() protecting it.
> > 
> 
> Yes, this patch is aiming to provide some information about this,
> actually the background of this patchset is a discussion between Ingo
> and Paul last year:
> 
> http://lkml.kernel.org/g/20150414102505.GA13015@gmail.com
> 
> In that discussion, Ingo gave one example about how hard it is to figure
> out what a RCU read-side critical section is protecting. Ingo's proposal
> of solving this problem was to add an extra parameter for RCU related
> primitives, however, Paul seemed to have concerns about that approach,
> because there were many RCU users' code that would need to be modified
> and there were corner cases where the one extra parameter was not enough
> or not necessary.
> 
> Therefore I tried to figure out a way for making the association of
> rcu_dereference() and rcu_read_lock() automatically without the
> modification of code of the RCU users. That's how this patchset
> comes.

So the problem I have with this is that your text didn't have a problem
statement, and if there's no problem, what the heck is all this code
for.

Secondly, I'm not sure a random list of data is going to help anybody.

The whole point of lockdep is that it tells you if you do broken. This
doesn't, and cannot, because it lacks information to verify against.

So I'm still not sure this is useful. Also, I would argue your code has
problems if you cannot even find your rcu_read_lock().

And yes, networking is a special snowflake, they very often push their
rcu_read_lock(),local_bh_disable(),preempt_disable() whatnot, all the
way out, because cycles.

Which makes it entirely impossible to tell what all it guards. That's a
very concious choice they make, but its a royal pain for a number of
things. I'm not sure we can do anything about it.

So no, I'm not convinced. Again, why is this useful?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-02-26 11:25       ` Peter Zijlstra
@ 2016-02-29  1:12         ` Boqun Feng
  2016-02-29 12:43           ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Boqun Feng @ 2016-02-29  1:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul E. McKenney, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

[-- Attachment #1: Type: text/plain, Size: 6468 bytes --]

On Fri, Feb 26, 2016 at 12:25:21PM +0100, Peter Zijlstra wrote:
> On Fri, Feb 26, 2016 at 11:06:27AM +0800, Boqun Feng wrote:
> > On Thu, Feb 25, 2016 at 07:37:24AM -0800, Paul E. McKenney wrote:
> > > On Thu, Feb 25, 2016 at 03:32:43PM +0100, Peter Zijlstra wrote:
> > > > On Tue, Feb 16, 2016 at 01:57:39PM +0800, Boqun Feng wrote:
> > > > > As a characteristic of RCU, read-side critical sections have a very
> > > > > loose connection with rcu_dereference()s, which is you can only be sure
> > > > > about an rcu_dereference() might be called in some read-side critical
> > > > > section, but if code gets complex, you may not be sure which read-side
> > > > > critical section exactly, this might be also an problem for some other
> > > > > locking mechanisms, that is the critical sections protecting data and
> > > > > the data accesses protected are not clearly correlated.
> > > > > 
> > > > > In this series, we are introducing LOCKED_ACCESS framework and based on
> > > > > which, we implement the RCU_LOCKED_ACCESS functionality to give us a
> > > > > clear hint: which rcu_dereference() happens in which RCU read-side
> > > > > critical section. 
> > > > > 
> > > > > After this series applied, and if CONFIG_RCU_LOCKED_ACCESS=y, the proc
> > > > > file /proc/locked_access/rcu will show all relationships collected so
> > > > > far for rcu_read_lock() and their friends and rcu_dereference*().
> > > > 
> > > > But why !? What does this bring us, why do I want to even look at these
> > > > patches?
> > > 
> > > There were some complaints about the difficulty of figuring out what
> > > was being protected by a given rcu_read_lock() in cases where the
> > > corresponding rcu_dereference() is several function calls down, and
> > > especially in cases where the function calls are via pointers.
> > > These cases show up in a number of places, perhaps most prominently
> > > in networking.
> > > Boqun's patches therefore use lockdep to make an association between
> > > each rcu_dereference() and the rcu_read_lock() protecting it.
> > > 
> > 
> > Yes, this patch is aiming to provide some information about this,
> > actually the background of this patchset is a discussion between Ingo
> > and Paul last year:
> > 
> > http://lkml.kernel.org/g/20150414102505.GA13015@gmail.com
> > 
> > In that discussion, Ingo gave one example about how hard it is to figure
> > out what a RCU read-side critical section is protecting. Ingo's proposal
> > of solving this problem was to add an extra parameter for RCU related
> > primitives, however, Paul seemed to have concerns about that approach,
> > because there were many RCU users' code that would need to be modified
> > and there were corner cases where the one extra parameter was not enough
> > or not necessary.
> > 
> > Therefore I tried to figure out a way for making the association of
> > rcu_dereference() and rcu_read_lock() automatically without the
> > modification of code of the RCU users. That's how this patchset
> > comes.
> 
> So the problem I have with this is that your text didn't have a problem
> statement, and if there's no problem, what the heck is all this code
> for.
> 

You are right, that's my bad; I should have done this better. Sorry
about that.

Please allow me to restate the problem and what this patchset can
help ;-)

The problem is:

Currently there is no way to know which rcu_dereference() and its
friends a rcu_read_lock() or one of its friends is protecting. And
the lack of such information is a pain for kernel developers and
reviewers to understand the purpose of a certain rcu_read_lock().

And this patchset is trying to provide the information to developers.

> Secondly, I'm not sure a random list of data is going to help anybody.
> 

One possible usage might be:

If someone is going to refactor a piece of code which originally uses
RCU for synchronization, and now he wants to switch to other
synchronization mechanism for some data because of some reason(for
consistency or writer side latency guarantee, etc.). Before he makes the
change, he needs to know if this rcu_read_lock() also protects other
data, and this is not an easy job and I don't think we have anything
helpful for this. With RCU_LOCKED_ACCESS, he can build the kernel with
RCU_LOCKED_ACCESS enabled, run some workloads and get the information
from a /proc file. In this way, RCU_LOCKED_ACCESS is helpful to
developers who want to know the information of the associations between
rcu_read_lock() and rcu_dereference().

> The whole point of lockdep is that it tells you if you do broken. This
> doesn't, and cannot, because it lacks information to verify against.
> 

Yes, RCU_LOCKED_ACCESS is not a validator, so it cannot be as useful
as lockdep for finding bugs.

> So I'm still not sure this is useful. Also, I would argue your code has
> problems if you cannot even find your rcu_read_lock().
> 

I think what you mean here is, for example, the case where we use
preempt_disable() instead of rcu_read_lock_sched() to pair with
synchronize_sched(), right?

If so, sure, this series cannot do anything about that; it only
correlates rcu_read_lock() and its friends with rcu_dereference() and
its friends, so it may miss RCU usages where rcu_read_lock() and its
friends are not used. But those missing usages can be covered in a
similar way to this series; I can either add that in the next version
of this series or provide it as a separate follow-up patchset, if we
all agree the problem is worth solving and this patchset is helpful.

Or I'm missing your point here?

> And yes, networking is a special snowflake, they very often push their
> rcu_read_lock(),local_bh_disable(),preempt_disable() whatnot, all the
> way out, because cycles.
> 
> Which makes it entirely impossible to tell what all it guards. That's a
> very concious choice they make, but its a royal pain for a number of
> things. I'm not sure we can do anything about it.
> 
> So no, I'm not convinced. Again, why is this useful?

Sorry again for not providing enough text for you to review this
series; I hope it's clearer now.

Looking forwards to your thoughts, thank you ;-)

If you think this series deserves reconsideration, I will send out a
new version with more text about what the problem is and why it is
useful.

Regards,
Boqun

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-02-29  1:12         ` Boqun Feng
@ 2016-02-29 12:43           ` Peter Zijlstra
  2016-03-01  9:32             ` Boqun Feng
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-02-29 12:43 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Paul E. McKenney, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Mon, Feb 29, 2016 at 09:12:20AM +0800, Boqun Feng wrote:

> The problem is:
> 
> Currently there is no way to know which rcu_dereference() and its
> friends a rcu_read_lock() or one of its friends is protecting. And
> the lack of such information is a pain for kernel developers and
> reviewers to understand the purpose of a certain rcu_read_lock().

If I have to go run a kernel and look at /proc data in order to
complete a review, the patch will fail review, end of story.

> > Secondly, I'm not sure a random list of data is going to help anybody.
> > 
> 
> One possible usage might be:
> 
> If someone is going to refactor a piece of code which originally uses
> RCU for synchronization, and now he wants to switch to other
> synchronization mechanism for some data because of some reason(for
> consistency or writer side latency guarantee, etc.). Before he makes the
> change, he needs to know if this rcu_read_lock() also protects other
> data, and this is not an easy job and I don't think we have anything
> helpful for this. With RCU_LOCKED_ACCESS, he can build the kernel with
> RCU_LOCKED_ACCESS enabled, run some workloads and get the information
> from a /proc file. In this way, RCU_LOCKED_ACCESS is helpful to
> developers who want to know the information of the associations between
> rcu_read_lock() and rcu_dereference().

So 'possible', but will anyone actually do this?

> > The whole point of lockdep is that it tells you if you do broken. This
> > doesn't, and cannot, because it lacks information to verify against.
> > 
> 
> Yes.. RCU_LOCKED_ACCESS is not a validator, so it cannot be as useful as
> lockdep for finding bug.

I would try really _really_ hard to make it a validator, those are so
much more useful.

One could for example allow something like:

	rcu_read_lock();
	rcu_annotate(&var->field);

	foo();

	rcu_read_unlock();

As an alternative to the syntax suggested by Ingo. This would allow
keeping the existing rcu_read_lock() signature so you don't have to
force update the entire kernel at once, while also (easily) allowing
multiple variables. Like:

	rcu_read_lock();
	rcu_annotate(&var->field);
	rcu_annotate(&var2->field2);

You can then have a special rule that if a particular RCU section has an
annotation, any rcu_dereference() not matched will yield a warning. If
the annotation section is empty, nothing.


> > So I'm still not sure this is useful. Also, I would argue your code has
> > problems if you cannot even find your rcu_read_lock().
> > 
> 
> I think what you mean here is, for example, the case where we use
> preempt_disable() instead of rcu_read_lock_sched() to pair with
> synchronize_sched(), right?

No, I was more like:

	rcu_read_lock();
	foo()
	  bar()
	    var->func();
	      obj->func();
	        whatever();

and you're looking at a change to whatever() and wonder where the heck
the corresponding rcu_read_lock() lives and if we're having it held at
all.

> If so, sure, this series cannot do anything about this, however, this

Why not? Can't you stick whatever you need into preempt_disable() ?
kernel/sched/core.c:preempt_count_{add,sub}() are there for a (debug)
reason.

> If you think this series deserves a reconsideration I will send out a
> new version with more text about what's the problem and why this is
> useful.

So please make it a validator. That's so much more useful.


* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-02-29 12:43           ` Peter Zijlstra
@ 2016-03-01  9:32             ` Boqun Feng
  2016-03-01  9:57               ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Boqun Feng @ 2016-03-01  9:32 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul E. McKenney, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin


On Mon, Feb 29, 2016 at 01:43:18PM +0100, Peter Zijlstra wrote:
> On Mon, Feb 29, 2016 at 09:12:20AM +0800, Boqun Feng wrote:
> 
> > The problem is:
> > 
> > Currently there is no way to know which rcu_dereference() and its
> > friends a rcu_read_lock() or one of its friends is protecting. And
> > the lack of such information is a pain for kernel developers and
> > reviewers to understand the purpose of a certain rcu_read_lock().
> 
> If I have to go run a kernel and look at /proc data in order to complete
> a review the patch will fail review, end of story.
> 

OK, got it.

> > > Secondly, I'm not sure a random list of data is going to help anybody.
> > > 
> > 
> > One possible usage might be:
> > 
> > If someone is going to refactor a piece of code which originally uses
> > RCU for synchronization, and now he wants to switch to other
> > synchronization mechanism for some data because of some reason(for
> > consistency or writer side latency guarantee, etc.). Before he makes the
> > change, he needs to know if this rcu_read_lock() also protects other
> > data, and this is not an easy job and I don't think we have anything
> > helpful for this. With RCU_LOCKED_ACCESS, he can build the kernel with
> > RCU_LOCKED_ACCESS enabled, run some workloads and get the information
> > from a /proc file. In this way, RCU_LOCKED_ACCESS is helpful to
> > developers who want to know the information of the associations between
> > rcu_read_lock() and rcu_dereference().
> 
> So 'possible', but will anyone actually do this?
> 

Not sure, but one of the purposes of sending out this RFC is to tell
people we have a way to do this and see if they are interested ;-)

> > > The whole point of lockdep is that it tells you if you do broken. This
> > > doesn't, and cannot, because it lacks information to verify against.
> > > 
> > 
> > Yes.. RCU_LOCKED_ACCESS is not a validator, so it cannot be as useful as
> > lockdep for finding bug.
> 
> I would try really _really_ hard to make it a validator, those are so
> much more useful.
> 

Agreed. I would love to make it a validator too. But I think that
information collection is the first step and an essential part of
making a validator, right? And what RCU_LOCKED_ACCESS does is
collect information and show it via a proc file. I really think
it's better to know what information we can collect before we actually
try to make a validator; besides, the proc file output could serve as a
way to debug the validator once we have one.

So maybe we can first discuss and work on how to collect and show
the information, and then seek a way to make a validator?

> One could for example allow something like:
> 
> 	rcu_read_lock();
> 	rcu_annotate(&var->field);
> 
> 	foo();
> 
> 	rcu_read_unlock();
> 
> As an alternative to the syntax suggested by Ingo. This would allow
> keeping the existing rcu_read_lock() signature so you don't have to
> force update the entire kernel at once, while also (easily) allowing
> multiple variables. Like:
> 
> 	rcu_read_lock();
> 	rcu_annotate(&var->field);
> 	rcu_annotate(&var2->field2);
> 
> You can then have a special rule that if a particular RCU section has an
> annotation, any rcu_dereference() not matched will field a warning. If
> the annotation section is empty, nothing.
> 

Good idea! But I don't think annotating a field in C is easy;
I will try to see what we can get. Do you already have something in
mind?

> 
> > > So I'm still not sure this is useful. Also, I would argue your code has
> > > problems if you cannot even find your rcu_read_lock().
> > > 
> > 
> > I think what you mean here is, for example, the case where we use
> > preempt_disable() instead of rcu_read_lock_sched() to pair with
> > synchronize_sched(), right?
> 
> No, I was more like:
> 
> 	rcu_read_lock();
> 	foo()
> 	  bar()
> 	    var->func();
> 	      obj->func();
> 	        whatever();
> 
> and you're looking at a change to whatever() and wonder where the heck
> the corresponding rcu_read_lock() lives and if we're having it held at
> all.
> 

Confused... RCU_LOCKED_ACCESS has exactly that information. For example,
take the piece of /proc/locked_access/rcu output I put in the cover
letter, which I will put in the commit logs for the next version of this
series:

ACQCHAIN 0xfe042af3bbfb2605, 3 locks, irq_context 0:
  LOCK at [<ffffffff81094b47>] SyS_kill+0x97/0x2a0
    LOCK at [<ffffffff8109286f>] kill_pid_info+0x1f/0x140
      LOCK at [<ffffffff81092605>] group_send_sig_info+0x5/0x130
        ACCESS TYPE 1 at kernel/signal.c:695

, which actually reflects:

	(in kernel/signal.c)
	SyS_kill()
	  kill_something_info()
	    rcu_read_lock()
	    kill_pid_info()
	      rcu_read_lock()
	      group_send_sig_info()
	        rcu_read_lock()
		check_kill_permission()
		  kill_ok_by_cred()
		    __task_cred()

and __task_cred() is an rcu_dereference(), so if anyone wants to figure
out which rcu_read_lock()s are held when this deref happens, he can dump
the proc file, search for the filename and line number of
__task_cred(), and he will get the symbol + offset of all the
rcu_read_lock()s for this deref.


(Yes, I know there is an inconsistency between symbol + offset and
filename + linenumber here. The reason for this inconsistency is that
the original purpose of this feature is to show information to
developers to help them understand the code, and filename + linenumber
is more appropriate for that purpose. So I use filename + linenumber to
indicate the locations of rcu_dereference(); however, since all I can
get from task_struct::held_locks is the acquire_ip, I have to use
symbol + offset for rcu_read_lock(), and there comes the
inconsistency. Maybe I should make them all symbol + offset, and use
tools like addr2line to convert to filename + linenumber.)

> > If so, sure, this series cannot do anything about this, however, this
> 
> Why not? Can't you stick whatever you need into preempt_disable() ?
> kernel/sched/core.c:preempt_count_{add,sub}() are there for a (debug)
> reason.
> 

Yeah, you are right. I can do something similar for preempt_disable() by
maintaining a stack of preempt_disable() call sites for each thread,
which means a bit more memory cost, but it's worthwhile if we make
it useful.

> > If you think this series deserves a reconsideration I will send out a
> > new version with more text about what's the problem and why this is
> > useful.
> 
> So please make it a validator. That's so much more useful.

I certainly have done a bad job of explaining this series in
the cover letter and the commit logs, so I'm considering sending the
next version of this series with better text and examples. Or would you
prefer that I try to make it a validator first and send the whole thing
out for review?

Thank you for your thoughts and comments ;-)

Regards,
Boqun



* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-03-01  9:32             ` Boqun Feng
@ 2016-03-01  9:57               ` Peter Zijlstra
  2016-03-01 10:01                 ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-03-01  9:57 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Paul E. McKenney, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Tue, Mar 01, 2016 at 05:32:42PM +0800, Boqun Feng wrote:
> > One could for example allow something like:
> > 
> > 	rcu_read_lock();
> > 	rcu_annotate(&var->field);
> > 
> > 	foo();
> > 
> > 	rcu_read_unlock();
> > 
> > As an alternative to the syntax suggested by Ingo. This would allow
> > keeping the existing rcu_read_lock() signature so you don't have to
> > force update the entire kernel at once, while also (easily) allowing
> > multiple variables. Like:
> > 
> > 	rcu_read_lock();
> > 	rcu_annotate(&var->field);
> > 	rcu_annotate(&var2->field2);
> > 
> > You can then have a special rule that if a particular RCU section has an
> > annotation, any rcu_dereference() not matched will field a warning. If
> > the annotation section is empty, nothing.
> > 
> 
> Good idea! but I don't think annotating a field in C language is easy,
> I will try to see what we can get. Do you have something already in your
> mind?

No, didn't really think about that :-/ The most restrictive version is
taking the absolute address, but that would make things like actual data
structures impossible.



> > > > So I'm still not sure this is useful. Also, I would argue your code has
> > > > problems if you cannot even find your rcu_read_lock().
> > > > 
> > > 
> > > I think what you mean here is, for example, the case where we use
> > > preempt_disable() instead of rcu_read_lock_sched() to pair with
> > > synchronize_sched(), right?
> > 
> > No, I was more like:
> > 
> > 	rcu_read_lock();
> > 	foo()
> > 	  bar()
> > 	    var->func();
> > 	      obj->func();
> > 	        whatever();
> > 
> > and you're looking at a change to whatever() and wonder where the heck
> > the corresponding rcu_read_lock() lives and if we're having it held at
> > all.
> > 
> 
> Confused.. RCU_LOCKED_ACCESS has such information, For example, in the
> piece of /proc/locked_access/rcu I put in the cover letter, which I will
> put in the commit logs for the next version of this series:

Yes, but my point was that if it wasn't obvious from the code, your code
has issues. You should not be needing a tool to figure this out.


* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-03-01  9:57               ` Peter Zijlstra
@ 2016-03-01 10:01                 ` Peter Zijlstra
  2016-03-02  6:37                   ` Boqun Feng
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-03-01 10:01 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Paul E. McKenney, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Tue, Mar 01, 2016 at 10:57:07AM +0100, Peter Zijlstra wrote:
> On Tue, Mar 01, 2016 at 05:32:42PM +0800, Boqun Feng wrote:
> > > One could for example allow something like:
> > > 
> > > 	rcu_read_lock();
> > > 	rcu_annotate(&var->field);
> > > 
> > > 	foo();
> > > 
> > > 	rcu_read_unlock();
> > > 
> > > As an alternative to the syntax suggested by Ingo. This would allow
> > > keeping the existing rcu_read_lock() signature so you don't have to
> > > force update the entire kernel at once, while also (easily) allowing
> > > multiple variables. Like:
> > > 
> > > 	rcu_read_lock();
> > > 	rcu_annotate(&var->field);
> > > 	rcu_annotate(&var2->field2);
> > > 
> > > You can then have a special rule that if a particular RCU section has an
> > > annotation, any rcu_dereference() not matched will field a warning. If
> > > the annotation section is empty, nothing.
> > > 
> > 
> > Good idea! but I don't think annotating a field in C language is easy,
> > I will try to see what we can get. Do you have something already in your
> > mind?
> 
> No, didn't really think about that :-/ The most restrictive version is
> taking the absolute address, but that would make things like actual data
> structures impossible.

So the thing with locks is they get a struct lockdep_map added, in which
we store all kinds of useful information. But I don't think we can add a
similar structure to each and every RCU-dereferenceable (is that a word?)
variable.


* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-03-01 10:01                 ` Peter Zijlstra
@ 2016-03-02  6:37                   ` Boqun Feng
  2016-03-02 10:18                     ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Boqun Feng @ 2016-03-02  6:37 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul E. McKenney, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin


On Tue, Mar 01, 2016 at 11:01:34AM +0100, Peter Zijlstra wrote:
> On Tue, Mar 01, 2016 at 10:57:07AM +0100, Peter Zijlstra wrote:
> > On Tue, Mar 01, 2016 at 05:32:42PM +0800, Boqun Feng wrote:
> > > > One could for example allow something like:
> > > > 
> > > > 	rcu_read_lock();
> > > > 	rcu_annotate(&var->field);
> > > > 
> > > > 	foo();
> > > > 
> > > > 	rcu_read_unlock();
> > > > 
> > > > As an alternative to the syntax suggested by Ingo. This would allow
> > > > keeping the existing rcu_read_lock() signature so you don't have to
> > > > force update the entire kernel at once, while also (easily) allowing
> > > > multiple variables. Like:
> > > > 
> > > > 	rcu_read_lock();
> > > > 	rcu_annotate(&var->field);
> > > > 	rcu_annotate(&var2->field2);
> > > > 
> > > > You can then have a special rule that if a particular RCU section has an
> > > > annotation, any rcu_dereference() not matched will field a warning. If
> > > > the annotation section is empty, nothing.
> > > > 
> > > 
> > > Good idea! but I don't think annotating a field in C language is easy,
> > > I will try to see what we can get. Do you have something already in your
> > > mind?
> > 
> > No, didn't really think about that :-/ The most restrictive version is
> > taking the absolute address, but that would make things like actual data
> > structures impossible.
> 

Another problem with taking the absolute address is that the address may
change between rcu_annotate() and rcu_dereference(), for example:

	rcu_read_lock();
	rcu_annotate(&var->field);
					// in another thread
					var = new_var;
	
	// the address of var->field is different now.
	rcu_dereference(var->field);
	rcu_read_unlock();


> So the thing with locks is they get a struct lockdep_map added, in which
> we store all kinds of useful. But I don't think we cannot add a similar
> structure to each and every RCU dereferencable (is that a word?)
> variable.
> 

Well, some of them have rcu_head, but not all... so you're right.

> 

I came up with something you may not like ;-), which is taking the
strings of the expressions, for example:

	rcu_read_lock();
	rcu_annotate(var->field);	// "var->field" is added for 
					// the current section

	rcu_dereference(var->field);	// OK, because the expression
					// "var->field" is annotated.

	rcu_dereference(var->field2);	// Not OK, because the
					// expression "var->field2" is
					// not annotated, nor is any
					// suffix of it.

	rcu_annotate(field3);		// "field3" is added for the
					// current section
	
	rcu_dereference(var2->field3);	// OK, because the suffix
					// "field3" is annotated.
	rcu_read_unlock();

I think this is more accurate than taking the absolute address, because
of situations where the address changes. So... thoughts?

Regards,
Boqun



* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-03-02  6:37                   ` Boqun Feng
@ 2016-03-02 10:18                     ` Peter Zijlstra
  2016-03-02 14:08                       ` Paul E. McKenney
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-03-02 10:18 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Paul E. McKenney, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Wed, Mar 02, 2016 at 02:37:18PM +0800, Boqun Feng wrote:
> I come up with something you may not like ;-) , which is taking the
> strings of the expressions, for example:

Heh, cute. Better than nothing I suppose.


* Re: [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections
  2016-03-02 10:18                     ` Peter Zijlstra
@ 2016-03-02 14:08                       ` Paul E. McKenney
  0 siblings, 0 replies; 21+ messages in thread
From: Paul E. McKenney @ 2016-03-02 14:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Boqun Feng, linux-kernel, Ingo Molnar, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin

On Wed, Mar 02, 2016 at 11:18:23AM +0100, Peter Zijlstra wrote:
> On Wed, Mar 02, 2016 at 02:37:18PM +0800, Boqun Feng wrote:
> > I come up with something you may not like ;-) , which is taking the
> > strings of the expressions, for example:
> 
> Heh, cute. Better than nothing I suppose.

It would be very good to get feedback from people using RCU heavily.
Some RCU read-side critical sections access a lot of fields, which
might make this sort of notation a bit of a maintainability issue.

							Thanx, Paul


* [tip:locking/urgent] locking/lockdep: Fix ->irq_context calculation
  2016-02-16  5:57 ` [RFC v2 1/6] lockdep: Add helper functions of irq_context Boqun Feng
  2016-02-17  0:17   ` Paul E. McKenney
@ 2016-04-23 12:53   ` tip-bot for Boqun Feng
  1 sibling, 0 replies; 21+ messages in thread
From: tip-bot for Boqun Feng @ 2016-04-23 12:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, hpa, linux-kernel, mingo, mathieu.desnoyers, josh,
	rostedt, akpm, boqun.feng, jiangshanlai, paulmck, tglx, peterz

Commit-ID:  c24697566298df04cac9913e0601501b5ee2b3f5
Gitweb:     http://git.kernel.org/tip/c24697566298df04cac9913e0601501b5ee2b3f5
Author:     Boqun Feng <boqun.feng@gmail.com>
AuthorDate: Tue, 16 Feb 2016 13:57:40 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 23 Apr 2016 13:53:03 +0200

locking/lockdep: Fix ->irq_context calculation

task_irq_context() returns the encoded irq_context of the task; the
return value is encoded the same way as the ->irq_context of a
held_lock.

Always return 0 if !(CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING).

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: sasha.levin@oracle.com
Link: http://lkml.kernel.org/r/1455602265-16490-2-git-send-email-boqun.feng@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/lockdep.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ed94109..beb06f6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2932,6 +2932,11 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
 	return 1;
 }
 
+static inline unsigned int task_irq_context(struct task_struct *task)
+{
+	return 2 * !!task->hardirq_context + !!task->softirq_context;
+}
+
 static int separate_irq_context(struct task_struct *curr,
 		struct held_lock *hlock)
 {
@@ -2940,8 +2945,6 @@ static int separate_irq_context(struct task_struct *curr,
 	/*
 	 * Keep track of points where we cross into an interrupt context:
 	 */
-	hlock->irq_context = 2*(curr->hardirq_context ? 1 : 0) +
-				curr->softirq_context;
 	if (depth) {
 		struct held_lock *prev_hlock;
 
@@ -2973,6 +2976,11 @@ static inline int mark_irqflags(struct task_struct *curr,
 	return 1;
 }
 
+static inline unsigned int task_irq_context(struct task_struct *task)
+{
+	return 0;
+}
+
 static inline int separate_irq_context(struct task_struct *curr,
 		struct held_lock *hlock)
 {
@@ -3241,6 +3249,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	hlock->acquire_ip = ip;
 	hlock->instance = lock;
 	hlock->nest_lock = nest_lock;
+	hlock->irq_context = task_irq_context(curr);
 	hlock->trylock = trylock;
 	hlock->read = read;
 	hlock->check = check;


end of thread, other threads:[~2016-04-23 12:54 UTC | newest]

Thread overview: 21+ messages
2016-02-16  5:57 [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
2016-02-16  5:57 ` [RFC v2 1/6] lockdep: Add helper functions of irq_context Boqun Feng
2016-02-17  0:17   ` Paul E. McKenney
2016-04-23 12:53   ` [tip:locking/urgent] locking/lockdep: Fix ->irq_context calculation tip-bot for Boqun Feng
2016-02-16  5:57 ` [RFC v2 2/6] lockdep: LOCKED_ACCESS: Introduce locked access class and acqchain Boqun Feng
2016-02-16  5:57 ` [RFC v2 3/6] lockdep: LOCKED_ACCESS: Maintain the keys of acqchains Boqun Feng
2016-02-16  5:57 ` [RFC v2 4/6] lockdep: LOCKED_ACCESS: Introduce locked_access_point() Boqun Feng
2016-02-16  5:57 ` [RFC v2 5/6] lockdep: LOCKED_ACCESS: Add proc interface for locked access class Boqun Feng
2016-02-16  5:57 ` [RFC v2 6/6] RCU: Track rcu_dereference() in RCU read-side critical section Boqun Feng
2016-02-25 14:32 ` [RFC v2 0/6] Track RCU dereferences in RCU read-side critical sections Peter Zijlstra
2016-02-25 15:37   ` Paul E. McKenney
2016-02-26  3:06     ` Boqun Feng
2016-02-26 11:25       ` Peter Zijlstra
2016-02-29  1:12         ` Boqun Feng
2016-02-29 12:43           ` Peter Zijlstra
2016-03-01  9:32             ` Boqun Feng
2016-03-01  9:57               ` Peter Zijlstra
2016-03-01 10:01                 ` Peter Zijlstra
2016-03-02  6:37                   ` Boqun Feng
2016-03-02 10:18                     ` Peter Zijlstra
2016-03-02 14:08                       ` Paul E. McKenney
