* [RFC 0/6] Track RCU dereferences in RCU read-side critical sections
@ 2016-02-03 16:45 Boqun Feng
2016-02-03 16:45 ` [RFC 1/6] lockdep: Add helper functions of irq_context Boqun Feng
` (6 more replies)
0 siblings, 7 replies; 13+ messages in thread
From: Boqun Feng @ 2016-02-03 16:45 UTC (permalink / raw)
To: linux-kernel
Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
Boqun Feng
As a characteristic of RCU, read-side critical sections have a very
loose connection with rcu_dereference()s: you can only be sure that an
rcu_dereference() might be called in some read-side critical section,
but once the code gets complex, you may not be sure exactly which one.
This can also be a problem for other locking mechanisms, in that the
critical sections protecting data and the data accesses being protected
are not clearly correlated.
In this series, we introduce the LOCKED_ACCESS framework and, based on
it, implement the RCU_LOCKED_ACCESS functionality to give us a clear
hint: which rcu_dereference() happens in which RCU read-side critical
section.
The basic idea of LOCKED_ACCESS is to maintain a chain of the locks we
have already acquired, and whenever a data access happens, correlate
that access with the chain.
Lockdep already has lock chains, but we introduce a new, similar
concept: the acqchain. An acqchain is like a lock chain, except that
its key is the hash sum of the acquire (instruction) positions of the
locks in the chain, whereas the key of a lock chain is the hash sum of
the class keys of the locks in the chain.
Acqchains are introduced because we want to correlate data accesses
with critical sections, and critical sections are better represented by
their acquire positions than by lock classes.
The acqchain key of a task is maintained in the same way as lock chain
keys in lockdep.
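Roughly speaking, the per-task acqchain key is updated on every lock
acquisition along the lines of the following sketch (simplified from
what patch 3 adds to __lock_acquire(); see the patches for the exact
code):
    /* 'ip' is the acquire (instruction) position of the lock */
    acqchain_key = curr->curr_acqchain_key;
    hlock->prev_acqchain_key = acqchain_key;
    if (separate_irq_context(curr, hlock))
        acqchain_key = 0;  /* crossing into a new irq_context */
    acqchain_key = iterate_acqchain_key(acqchain_key, ip);
    curr->curr_acqchain_key = acqchain_key;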
Similar to lockdep, LOCKED_ACCESS also classifies locks and data
accesses into groups; the locked access class is introduced for this
reason. A locked access class also contains the data for allocation and
lookup of acqchains and accesses, and the address of a locked access
class is used as its key. By tagging locks and data accesses with these
keys, we can describe which locks and data accesses are related.
The entry point of LOCKED_ACCESS is locked_access_point(). Calling
locked_access_point() indicates that a data access happens, and once it
is called, the data access will be correlated with the current
acqchain.
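As a rough usage sketch (the "foo" names below are made up for
illustration; the entry point and type constant come from patch 4):
    /* inside a critical section of the locks tagged with &foo_laclass: */
    p = READ_ONCE(foo->bar);
    locked_access_point(&foo_laclass, LOCKED_ACCESS_TYPE_READ);
    /* the access above is now recorded against the current acqchain */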
We also provide a /proc filesystem interface to show the information
we've collected: for each locked access class with the name <name>,
there will be a file at /proc/locked_access/<name> showing all the
relationships collected so far for this locked access class.
Based on LOCKED_ACCESS, we implement RCU_LOCKED_ACCESS, which tracks
rcu_dereference()s inside RCU read-side critical sections.
This patchset is based on v4.5-rc2 and consists of 6 patches (patches
2-5 are the implementation of LOCKED_ACCESS):
1. Introduce some helper functions for irq_context.
2. Introduce locked access class and acqchain.
3. Maintain the keys of acqchains.
4. Introduce the entry point of LOCKED_ACCESS.
5. Add a proc interface for locked access classes.
6. Enable LOCKED_ACCESS for RCU.
Tested by 0day, and I also did a simple test on x86: built and booted a
kernel with RCU_LOCKED_ACCESS=y and CONFIG_PROVE_LOCKING=y and ran
several workloads (kernel building, git cloning, dbench). No problem
was observed, and /proc/locked_access/rcu was able to collect the
relationships between ~300 RCU read-side critical sections and ~500
rcu_dereference*() calls.
Snippets of /proc/locked_access/rcu are as follows:
...(this rcu_dereference() happens after one rcu_read_lock())
...
ACQCHAIN 0xfdbf0c6aeea, 1 locks, irq_context 0:
LOCK at [<ffffffff812b1115>] get_proc_task_net+0x5/0x140
ACCESS TYPE 1 at kernel/pid.c:441
...
...(this rcu_dereference() happens after three rcu_read_lock()s)
...
ACQCHAIN 0xfe042af3bbfb2605, 3 locks, irq_context 0:
LOCK at [<ffffffff81094b47>] SyS_kill+0x97/0x2a0
LOCK at [<ffffffff8109286f>] kill_pid_info+0x1f/0x140
LOCK at [<ffffffff81092605>] group_send_sig_info+0x5/0x130
ACCESS TYPE 1 at kernel/signal.c:695
...
Looking forward to any suggestions, comments and questions ;-)
Regards,
Boqun
* [RFC 1/6] lockdep: Add helper functions of irq_context
2016-02-03 16:45 [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
@ 2016-02-03 16:45 ` Boqun Feng
2016-02-03 16:45 ` [RFC 2/6] lockdep: LOCKED_ACCESS: Introduce locked access class and acqchain Boqun Feng
` (5 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Boqun Feng @ 2016-02-03 16:45 UTC (permalink / raw)
To: linux-kernel
Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
Boqun Feng
Add two irq_context helper functions:
task_irq_context(): return the encoded irq_context of the task; the
return value is encoded in the same way as held_lock::irq_context.
Always returns 0 if !(CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING).
is_same_irq_context(): compare whether two irq_context values are the
same. Always returns true if !(CONFIG_TRACE_IRQFLAGS &&
CONFIG_PROVE_LOCKING).
These two functions are added for the future use of irq_context by
LOCKED_ACCESS: LOCKED_ACCESS needs to get the irq_context value of a
task when a data access happens rather than when a lock is acquired, so
it needs another way than reading the value from
held_lock::irq_context.
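For example (a simplified sketch of how a later patch in this series
uses these helpers when looking up an acqchain), at data-access time
the task's current irq_context is compared against the one recorded in
the chain:
    /* process context encodes to 0; softirq/hardirq contexts are non-zero */
    if (!is_same_irq_context(acqchain->irq_context, task_irq_context(task)))
        continue;  /* this chain was built in a different irq_context */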
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
kernel/locking/lockdep.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 60ace56..c8a632b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2839,6 +2839,16 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
return 1;
}
+static unsigned int task_irq_context(struct task_struct *task)
+{
+ return 2 * (task->hardirq_context ? 1 : 0) + task->softirq_context;
+}
+
+static bool is_same_irq_context(unsigned int ctx1, unsigned int ctx2)
+{
+ return ctx1 == ctx2;
+}
+
static int separate_irq_context(struct task_struct *curr,
struct held_lock *hlock)
{
@@ -2847,8 +2857,7 @@ static int separate_irq_context(struct task_struct *curr,
/*
* Keep track of points where we cross into an interrupt context:
*/
- hlock->irq_context = 2*(curr->hardirq_context ? 1 : 0) +
- curr->softirq_context;
+ hlock->irq_context = task_irq_context(curr);
if (depth) {
struct held_lock *prev_hlock;
@@ -2880,6 +2889,16 @@ static inline int mark_irqflags(struct task_struct *curr,
return 1;
}
+static unsigned int task_irq_context(struct task_struct *task)
+{
+ return 0;
+}
+
+static bool is_same_irq_context(unsigned int ctx1, unsigned int ctx2)
+{
+ return true;
+}
+
static inline int separate_irq_context(struct task_struct *curr,
struct held_lock *hlock)
{
--
2.7.0
* [RFC 2/6] lockdep: LOCKED_ACCESS: Introduce locked access class and acqchain
2016-02-03 16:45 [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
2016-02-03 16:45 ` [RFC 1/6] lockdep: Add helper functions of irq_context Boqun Feng
@ 2016-02-03 16:45 ` Boqun Feng
2016-02-03 16:45 ` [RFC 3/6] lockdep: LOCKED_ACCESS: Maintain the keys of acqchains Boqun Feng
` (4 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Boqun Feng @ 2016-02-03 16:45 UTC (permalink / raw)
To: linux-kernel
Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
Boqun Feng
Similar in concept to a lock class, a locked access class is a group
of locks and the data accesses related to those locks. A locked access
class also contains the structures for allocation and lookup of
acqchains and accesses.
The address of a locked access class is used as its key. We tag a
group of locks with a locked access class by setting the content of the
group's lock_class_key to the key (i.e. the address) of the locked
access class; we can do this trick because, AFAIK, lockdep only uses
the address of a lock_class_key. Note that although this doesn't change
the size of a lock_class_key, it does change its alignment requirement.
Tagging a lock_class_key with a locked access class is step #1 of using
LOCKED_ACCESS for a locking mechanism.
Similar in concept to a lock class chain, an acqchain (short for
"acquire chain") is a chain of lock acquisitions. The difference
between a lock class chain and an acqchain is that a lock class chain
is keyed by the hash sum of the class keys, whereas an acqchain is
keyed by the hash sum of the acquire ips. We introduce acqchains here
because in LOCKED_ACCESS we care more about the locations of critical
sections.
This patch defines the structures for those concepts, along with the
manipulation operations. The config option CONFIG_LOCKED_ACCESS is also
introduced in this patch.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
include/linux/lockdep.h | 17 ++++
kernel/locking/lockdep.c | 156 +++++++++++++++++++++++++++++++++++++
kernel/locking/lockdep_internals.h | 74 ++++++++++++++++++
lib/Kconfig.debug | 12 +++
4 files changed, 259 insertions(+)
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index c57e424..140c1c3 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -51,9 +51,26 @@ struct lockdep_subclass_key {
char __one_byte;
} __attribute__ ((__packed__));
+#ifdef CONFIG_LOCKED_ACCESS
+struct locked_access_class;
+
+struct lock_class_key {
+ union {
+ struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES];
+ /*
+ * Use the content of the lock_class_key to store the locked access
+ * class, as lockdep only uses the address of a lock_class_key.
+ * However, please note that by doing so we change the alignment
+ * requirement of lock_class_key.
+ */
+ struct locked_access_class *laclass;
+ };
+};
+#else
struct lock_class_key {
struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES];
};
+#endif
extern struct lock_class_key __lockdep_no_validate__;
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index c8a632b..3aff961 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4317,3 +4317,159 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
dump_stack();
}
EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
+
+#ifdef CONFIG_LOCKED_ACCESS
+static void locked_access_class_init(struct locked_access_class *laclass)
+{
+ int i;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ arch_spin_lock(&laclass->lock);
+
+ if (laclass->initialized) {
+ arch_spin_unlock(&laclass->lock);
+ return;
+ }
+
+ for (i = 0; i < ACQCHAIN_HASH_SIZE; i++)
+ INIT_LIST_HEAD(laclass->acqchain_hashtable + i);
+
+ laclass->nr_acqchains = 0;
+ laclass->nr_acqchain_hlocks = 0;
+ laclass->nr_access_structs = 0;
+
+ /*
+ * Make sure the members of laclass have been initialized before
+ * setting ->initialized.
+ */
+ smp_store_release(&laclass->initialized, 1);
+ arch_spin_unlock(&laclass->lock);
+ local_irq_restore(flags);
+}
+
+static struct acqchain *lookup_acqchain(struct locked_access_class *laclass,
+ struct task_struct *task,
+ u64 acqchain_key)
+{
+ struct list_head *hash_head;
+ struct acqchain *acqchain;
+
+ hash_head = acqchainhashentry(laclass, acqchain_key);
+
+ list_for_each_entry_lockless(acqchain, hash_head, entry) {
+ if (acqchain->chain_key == acqchain_key
+ && is_same_irq_context(acqchain->irq_context,
+ task_irq_context(task)))
+ return acqchain;
+ }
+
+ return NULL;
+}
+
+static struct acqchain *add_acqchain(struct locked_access_class *laclass,
+ struct task_struct *task,
+ u64 acqchain_key)
+{
+ unsigned long flags;
+ struct acqchain *acqchain = NULL;
+ struct held_lock *hlock, *hlock_curr;
+ struct lockdep_map *instance;
+ int i, j, max_depth;
+ struct list_head *hash_head;
+
+ local_irq_save(flags);
+ arch_spin_lock(&laclass->lock);
+
+ /* Lookup again while holding the lock */
+ acqchain = lookup_acqchain(laclass, task, acqchain_key);
+
+ if (acqchain)
+ goto out;
+
+ if (laclass->nr_acqchains >= MAX_ACQCHAINS)
+ goto out;
+
+ hash_head = acqchainhashentry(laclass, acqchain_key);
+ acqchain = laclass->acqchains + laclass->nr_acqchains;
+ acqchain->chain_key = acqchain_key;
+
+ hlock = task->held_locks + task->lockdep_depth - 1;
+ acqchain->irq_context = task_irq_context(task);
+
+ for (i = task->lockdep_depth - 1; i >= 0; i--) {
+ hlock_curr = task->held_locks + i;
+ if (!is_same_irq_context(hlock_curr->irq_context,
+ acqchain->irq_context))
+ break;
+ }
+
+ i++;
+ max_depth = task->lockdep_depth - i;
+
+ j = 0;
+ if (likely(laclass->nr_acqchain_hlocks + max_depth
+ <= MAX_ACQCHAIN_HLOCKS)) {
+ acqchain->base = laclass->nr_acqchain_hlocks;
+ for (; i < task->lockdep_depth; i++) {
+ hlock_curr = task->held_locks + i;
+ instance = hlock_curr->instance;
+
+ /*
+ * A lock instance may use its own address as its ->key,
+ * in which case the lock instance doesn't belong to
+ * a locked access class.
+ */
+ if (instance != (void *)instance->key &&
+ instance->key->laclass == laclass) {
+ laclass->acqchain_hlocks[acqchain->base + j]
+ = hlock_curr->acquire_ip;
+ j++;
+ }
+ }
+ laclass->nr_acqchain_hlocks += j;
+ }
+
+ acqchain->depth = j;
+
+ /*
+ * Make sure the stores to the members of the acqchain are ordered
+ * before we publish it by bumping ->nr_acqchains.
+ */
+ smp_store_release(&laclass->nr_acqchains, laclass->nr_acqchains + 1);
+ INIT_LIST_HEAD(&acqchain->accesses);
+
+ /*
+ * Pair with the list_for_each_entry_lockless() in lookup_acqchain()
+ */
+ list_add_tail_rcu(&acqchain->entry, hash_head);
+out:
+ arch_spin_unlock(&laclass->lock);
+ local_irq_restore(flags);
+ return acqchain;
+}
+
+/*
+ * Lookup the acqchain with a key of @acqchain_key in the hash table of
+ * @laclass, if none exists, add a new one.
+ *
+ * Return the acqchain if one is found or if one is added, otherwise return
+ * NULL.
+ *
+ * Must be called after @laclass is initialized.
+ */
+static struct acqchain *
+lookup_or_add_acqchain(struct locked_access_class *laclass,
+ struct task_struct *task,
+ u64 acqchain_key)
+{
+ struct acqchain *acqchain = NULL;
+
+ acqchain = lookup_acqchain(laclass, task, acqchain_key);
+ if (!acqchain)
+ acqchain = add_acqchain(laclass, task, acqchain_key);
+
+ return acqchain;
+}
+
+#endif /* CONFIG_LOCKED_ACCESS */
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 51c4b24..5e2e133 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -168,3 +168,77 @@ DECLARE_PER_CPU(struct lockdep_stats, lockdep_stats);
# define debug_atomic_dec(ptr) do { } while (0)
# define debug_atomic_read(ptr) 0
#endif
+
+#ifdef CONFIG_LOCKED_ACCESS
+/*
+ * A chain of lock acquisitions, keyed by the hash sum of all the
+ * instruction positions of lock acquisitions
+ */
+struct acqchain {
+ u8 irq_context;
+ s8 depth;
+ s16 base;
+ /* Entry in hash table */
+ struct list_head entry;
+ u64 chain_key;
+ /* List of data accesses that happen after this chain */
+ struct list_head accesses;
+};
+
+#define iterate_acqchain_key(key, ip) \
+ (((key) << MAX_LOCKDEP_KEYS_BITS) ^ \
+ ((key) >> (64 - MAX_LOCKDEP_KEYS_BITS)) ^ \
+ (ip))
+
+#define MAX_ACQCHAINS_BITS 16
+#define MAX_ACQCHAINS (1UL << MAX_ACQCHAINS_BITS)
+#define MAX_ACQCHAIN_HLOCKS (MAX_ACQCHAINS * 5)
+
+#define ACQCHAIN_HASH_BITS (MAX_ACQCHAINS_BITS-1)
+#define ACQCHAIN_HASH_SIZE (1UL << ACQCHAIN_HASH_BITS)
+#define __acqchainhashfn(chain) hash_long(chain, ACQCHAIN_HASH_BITS)
+#define acqchainhashentry(lad, chain) \
+ (lad->acqchain_hashtable + __acqchainhashfn((chain)))
+
+#define MAX_LOCKED_ACCESS_STRUCTS (1UL << 16)
+
+/* Records of data accesses in LOCKED_ACCESS */
+struct locked_access_struct {
+ struct list_head list;
+ struct locked_access_location *loc;
+ int type;
+};
+
+/*
+ * locked_access_class represent a group of critical sections and related data
+ * accesses. Locked access class should be only defined statically, and the
+ * address of a locked_access_class is used as the 'key' of a locked access
+ * class.
+ */
+struct locked_access_class {
+ const char *name;
+ /* Hash table of acqchains, for lookup */
+ struct list_head acqchain_hashtable[ACQCHAIN_HASH_SIZE];
+ /* Storage of acqchains, for allocation */
+ struct acqchain acqchains[MAX_ACQCHAINS];
+ long nr_acqchains;
+ /* Storage of acquired IPs of acqchains, for allocation */
+ unsigned long acqchain_hlocks[MAX_ACQCHAIN_HLOCKS];
+ long nr_acqchain_hlocks;
+ /* Storage of data accesses, for allocation */
+ struct locked_access_struct access_structs[MAX_LOCKED_ACCESS_STRUCTS];
+ long nr_access_structs;
+ arch_spinlock_t lock;
+ int initialized;
+};
+
+#define INIT_LOCKED_ACCESS_DATA(_name) \
+ { \
+ .name = #_name, \
+ .lock = __ARCH_SPIN_LOCK_UNLOCKED, \
+ .initialized = 0, \
+ .nr_acqchains = 0, \
+ .nr_acqchain_hlocks = 0,\
+ .nr_access_structs = 0, \
+ }
+#endif /* CONFIG_LOCKED_ACCESS */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ecb9e75..178f288 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1065,6 +1065,18 @@ config LOCK_STAT
CONFIG_LOCK_STAT defines "contended" and "acquired" lock events.
(CONFIG_LOCKDEP defines "acquire" and "release" events.)
+config LOCKED_ACCESS
+ bool
+ depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+ select DEBUG_LOCK_ALLOC
+ default n
+ help
+ Provide a way to correlate data accesses with critical sections.
+ LOCKED_ACCESS is designed for the users of locking mechanisms to see
+ which data access happens inside which lock critical section. This
+ could help the users of a locking mechanism gain more detailed
+ information about their lock usage.
+
config DEBUG_LOCKDEP
bool "Lock dependency engine debugging"
depends on DEBUG_KERNEL && LOCKDEP
--
2.7.0
* [RFC 3/6] lockdep: LOCKED_ACCESS: Maintain the keys of acqchains
2016-02-03 16:45 [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
2016-02-03 16:45 ` [RFC 1/6] lockdep: Add helper functions of irq_context Boqun Feng
2016-02-03 16:45 ` [RFC 2/6] lockdep: LOCKED_ACCESS: Introduce locked access class and acqchain Boqun Feng
@ 2016-02-03 16:45 ` Boqun Feng
2016-02-03 16:45 ` [RFC 4/6] lockdep: LOCKED_ACCESS: Introduce locked_access_point() Boqun Feng
` (3 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Boqun Feng @ 2016-02-03 16:45 UTC (permalink / raw)
To: linux-kernel
Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
Boqun Feng
Add held_lock::prev_acqchain_key and task_struct::curr_acqchain_key to
maintain the keys of acqchains in the same way that lockdep maintains
the keys of lock chains.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
include/linux/lockdep.h | 3 +++
include/linux/sched.h | 3 +++
kernel/fork.c | 3 +++
kernel/locking/lockdep.c | 31 +++++++++++++++++++++++++++++++
4 files changed, 40 insertions(+)
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 140c1c3..2ffe6c3 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -244,6 +244,9 @@ struct held_lock {
* with zero), here we store the previous hash value:
*/
u64 prev_chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+ u64 prev_acqchain_key;
+#endif
unsigned long acquire_ip;
struct lockdep_map *instance;
struct lockdep_map *nest_lock;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a10494a..c24348e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1643,6 +1643,9 @@ struct task_struct {
unsigned int lockdep_recursion;
struct held_lock held_locks[MAX_LOCK_DEPTH];
gfp_t lockdep_reclaim_gfp;
+#ifdef CONFIG_LOCKED_ACCESS
+ u64 curr_acqchain_key;
+#endif
#endif
#ifdef CONFIG_UBSAN
unsigned int in_ubsan;
diff --git a/kernel/fork.c b/kernel/fork.c
index 2e391c7..d9fc51b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1404,6 +1404,9 @@ static struct task_struct *copy_process(unsigned long clone_flags,
p->lockdep_depth = 0; /* no locks held yet */
p->curr_chain_key = 0;
p->lockdep_recursion = 0;
+# ifdef CONFIG_LOCKED_ACCESS
+ p->curr_acqchain_key = 0;
+# endif
#endif
#ifdef CONFIG_DEBUG_MUTEXES
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 3aff961..77732a9 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3096,6 +3096,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
int chain_head = 0;
int class_idx;
u64 chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+ u64 acqchain_key;
+#endif
if (unlikely(!debug_locks))
return 0;
@@ -3203,6 +3206,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
return 0;
chain_key = curr->curr_chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+ acqchain_key = curr->curr_acqchain_key;
+#endif
if (!depth) {
/*
* How can we have a chain hash when we ain't got no keys?!
@@ -3213,12 +3219,23 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
}
hlock->prev_chain_key = chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+ hlock->prev_acqchain_key = acqchain_key;
+#endif
if (separate_irq_context(curr, hlock)) {
chain_key = 0;
+
+#ifdef CONFIG_LOCKED_ACCESS
+ acqchain_key = 0;
+#endif
+
chain_head = 1;
}
chain_key = iterate_chain_key(chain_key, id);
+#ifdef CONFIG_LOCKED_ACCESS
+ acqchain_key = iterate_acqchain_key(acqchain_key, ip);
+#endif
if (nest_lock && !__lock_is_held(nest_lock))
return print_lock_nested_lock_not_held(curr, hlock, ip);
@@ -3226,6 +3243,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
return 0;
curr->curr_chain_key = chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+ curr->curr_acqchain_key = acqchain_key;
+#endif
curr->lockdep_depth++;
check_chain_key(curr);
#ifdef CONFIG_DEBUG_LOCKDEP
@@ -3355,6 +3375,9 @@ found_it:
curr->lockdep_depth = i;
curr->curr_chain_key = hlock->prev_chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+ curr->curr_acqchain_key = hlock->prev_acqchain_key;
+#endif
for (; i < depth; i++) {
hlock = curr->held_locks + i;
@@ -3445,6 +3468,9 @@ found_it:
curr->lockdep_depth = i;
curr->curr_chain_key = hlock->prev_chain_key;
+#ifdef CONFIG_LOCKED_ACCESS
+ curr->curr_acqchain_key = hlock->prev_acqchain_key;
+#endif
for (i++; i < depth; i++) {
hlock = curr->held_locks + i;
@@ -3886,6 +3912,11 @@ void lockdep_reset(void)
raw_local_irq_save(flags);
current->curr_chain_key = 0;
+
+#ifdef CONFIG_LOCKED_ACCESS
+ current->curr_acqchain_key = 0;
+#endif
+
current->lockdep_depth = 0;
current->lockdep_recursion = 0;
memset(current->held_locks, 0, MAX_LOCK_DEPTH*sizeof(struct held_lock));
--
2.7.0
* [RFC 4/6] lockdep: LOCKED_ACCESS: Introduce locked_access_point()
2016-02-03 16:45 [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
` (2 preceding siblings ...)
2016-02-03 16:45 ` [RFC 3/6] lockdep: LOCKED_ACCESS: Maintain the keys of acqchains Boqun Feng
@ 2016-02-03 16:45 ` Boqun Feng
2016-02-03 16:45 ` [RFC 5/6] lockdep: LOCKED_ACCESS: Add proc interface for locked access class Boqun Feng
` (2 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Boqun Feng @ 2016-02-03 16:45 UTC (permalink / raw)
To: linux-kernel
Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
Boqun Feng
locked_access_point() is the entry point of the whole LOCKED_ACCESS
framework: every time locked_access_point() is called, LOCKED_ACCESS
will correlate the data access location with the current acqchain.
So putting locked_access_point() at the data accesses you care about is
step #2 of using LOCKED_ACCESS.
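For example, step #2 for RCU (simplified from patch 6) is a single
locked_access_point() call placed in __rcu_dereference_check():
    #define rcu_dereference_access() \
        locked_access_point(&rcu_laclass, LOCKED_ACCESS_TYPE_READ)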
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
include/linux/lockdep.h | 29 ++++++++++++
kernel/locking/lockdep.c | 121 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 150 insertions(+)
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 2ffe6c3..cf7b6c8 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -570,4 +570,33 @@ lockdep_rcu_suspicious(const char *file, const int line, const char *s)
}
#endif
+#ifdef CONFIG_LOCKED_ACCESS
+struct locked_access_location {
+ /* Filename of the access */
+ const char *filename;
+ /* Line number of the access */
+ long lineno;
+};
+
+#define LOCKED_ACCESS_TYPE_READ 1 /* read */
+
+extern void locked_access(struct locked_access_class *laclass,
+ struct locked_access_location *access,
+ int type);
+
+/*
+ * Entry point of LOCKED_ACCESS, should be called at every place the data
+ * accesses of the laclass happen.
+ *
+ * @_type must be one of the LOCKED_ACCESS_TYPE_*
+ */
+#define locked_access_point(_laclass, _type) \
+({ \
+ static struct locked_access_location a = { \
+ .filename = __FILE__, \
+ .lineno = __LINE__, \
+ }; \
+ locked_access(_laclass, &a, _type); \
+})
+#endif /* CONFIG_LOCKED_ACCESS */
#endif /* __LINUX_LOCKDEP_H */
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 77732a9..996c2d5 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4503,4 +4503,125 @@ lookup_or_add_acqchain(struct locked_access_class *laclass,
return acqchain;
}
+/*
+ * Lookup the data access at @loc in the ->accesses list of @acqchain.
+ *
+ * Must be called after @laclass is initialized.
+ */
+static int lookup_locked_access(struct acqchain *acqchain,
+ struct locked_access_location *loc)
+{
+ struct locked_access_struct *s;
+
+ list_for_each_entry_lockless(s, &acqchain->accesses, list) {
+ if (s->loc == loc)
+ return 1;
+ }
+ return 0;
+
+}
+
+/*
+ * Add the data access at @loc into the ->accesses list of @acqchain.
+ *
+ * Return 1 if one access is added, otherwise return 0.
+ *
+ * Must be called after @laclass is initialized.
+ */
+static int add_locked_access(struct locked_access_class *laclass,
+ struct acqchain *acqchain,
+ struct locked_access_location *loc,
+ int type)
+{
+ unsigned long flags;
+ struct locked_access_struct *s;
+
+ local_irq_save(flags);
+ arch_spin_lock(&laclass->lock);
+
+ /* Lookup again while holding the lock */
+ if (lookup_locked_access(acqchain, loc)) {
+ arch_spin_unlock(&laclass->lock);
+ local_irq_restore(flags);
+ return 1;
+ }
+
+ if (unlikely(laclass->nr_access_structs >= MAX_LOCKED_ACCESS_STRUCTS)) {
+ arch_spin_unlock(&laclass->lock);
+ local_irq_restore(flags);
+ return 0;
+ }
+
+ s = laclass->access_structs + laclass->nr_access_structs;
+ s->loc = loc;
+ s->type = type;
+ laclass->nr_access_structs++;
+
+ /*
+ * Pair with the list_for_each_entry_lockless() in
+ * lookup_locked_access()
+ */
+ list_add_tail_rcu(&s->list, &acqchain->accesses);
+
+ arch_spin_unlock(&laclass->lock);
+ local_irq_restore(flags);
+ return 1;
+}
+
+/*
+ * Correlate the data access at @loc with @acqchain for @laclass
+ *
+ * Must be called after @laclass is initialized.
+ */
+static int correlate_locked_access(struct locked_access_class *laclass,
+ struct acqchain *acqchain,
+ struct locked_access_location *loc,
+ int type)
+{
+ if (lookup_locked_access(acqchain, loc))
+ return 1;
+
+ return add_locked_access(laclass, acqchain, loc, type);
+}
+
+/*
+ * The implementation of entry point locked_access_point(), called when a data
+ * access belonging to @laclass happens.
+ */
+void locked_access(struct locked_access_class *laclass,
+ struct locked_access_location *loc,
+ int type)
+{
+ u64 acqchain_key = current->curr_acqchain_key;
+ struct acqchain *acqchain;
+
+ if (unlikely(!laclass))
+ return;
+
+ /*
+ * Don't track data accesses for lockdep itself, because we rely
+ * on the ->held_locks lockdep maintains, and if ->lockdep_recursion is
+ * not 0, the lock is not being maintained in ->held_locks.
+ */
+ if (unlikely(current->lockdep_recursion))
+ return;
+
+ /* Data access outside a critical section */
+ if (current->lockdep_depth <= 0)
+ return;
+
+ /*
+ * we only check whether laclass is initialized in locked_access,
+ * because this is the entry point of LOCKED_ACCESS.
+ */
+ if (unlikely(!smp_load_acquire(&laclass->initialized)))
+ locked_access_class_init(laclass);
+
+ acqchain = lookup_or_add_acqchain(laclass, current, acqchain_key);
+
+ if (acqchain)
+ correlate_locked_access(laclass, acqchain, loc, type);
+}
+EXPORT_SYMBOL(locked_access);
+
#endif /* CONFIG_LOCKED_ACCESS */
--
2.7.0
* [RFC 5/6] lockdep: LOCKED_ACCESS: Add proc interface for locked access class
2016-02-03 16:45 [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
` (3 preceding siblings ...)
2016-02-03 16:45 ` [RFC 4/6] lockdep: LOCKED_ACCESS: Introduce locked_access_point() Boqun Feng
@ 2016-02-03 16:45 ` Boqun Feng
2016-02-03 16:45 ` [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section Boqun Feng
2016-02-15 18:03 ` [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Paul E. McKenney
6 siblings, 0 replies; 13+ messages in thread
From: Boqun Feng @ 2016-02-03 16:45 UTC (permalink / raw)
To: linux-kernel
Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
Boqun Feng
Provide the proc filesystem interface for LOCKED_ACCESS: for a locked
access class whose name is <name>, there will be a file at
/proc/locked_access/<name> containing all the information
LOCKED_ACCESS has collected so far.
Also add a macro to define a locked access class.
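For example, patch 6 defines the RCU class and its proc file with:
    /* defines rcu_laclass and creates /proc/locked_access/rcu at late_initcall time */
    DEFINE_LACLASS(rcu);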
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
kernel/locking/lockdep_internals.h | 17 +++++
kernel/locking/lockdep_proc.c | 127 +++++++++++++++++++++++++++++++++++++
2 files changed, 144 insertions(+)
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 5e2e133..cacf869 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -241,4 +241,21 @@ struct locked_access_class {
.nr_acqchain_hlocks = 0,\
.nr_access_structs = 0, \
}
+
+extern int create_laclass_proc(const char *name,
+ struct locked_access_class *laclass);
+
+#define DEFINE_CREATE_LACLASS_PROC(name) \
+static int __init ___create_##name##_laclass_proc(void) \
+{ \
+ return create_laclass_proc(#name, &name##_laclass); \
+} \
+late_initcall(___create_##name##_laclass_proc)
+
+/* Define a Locked Access Class and create its proc file */
+#define DEFINE_LACLASS(name) \
+ struct locked_access_class name##_laclass = \
+ INIT_LOCKED_ACCESS_DATA(name); \
+ EXPORT_SYMBOL(name##_laclass); \
+ DEFINE_CREATE_LACLASS_PROC(name)
#endif /* CONFIG_LOCKED_ACCESS */
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index dbb61a3..a278d1b 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -673,6 +673,130 @@ static const struct file_operations proc_lock_stat_operations = {
};
#endif /* CONFIG_LOCK_STAT */
+#ifdef CONFIG_LOCKED_ACCESS
+#include <linux/slab.h>
+static struct proc_dir_entry *locked_access_dir;
+static int seq_laclass_show(struct seq_file *m, void *v)
+{
+ struct locked_access_class *laclass;
+ loff_t *pos = v;
+ struct acqchain *acqchain;
+ struct locked_access_struct *s;
+ struct locked_access_location *loc;
+ unsigned long acq_ip;
+ int i;
+
+ laclass = (struct locked_access_class *)m->private;
+
+ /* Pair with the smp_store_release() in add_acqchain() */
+ if (*pos >= smp_load_acquire(&laclass->nr_acqchains) || *pos < 0)
+ return SEQ_SKIP;
+
+ acqchain = laclass->acqchains + *pos;
+
+ seq_printf(m, "ACQCHAIN 0x%llx, %d locks, irq_context %x:\n",
+ acqchain->chain_key, acqchain->depth,
+ acqchain->irq_context);
+ for (i = 0; i < acqchain->depth; i++) {
+ acq_ip = laclass->acqchain_hlocks[acqchain->base + i];
+ seq_printf(m, "%*sLOCK at [<%p>] %pSR\n",
+ (i+1) * 2, "", (void *) acq_ip,
+ (void *) acq_ip);
+ }
+
+ /* Pair with the list_add_tail_rcu() in add_locked_access() */
+ list_for_each_entry_lockless(s, &acqchain->accesses, list) {
+ loc = s->loc;
+ seq_printf(m, "%*sACCESS TYPE %x at %s:%ld\n",
+ (i+1) * 2, "", s->type, loc->filename,
+ loc->lineno);
+
+ }
+ return 0;
+}
+
+static void *seq_laclass_start(struct seq_file *m, loff_t *pos)
+{
+ struct locked_access_class *laclass;
+ loff_t *rpos;
+
+ laclass = (struct locked_access_class *)m->private;
+
+ /* Pair with the smp_store_release() in add_acqchain() */
+ if (*pos >= smp_load_acquire(&laclass->nr_acqchains) || *pos < 0)
+ return NULL;
+
+ rpos = kzalloc(sizeof(loff_t), GFP_KERNEL);
+
+ if (!rpos)
+ return NULL;
+
+ *rpos = *pos;
+
+ return rpos;
+}
+
+static void *seq_laclass_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ struct locked_access_class *laclass;
+ loff_t *rpos = v;
+
+ laclass = (struct locked_access_class *)m->private;
+
+ /* Pair with the smp_store_release() in add_acqchain() */
+ if (*pos + 1 < smp_load_acquire(&laclass->nr_acqchains) && *pos >= 0) {
+ *pos = ++*rpos;
+ return rpos;
+ }
+
+ return NULL;
+}
+
+static void seq_laclass_stop(struct seq_file *m, void *v)
+{
+ kfree(v);
+}
+
+static const struct seq_operations seq_laclass_ops = {
+ .start = seq_laclass_start,
+ .next = seq_laclass_next,
+ .stop = seq_laclass_stop,
+ .show = seq_laclass_show
+};
+
+static int proc_laclass_open(struct inode *inode, struct file *file)
+{
+ struct seq_file *seq;
+ int rc;
+
+ rc = seq_open(file, &seq_laclass_ops);
+
+ if (rc != 0)
+ return rc;
+
+ if (!PDE_DATA(inode))
+ return -ENOENT;
+
+ seq = file->private_data;
+ seq->private = PDE_DATA(inode);
+
+ return 0;
+}
+
+static const struct file_operations proc_laclass_operations = {
+ .open = proc_laclass_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release
+};
+
+int create_laclass_proc(const char *name, struct locked_access_class *laclass)
+{
+ return !proc_create_data(name, S_IRUSR, locked_access_dir,
+ &proc_laclass_operations, laclass);
+}
+#endif /* CONFIG_LOCKED_ACCESS */
+
static int __init lockdep_proc_init(void)
{
proc_create("lockdep", S_IRUSR, NULL, &proc_lockdep_operations);
@@ -688,6 +812,9 @@ static int __init lockdep_proc_init(void)
&proc_lock_stat_operations);
#endif
+#ifdef CONFIG_LOCKED_ACCESS
+ locked_access_dir = proc_mkdir("locked_access", NULL);
+#endif /* CONFIG_LOCKED_ACCESS */
return 0;
}
--
2.7.0
* [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section
2016-02-03 16:45 [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
` (4 preceding siblings ...)
2016-02-03 16:45 ` [RFC 5/6] lockdep: LOCKED_ACCESS: Add proc interface for locked access class Boqun Feng
@ 2016-02-03 16:45 ` Boqun Feng
2016-02-16 1:05 ` Paul E. McKenney
2016-02-15 18:03 ` [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Paul E. McKenney
6 siblings, 1 reply; 13+ messages in thread
From: Boqun Feng @ 2016-02-03 16:45 UTC (permalink / raw)
To: linux-kernel
Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin,
Boqun Feng
The variables protected by an RCU read-side critical section are
sometimes hard to figure out, especially when the critical section is
long or has some function calls in it. However, figuring out which
variables an RCU read-side critical section protects could save us a
lot of time when reviewing code, fixing bugs or tuning performance.
This patch therefore uses LOCKED_ACCESS to collect the relationship
between rcu_dereference*() and rcu_read_lock*() by doing:
Step 0: define a locked_access_class for RCU.
Step 1: set the content of rcu_*_lock_key and __srcu_key to the
address of the locked_access_class for RCU.
Step 2: add locked_access_point() in __rcu_dereference_check()
After that we can figure out not only in which RCU read-side critical
section an rcu_dereference*() is called, but also after which
rcu_read_lock*() call.
This feature is controlled by a config option RCU_LOCKED_ACCESS.
Also clean up the initialization code of lockdep_maps for different
flavors of RCU a little bit.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
include/linux/rcupdate.h | 8 ++++++++
include/linux/srcu.h | 8 +++++++-
kernel/locking/lockdep.c | 3 +++
kernel/rcu/update.c | 31 +++++++++++++++++--------------
lib/Kconfig.debug | 12 ++++++++++++
5 files changed, 47 insertions(+), 15 deletions(-)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 14e6f47..4bab658 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -610,6 +610,13 @@ static inline void rcu_preempt_sleep_check(void)
#define rcu_dereference_sparse(p, space)
#endif /* #else #ifdef __CHECKER__ */
+#ifdef CONFIG_RCU_LOCKED_ACCESS
+extern struct locked_access_class rcu_laclass;
+#define rcu_dereference_access() \
+ locked_access_point(&rcu_laclass, LOCKED_ACCESS_TYPE_READ)
+#else /* #ifdef CONFIG_LOCKED_ACCESS */
+#define rcu_dereference_access()
+#endif /* #else #ifdef CONFIG_LOCKED_ACCESS */
#define __rcu_access_pointer(p, space) \
({ \
typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
@@ -622,6 +629,7 @@ static inline void rcu_preempt_sleep_check(void)
typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
rcu_dereference_sparse(p, space); \
+ rcu_dereference_access(); \
((typeof(*p) __force __kernel *)(________p1)); \
})
#define __rcu_dereference_protected(p, c, space) \
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index f5f80c5..ff048a2 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -64,12 +64,18 @@ struct srcu_struct {
#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# ifdef CONFIG_RCU_LOCKED_ACCESS
+# define RCU_KEY { .laclass = &rcu_laclass }
+# else
+# define RCU_KEY { 0 }
+# endif
+
int __init_srcu_struct(struct srcu_struct *sp, const char *name,
struct lock_class_key *key);
#define init_srcu_struct(sp) \
({ \
- static struct lock_class_key __srcu_key; \
+ static struct lock_class_key __srcu_key = RCU_KEY; \
\
__init_srcu_struct((sp), #sp, &__srcu_key); \
})
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 996c2d5..36bd0cc 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4623,5 +4623,8 @@ void locked_access(struct locked_access_class *laclass,
correlate_locked_access(laclass, acqchain, loc, type);
}
EXPORT_SYMBOL(locked_access);
+#ifdef CONFIG_RCU_LOCKED_ACCESS
+DEFINE_LACLASS(rcu);
+#endif
#endif /* CONFIG_LOCKED_ACCESS */
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 76b94e1..ba2ded6 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -235,20 +235,23 @@ EXPORT_SYMBOL_GPL(__rcu_read_unlock);
#endif /* #ifdef CONFIG_PREEMPT_RCU */
#ifdef CONFIG_DEBUG_LOCK_ALLOC
-static struct lock_class_key rcu_lock_key;
-struct lockdep_map rcu_lock_map =
- STATIC_LOCKDEP_MAP_INIT("rcu_read_lock", &rcu_lock_key);
-EXPORT_SYMBOL_GPL(rcu_lock_map);
-
-static struct lock_class_key rcu_bh_lock_key;
-struct lockdep_map rcu_bh_lock_map =
- STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_bh", &rcu_bh_lock_key);
-EXPORT_SYMBOL_GPL(rcu_bh_lock_map);
-
-static struct lock_class_key rcu_sched_lock_key;
-struct lockdep_map rcu_sched_lock_map =
- STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_sched", &rcu_sched_lock_key);
-EXPORT_SYMBOL_GPL(rcu_sched_lock_map);
+
+# ifdef CONFIG_RCU_LOCKED_ACCESS
+# define RCU_KEY { .laclass = &rcu_laclass }
+# else
+# define RCU_KEY { 0 }
+# endif
+
+#define DEFINE_RCU_LOCKDEP_MAP(flavor) \
+ static struct lock_class_key rcu##flavor##_lock_key = RCU_KEY; \
+ struct lockdep_map rcu##flavor##_lock_map = \
+ STATIC_LOCKDEP_MAP_INIT("rcu" #flavor "_read_lock", \
+ &rcu##flavor##_lock_key); \
+ EXPORT_SYMBOL_GPL(rcu##flavor##_lock_map)
+
+DEFINE_RCU_LOCKDEP_MAP();
+DEFINE_RCU_LOCKDEP_MAP(_bh);
+DEFINE_RCU_LOCKDEP_MAP(_sched);
static struct lock_class_key rcu_callback_key;
struct lockdep_map rcu_callback_map =
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 178f288..b6003d4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1410,6 +1410,18 @@ config RCU_EQS_DEBUG
Say N here if you need ultimate kernel/user switch latencies
Say Y if you are unsure
+config RCU_LOCKED_ACCESS
+ bool "Track data access in RCU read-side critical sections"
+ depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+ select LOCKED_ACCESS
+ default n
+ help
+ Track data accesses in RCU read-side critical sections. By doing
+ so, one can examine which rcu_dereference() and its friends are
+ called in which RCU read-side critical sections and, in even
+ more detail, after which rcu_read_lock() and its friends are
+ called.
+
endmenu # "RCU Debugging"
config DEBUG_BLOCK_EXT_DEVT
--
2.7.0
* Re: [RFC 0/6] Track RCU dereferences in RCU read-side critical sections
2016-02-03 16:45 [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
` (5 preceding siblings ...)
2016-02-03 16:45 ` [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section Boqun Feng
@ 2016-02-15 18:03 ` Paul E. McKenney
6 siblings, 0 replies; 13+ messages in thread
From: Paul E. McKenney @ 2016-02-15 18:03 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin
On Thu, Feb 04, 2016 at 12:45:06AM +0800, Boqun Feng wrote:
> As a characteristic of RCU, read-side critical sections have a very
> loose connection with rcu_dereference()s, which is you can only be sure
> about an rcu_dereference() might be called in some read-side critical
> section, but if code gets complex, you may not be sure which read-side
> critical section exactly, this might be also an problem for some other
> locking mechanisms, that is the critical sections protecting data and
> the data accesses protected are not clearly correlated.
Seeing no objections, I am queuing this series for review and testing.
The information it provides would have been extremely helpful to me
several times in the past!
Thanx, Paul
> In this series, we are introducing LOCKED_ACCESS framework and based on
> which, we implement the RCU_LOCKED_ACCESS functionality to give us a
> clear hint: which rcu_dereference() happens in which RCU read-side
> critical section.
>
> The basic idea of LOCKED_ACCESS is to maintain a chain of locks we have
> acquired already, and when there happens a data access, correlate the
> data access with the chain.
>
> Lockdep already has lock chains, but we introduce a new but similar one
> concept: acqchain, an acqchain is similar to a lock chain, except that
> the key of an acqchain is the hash sum of the acquire (instruction)
> positions of the locks in the chain, whereas the key of a lock chain is
> the hash sum of the class keys of the locks in the chain.
>
> Acqchains are introduced because we want to correlate data accesses with
> critical sections and critical sections are better represented by the
> acquire positions rather than lock classes.
>
> The acqchain key of a task is maintained in the same way as lock chain
> keys in lockdep.
>
> Similar as lockdep, LOCKED_ACCESS also classify locks and data accesses
> by groups, locked access class is introduced for this reason. A locked
> access class also contains the data for allocation and lookup of
> acqchains and accesses, and the address of a locked access class is used
> as its key. By tagging locks and data accesses with the keys, we could
> describe which locks and data accesses are related.
>
> The entry point of LOCKED_ACCESS is locked_access_point(). Calling
> locked_access_point() indicates that a data access happens, and after it
> called the data access will be correlated with the current acqchain.
>
> We also provide a /proc filesystem interface to show the information
> we've collected, for each locked access class with the name <name> there
> will be a file at /proc/locked_access/<name> showing all the
> relationships collected so far for this locked access classes.
>
> Based on LOCKED_ACCESS, we implement RCU_LOCKED_ACCESS, that tracks
> rcu_dereference()s inside RCU read-side critical sections.
>
> This patchset is based on v4.5-rc2 and consists of 6 patches(in which
> patch 2-5 are the implementation of LOCKED_ACCESS):
>
> 1. Introduce some functions of irq_context.
>
> 2. Introduce locked access class and acqchain.
>
> 3. Maintain the keys of acqchains.
>
> 4. Introduce the entry point of LOCKED_ACCESS.
>
> 5. Add proc interface for locked access class
>
> 6. Enables LOCKED_ACCESS for RCU.
>
> Tested by 0day and I also did a simple test on x86: build and boot a
> kernel with RCU_LOCKED_ACCESS=y and CONFIG_PROVE_LOCKING=y and ran
> several workloads(kernel building, git cloning, dbench), no problem has
> been observed, and /proc/locked_access/rcu was able to collect the
> relationships between ~300 RCU read-critical sections and ~500
> rcu_dereference*().
>
> Snippets of /proc/locked_access/rcu are as follow:
>
> ...(this rcu_dereference() happens after one rcu_read_lock())
> ...
> ACQCHAIN 0xfdbf0c6aeea, 1 locks, irq_context 0:
> LOCK at [<ffffffff812b1115>] get_proc_task_net+0x5/0x140
> ACCESS TYPE 1 at kernel/pid.c:441
> ...
> ...(this rcu_dereference() happens after three rcu_read_lock())
> ...
> ACQCHAIN 0xfe042af3bbfb2605, 3 locks, irq_context 0:
> LOCK at [<ffffffff81094b47>] SyS_kill+0x97/0x2a0
> LOCK at [<ffffffff8109286f>] kill_pid_info+0x1f/0x140
> LOCK at [<ffffffff81092605>] group_send_sig_info+0x5/0x130
> ACCESS TYPE 1 at kernel/signal.c:695
> ...
>
> Looking forwards to any suggestion, comment and question ;-)
>
> Regards,
> Boqun
>
* Re: [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section
2016-02-03 16:45 ` [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section Boqun Feng
@ 2016-02-16 1:05 ` Paul E. McKenney
2016-02-16 3:01 ` Boqun Feng
0 siblings, 1 reply; 13+ messages in thread
From: Paul E. McKenney @ 2016-02-16 1:05 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin
On Thu, Feb 04, 2016 at 12:45:12AM +0800, Boqun Feng wrote:
> The variables protected by an RCU read-side critical section are
> sometimes hard to figure out, especially when the critical section is
> long or has some function calls in it. However, figuring out which
> variable a RCU read-side critical section protects could save
> us a lot of time for code reviewing, bug fixing or performance tuning.
>
> This patch therefore uses the LOCKED_ACCESS to collect the information
> of relationship between rcu_dereference*() and rcu_read_lock*() by
> doing:
>
> Step 0: define a locked_access_class for RCU.
>
> Step 1: set the content of rcu_*_lock_key and __srcu_key to the
> address of the locked_access_class for RCU.
>
> Step 2: add locked_access_point() in __rcu_dereference_check()
>
> After that we can figure out not only in which RCU read-side critical
> section but also after which rcu_read_lock*() called an
> rcu_dereference*() is called.
>
> This feature is controlled by a config option RCU_LOCKED_ACCESS.
>
> Also clean up the initialization code of lockdep_maps for different
> flavors of RCU a little bit.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> include/linux/rcupdate.h | 8 ++++++++
> include/linux/srcu.h | 8 +++++++-
> kernel/locking/lockdep.c | 3 +++
> kernel/rcu/update.c | 31 +++++++++++++++++--------------
> lib/Kconfig.debug | 12 ++++++++++++
> 5 files changed, 47 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 14e6f47..4bab658 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -610,6 +610,13 @@ static inline void rcu_preempt_sleep_check(void)
> #define rcu_dereference_sparse(p, space)
> #endif /* #else #ifdef __CHECKER__ */
>
> +#ifdef CONFIG_RCU_LOCKED_ACCESS
> +extern struct locked_access_class rcu_laclass;
> +#define rcu_dereference_access() \
> + locked_access_point(&rcu_laclass, LOCKED_ACCESS_TYPE_READ)
> +#else /* #ifdef CONFIG_LOCKED_ACCESS */
> +#define rcu_dereference_access()
> +#endif /* #else #ifdef CONFIG_LOCKED_ACCESS */
> #define __rcu_access_pointer(p, space) \
> ({ \
> typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
> @@ -622,6 +629,7 @@ static inline void rcu_preempt_sleep_check(void)
> typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
> RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
> rcu_dereference_sparse(p, space); \
> + rcu_dereference_access(); \
> ((typeof(*p) __force __kernel *)(________p1)); \
> })
> #define __rcu_dereference_protected(p, c, space) \
> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> index f5f80c5..ff048a2 100644
> --- a/include/linux/srcu.h
> +++ b/include/linux/srcu.h
> @@ -64,12 +64,18 @@ struct srcu_struct {
>
> #ifdef CONFIG_DEBUG_LOCK_ALLOC
>
> +# ifdef CONFIG_RCU_LOCKED_ACCESS
> +# define RCU_KEY { .laclass = &rcu_laclass }
> +# else
> +# define RCU_KEY { 0 }
> +# endif
> +
> int __init_srcu_struct(struct srcu_struct *sp, const char *name,
> struct lock_class_key *key);
>
> #define init_srcu_struct(sp) \
> ({ \
> - static struct lock_class_key __srcu_key; \
> + static struct lock_class_key __srcu_key = RCU_KEY; \
This gets me the following for the TASKS01, TINY02, TREE02, TREE05,
TREE06, and TREE08 configurations:
CC mm/mmap.o
In file included from /home/paulmck/public_git/linux-rcu/include/linux/notifier.h:15:0,
from /home/paulmck/public_git/linux-rcu/arch/x86/include/asm/kdebug.h:4,
from /home/paulmck/public_git/linux-rcu/include/linux/kdebug.h:4,
from /home/paulmck/public_git/linux-rcu/kernel/notifier.c:1:
/home/paulmck/public_git/linux-rcu/kernel/notifier.c: In function ‘srcu_init_notifier_head’:
/home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: missing braces around initializer [-Wmissing-braces]
static struct lock_class_key __srcu_key = RCU_KEY; \
^
/home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
if (init_srcu_struct(&nh->srcu) < 0)
^
/home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: (near initialization for ‘__srcu_key.subkeys’) [-Wmissing-braces]
static struct lock_class_key __srcu_key = RCU_KEY; \
^
/home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
if (init_srcu_struct(&nh->srcu) < 0)
^
These have the following in their .config files:
CONFIG_DEBUG_LOCK_ALLOC=y
However, none of your new Kconfig options are set, which needs to be
tested because this will be the common case. Several of them have
CONFIG_PROVE_LOCKING=y, but others do not.
I have dequeued these for the moment. Please send an updated patch
series when you have this fixed.
Thanx, Paul
> \
> __init_srcu_struct((sp), #sp, &__srcu_key); \
> })
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 996c2d5..36bd0cc 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -4623,5 +4623,8 @@ void locked_access(struct locked_access_class *laclass,
> correlate_locked_access(laclass, acqchain, loc, type);
> }
> EXPORT_SYMBOL(locked_access);
> +#ifdef CONFIG_RCU_LOCKED_ACCESS
> +DEFINE_LACLASS(rcu);
> +#endif
>
> #endif /* CONFIG_LOCKED_ACCESS */
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index 76b94e1..ba2ded6 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -235,20 +235,23 @@ EXPORT_SYMBOL_GPL(__rcu_read_unlock);
> #endif /* #ifdef CONFIG_PREEMPT_RCU */
>
> #ifdef CONFIG_DEBUG_LOCK_ALLOC
> -static struct lock_class_key rcu_lock_key;
> -struct lockdep_map rcu_lock_map =
> - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock", &rcu_lock_key);
> -EXPORT_SYMBOL_GPL(rcu_lock_map);
> -
> -static struct lock_class_key rcu_bh_lock_key;
> -struct lockdep_map rcu_bh_lock_map =
> - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_bh", &rcu_bh_lock_key);
> -EXPORT_SYMBOL_GPL(rcu_bh_lock_map);
> -
> -static struct lock_class_key rcu_sched_lock_key;
> -struct lockdep_map rcu_sched_lock_map =
> - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_sched", &rcu_sched_lock_key);
> -EXPORT_SYMBOL_GPL(rcu_sched_lock_map);
> +
> +# ifdef CONFIG_RCU_LOCKED_ACCESS
> +# define RCU_KEY { .laclass = &rcu_laclass }
> +# else
> +# define RCU_KEY { 0 }
> +# endif
> +
> +#define DEFINE_RCU_LOCKDEP_MAP(flavor) \
> + static struct lock_class_key rcu##flavor##_lock_key = RCU_KEY; \
> + struct lockdep_map rcu##flavor##_lock_map = \
> + STATIC_LOCKDEP_MAP_INIT("rcu" #flavor "_read_lock", \
> + &rcu##flavor##_lock_key); \
> + EXPORT_SYMBOL_GPL(rcu##flavor##_lock_map)
> +
> +DEFINE_RCU_LOCKDEP_MAP();
> +DEFINE_RCU_LOCKDEP_MAP(_bh);
> +DEFINE_RCU_LOCKDEP_MAP(_sched);
>
> static struct lock_class_key rcu_callback_key;
> struct lockdep_map rcu_callback_map =
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 178f288..b6003d4 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1410,6 +1410,18 @@ config RCU_EQS_DEBUG
> Say N here if you need ultimate kernel/user switch latencies
> Say Y if you are unsure
>
> +config RCU_LOCKED_ACCESS
> + bool "Track data access in RCU read-side critical sections"
> + depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
> + select LOCKED_ACCESS
> + default n
> + help
> + Track data acces in RCU read-side critical sections,
> + by doing so, one can examine which rcu_dereference() and its
> + friends are called in which RCU read-side critical sections,
> + and even more detailed, after which rcu_read_lock() and
> + its friends are called.
> +
> endmenu # "RCU Debugging"
>
> config DEBUG_BLOCK_EXT_DEVT
> --
> 2.7.0
>
* Re: [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section
2016-02-16 1:05 ` Paul E. McKenney
@ 2016-02-16 3:01 ` Boqun Feng
2016-02-16 4:08 ` Paul E. McKenney
2016-02-16 5:55 ` Josh Triplett
0 siblings, 2 replies; 13+ messages in thread
From: Boqun Feng @ 2016-02-16 3:01 UTC (permalink / raw)
To: Paul E. McKenney
Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin
On Mon, Feb 15, 2016 at 05:05:53PM -0800, Paul E. McKenney wrote:
> On Thu, Feb 04, 2016 at 12:45:12AM +0800, Boqun Feng wrote:
> > The variables protected by an RCU read-side critical section are
> > sometimes hard to figure out, especially when the critical section is
> > long or has some function calls in it. However, figuring out which
> > variable a RCU read-side critical section protects could save
> > us a lot of time for code reviewing, bug fixing or performance tuning.
> >
> > This patch therefore uses the LOCKED_ACCESS to collect the information
> > of relationship between rcu_dereference*() and rcu_read_lock*() by
> > doing:
> >
> > Step 0: define a locked_access_class for RCU.
> >
> > Step 1: set the content of rcu_*_lock_key and __srcu_key to the
> > address of the locked_access_class for RCU.
> >
> > Step 2: add locked_access_point() in __rcu_dereference_check()
> >
> > After that we can figure out not only in which RCU read-side critical
> > section but also after which rcu_read_lock*() called an
> > rcu_dereference*() is called.
> >
> > This feature is controlled by a config option RCU_LOCKED_ACCESS.
> >
> > Also clean up the initialization code of lockdep_maps for different
> > flavors of RCU a little bit.
> >
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> > include/linux/rcupdate.h | 8 ++++++++
> > include/linux/srcu.h | 8 +++++++-
> > kernel/locking/lockdep.c | 3 +++
> > kernel/rcu/update.c | 31 +++++++++++++++++--------------
> > lib/Kconfig.debug | 12 ++++++++++++
> > 5 files changed, 47 insertions(+), 15 deletions(-)
> >
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 14e6f47..4bab658 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -610,6 +610,13 @@ static inline void rcu_preempt_sleep_check(void)
> > #define rcu_dereference_sparse(p, space)
> > #endif /* #else #ifdef __CHECKER__ */
> >
> > +#ifdef CONFIG_RCU_LOCKED_ACCESS
> > +extern struct locked_access_class rcu_laclass;
> > +#define rcu_dereference_access() \
> > + locked_access_point(&rcu_laclass, LOCKED_ACCESS_TYPE_READ)
> > +#else /* #ifdef CONFIG_LOCKED_ACCESS */
> > +#define rcu_dereference_access()
> > +#endif /* #else #ifdef CONFIG_LOCKED_ACCESS */
> > #define __rcu_access_pointer(p, space) \
> > ({ \
> > typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
> > @@ -622,6 +629,7 @@ static inline void rcu_preempt_sleep_check(void)
> > typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
> > RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
> > rcu_dereference_sparse(p, space); \
> > + rcu_dereference_access(); \
> > ((typeof(*p) __force __kernel *)(________p1)); \
> > })
> > #define __rcu_dereference_protected(p, c, space) \
> > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > index f5f80c5..ff048a2 100644
> > --- a/include/linux/srcu.h
> > +++ b/include/linux/srcu.h
> > @@ -64,12 +64,18 @@ struct srcu_struct {
> >
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> >
> > +# ifdef CONFIG_RCU_LOCKED_ACCESS
> > +# define RCU_KEY { .laclass = &rcu_laclass }
> > +# else
> > +# define RCU_KEY { 0 }
> > +# endif
> > +
> > int __init_srcu_struct(struct srcu_struct *sp, const char *name,
> > struct lock_class_key *key);
> >
> > #define init_srcu_struct(sp) \
> > ({ \
> > - static struct lock_class_key __srcu_key; \
> > + static struct lock_class_key __srcu_key = RCU_KEY; \
>
> This gets me the following for the TASKS01, TINY02, TREE02, TREE05,
> TREE06, and TREE08 configurations:
>
> CC mm/mmap.o
> In file included from /home/paulmck/public_git/linux-rcu/include/linux/notifier.h:15:0,
> from /home/paulmck/public_git/linux-rcu/arch/x86/include/asm/kdebug.h:4,
> from /home/paulmck/public_git/linux-rcu/include/linux/kdebug.h:4,
> from /home/paulmck/public_git/linux-rcu/kernel/notifier.c:1:
> /home/paulmck/public_git/linux-rcu/kernel/notifier.c: In function ‘srcu_init_notifier_head’:
> /home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: missing braces around initializer [-Wmissing-braces]
> static struct lock_class_key __srcu_key = RCU_KEY; \
> ^
> /home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
> if (init_srcu_struct(&nh->srcu) < 0)
> ^
> /home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: (near initialization for ‘__srcu_key.subkeys’) [-Wmissing-braces]
> static struct lock_class_key __srcu_key = RCU_KEY; \
> ^
> /home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
> if (init_srcu_struct(&nh->srcu) < 0)
> ^
>
> These have the following in their .config files:
>
> CONFIG_DEBUG_LOCK_ALLOC=y
>
> However, none of your new Kconfig options are set, which needs to be
> tested because this will be the common case. Several of them have
> CONFIG_PROVE_LOCKING=y, but others do not.
>
Oh.. this is embarrassing ;-(
Hmm.. when CONFIG_RCU_LOCKED_ACCESS=n and CONFIG_DEBUG_LOCK_ALLOC=y,
this line becomes:
static struct lock_class_key __srcu_key = { 0 };
IIUC, "= { 0 }" is the universal zero initializer in C, which can be used
for zero-initializing any structure or array, so it's OK here, right?
And may I ask which compiler version you are using? It looks to me like
this may be a compiler bug, according to:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53119
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64709
Also I can't reproduce this with CONFIG_RCU_LOCKED_ACCESS=n and
CONFIG_DEBUG_LOCK_ALLOC=y on my machine, whose gcc version is 5.3.0.
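To make it easy to check on other compilers, here is a reduced,
stand-alone test of the pattern (the names below are made up for
illustration; this is not the kernel's actual lock_class_key):

/* A struct whose first member is an array, like lock_class_key. */
struct subkey { char pad; };

struct key {
        struct subkey subkeys[8];
};

static struct key k1 = { 0 };           /* universal zero initializer */
static struct key k2 = { { { 0 } } };   /* fully braced equivalent */
static struct key k3;                   /* static, so zero-initialized anyway */

int main(void)
{
        return k1.subkeys[0].pad + k2.subkeys[0].pad + k3.subkeys[0].pad;
}

If the bugzilla entries above apply, only the k1 form should trigger
-Wmissing-braces, and only on the affected gcc versions; k2 and k3
should always build cleanly with -Wall.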
Of course, I could do this differently and avoid the "= { 0 }" here
entirely, because __srcu_key is static; I just want to make sure we
understand what's going on before resorting to a workaround, or to find
out whether I'm simply a bad C programmer ;-)
Regards,
Boqun
> I have dequeued these for the moment. Please send an updated patch
> series when you have this fixed.
>
> Thanx, Paul
>
> > \
> > __init_srcu_struct((sp), #sp, &__srcu_key); \
> > })
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index 996c2d5..36bd0cc 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -4623,5 +4623,8 @@ void locked_access(struct locked_access_class *laclass,
> > correlate_locked_access(laclass, acqchain, loc, type);
> > }
> > EXPORT_SYMBOL(locked_access);
> > +#ifdef CONFIG_RCU_LOCKED_ACCESS
> > +DEFINE_LACLASS(rcu);
> > +#endif
> >
> > #endif /* CONFIG_LOCKED_ACCESS */
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index 76b94e1..ba2ded6 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -235,20 +235,23 @@ EXPORT_SYMBOL_GPL(__rcu_read_unlock);
> > #endif /* #ifdef CONFIG_PREEMPT_RCU */
> >
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > -static struct lock_class_key rcu_lock_key;
> > -struct lockdep_map rcu_lock_map =
> > - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock", &rcu_lock_key);
> > -EXPORT_SYMBOL_GPL(rcu_lock_map);
> > -
> > -static struct lock_class_key rcu_bh_lock_key;
> > -struct lockdep_map rcu_bh_lock_map =
> > - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_bh", &rcu_bh_lock_key);
> > -EXPORT_SYMBOL_GPL(rcu_bh_lock_map);
> > -
> > -static struct lock_class_key rcu_sched_lock_key;
> > -struct lockdep_map rcu_sched_lock_map =
> > - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_sched", &rcu_sched_lock_key);
> > -EXPORT_SYMBOL_GPL(rcu_sched_lock_map);
> > +
> > +# ifdef CONFIG_RCU_LOCKED_ACCESS
> > +# define RCU_KEY { .laclass = &rcu_laclass }
> > +# else
> > +# define RCU_KEY { 0 }
> > +# endif
> > +
> > +#define DEFINE_RCU_LOCKDEP_MAP(flavor) \
> > + static struct lock_class_key rcu##flavor##_lock_key = RCU_KEY; \
> > + struct lockdep_map rcu##flavor##_lock_map = \
> > + STATIC_LOCKDEP_MAP_INIT("rcu" #flavor "_read_lock", \
> > + &rcu##flavor##_lock_key); \
> > + EXPORT_SYMBOL_GPL(rcu##flavor##_lock_map)
> > +
> > +DEFINE_RCU_LOCKDEP_MAP();
> > +DEFINE_RCU_LOCKDEP_MAP(_bh);
> > +DEFINE_RCU_LOCKDEP_MAP(_sched);
> >
> > static struct lock_class_key rcu_callback_key;
> > struct lockdep_map rcu_callback_map =
> > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > index 178f288..b6003d4 100644
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -1410,6 +1410,18 @@ config RCU_EQS_DEBUG
> > Say N here if you need ultimate kernel/user switch latencies
> > Say Y if you are unsure
> >
> > +config RCU_LOCKED_ACCESS
> > + bool "Track data access in RCU read-side critical sections"
> > + depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
> > + select LOCKED_ACCESS
> > + default n
> > + help
> > + Track data accesses in RCU read-side critical sections.
> > + This lets one examine in which RCU read-side critical section
> > + each rcu_dereference() (or one of its friends) is called, and,
> > + in more detail, after which rcu_read_lock() (or one of its
> > + friends) it is called.
> > +
> > endmenu # "RCU Debugging"
> >
> > config DEBUG_BLOCK_EXT_DEVT
> > --
> > 2.7.0
> >
>
* Re: [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section
2016-02-16 3:01 ` Boqun Feng
@ 2016-02-16 4:08 ` Paul E. McKenney
2016-02-16 4:59 ` Boqun Feng
2016-02-16 5:55 ` Josh Triplett
1 sibling, 1 reply; 13+ messages in thread
From: Paul E. McKenney @ 2016-02-16 4:08 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin
On Tue, Feb 16, 2016 at 11:01:43AM +0800, Boqun Feng wrote:
> On Mon, Feb 15, 2016 at 05:05:53PM -0800, Paul E. McKenney wrote:
> > On Thu, Feb 04, 2016 at 12:45:12AM +0800, Boqun Feng wrote:
> > > The variables protected by an RCU read-side critical section are
> > > sometimes hard to figure out, especially when the critical section is
> > > long or has some function calls in it. However, figuring out which
> > > variable an RCU read-side critical section protects could save
> > > us a lot of time for code reviewing, bug fixing or performance tuning.
> > >
> > > This patch therefore uses LOCKED_ACCESS to collect information about
> > > the relationship between rcu_dereference*() and rcu_read_lock*() by
> > > doing:
> > >
> > > Step 0: define a locked_access_class for RCU.
> > >
> > > Step 1: set the content of rcu_*_lock_key and __srcu_key to the
> > > address of the locked_access_class for RCU.
> > >
> > > Step 2: add locked_access_point() in __rcu_dereference_check()
> > >
> > > After that, we can figure out not only in which RCU read-side critical
> > > section an rcu_dereference*() is called, but also after which
> > > rcu_read_lock*() call.
> > >
> > > This feature is controlled by a config option RCU_LOCKED_ACCESS.
> > >
> > > Also clean up the initialization code of lockdep_maps for different
> > > flavors of RCU a little bit.
> > >
> > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > ---
> > > include/linux/rcupdate.h | 8 ++++++++
> > > include/linux/srcu.h | 8 +++++++-
> > > kernel/locking/lockdep.c | 3 +++
> > > kernel/rcu/update.c | 31 +++++++++++++++++--------------
> > > lib/Kconfig.debug | 12 ++++++++++++
> > > 5 files changed, 47 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > index 14e6f47..4bab658 100644
> > > --- a/include/linux/rcupdate.h
> > > +++ b/include/linux/rcupdate.h
> > > @@ -610,6 +610,13 @@ static inline void rcu_preempt_sleep_check(void)
> > > #define rcu_dereference_sparse(p, space)
> > > #endif /* #else #ifdef __CHECKER__ */
> > >
> > > +#ifdef CONFIG_RCU_LOCKED_ACCESS
> > > +extern struct locked_access_class rcu_laclass;
> > > +#define rcu_dereference_access() \
> > > + locked_access_point(&rcu_laclass, LOCKED_ACCESS_TYPE_READ)
> > > +#else /* #ifdef CONFIG_LOCKED_ACCESS */
> > > +#define rcu_dereference_access()
> > > +#endif /* #else #ifdef CONFIG_LOCKED_ACCESS */
> > > #define __rcu_access_pointer(p, space) \
> > > ({ \
> > > typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
> > > @@ -622,6 +629,7 @@ static inline void rcu_preempt_sleep_check(void)
> > > typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
> > > RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
> > > rcu_dereference_sparse(p, space); \
> > > + rcu_dereference_access(); \
> > > ((typeof(*p) __force __kernel *)(________p1)); \
> > > })
> > > #define __rcu_dereference_protected(p, c, space) \
> > > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > > index f5f80c5..ff048a2 100644
> > > --- a/include/linux/srcu.h
> > > +++ b/include/linux/srcu.h
> > > @@ -64,12 +64,18 @@ struct srcu_struct {
> > >
> > > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > >
> > > +# ifdef CONFIG_RCU_LOCKED_ACCESS
> > > +# define RCU_KEY { .laclass = &rcu_laclass }
> > > +# else
> > > +# define RCU_KEY { 0 }
> > > +# endif
> > > +
> > > int __init_srcu_struct(struct srcu_struct *sp, const char *name,
> > > struct lock_class_key *key);
> > >
> > > #define init_srcu_struct(sp) \
> > > ({ \
> > > - static struct lock_class_key __srcu_key; \
> > > + static struct lock_class_key __srcu_key = RCU_KEY; \
> >
> > This gets me the following for the TASKS01, TINY02, TREE02, TREE05,
> > TREE06, and TREE08 configurations:
> >
> > CC mm/mmap.o
> > In file included from /home/paulmck/public_git/linux-rcu/include/linux/notifier.h:15:0,
> > from /home/paulmck/public_git/linux-rcu/arch/x86/include/asm/kdebug.h:4,
> > from /home/paulmck/public_git/linux-rcu/include/linux/kdebug.h:4,
> > from /home/paulmck/public_git/linux-rcu/kernel/notifier.c:1:
> > /home/paulmck/public_git/linux-rcu/kernel/notifier.c: In function ‘srcu_init_notifier_head’:
> > /home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: missing braces around initializer [-Wmissing-braces]
> > static struct lock_class_key __srcu_key = RCU_KEY; \
> > ^
> > /home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
> > if (init_srcu_struct(&nh->srcu) < 0)
> > ^
> > /home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: (near initialization for ‘__srcu_key.subkeys’) [-Wmissing-braces]
> > static struct lock_class_key __srcu_key = RCU_KEY; \
> > ^
> > /home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
> > if (init_srcu_struct(&nh->srcu) < 0)
> > ^
> >
> > These have the following in their .config files:
> >
> > CONFIG_DEBUG_LOCK_ALLOC=y
> >
> > However, none of your new Kconfig options are set, which needs to be
> > tested because this will be the common case. Several of them have
> > CONFIG_PROVE_LOCKING=y, but others do not.
> >
>
> Oh.. this is embarrassing ;-(
>
> Hmm.. when CONFIG_RCU_LOCKED_ACCESS=n and CONFIG_DEBUG_LOCK_ALLOC=y,
> this line becomes:
>
> static struct lock_class_key __srcu_key = { 0 };
>
> IIUC, "= { 0 }" is the universal zero initializer in C, which can be used
> for zero-initializing any structure or array, so it's OK here, right?
>
> And may I ask which compiler version you are using? It looks to me like
> this may be a compiler bug, according to:
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53119
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64709
gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) (KSIC)
> Also I can't reproduce this with CONFIG_RCU_LOCKED_ACCESS=n and
> CONFIG_DEBUG_LOCK_ALLOC=y on my machine, whose gcc version is 5.3.0.
>
> Of course, I could do this differently and avoid the "= { 0 }" here
> entirely, because __srcu_key is static; I just want to make sure we
> understand what's going on before resorting to a workaround, or to find
> out whether I'm simply a bad C programmer ;-)
Well, it is quite possible that your code is correct in theory, but
we do need to make it so that compilers can deal with it. It would
not be the first compiler-bug workaround in RCU...
Thanx, Paul
> Regards,
> Boqun
>
> > I have dequeued these for the moment. Please send an updated patch
> > series when you have this fixed.
> >
> > Thanx, Paul
> >
> > > \
> > > __init_srcu_struct((sp), #sp, &__srcu_key); \
> > > })
> > > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > > index 996c2d5..36bd0cc 100644
> > > --- a/kernel/locking/lockdep.c
> > > +++ b/kernel/locking/lockdep.c
> > > @@ -4623,5 +4623,8 @@ void locked_access(struct locked_access_class *laclass,
> > > correlate_locked_access(laclass, acqchain, loc, type);
> > > }
> > > EXPORT_SYMBOL(locked_access);
> > > +#ifdef CONFIG_RCU_LOCKED_ACCESS
> > > +DEFINE_LACLASS(rcu);
> > > +#endif
> > >
> > > #endif /* CONFIG_LOCKED_ACCESS */
> > > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > > index 76b94e1..ba2ded6 100644
> > > --- a/kernel/rcu/update.c
> > > +++ b/kernel/rcu/update.c
> > > @@ -235,20 +235,23 @@ EXPORT_SYMBOL_GPL(__rcu_read_unlock);
> > > #endif /* #ifdef CONFIG_PREEMPT_RCU */
> > >
> > > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > > -static struct lock_class_key rcu_lock_key;
> > > -struct lockdep_map rcu_lock_map =
> > > - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock", &rcu_lock_key);
> > > -EXPORT_SYMBOL_GPL(rcu_lock_map);
> > > -
> > > -static struct lock_class_key rcu_bh_lock_key;
> > > -struct lockdep_map rcu_bh_lock_map =
> > > - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_bh", &rcu_bh_lock_key);
> > > -EXPORT_SYMBOL_GPL(rcu_bh_lock_map);
> > > -
> > > -static struct lock_class_key rcu_sched_lock_key;
> > > -struct lockdep_map rcu_sched_lock_map =
> > > - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_sched", &rcu_sched_lock_key);
> > > -EXPORT_SYMBOL_GPL(rcu_sched_lock_map);
> > > +
> > > +# ifdef CONFIG_RCU_LOCKED_ACCESS
> > > +# define RCU_KEY { .laclass = &rcu_laclass }
> > > +# else
> > > +# define RCU_KEY { 0 }
> > > +# endif
> > > +
> > > +#define DEFINE_RCU_LOCKDEP_MAP(flavor) \
> > > + static struct lock_class_key rcu##flavor##_lock_key = RCU_KEY; \
> > > + struct lockdep_map rcu##flavor##_lock_map = \
> > > + STATIC_LOCKDEP_MAP_INIT("rcu" #flavor "_read_lock", \
> > > + &rcu##flavor##_lock_key); \
> > > + EXPORT_SYMBOL_GPL(rcu##flavor##_lock_map)
> > > +
> > > +DEFINE_RCU_LOCKDEP_MAP();
> > > +DEFINE_RCU_LOCKDEP_MAP(_bh);
> > > +DEFINE_RCU_LOCKDEP_MAP(_sched);
> > >
> > > static struct lock_class_key rcu_callback_key;
> > > struct lockdep_map rcu_callback_map =
> > > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > > index 178f288..b6003d4 100644
> > > --- a/lib/Kconfig.debug
> > > +++ b/lib/Kconfig.debug
> > > @@ -1410,6 +1410,18 @@ config RCU_EQS_DEBUG
> > > Say N here if you need ultimate kernel/user switch latencies
> > > Say Y if you are unsure
> > >
> > > +config RCU_LOCKED_ACCESS
> > > + bool "Track data access in RCU read-side critical sections"
> > > + depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
> > > + select LOCKED_ACCESS
> > > + default n
> > > + help
> > > + Track data accesses in RCU read-side critical sections.
> > > + This lets one examine in which RCU read-side critical section
> > > + each rcu_dereference() (or one of its friends) is called, and,
> > > + in more detail, after which rcu_read_lock() (or one of its
> > > + friends) it is called.
> > > +
> > > endmenu # "RCU Debugging"
> > >
> > > config DEBUG_BLOCK_EXT_DEVT
> > > --
> > > 2.7.0
> > >
> >
* Re: [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section
2016-02-16 4:08 ` Paul E. McKenney
@ 2016-02-16 4:59 ` Boqun Feng
0 siblings, 0 replies; 13+ messages in thread
From: Boqun Feng @ 2016-02-16 4:59 UTC (permalink / raw)
To: Paul E. McKenney
Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Josh Triplett,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin
On Mon, Feb 15, 2016 at 08:08:40PM -0800, Paul E. McKenney wrote:
> On Tue, Feb 16, 2016 at 11:01:43AM +0800, Boqun Feng wrote:
> > On Mon, Feb 15, 2016 at 05:05:53PM -0800, Paul E. McKenney wrote:
> > > On Thu, Feb 04, 2016 at 12:45:12AM +0800, Boqun Feng wrote:
> > > > The variables protected by an RCU read-side critical section are
> > > > sometimes hard to figure out, especially when the critical section is
> > > > long or has some function calls in it. However, figuring out which
> > > > variable an RCU read-side critical section protects could save
> > > > us a lot of time for code reviewing, bug fixing or performance tuning.
> > > >
> > > > This patch therefore uses LOCKED_ACCESS to collect information about
> > > > the relationship between rcu_dereference*() and rcu_read_lock*() by
> > > > doing:
> > > >
> > > > Step 0: define a locked_access_class for RCU.
> > > >
> > > > Step 1: set the content of rcu_*_lock_key and __srcu_key to the
> > > > address of the locked_access_class for RCU.
> > > >
> > > > Step 2: add locked_access_point() in __rcu_dereference_check()
> > > >
> > > > After that, we can figure out not only in which RCU read-side critical
> > > > section an rcu_dereference*() is called, but also after which
> > > > rcu_read_lock*() call.
> > > >
> > > > This feature is controlled by a config option RCU_LOCKED_ACCESS.
> > > >
> > > > Also clean up the initialization code of lockdep_maps for different
> > > > flavors of RCU a little bit.
> > > >
> > > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > > ---
> > > > include/linux/rcupdate.h | 8 ++++++++
> > > > include/linux/srcu.h | 8 +++++++-
> > > > kernel/locking/lockdep.c | 3 +++
> > > > kernel/rcu/update.c | 31 +++++++++++++++++--------------
> > > > lib/Kconfig.debug | 12 ++++++++++++
> > > > 5 files changed, 47 insertions(+), 15 deletions(-)
> > > >
> > > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > > index 14e6f47..4bab658 100644
> > > > --- a/include/linux/rcupdate.h
> > > > +++ b/include/linux/rcupdate.h
> > > > @@ -610,6 +610,13 @@ static inline void rcu_preempt_sleep_check(void)
> > > > #define rcu_dereference_sparse(p, space)
> > > > #endif /* #else #ifdef __CHECKER__ */
> > > >
> > > > +#ifdef CONFIG_RCU_LOCKED_ACCESS
> > > > +extern struct locked_access_class rcu_laclass;
> > > > +#define rcu_dereference_access() \
> > > > + locked_access_point(&rcu_laclass, LOCKED_ACCESS_TYPE_READ)
> > > > +#else /* #ifdef CONFIG_LOCKED_ACCESS */
> > > > +#define rcu_dereference_access()
> > > > +#endif /* #else #ifdef CONFIG_LOCKED_ACCESS */
> > > > #define __rcu_access_pointer(p, space) \
> > > > ({ \
> > > > typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
> > > > @@ -622,6 +629,7 @@ static inline void rcu_preempt_sleep_check(void)
> > > > typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
> > > > RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
> > > > rcu_dereference_sparse(p, space); \
> > > > + rcu_dereference_access(); \
> > > > ((typeof(*p) __force __kernel *)(________p1)); \
> > > > })
> > > > #define __rcu_dereference_protected(p, c, space) \
> > > > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > > > index f5f80c5..ff048a2 100644
> > > > --- a/include/linux/srcu.h
> > > > +++ b/include/linux/srcu.h
> > > > @@ -64,12 +64,18 @@ struct srcu_struct {
> > > >
> > > > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > > >
> > > > +# ifdef CONFIG_RCU_LOCKED_ACCESS
> > > > +# define RCU_KEY { .laclass = &rcu_laclass }
> > > > +# else
> > > > +# define RCU_KEY { 0 }
> > > > +# endif
> > > > +
> > > > int __init_srcu_struct(struct srcu_struct *sp, const char *name,
> > > > struct lock_class_key *key);
> > > >
> > > > #define init_srcu_struct(sp) \
> > > > ({ \
> > > > - static struct lock_class_key __srcu_key; \
> > > > + static struct lock_class_key __srcu_key = RCU_KEY; \
> > >
> > > This gets me the following for the TASKS01, TINY02, TREE02, TREE05,
> > > TREE06, and TREE08 configurations:
> > >
> > > CC mm/mmap.o
> > > In file included from /home/paulmck/public_git/linux-rcu/include/linux/notifier.h:15:0,
> > > from /home/paulmck/public_git/linux-rcu/arch/x86/include/asm/kdebug.h:4,
> > > from /home/paulmck/public_git/linux-rcu/include/linux/kdebug.h:4,
> > > from /home/paulmck/public_git/linux-rcu/kernel/notifier.c:1:
> > > /home/paulmck/public_git/linux-rcu/kernel/notifier.c: In function ‘srcu_init_notifier_head’:
> > > /home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: missing braces around initializer [-Wmissing-braces]
> > > static struct lock_class_key __srcu_key = RCU_KEY; \
> > > ^
> > > /home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
> > > if (init_srcu_struct(&nh->srcu) < 0)
> > > ^
> > > /home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: (near initialization for ‘__srcu_key.subkeys’) [-Wmissing-braces]
> > > static struct lock_class_key __srcu_key = RCU_KEY; \
> > > ^
> > > /home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
> > > if (init_srcu_struct(&nh->srcu) < 0)
> > > ^
> > >
> > > These have the following in their .config files:
> > >
> > > CONFIG_DEBUG_LOCK_ALLOC=y
> > >
> > > However, none of your new Kconfig options are set, which needs to be
> > > tested because this will be the common case. Several of them have
> > > CONFIG_PROVE_LOCKING=y, but others do not.
> > >
> >
> > Oh.. this is embarrassing ;-(
> >
> > Hmm.. when CONFIG_RCU_LOCKED_ACCESS=n and CONFIG_DEBUG_LOCK_ALLOC=y,
> > this line becomes:
> >
> > static struct lock_class_key __srcu_key = { 0 };
> >
> > IIUC, "= { 0 }" is the universal zero initializer in C, which can be used
> > for zero-initializing any structure or array, so it's OK here, right?
> >
> > And may I ask which compiler version you are using? It looks to me like
> > this may be a compiler bug, according to:
> >
> > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53119
> > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64709
>
> gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) (KSIC)
>
> > Also I can't reproduce this with CONFIG_RCU_LOCKED_ACCESS=n and
> > CONFIG_DEBUG_LOCK_ALLOC=y on my machine, whose gcc version is 5.3.0.
> >
> > Of course, I could do this differently and avoid the "= { 0 }" here
> > entirely, because __srcu_key is static; I just want to make sure we
> > understand what's going on before resorting to a workaround, or to find
> > out whether I'm simply a bad C programmer ;-)
>
> Well, it is quite possible that your code is correct in theory, but
> we do need to make it so that compilers can deal with it. It would
> not be the first compiler-bug workaround in RCU...
>
Fair enough!
> Thanx, Paul
>
> > Regards,
> > Boqun
> >
> > > I have dequeued these for the moment. Please send an updated patch
> > > series when you have this fixed.
Moreover, I hit some "defined but not used" warnings with
CONFIG_DEBUG_LOCK_ALLOC=y and CONFIG_RCU_LOCKED_ACCESS=n; I will fix
those too and send out a new series.
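Those are mechanical to fix: either move each such definition under the
same #ifdef that guards its only user, or annotate it (__maybe_unused in
kernel code). A stand-alone illustration, not the actual kernel symbols:

#define FEATURE 0       /* stand-in for CONFIG_RCU_LOCKED_ACCESS */

static int plain_key;                           /* gcc -Wall: "defined but not used" when FEATURE is 0 */
static int quiet_key __attribute__((unused));   /* annotation silences the warning */
#if FEATURE
static int guarded_key;                         /* alternative: only defined when actually used */
#endif

int main(void)
{
#if FEATURE
        return plain_key + quiet_key + guarded_key;
#else
        return 0;
#endif
}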
Regards,
Boqun
> > >
> > > Thanx, Paul
> > >
* Re: [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section
2016-02-16 3:01 ` Boqun Feng
2016-02-16 4:08 ` Paul E. McKenney
@ 2016-02-16 5:55 ` Josh Triplett
1 sibling, 0 replies; 13+ messages in thread
From: Josh Triplett @ 2016-02-16 5:55 UTC (permalink / raw)
To: Boqun Feng
Cc: Paul E. McKenney, linux-kernel, Peter Zijlstra, Ingo Molnar,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, sasha.levin
On Tue, Feb 16, 2016 at 11:01:43AM +0800, Boqun Feng wrote:
> On Mon, Feb 15, 2016 at 05:05:53PM -0800, Paul E. McKenney wrote:
> > On Thu, Feb 04, 2016 at 12:45:12AM +0800, Boqun Feng wrote:
> > > The variables protected by an RCU read-side critical section are
> > > sometimes hard to figure out, especially when the critical section is
> > > long or has some function calls in it. However, figuring out which
> > > variable an RCU read-side critical section protects could save
> > > us a lot of time for code reviewing, bug fixing or performance tuning.
> > >
> > > This patch therefore uses LOCKED_ACCESS to collect information about
> > > the relationship between rcu_dereference*() and rcu_read_lock*() by
> > > doing:
> > >
> > > Step 0: define a locked_access_class for RCU.
> > >
> > > Step 1: set the content of rcu_*_lock_key and __srcu_key to the
> > > address of the locked_access_class for RCU.
> > >
> > > Step 2: add locked_access_point() in __rcu_dereference_check()
> > >
> > > After that, we can figure out not only in which RCU read-side critical
> > > section an rcu_dereference*() is called, but also after which
> > > rcu_read_lock*() call.
> > >
> > > This feature is controlled by a config option RCU_LOCKED_ACCESS.
> > >
> > > Also clean up the initialization code of lockdep_maps for different
> > > flavors of RCU a little bit.
> > >
> > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > ---
> > > include/linux/rcupdate.h | 8 ++++++++
> > > include/linux/srcu.h | 8 +++++++-
> > > kernel/locking/lockdep.c | 3 +++
> > > kernel/rcu/update.c | 31 +++++++++++++++++--------------
> > > lib/Kconfig.debug | 12 ++++++++++++
> > > 5 files changed, 47 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > index 14e6f47..4bab658 100644
> > > --- a/include/linux/rcupdate.h
> > > +++ b/include/linux/rcupdate.h
> > > @@ -610,6 +610,13 @@ static inline void rcu_preempt_sleep_check(void)
> > > #define rcu_dereference_sparse(p, space)
> > > #endif /* #else #ifdef __CHECKER__ */
> > >
> > > +#ifdef CONFIG_RCU_LOCKED_ACCESS
> > > +extern struct locked_access_class rcu_laclass;
> > > +#define rcu_dereference_access() \
> > > + locked_access_point(&rcu_laclass, LOCKED_ACCESS_TYPE_READ)
> > > +#else /* #ifdef CONFIG_LOCKED_ACCESS */
> > > +#define rcu_dereference_access()
> > > +#endif /* #else #ifdef CONFIG_LOCKED_ACCESS */
> > > #define __rcu_access_pointer(p, space) \
> > > ({ \
> > > typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
> > > @@ -622,6 +629,7 @@ static inline void rcu_preempt_sleep_check(void)
> > > typeof(*p) *________p1 = (typeof(*p) *__force)lockless_dereference(p); \
> > > RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
> > > rcu_dereference_sparse(p, space); \
> > > + rcu_dereference_access(); \
> > > ((typeof(*p) __force __kernel *)(________p1)); \
> > > })
> > > #define __rcu_dereference_protected(p, c, space) \
> > > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > > index f5f80c5..ff048a2 100644
> > > --- a/include/linux/srcu.h
> > > +++ b/include/linux/srcu.h
> > > @@ -64,12 +64,18 @@ struct srcu_struct {
> > >
> > > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > >
> > > +# ifdef CONFIG_RCU_LOCKED_ACCESS
> > > +# define RCU_KEY { .laclass = &rcu_laclass }
> > > +# else
> > > +# define RCU_KEY { 0 }
> > > +# endif
> > > +
> > > int __init_srcu_struct(struct srcu_struct *sp, const char *name,
> > > struct lock_class_key *key);
> > >
> > > #define init_srcu_struct(sp) \
> > > ({ \
> > > - static struct lock_class_key __srcu_key; \
> > > + static struct lock_class_key __srcu_key = RCU_KEY; \
> >
> > This gets me the following for the TASKS01, TINY02, TREE02, TREE05,
> > TREE06, and TREE08 configurations:
> >
> > CC mm/mmap.o
> > In file included from /home/paulmck/public_git/linux-rcu/include/linux/notifier.h:15:0,
> > from /home/paulmck/public_git/linux-rcu/arch/x86/include/asm/kdebug.h:4,
> > from /home/paulmck/public_git/linux-rcu/include/linux/kdebug.h:4,
> > from /home/paulmck/public_git/linux-rcu/kernel/notifier.c:1:
> > /home/paulmck/public_git/linux-rcu/kernel/notifier.c: In function ‘srcu_init_notifier_head’:
> > /home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: missing braces around initializer [-Wmissing-braces]
> > static struct lock_class_key __srcu_key = RCU_KEY; \
> > ^
> > /home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
> > if (init_srcu_struct(&nh->srcu) < 0)
> > ^
> > /home/paulmck/public_git/linux-rcu/include/linux/srcu.h:78:16: warning: (near initialization for ‘__srcu_key.subkeys’) [-Wmissing-braces]
> > static struct lock_class_key __srcu_key = RCU_KEY; \
> > ^
> > /home/paulmck/public_git/linux-rcu/kernel/notifier.c:526:6: note: in expansion of macro ‘init_srcu_struct’
> > if (init_srcu_struct(&nh->srcu) < 0)
> > ^
> >
> > These have the following in their .config files:
> >
> > CONFIG_DEBUG_LOCK_ALLOC=y
> >
> > However, none of your new Kconfig options are set, which needs to be
> > tested because this will be the common case. Several of them have
> > CONFIG_PROVE_LOCKING=y, but others do not.
> >
>
> Oh.. this is embarrassing ;-(
>
> Hmm.. when CONFIG_RCU_LOCKED_ACCESS=n and CONFIG_DEBUG_LOCK_ALLOC=y,
> this line becomes:
>
> static struct lock_class_key __srcu_key = { 0 };
>
> IIUC, "= { 0 }" is the universal zero initializer in C, which can be used
> for zero-initializing any structure or array, so it's OK here, right?
Compilers seem touchy about that. Sometimes { 0 } works, and sometimes
{} works (that really should *always* work, but it doesn't).
However, for a static struct, you can always just leave off the
initializer.
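For example, something along these lines (only a sketch; DEFINE_SRCU_KEY()
is a made-up helper here, and .laclass is the member this series adds)
would avoid spelling out an initializer in the =n case entirely:

#ifdef CONFIG_RCU_LOCKED_ACCESS
# define DEFINE_SRCU_KEY(name) \
        static struct lock_class_key name = { .laclass = &rcu_laclass }
#else
# define DEFINE_SRCU_KEY(name) \
        static struct lock_class_key name
#endif

#define init_srcu_struct(sp) \
({ \
        DEFINE_SRCU_KEY(__srcu_key); \
        __init_srcu_struct((sp), #sp, &__srcu_key); \
})

That keeps the CONFIG_RCU_LOCKED_ACCESS=y behaviour and leaves the =n
case exactly as it is today, so there is nothing for any compiler to
complain about.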
- Josh Triplett
Thread overview: 13+ messages
2016-02-03 16:45 [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Boqun Feng
2016-02-03 16:45 ` [RFC 1/6] lockdep: Add helper functions of irq_context Boqun Feng
2016-02-03 16:45 ` [RFC 2/6] lockdep: LOCKED_ACCESS: Introduce locked access class and acqchain Boqun Feng
2016-02-03 16:45 ` [RFC 3/6] lockdep: LOCKED_ACCESS: Maintain the keys of acqchains Boqun Feng
2016-02-03 16:45 ` [RFC 4/6] lockdep: LOCKED_ACCESS: Introduce locked_access_point() Boqun Feng
2016-02-03 16:45 ` [RFC 5/6] lockdep: LOCKED_ACCESS: Add proc interface for locked access class Boqun Feng
2016-02-03 16:45 ` [RFC 6/6] RCU: Track rcu_dereference() in RCU read-side critical section Boqun Feng
2016-02-16 1:05 ` Paul E. McKenney
2016-02-16 3:01 ` Boqun Feng
2016-02-16 4:08 ` Paul E. McKenney
2016-02-16 4:59 ` Boqun Feng
2016-02-16 5:55 ` Josh Triplett
2016-02-15 18:03 ` [RFC 0/6] Track RCU dereferences in RCU read-side critical sections Paul E. McKenney