From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
	paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
	akpm@linux-foundation.org, namhyung@kernel.org
Cc: linux-arch@vger.kernel.org, linux@arm.linux.org.uk,
	nikunj@linux.vnet.ibm.com, linux-pm@vger.kernel.org, fweisbec@gmail.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	rostedt@goodmis.org, xiaoguangrong@linux.vnet.ibm.com, rjw@sisk.pl,
	sbw@mit.edu, wangyun@linux.vnet.ibm.com, srivatsa.bhat@linux.vnet.ibm.com,
	netdev@vger.kernel.org, vincent.guittot@linaro.org, walken@google.com,
	linuxppc-dev@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6 05/46] percpu_rwlock: Make percpu-rwlocks IRQ-safe, optimally
Date: Mon, 18 Feb 2013 18:09:01 +0530
Message-ID: <20130218123901.26245.80637.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130218123714.26245.61816.stgit@srivatsabhat.in.ibm.com>

If interrupt handlers can also be readers, then one way to make per-CPU
rwlocks safe is to disable interrupts at the reader side before trying
to acquire the per-CPU rwlock, and to keep them disabled throughout the
duration of the read-side critical section.

The goal is to avoid cases such as:

1. The writer is active and holds the global rwlock for write.

2. A regular reader comes in and marks itself as present (by
   incrementing its per-CPU refcount) before checking whether the
   writer is active.

3. An interrupt hits the reader. [Had it not hit, the reader would
   have noticed that the writer is active, decremented its refcount,
   and tried to acquire the global rwlock for read.] Since the
   interrupt handler also happens to be a reader, it notices the
   non-zero refcount (which was due to the reader that got interrupted)
   and concludes that this is a nested read-side critical section, so
   it proceeds to take the fastpath, which is wrong.
The interrupt handler should have noticed that the writer is active and
taken the rwlock for read.

So, disabling interrupts can help avoid this problem (at the cost of
keeping interrupts disabled for quite long). But Oleg had a brilliant
idea by which we can do much better than that: we can manage with
disabling interrupts _just_ during the updates (writes to the per-CPU
refcounts), to safeguard against races with interrupt handlers. Beyond
that, we can keep interrupts enabled and still be safe w.r.t. interrupt
handlers that can act as readers.

Basically, the idea is to differentiate between the *part* of the
per-CPU refcount that we use for reference counting and the part that
we use merely to make the writer wait for us to switch over to the
right synchronization scheme.

The scheme involves splitting the per-CPU refcounts into 2 parts:
e.g., the lower 16 bits are used to track the nesting depth of the
reader (a "nested-counter"), and the remaining (upper) bits are used
merely to mark the presence of the reader. As long as the overall
reader_refcnt is non-zero, the writer waits for the reader (assuming
that the reader is still actively using the per-CPU refcounts for
synchronization).

The reader first sets one of the higher bits to mark its presence, and
then uses the lower 16 bits to manage the nesting depth. So, an
interrupt handler coming in as illustrated above will be able to
distinguish between "this is a nested read-side critical section" and
"we have merely marked our presence to make the writer wait for us to
switch", by looking at the same refcount. This makes it unnecessary to
keep interrupts disabled throughout the read-side critical section,
despite the possibility of interrupt handlers being readers themselves.

Implement this logic, and rename the locking functions appropriately to
reflect what they do.

Based-on-idea-by: Oleg Nesterov <oleg@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 include/linux/percpu-rwlock.h |   10 ++++---
 lib/percpu-rwlock.c           |   57 ++++++++++++++++++++++++++---------------
 2 files changed, 42 insertions(+), 25 deletions(-)

diff --git a/include/linux/percpu-rwlock.h b/include/linux/percpu-rwlock.h
index 5590b1e..8c9e145 100644
--- a/include/linux/percpu-rwlock.h
+++ b/include/linux/percpu-rwlock.h
@@ -38,11 +38,13 @@ struct percpu_rwlock {
 	rwlock_t		global_rwlock;
 };
 
-extern void percpu_read_lock(struct percpu_rwlock *);
-extern void percpu_read_unlock(struct percpu_rwlock *);
+extern void percpu_read_lock_irqsafe(struct percpu_rwlock *);
+extern void percpu_read_unlock_irqsafe(struct percpu_rwlock *);
 
-extern void percpu_write_lock(struct percpu_rwlock *);
-extern void percpu_write_unlock(struct percpu_rwlock *);
+extern void percpu_write_lock_irqsave(struct percpu_rwlock *,
+				      unsigned long *flags);
+extern void percpu_write_unlock_irqrestore(struct percpu_rwlock *,
+					   unsigned long *flags);
 
 extern int __percpu_init_rwlock(struct percpu_rwlock *,
 				const char *, struct lock_class_key *);
diff --git a/lib/percpu-rwlock.c b/lib/percpu-rwlock.c
index edefdea..ce7e440 100644
--- a/lib/percpu-rwlock.c
+++ b/lib/percpu-rwlock.c
@@ -30,11 +30,15 @@
 
 #include <asm/processor.h>
 
+#define READER_PRESENT		(1UL << 16)
+#define READER_REFCNT_MASK	(READER_PRESENT - 1)
+
 #define reader_yet_to_switch(pcpu_rwlock, cpu)				    \
 	(ACCESS_ONCE(per_cpu_ptr((pcpu_rwlock)->rw_state, cpu)->reader_refcnt))
 
-#define reader_percpu_nesting_depth(pcpu_rwlock)		  \
-	(__this_cpu_read((pcpu_rwlock)->rw_state->reader_refcnt))
+#define reader_percpu_nesting_depth(pcpu_rwlock)		   \
+	(__this_cpu_read((pcpu_rwlock)->rw_state->reader_refcnt) & \
+	 READER_REFCNT_MASK)
 
 #define reader_uses_percpu_refcnt(pcpu_rwlock)				\
 				reader_percpu_nesting_depth(pcpu_rwlock)
@@ -71,7 +75,7 @@ void percpu_free_rwlock(struct percpu_rwlock *pcpu_rwlock)
 	pcpu_rwlock->rw_state = NULL;
 }
 
-void percpu_read_lock(struct percpu_rwlock *pcpu_rwlock)
+void percpu_read_lock_irqsafe(struct percpu_rwlock *pcpu_rwlock)
 {
 	preempt_disable();
 
@@ -79,14 +83,18 @@ void percpu_read_lock(struct percpu_rwlock *pcpu_rwlock)
 	 * Let the writer know that a reader is active, even before we choose
 	 * our reader-side synchronization scheme.
 	 */
-	this_cpu_inc(pcpu_rwlock->rw_state->reader_refcnt);
+	this_cpu_add(pcpu_rwlock->rw_state->reader_refcnt, READER_PRESENT);
 
 	/*
 	 * If we are already using per-cpu refcounts, it is not safe to switch
 	 * the synchronization scheme. So continue using the refcounts.
 	 */
-	if (reader_nested_percpu(pcpu_rwlock))
+	if (reader_uses_percpu_refcnt(pcpu_rwlock)) {
+		this_cpu_inc(pcpu_rwlock->rw_state->reader_refcnt);
+		this_cpu_sub(pcpu_rwlock->rw_state->reader_refcnt,
+			     READER_PRESENT);
 		return;
+	}
 
 	/*
 	 * The write to 'reader_refcnt' must be visible before we read
@@ -95,9 +103,19 @@ void percpu_read_lock(struct percpu_rwlock *pcpu_rwlock)
 	smp_mb();
 
 	if (likely(!writer_active(pcpu_rwlock))) {
-		goto out;
+		this_cpu_inc(pcpu_rwlock->rw_state->reader_refcnt);
 	} else {
 		/* Writer is active, so switch to global rwlock. */
+
+		/*
+		 * While we are spinning on ->global_rwlock, an
+		 * interrupt can hit us, and the interrupt handler
+		 * might call this function. The distinction between
+		 * READER_PRESENT and the refcnt helps ensure that the
+		 * interrupt handler also takes this branch and spins
+		 * on the ->global_rwlock, as long as the writer is
+		 * active.
+		 */
 		read_lock(&pcpu_rwlock->global_rwlock);
 
 		/*
@@ -107,29 +125,24 @@ void percpu_read_lock(struct percpu_rwlock *pcpu_rwlock)
 		 * refcounts. (This also helps avoid heterogeneous nesting of
 		 * readers).
 		 */
-		if (writer_active(pcpu_rwlock)) {
-			/*
-			 * The above writer_active() check can get reordered
-			 * with this_cpu_dec() below, but this is OK, because
-			 * holding the rwlock is conservative.
-			 */
-			this_cpu_dec(pcpu_rwlock->rw_state->reader_refcnt);
-		} else {
+		if (!writer_active(pcpu_rwlock)) {
+			this_cpu_inc(pcpu_rwlock->rw_state->reader_refcnt);
 			read_unlock(&pcpu_rwlock->global_rwlock);
 		}
 	}
 
-out:
+	this_cpu_sub(pcpu_rwlock->rw_state->reader_refcnt, READER_PRESENT);
+
 	/* Prevent reordering of any subsequent reads/writes */
 	smp_mb();
 }
 
-void percpu_read_unlock(struct percpu_rwlock *pcpu_rwlock)
+void percpu_read_unlock_irqsafe(struct percpu_rwlock *pcpu_rwlock)
 {
 	/*
 	 * We never allow heterogeneous nesting of readers. So it is trivial
 	 * to find out the kind of reader we are, and undo the operation
-	 * done by our corresponding percpu_read_lock().
+	 * done by our corresponding percpu_read_lock_irqsafe().
 	 */
 
 	/* Try to fast-path: a nested percpu reader is the simplest case */
@@ -158,7 +171,8 @@ void percpu_read_unlock(struct percpu_rwlock *pcpu_rwlock)
 	preempt_enable();
 }
 
-void percpu_write_lock(struct percpu_rwlock *pcpu_rwlock)
+void percpu_write_lock_irqsave(struct percpu_rwlock *pcpu_rwlock,
+			       unsigned long *flags)
 {
 	unsigned int cpu;
 
@@ -187,10 +201,11 @@ void percpu_write_lock(struct percpu_rwlock *pcpu_rwlock)
 	}
 
 	smp_mb(); /* Complete the wait-for-readers, before taking the lock */
-	write_lock(&pcpu_rwlock->global_rwlock);
+	write_lock_irqsave(&pcpu_rwlock->global_rwlock, *flags);
 }
 
-void percpu_write_unlock(struct percpu_rwlock *pcpu_rwlock)
+void percpu_write_unlock_irqrestore(struct percpu_rwlock *pcpu_rwlock,
+				    unsigned long *flags)
 {
 	unsigned int cpu;
 
@@ -205,6 +220,6 @@ void percpu_write_unlock(struct percpu_rwlock *pcpu_rwlock)
 	for_each_possible_cpu(cpu)
 		per_cpu_ptr(pcpu_rwlock->rw_state, cpu)->writer_signal = false;
 
-	write_unlock(&pcpu_rwlock->global_rwlock);
+	write_unlock_irqrestore(&pcpu_rwlock->global_rwlock, *flags);
 }