* RCU : various patches [0/5]
@ 2004-08-07 19:15 Dipankar Sarma
  2004-08-07 19:17 ` RCU : clean up code [1/5] Dipankar Sarma
  2004-08-09  1:17 ` RCU : various patches [0/5] Rusty Russell
  0 siblings, 2 replies; 7+ messages in thread

From: Dipankar Sarma @ 2004-08-07 19:15 UTC (permalink / raw)
To: Andrew Morton
Cc: Rusty Russell, Paul E. McKenney, linux-kernel, Robert Olsson, netdev

This is a series of patches that are currently in my tree. These apply on
2.6.8-rc3-mm1 (on top of the earlier 3 patches in -mm). Of these, the first
one is a cleanup that avoids percpu data address calculations and is also
preparation for call_rcu_bh(). The call-rcu-bh patches introduce a separate
RCU mechanism for softirq handlers, with handler completion as another
quiescent state, and use it in the route cache. This avoids the dst cache
overflow problems Robert Olsson was seeing during his router DoS testing.

All this work happened after a long private email discussion some months
ago involving Alexey, Robert, Dave Miller and some of us. I am publishing
the work so that people can test it. It would be nice to give the entire
stack a spin in -mm. The remaining two patches are from Paul; they document
the RCU API and hide smp_read_barrier_depends() behind a simple macro,
rcu_dereference().

I have tested the call-rcu-bh stuff with pktgen and saw no route cache
overflows. The complete stack is also sanity tested.

Thanks
Dipankar

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: RCU : clean up code [1/5] 2004-08-07 19:15 RCU : various patches [0/5] Dipankar Sarma @ 2004-08-07 19:17 ` Dipankar Sarma 2004-08-07 19:18 ` RCU : Introduce call_rcu_bh() [2/5] Dipankar Sarma 2004-08-09 1:17 ` RCU : various patches [0/5] Rusty Russell 1 sibling, 1 reply; 7+ messages in thread From: Dipankar Sarma @ 2004-08-07 19:17 UTC (permalink / raw) To: Andrew Morton Cc: Rusty Russell, Paul E. McKenney, linux-kernel, Robert Olsson, netdev Avoids per_cpu calculations and also prepares for call_rcu_bh(). Thanks Dipankar At OLS, Rusty had suggested getting rid of many per_cpu() calculations in RCU code and making the code simpler. I had already done that for the rcu-softirq patch earlier, so I am splitting that into two patch. This first patch cleans up the macros and uses pointers to the rcu per-cpu data directly to manipulate the callback queues. This is useful for the call-rcu-bh patch (to follow) which introduces a new RCU mechanism - call_rcu_bh(). Both generic and softirq rcu can then use the same code, they work different global and percpu data. Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com> include/linux/rcupdate.h | 38 ++++--- kernel/rcupdate.c | 229 ++++++++++++++++++++++++----------------------- kernel/sched.c | 2 3 files changed, 143 insertions(+), 126 deletions(-) diff -puN include/linux/rcupdate.h~rcu-code-cleanup include/linux/rcupdate.h --- linux-2.6.8-rc3-mm1/include/linux/rcupdate.h~rcu-code-cleanup 2004-08-07 15:28:30.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/include/linux/rcupdate.h 2004-08-07 15:28:30.000000000 +0530 @@ -101,47 +101,51 @@ struct rcu_data { struct rcu_head **curtail; struct rcu_head *donelist; struct rcu_head **donetail; + int cpu; }; DECLARE_PER_CPU(struct rcu_data, rcu_data); extern struct rcu_ctrlblk rcu_ctrlblk; -#define RCU_quiescbatch(cpu) (per_cpu(rcu_data, (cpu)).quiescbatch) -#define RCU_qsctr(cpu) (per_cpu(rcu_data, (cpu)).qsctr) -#define RCU_last_qsctr(cpu) (per_cpu(rcu_data, (cpu)).last_qsctr) -#define RCU_qs_pending(cpu) (per_cpu(rcu_data, (cpu)).qs_pending) -#define RCU_batch(cpu) (per_cpu(rcu_data, (cpu)).batch) -#define RCU_nxtlist(cpu) (per_cpu(rcu_data, (cpu)).nxtlist) -#define RCU_curlist(cpu) (per_cpu(rcu_data, (cpu)).curlist) -#define RCU_nxttail(cpu) (per_cpu(rcu_data, (cpu)).nxttail) -#define RCU_curtail(cpu) (per_cpu(rcu_data, (cpu)).curtail) -#define RCU_donelist(cpu) (per_cpu(rcu_data, (cpu)).donelist) -#define RCU_donetail(cpu) (per_cpu(rcu_data, (cpu)).donetail) +/* + * Increment the quiscent state counter. + */ +static inline void rcu_qsctr_inc(int cpu) +{ + struct rcu_data *rdp = &per_cpu(rcu_data, cpu); + rdp->qsctr++; +} -static inline int rcu_pending(int cpu) +static inline int __rcu_pending(struct rcu_ctrlblk *rcp, + struct rcu_data *rdp) { /* This cpu has pending rcu entries and the grace period * for them has completed. 
*/ - if (RCU_curlist(cpu) && - !rcu_batch_before(rcu_ctrlblk.completed,RCU_batch(cpu))) + if (rdp->curlist && !rcu_batch_before(rcp->completed, rdp->batch)) return 1; /* This cpu has no pending entries, but there are new entries */ - if (!RCU_curlist(cpu) && RCU_nxtlist(cpu)) + if (!rdp->curlist && rdp->nxtlist) return 1; - if (RCU_donelist(cpu)) + /* This cpu has finished callbacks to invoke */ + if (rdp->donelist) return 1; /* The rcu core waits for a quiescent state from the cpu */ - if (RCU_quiescbatch(cpu) != rcu_ctrlblk.cur || RCU_qs_pending(cpu)) + if (rdp->quiescbatch != rcp->cur || rdp->qs_pending) return 1; /* nothing to do */ return 0; } +static inline int rcu_pending(int cpu) +{ + return __rcu_pending(&rcu_ctrlblk, &per_cpu(rcu_data, cpu)); +} + #define rcu_read_lock() preempt_disable() #define rcu_read_unlock() preempt_enable() diff -puN kernel/rcupdate.c~rcu-code-cleanup kernel/rcupdate.c --- linux-2.6.8-rc3-mm1/kernel/rcupdate.c~rcu-code-cleanup 2004-08-07 15:28:30.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/kernel/rcupdate.c 2004-08-07 15:28:30.000000000 +0530 @@ -17,9 +17,10 @@ * * Copyright (C) IBM Corporation, 2001 * - * Author: Dipankar Sarma <dipankar@in.ibm.com> + * Authors: Dipankar Sarma <dipankar@in.ibm.com> + * Manfred Spraul <manfred@colorfullife.com> * - * Based on the original work by Paul McKenney <paul.mckenney@us.ibm.com> + * Based on the original work by Paul McKenney <paulmck@us.ibm.com> * and inputs from Rusty Russell, Andrea Arcangeli and Andi Kleen. * Papers: * http://www.rdrop.com/users/paulmck/paper/rclockpdcsproof.pdf @@ -51,19 +52,20 @@ struct rcu_ctrlblk rcu_ctrlblk = { .cur = -300, .completed = -300 , .lock = SEQCNT_ZERO }; /* Bookkeeping of the progress of the grace period */ -struct { - spinlock_t mutex; /* Guard this struct and writes to rcu_ctrlblk */ - cpumask_t rcu_cpu_mask; /* CPUs that need to switch in order */ +struct rcu_state { + spinlock_t lock; /* Guard this struct and writes to rcu_ctrlblk */ + cpumask_t cpumask; /* CPUs that need to switch in order */ /* for current batch to proceed. */ -} rcu_state ____cacheline_maxaligned_in_smp = - {.mutex = SPIN_LOCK_UNLOCKED, .rcu_cpu_mask = CPU_MASK_NONE }; +}; + +struct rcu_state rcu_state ____cacheline_maxaligned_in_smp = + {.lock = SPIN_LOCK_UNLOCKED, .cpumask = CPU_MASK_NONE }; DEFINE_PER_CPU(struct rcu_data, rcu_data) = { 0L }; /* Fake initialization required by compiler */ static DEFINE_PER_CPU(struct tasklet_struct, rcu_tasklet) = {NULL}; -#define RCU_tasklet(cpu) (per_cpu(rcu_tasklet, cpu)) static int maxbatch = 10; /** @@ -79,15 +81,15 @@ static int maxbatch = 10; void fastcall call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu)) { - int cpu; unsigned long flags; + struct rcu_data *rdp; head->func = func; head->next = NULL; local_irq_save(flags); - cpu = smp_processor_id(); - *RCU_nxttail(cpu) = head; - RCU_nxttail(cpu) = &head->next; + rdp = &__get_cpu_var(rcu_data); + *rdp->nxttail = head; + rdp->nxttail = &head->next; local_irq_restore(flags); } @@ -95,23 +97,23 @@ void fastcall call_rcu(struct rcu_head * * Invoke the completed RCU callbacks. They are expected to be in * a per-cpu list. 
*/ -static void rcu_do_batch(int cpu) +static void rcu_do_batch(struct rcu_data *rdp) { struct rcu_head *next, *list; int count = 0; - list = RCU_donelist(cpu); + list = rdp->donelist; while (list) { - next = RCU_donelist(cpu) = list->next; + next = rdp->donelist = list->next; list->func(list); list = next; if (++count >= maxbatch) break; } - if (!RCU_donelist(cpu)) - RCU_donetail(cpu) = &RCU_donelist(cpu); + if (!rdp->donelist) + rdp->donetail = &rdp->donelist; else - tasklet_schedule(&RCU_tasklet(cpu)); + tasklet_schedule(&per_cpu(rcu_tasklet, rdp->cpu)); } /* @@ -119,15 +121,15 @@ static void rcu_do_batch(int cpu) * The grace period handling consists out of two steps: * - A new grace period is started. * This is done by rcu_start_batch. The start is not broadcasted to - * all cpus, they must pick this up by comparing rcu_ctrlblk.cur with - * RCU_quiescbatch(cpu). All cpus are recorded in the - * rcu_state.rcu_cpu_mask bitmap. + * all cpus, they must pick this up by comparing rcp->cur with + * rdp->quiescbatch. All cpus are recorded in the + * rcu_state.cpumask bitmap. * - All cpus must go through a quiescent state. * Since the start of the grace period is not broadcasted, at least two * calls to rcu_check_quiescent_state are required: * The first call just notices that a new grace period is running. The * following calls check if there was a quiescent state since the beginning - * of the grace period. If so, it updates rcu_state.rcu_cpu_mask. If + * of the grace period. If so, it updates rcu_state.cpumask. If * the bitmap is empty, then the grace period is completed. * rcu_check_quiescent_state calls rcu_start_batch(0) to start the next grace * period (if necessary). @@ -135,22 +137,22 @@ static void rcu_do_batch(int cpu) /* * Register a new batch of callbacks, and start it up if there is currently no * active batch and the batch to be registered has not already occurred. - * Caller must hold rcu_state.mutex. + * Caller must hold rcu_state.lock. */ -static void rcu_start_batch(int next_pending) +static void rcu_start_batch(struct rcu_ctrlblk *rcp, struct rcu_state *rsp, + int next_pending) { if (next_pending) - rcu_ctrlblk.next_pending = 1; + rcp->next_pending = 1; - if (rcu_ctrlblk.next_pending && - rcu_ctrlblk.completed == rcu_ctrlblk.cur) { + if (rcp->next_pending && + rcp->completed == rcp->cur) { /* Can't change, since spin lock held. */ - cpus_andnot(rcu_state.rcu_cpu_mask, cpu_online_map, - nohz_cpu_mask); - write_seqcount_begin(&rcu_ctrlblk.lock); - rcu_ctrlblk.next_pending = 0; - rcu_ctrlblk.cur++; - write_seqcount_end(&rcu_ctrlblk.lock); + cpus_andnot(rsp->cpumask, cpu_online_map, nohz_cpu_mask); + write_seqcount_begin(&rcp->lock); + rcp->next_pending = 0; + rcp->cur++; + write_seqcount_end(&rcp->lock); } } @@ -159,13 +161,13 @@ static void rcu_start_batch(int next_pen * Clear it from the cpu mask and complete the grace period if it was the last * cpu. Start another grace period if someone has further entries pending */ -static void cpu_quiet(int cpu) +static void cpu_quiet(int cpu, struct rcu_ctrlblk *rcp, struct rcu_state *rsp) { - cpu_clear(cpu, rcu_state.rcu_cpu_mask); - if (cpus_empty(rcu_state.rcu_cpu_mask)) { + cpu_clear(cpu, rsp->cpumask); + if (cpus_empty(rsp->cpumask)) { /* batch completed ! */ - rcu_ctrlblk.completed = rcu_ctrlblk.cur; - rcu_start_batch(0); + rcp->completed = rcp->cur; + rcu_start_batch(rcp, rsp, 0); } } @@ -174,15 +176,14 @@ static void cpu_quiet(int cpu) * switch). 
If so and if it already hasn't done so in this RCU * quiescent cycle, then indicate that it has done so. */ -static void rcu_check_quiescent_state(void) +static void rcu_check_quiescent_state(struct rcu_ctrlblk *rcp, + struct rcu_state *rsp, struct rcu_data *rdp) { - int cpu = smp_processor_id(); - - if (RCU_quiescbatch(cpu) != rcu_ctrlblk.cur) { + if (rdp->quiescbatch != rcp->cur) { /* new grace period: record qsctr value. */ - RCU_qs_pending(cpu) = 1; - RCU_last_qsctr(cpu) = RCU_qsctr(cpu); - RCU_quiescbatch(cpu) = rcu_ctrlblk.cur; + rdp->qs_pending = 1; + rdp->last_qsctr = rdp->qsctr; + rdp->quiescbatch = rcp->cur; return; } @@ -190,7 +191,7 @@ static void rcu_check_quiescent_state(vo * qs_pending is checked instead of the actual bitmap to avoid * cacheline trashing. */ - if (!RCU_qs_pending(cpu)) + if (!rdp->qs_pending) return; /* @@ -198,19 +199,19 @@ static void rcu_check_quiescent_state(vo * we may miss one quiescent state of that CPU. That is * tolerable. So no need to disable interrupts. */ - if (RCU_qsctr(cpu) == RCU_last_qsctr(cpu)) + if (rdp->qsctr == rdp->last_qsctr) return; - RCU_qs_pending(cpu) = 0; + rdp->qs_pending = 0; - spin_lock(&rcu_state.mutex); + spin_lock(&rsp->lock); /* - * RCU_quiescbatch/batch.cur and the cpu bitmap can come out of sync + * rdp->quiescbatch/rcp->cur and the cpu bitmap can come out of sync * during cpu startup. Ignore the quiescent state. */ - if (likely(RCU_quiescbatch(cpu) == rcu_ctrlblk.cur)) - cpu_quiet(cpu); + if (likely(rdp->quiescbatch == rcp->cur)) + cpu_quiet(rdp->cpu, rcp, rsp); - spin_unlock(&rcu_state.mutex); + spin_unlock(&rsp->lock); } @@ -220,33 +221,39 @@ static void rcu_check_quiescent_state(vo * locking requirements, the list it's pulling from has to belong to a cpu * which is dead and hence not processing interrupts. 
*/ -static void rcu_move_batch(struct rcu_head *list, struct rcu_head **tail) +static void rcu_move_batch(struct rcu_data *this_rdp, struct rcu_head *list, + struct rcu_head **tail) { - int cpu; - local_irq_disable(); - cpu = smp_processor_id(); - *RCU_nxttail(cpu) = list; + *this_rdp->nxttail = list; if (list) - RCU_nxttail(cpu) = tail; + this_rdp->nxttail = tail; local_irq_enable(); } -static void rcu_offline_cpu(int cpu) +static void __rcu_offline_cpu(struct rcu_data *this_rdp, + struct rcu_ctrlblk *rcp, struct rcu_state *rsp, struct rcu_data *rdp) { /* if the cpu going offline owns the grace period * we can block indefinitely waiting for it, so flush * it here */ - spin_lock_bh(&rcu_state.mutex); - if (rcu_ctrlblk.cur != rcu_ctrlblk.completed) - cpu_quiet(cpu); - spin_unlock_bh(&rcu_state.mutex); + spin_lock_bh(&rsp->lock); + if (rcp->cur != rcp->completed) + cpu_quiet(rdp->cpu, rcp, rsp); + spin_unlock_bh(&rsp->lock); + rcu_move_batch(this_rdp, rdp->curlist, rdp->curtail); + rcu_move_batch(this_rdp, rdp->nxtlist, rdp->nxttail); - rcu_move_batch(RCU_curlist(cpu), RCU_curtail(cpu)); - rcu_move_batch(RCU_nxtlist(cpu), RCU_nxttail(cpu)); +} +static void rcu_offline_cpu(int cpu) +{ + struct rcu_data *this_rdp = &get_cpu_var(rcu_data); - tasklet_kill_immediate(&RCU_tasklet(cpu), cpu); + __rcu_offline_cpu(this_rdp, &rcu_ctrlblk, &rcu_state, + &per_cpu(rcu_data, cpu)); + put_cpu_var(rcu_data); + tasklet_kill_immediate(&per_cpu(rcu_tasklet, cpu), cpu); } #else @@ -257,81 +264,87 @@ static void rcu_offline_cpu(int cpu) #endif -void rcu_restart_cpu(int cpu) -{ - spin_lock_bh(&rcu_state.mutex); - RCU_quiescbatch(cpu) = rcu_ctrlblk.completed; - RCU_qs_pending(cpu) = 0; - spin_unlock_bh(&rcu_state.mutex); -} - /* * This does the RCU processing work from tasklet context. 
*/ -static void rcu_process_callbacks(unsigned long unused) +static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp, + struct rcu_state *rsp, struct rcu_data *rdp) { - int cpu = smp_processor_id(); - - if (RCU_curlist(cpu) && - !rcu_batch_before(rcu_ctrlblk.completed, RCU_batch(cpu))) { - *RCU_donetail(cpu) = RCU_curlist(cpu); - RCU_donetail(cpu) = RCU_curtail(cpu); - RCU_curlist(cpu) = NULL; - RCU_curtail(cpu) = &RCU_curlist(cpu); + if (rdp->curlist && !rcu_batch_before(rcp->completed, rdp->batch)) { + *rdp->donetail = rdp->curlist; + rdp->donetail = rdp->curtail; + rdp->curlist = NULL; + rdp->curtail = &rdp->curlist; } local_irq_disable(); - if (RCU_nxtlist(cpu) && !RCU_curlist(cpu)) { + if (rdp->nxtlist && !rdp->curlist) { int next_pending, seq; - RCU_curlist(cpu) = RCU_nxtlist(cpu); - RCU_curtail(cpu) = RCU_nxttail(cpu); - RCU_nxtlist(cpu) = NULL; - RCU_nxttail(cpu) = &RCU_nxtlist(cpu); + rdp->curlist = rdp->nxtlist; + rdp->curtail = rdp->nxttail; + rdp->nxtlist = NULL; + rdp->nxttail = &rdp->nxtlist; local_irq_enable(); /* * start the next batch of callbacks */ do { - seq = read_seqcount_begin(&rcu_ctrlblk.lock); + seq = read_seqcount_begin(&rcp->lock); /* determine batch number */ - RCU_batch(cpu) = rcu_ctrlblk.cur + 1; - next_pending = rcu_ctrlblk.next_pending; - } while (read_seqcount_retry(&rcu_ctrlblk.lock, seq)); + rdp->batch = rcp->cur + 1; + next_pending = rcp->next_pending; + } while (read_seqcount_retry(&rcp->lock, seq)); if (!next_pending) { /* and start it/schedule start if it's a new batch */ - spin_lock(&rcu_state.mutex); - rcu_start_batch(1); - spin_unlock(&rcu_state.mutex); + spin_lock(&rsp->lock); + rcu_start_batch(rcp, rsp, 1); + spin_unlock(&rsp->lock); } } else { local_irq_enable(); } - rcu_check_quiescent_state(); - if (RCU_donelist(cpu)) - rcu_do_batch(cpu); + rcu_check_quiescent_state(rcp, rsp, rdp); + if (rdp->donelist) + rcu_do_batch(rdp); +} + +static void rcu_process_callbacks(unsigned long unused) +{ + __rcu_process_callbacks(&rcu_ctrlblk, &rcu_state, + &__get_cpu_var(rcu_data)); } void rcu_check_callbacks(int cpu, int user) { + struct rcu_data *rdp = &__get_cpu_var(rcu_data); if (user || (idle_cpu(cpu) && !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) - RCU_qsctr(cpu)++; - tasklet_schedule(&RCU_tasklet(cpu)); + rdp->qsctr++; + tasklet_schedule(&per_cpu(rcu_tasklet, rdp->cpu)); +} + +static void rcu_init_percpu_data(int cpu, struct rcu_ctrlblk *rcp, + struct rcu_data *rdp) +{ + memset(rdp, 0, sizeof(*rdp)); + rdp->curtail = &rdp->curlist; + rdp->nxttail = &rdp->nxtlist; + rdp->donetail = &rdp->donelist; + rdp->quiescbatch = rcp->completed; + rdp->qs_pending = 0; + rdp->cpu = cpu; } static void __devinit rcu_online_cpu(int cpu) { - memset(&per_cpu(rcu_data, cpu), 0, sizeof(struct rcu_data)); - tasklet_init(&RCU_tasklet(cpu), rcu_process_callbacks, 0UL); - RCU_curtail(cpu) = &RCU_curlist(cpu); - RCU_nxttail(cpu) = &RCU_nxtlist(cpu); - RCU_donetail(cpu) = &RCU_donelist(cpu); - RCU_quiescbatch(cpu) = rcu_ctrlblk.completed; - RCU_qs_pending(cpu) = 0; + struct rcu_data *rdp = &per_cpu(rcu_data, cpu); + + rcu_init_percpu_data(cpu, &rcu_ctrlblk, rdp); + tasklet_init(&per_cpu(rcu_tasklet, cpu), rcu_process_callbacks, 0UL); } static int __devinit rcu_cpu_notify(struct notifier_block *self, diff -puN kernel/sched.c~rcu-code-cleanup kernel/sched.c --- linux-2.6.8-rc3-mm1/kernel/sched.c~rcu-code-cleanup 2004-08-07 15:28:30.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/kernel/sched.c 2004-08-07 15:28:30.000000000 +0530 @@ -2548,7 +2548,7 @@ 
need_resched: switch_tasks: prefetch(next); clear_tsk_need_resched(prev); - RCU_qsctr(task_cpu(prev))++; + rcu_qsctr_inc(task_cpu(prev)); prev->sleep_avg -= run_time; if ((long)prev->sleep_avg <= 0) { _ ^ permalink raw reply [flat|nested] 7+ messages in thread
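To make the intent of the cleanup concrete, here is a minimal sketch of the access pattern it replaces. struct demo_data and the two functions are made-up illustration, not symbols from the patch; the real code uses struct rcu_data in kernel/rcupdate.c.

#include <linux/percpu.h>
#include <linux/rcupdate.h>

struct demo_data {
	long qsctr;
	struct rcu_head *nxtlist;
	struct rcu_head **nxttail;
};

static DEFINE_PER_CPU(struct demo_data, demo_data);

/* Old style: every field access recomputes the per-CPU address. */
static void old_style(int cpu)
{
	per_cpu(demo_data, cpu).qsctr++;
	per_cpu(demo_data, cpu).nxttail = &per_cpu(demo_data, cpu).nxtlist;
}

/* New style: compute the per-CPU pointer once, then use plain structure
 * accesses.  Helpers that take (struct rcu_ctrlblk *, struct rcu_data *)
 * arguments can then be driven by different global/per-cpu instances,
 * which is what lets call_rcu_bh() reuse the same code later. */
static void new_style(int cpu)
{
	struct demo_data *rdp = &per_cpu(demo_data, cpu);

	rdp->qsctr++;
	rdp->nxttail = &rdp->nxtlist;
}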
* Re: RCU : Introduce call_rcu_bh() [2/5] 2004-08-07 19:17 ` RCU : clean up code [1/5] Dipankar Sarma @ 2004-08-07 19:18 ` Dipankar Sarma 2004-08-07 19:20 ` RCU : Use call_rcu_bh() in route cache [3/5] Dipankar Sarma 0 siblings, 1 reply; 7+ messages in thread From: Dipankar Sarma @ 2004-08-07 19:18 UTC (permalink / raw) To: Andrew Morton Cc: Rusty Russell, Paul E. McKenney, linux-kernel, Robert Olsson, netdev Introduces call_rcu_bh() to be used when critical sections are mostly in softirq context. Thanks Dipankar This patch introduces a new api - call_rcu_bh(). This is to be used for RCU callbacks for whom the critical sections are mostly in softirq context. These callbacks consider completion of a softirq handler to be a quiescent state. So, in order to make reader critical sections safe in process context, rcu_read_lock_bh() and rcu_read_unlock_bh() must be used. Use of softirq handler completion as a quiescent state speeds up RCU grace periods and prevents too many callbacks getting queued up in softirq-heavy workloads like network stack. Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com> include/linux/rcupdate.h | 14 +++++++++++- kernel/rcupdate.c | 52 ++++++++++++++++++++++++++++++++++++++++++----- kernel/softirq.c | 7 +++++- 3 files changed, 66 insertions(+), 7 deletions(-) diff -puN include/linux/rcupdate.h~rcu-call-rcu-bh include/linux/rcupdate.h --- linux-2.6.8-rc3-mm1/include/linux/rcupdate.h~rcu-call-rcu-bh 2004-08-07 15:28:51.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/include/linux/rcupdate.h 2004-08-07 15:28:51.000000000 +0530 @@ -105,7 +105,9 @@ struct rcu_data { }; DECLARE_PER_CPU(struct rcu_data, rcu_data); +DECLARE_PER_CPU(struct rcu_data, rcu_bh_data); extern struct rcu_ctrlblk rcu_ctrlblk; +extern struct rcu_ctrlblk rcu_bh_ctrlblk; /* * Increment the quiscent state counter. @@ -115,6 +117,11 @@ static inline void rcu_qsctr_inc(int cpu struct rcu_data *rdp = &per_cpu(rcu_data, cpu); rdp->qsctr++; } +static inline void rcu_bh_qsctr_inc(int cpu) +{ + struct rcu_data *rdp = &per_cpu(rcu_bh_data, cpu); + rdp->qsctr++; +} static inline int __rcu_pending(struct rcu_ctrlblk *rcp, struct rcu_data *rdp) @@ -143,11 +150,14 @@ static inline int __rcu_pending(struct r static inline int rcu_pending(int cpu) { - return __rcu_pending(&rcu_ctrlblk, &per_cpu(rcu_data, cpu)); + return __rcu_pending(&rcu_ctrlblk, &per_cpu(rcu_data, cpu)) || + __rcu_pending(&rcu_bh_ctrlblk, &per_cpu(rcu_bh_data, cpu)); } #define rcu_read_lock() preempt_disable() #define rcu_read_unlock() preempt_enable() +#define rcu_read_lock_bh() local_bh_disable() +#define rcu_read_unlock_bh() local_bh_enable() extern void rcu_init(void); extern void rcu_check_callbacks(int cpu, int user); @@ -156,6 +166,8 @@ extern void rcu_restart_cpu(int cpu); /* Exported interfaces */ extern void FASTCALL(call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *head))); +extern void FASTCALL(call_rcu_bh(struct rcu_head *head, + void (*func)(struct rcu_head *head))); extern void synchronize_kernel(void); #endif /* __KERNEL__ */ diff -puN kernel/rcupdate.c~rcu-call-rcu-bh kernel/rcupdate.c --- linux-2.6.8-rc3-mm1/kernel/rcupdate.c~rcu-call-rcu-bh 2004-08-07 15:28:51.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/kernel/rcupdate.c 2004-08-07 15:28:51.000000000 +0530 @@ -50,6 +50,8 @@ /* Definition for rcupdate control block. 
*/ struct rcu_ctrlblk rcu_ctrlblk = { .cur = -300, .completed = -300 , .lock = SEQCNT_ZERO }; +struct rcu_ctrlblk rcu_bh_ctrlblk = + { .cur = -300, .completed = -300 , .lock = SEQCNT_ZERO }; /* Bookkeeping of the progress of the grace period */ struct rcu_state { @@ -60,9 +62,11 @@ struct rcu_state { struct rcu_state rcu_state ____cacheline_maxaligned_in_smp = {.lock = SPIN_LOCK_UNLOCKED, .cpumask = CPU_MASK_NONE }; - +struct rcu_state rcu_bh_state ____cacheline_maxaligned_in_smp = + {.lock = SPIN_LOCK_UNLOCKED, .cpumask = CPU_MASK_NONE }; DEFINE_PER_CPU(struct rcu_data, rcu_data) = { 0L }; +DEFINE_PER_CPU(struct rcu_data, rcu_bh_data) = { 0L }; /* Fake initialization required by compiler */ static DEFINE_PER_CPU(struct tasklet_struct, rcu_tasklet) = {NULL}; @@ -93,6 +97,34 @@ void fastcall call_rcu(struct rcu_head * local_irq_restore(flags); } +/** + * call_rcu_bh - Queue an RCU update request for which softirq handler + * completion is a quiescent state. + * @head: structure to be used for queueing the RCU updates. + * @func: actual update function to be invoked after the grace period + * + * The update function will be invoked as soon as all CPUs have performed + * a context switch or been seen in the idle loop or in a user process + * or has exited a softirq handler that it may have been executing. + * The read-side of critical section that use call_rcu_bh() for updation must + * be protected by rcu_read_lock_bh()/rcu_read_unlock_bh() if it is + * in process context. + */ +void fastcall call_rcu_bh(struct rcu_head *head, + void (*func)(struct rcu_head *rcu)) +{ + unsigned long flags; + struct rcu_data *rdp; + + head->func = func; + head->next = NULL; + local_irq_save(flags); + rdp = &__get_cpu_var(rcu_bh_data); + *rdp->nxttail = head; + rdp->nxttail = &head->next; + local_irq_restore(flags); +} + /* * Invoke the completed RCU callbacks. They are expected to be in * a per-cpu list. 
@@ -249,10 +281,14 @@ static void __rcu_offline_cpu(struct rcu static void rcu_offline_cpu(int cpu) { struct rcu_data *this_rdp = &get_cpu_var(rcu_data); + struct rcu_data *this_bh_rdp = &get_cpu_var(rcu_bh_data); __rcu_offline_cpu(this_rdp, &rcu_ctrlblk, &rcu_state, &per_cpu(rcu_data, cpu)); + __rcu_offline_cpu(this_rdp, &rcu_bh_ctrlblk, &rcu_bh_state, + &per_cpu(rcu_bh_data, cpu)); put_cpu_var(rcu_data); + put_cpu_var(rcu_bh_data); tasklet_kill_immediate(&per_cpu(rcu_tasklet, cpu), cpu); } @@ -315,16 +351,20 @@ static void rcu_process_callbacks(unsign { __rcu_process_callbacks(&rcu_ctrlblk, &rcu_state, &__get_cpu_var(rcu_data)); + __rcu_process_callbacks(&rcu_bh_ctrlblk, &rcu_bh_state, + &__get_cpu_var(rcu_bh_data)); } void rcu_check_callbacks(int cpu, int user) { - struct rcu_data *rdp = &__get_cpu_var(rcu_data); if (user || (idle_cpu(cpu) && !in_softirq() && - hardirq_count() <= (1 << HARDIRQ_SHIFT))) - rdp->qsctr++; - tasklet_schedule(&per_cpu(rcu_tasklet, rdp->cpu)); + hardirq_count() <= (1 << HARDIRQ_SHIFT))) { + rcu_qsctr_inc(cpu); + rcu_bh_qsctr_inc(cpu); + } else if (!in_softirq()) + rcu_bh_qsctr_inc(cpu); + tasklet_schedule(&per_cpu(rcu_tasklet, cpu)); } static void rcu_init_percpu_data(int cpu, struct rcu_ctrlblk *rcp, @@ -342,8 +382,10 @@ static void rcu_init_percpu_data(int cpu static void __devinit rcu_online_cpu(int cpu) { struct rcu_data *rdp = &per_cpu(rcu_data, cpu); + struct rcu_data *bh_rdp = &per_cpu(rcu_bh_data, cpu); rcu_init_percpu_data(cpu, &rcu_ctrlblk, rdp); + rcu_init_percpu_data(cpu, &rcu_bh_ctrlblk, bh_rdp); tasklet_init(&per_cpu(rcu_tasklet, cpu), rcu_process_callbacks, 0UL); } diff -puN kernel/softirq.c~rcu-call-rcu-bh kernel/softirq.c --- linux-2.6.8-rc3-mm1/kernel/softirq.c~rcu-call-rcu-bh 2004-08-07 15:28:51.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/kernel/softirq.c 2004-08-07 15:28:51.000000000 +0530 @@ -15,6 +15,7 @@ #include <linux/percpu.h> #include <linux/cpu.h> #include <linux/kthread.h> +#include <linux/rcupdate.h> #include <asm/irq.h> /* @@ -75,10 +76,12 @@ asmlinkage void __do_softirq(void) struct softirq_action *h; __u32 pending; int max_restart = MAX_SOFTIRQ_RESTART; + int cpu; pending = local_softirq_pending(); local_bh_disable(); + cpu = smp_processor_id(); restart: /* Reset the pending bitmask before enabling irqs */ local_softirq_pending() = 0; @@ -88,8 +91,10 @@ restart: h = softirq_vec; do { - if (pending & 1) + if (pending & 1) { h->action(h); + rcu_bh_qsctr_inc(cpu); + } h++; pending >>= 1; } while (pending); _ ^ permalink raw reply [flat|nested] 7+ messages in thread
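For illustration, a sketch of how a subsystem might use the new interface under the rules above. struct entry, demo_chain, demo_lock and the three functions are hypothetical names, not part of the patch: the process-context reader disables softirqs, and the updater defers the free with call_rcu_bh().

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/slab.h>

struct entry {
	struct entry *next;
	int key;
	int val;
	struct rcu_head rcu;
};

static struct entry *demo_chain;
static spinlock_t demo_lock = SPIN_LOCK_UNLOCKED;	/* serializes updaters */

/* Process-context reader: softirq-handler completion is the quiescent
 * state, so the walk must keep softirqs disabled on this CPU. */
static int demo_lookup(int key, int *val)
{
	struct entry *e;
	int found = 0;

	rcu_read_lock_bh();
	for (e = demo_chain; e; e = e->next) {
		smp_read_barrier_depends();
		if (e->key == key) {
			*val = e->val;
			found = 1;
			break;
		}
	}
	rcu_read_unlock_bh();
	return found;
}

static void demo_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct entry, rcu));
}

/* Updater: unlink under the writer lock, then defer the kfree() until
 * every CPU has context-switched, run idle/user code, or completed any
 * softirq handler that might still be referencing the entry. */
static void demo_remove(int key)
{
	struct entry **pp, *e;

	spin_lock_bh(&demo_lock);
	for (pp = &demo_chain; (e = *pp) != NULL; pp = &e->next) {
		if (e->key == key) {
			*pp = e->next;
			call_rcu_bh(&e->rcu, demo_free_rcu);
			break;
		}
	}
	spin_unlock_bh(&demo_lock);
}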
* Re: RCU : Use call_rcu_bh() in route cache [3/5] 2004-08-07 19:18 ` RCU : Introduce call_rcu_bh() [2/5] Dipankar Sarma @ 2004-08-07 19:20 ` Dipankar Sarma 2004-08-07 19:21 ` RCU : Document RCU api [4/5] Dipankar Sarma 0 siblings, 1 reply; 7+ messages in thread From: Dipankar Sarma @ 2004-08-07 19:20 UTC (permalink / raw) To: Andrew Morton Cc: Rusty Russell, Paul E. McKenney, linux-kernel, Robert Olsson, netdev Use call_rcu_bh() in route cache. This allows faster grace periods and avoids dst cache overflows during DoS testing. Thanks Dipankar This patch uses the call_rcu_bh() api in route cache code to facilitate quicker RCU grace periods. Quicker grace periods avoid overflow of dst cache in heavily loaded routers as seen in Robert Olsson's testing. Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com> net/decnet/dn_route.c | 28 ++++++++++++++-------------- net/ipv4/route.c | 26 +++++++++++++------------- 2 files changed, 27 insertions(+), 27 deletions(-) diff -puN net/decnet/dn_route.c~rcu-use-call-rcu-bh net/decnet/dn_route.c --- linux-2.6.8-rc3-mm1/net/decnet/dn_route.c~rcu-use-call-rcu-bh 2004-08-07 15:29:16.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/decnet/dn_route.c 2004-08-07 15:29:16.000000000 +0530 @@ -146,14 +146,14 @@ static __inline__ unsigned dn_hash(unsig static inline void dnrt_free(struct dn_route *rt) { - call_rcu(&rt->u.dst.rcu_head, dst_rcu_free); + call_rcu_bh(&rt->u.dst.rcu_head, dst_rcu_free); } static inline void dnrt_drop(struct dn_route *rt) { if (rt) dst_release(&rt->u.dst); - call_rcu(&rt->u.dst.rcu_head, dst_rcu_free); + call_rcu_bh(&rt->u.dst.rcu_head, dst_rcu_free); } static void dn_dst_check_expire(unsigned long dummy) @@ -1174,9 +1174,9 @@ static int __dn_route_output_key(struct struct dn_route *rt = NULL; if (!(flags & MSG_TRYHARD)) { - rcu_read_lock(); + rcu_read_lock_bh(); for(rt = dn_rt_hash_table[hash].chain; rt; rt = rt->u.rt_next) { - read_barrier_depends(); + smp_read_barrier_depends(); if ((flp->fld_dst == rt->fl.fld_dst) && (flp->fld_src == rt->fl.fld_src) && #ifdef CONFIG_DECNET_ROUTE_FWMARK @@ -1187,12 +1187,12 @@ static int __dn_route_output_key(struct rt->u.dst.lastuse = jiffies; dst_hold(&rt->u.dst); rt->u.dst.__use++; - rcu_read_unlock(); + rcu_read_unlock_bh(); *pprt = &rt->u.dst; return 0; } } - rcu_read_unlock(); + rcu_read_unlock_bh(); } return dn_route_output_slow(pprt, flp, flags); @@ -1647,21 +1647,21 @@ int dn_cache_dump(struct sk_buff *skb, s continue; if (h > s_h) s_idx = 0; - rcu_read_lock(); + rcu_read_lock_bh(); for(rt = dn_rt_hash_table[h].chain, idx = 0; rt; rt = rt->u.rt_next, idx++) { - read_barrier_depends(); + smp_read_barrier_depends(); if (idx < s_idx) continue; skb->dst = dst_clone(&rt->u.dst); if (dn_rt_fill_info(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq, RTM_NEWROUTE, 1) <= 0) { dst_release(xchg(&skb->dst, NULL)); - rcu_read_unlock(); + rcu_read_unlock_bh(); goto done; } dst_release(xchg(&skb->dst, NULL)); } - rcu_read_unlock(); + rcu_read_unlock_bh(); } done: @@ -1681,7 +1681,7 @@ static struct dn_route *dn_rt_cache_get_ struct dn_rt_cache_iter_state *s = seq->private; for(s->bucket = dn_rt_hash_mask; s->bucket >= 0; --s->bucket) { - rcu_read_lock(); + rcu_read_lock_bh(); rt = dn_rt_hash_table[s->bucket].chain; if (rt) break; @@ -1697,10 +1697,10 @@ static struct dn_route *dn_rt_cache_get_ smp_read_barrier_depends(); rt = rt->u.rt_next; while(!rt) { - rcu_read_unlock(); + rcu_read_unlock_bh(); if (--s->bucket < 0) break; - rcu_read_lock(); + rcu_read_lock_bh(); rt = dn_rt_hash_table[s->bucket].chain; } 
return rt; @@ -1727,7 +1727,7 @@ static void *dn_rt_cache_seq_next(struct static void dn_rt_cache_seq_stop(struct seq_file *seq, void *v) { if (v) - rcu_read_unlock(); + rcu_read_unlock_bh(); } static int dn_rt_cache_seq_show(struct seq_file *seq, void *v) diff -puN net/ipv4/route.c~rcu-use-call-rcu-bh net/ipv4/route.c --- linux-2.6.8-rc3-mm1/net/ipv4/route.c~rcu-use-call-rcu-bh 2004-08-07 15:29:16.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/ipv4/route.c 2004-08-07 15:29:16.000000000 +0530 @@ -226,11 +226,11 @@ static struct rtable *rt_cache_get_first struct rt_cache_iter_state *st = seq->private; for (st->bucket = rt_hash_mask; st->bucket >= 0; --st->bucket) { - rcu_read_lock(); + rcu_read_lock_bh(); r = rt_hash_table[st->bucket].chain; if (r) break; - rcu_read_unlock(); + rcu_read_unlock_bh(); } return r; } @@ -242,10 +242,10 @@ static struct rtable *rt_cache_get_next( smp_read_barrier_depends(); r = r->u.rt_next; while (!r) { - rcu_read_unlock(); + rcu_read_unlock_bh(); if (--st->bucket < 0) break; - rcu_read_lock(); + rcu_read_lock_bh(); r = rt_hash_table[st->bucket].chain; } return r; @@ -281,7 +281,7 @@ static void *rt_cache_seq_next(struct se static void rt_cache_seq_stop(struct seq_file *seq, void *v) { if (v && v != SEQ_START_TOKEN) - rcu_read_unlock(); + rcu_read_unlock_bh(); } static int rt_cache_seq_show(struct seq_file *seq, void *v) @@ -439,13 +439,13 @@ static struct file_operations rt_cpu_seq static __inline__ void rt_free(struct rtable *rt) { - call_rcu(&rt->u.dst.rcu_head, dst_rcu_free); + call_rcu_bh(&rt->u.dst.rcu_head, dst_rcu_free); } static __inline__ void rt_drop(struct rtable *rt) { ip_rt_put(rt); - call_rcu(&rt->u.dst.rcu_head, dst_rcu_free); + call_rcu_bh(&rt->u.dst.rcu_head, dst_rcu_free); } static __inline__ int rt_fast_clean(struct rtable *rth) @@ -2231,7 +2231,7 @@ int __ip_route_output_key(struct rtable hash = rt_hash_code(flp->fl4_dst, flp->fl4_src ^ (flp->oif << 5), flp->fl4_tos); - rcu_read_lock(); + rcu_read_lock_bh(); for (rth = rt_hash_table[hash].chain; rth; rth = rth->u.rt_next) { smp_read_barrier_depends(); if (rth->fl.fl4_dst == flp->fl4_dst && @@ -2247,13 +2247,13 @@ int __ip_route_output_key(struct rtable dst_hold(&rth->u.dst); rth->u.dst.__use++; RT_CACHE_STAT_INC(out_hit); - rcu_read_unlock(); + rcu_read_unlock_bh(); *rp = rth; return 0; } RT_CACHE_STAT_INC(out_hlist_search); } - rcu_read_unlock(); + rcu_read_unlock_bh(); return ip_route_output_slow(rp, flp); } @@ -2463,7 +2463,7 @@ int ip_rt_dump(struct sk_buff *skb, str if (h < s_h) continue; if (h > s_h) s_idx = 0; - rcu_read_lock(); + rcu_read_lock_bh(); for (rt = rt_hash_table[h].chain, idx = 0; rt; rt = rt->u.rt_next, idx++) { smp_read_barrier_depends(); @@ -2474,12 +2474,12 @@ int ip_rt_dump(struct sk_buff *skb, str cb->nlh->nlmsg_seq, RTM_NEWROUTE, 1) <= 0) { dst_release(xchg(&skb->dst, NULL)); - rcu_read_unlock(); + rcu_read_unlock_bh(); goto done; } dst_release(xchg(&skb->dst, NULL)); } - rcu_read_unlock(); + rcu_read_unlock_bh(); } done: _ ^ permalink raw reply [flat|nested] 7+ messages in thread
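The conversion hinges on the reader's execution context. A short sketch of the rule, with hypothetical reader functions that are not part of the patch: the softirq-time fast path keeps plain rcu_read_lock(), while process-context users of the same chains (such as the /proc seq_file dumps converted above) must hold off softirqs.

#include <linux/rcupdate.h>

/* Fast path, runs in softirq context (e.g. packet receive): completion
 * of the softirq handler is the quiescent state, so code running inside
 * the handler is already protected against call_rcu_bh() callbacks and
 * rcu_read_lock()/rcu_read_unlock() suffices. */
static void softirq_context_reader(void)
{
	rcu_read_lock();
	/* ... walk an rt_hash_table-style chain ... */
	rcu_read_unlock();
}

/* Slow path, runs in process context (e.g. a /proc dump or an output
 * route lookup from a syscall): softirqs must be held off for the whole
 * walk, otherwise a softirq handler could run to completion on this CPU
 * and let the grace period end while we still hold pointers. */
static void process_context_reader(void)
{
	rcu_read_lock_bh();
	/* ... walk the same chain ... */
	rcu_read_unlock_bh();
}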
* Re: RCU : Document RCU api [4/5] 2004-08-07 19:20 ` RCU : Use call_rcu_bh() in route cache [3/5] Dipankar Sarma @ 2004-08-07 19:21 ` Dipankar Sarma 2004-08-07 19:24 ` RCU : Abstracted RCU dereferencing [5/5] Dipankar Sarma 0 siblings, 1 reply; 7+ messages in thread From: Dipankar Sarma @ 2004-08-07 19:21 UTC (permalink / raw) To: Andrew Morton Cc: Rusty Russell, Paul E. McKenney, linux-kernel, Robert Olsson, netdev Patch from Paul for additional documentation of api. Thanks Dipankar Updated based on feedback, and to apply to 2.6.8-rc3. I will be adding more detailed documentation to the Documentation directory in a separate patch. Signed-off-by: Paul McKenney <paulmck@us.ibm.com> include/linux/rcupdate.h | 65 ++++++++++++++++++++++++++++++++++++++++++++++- kernel/rcupdate.c | 39 +++++++++++++++++----------- 2 files changed, 88 insertions(+), 16 deletions(-) diff -puN include/linux/rcupdate.h~rcu-api-doc include/linux/rcupdate.h --- linux-2.6.8-rc3-mm1/include/linux/rcupdate.h~rcu-api-doc 2004-08-07 15:29:49.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/include/linux/rcupdate.h 2004-08-07 15:29:49.000000000 +0530 @@ -154,11 +154,74 @@ static inline int rcu_pending(int cpu) __rcu_pending(&rcu_bh_ctrlblk, &per_cpu(rcu_bh_data, cpu)); } +/** + * rcu_read_lock - mark the beginning of an RCU read-side critical section. + * + * When synchronize_kernel() is invoked on one CPU while other CPUs + * are within RCU read-side critical sections, then the + * synchronize_kernel() is guaranteed to block until after all the other + * CPUs exit their critical sections. Similarly, if call_rcu() is invoked + * on one CPU while other CPUs are within RCU read-side critical + * sections, invocation of the corresponding RCU callback is deferred + * until after the all the other CPUs exit their critical sections. + * + * Note, however, that RCU callbacks are permitted to run concurrently + * with RCU read-side critical sections. One way that this can happen + * is via the following sequence of events: (1) CPU 0 enters an RCU + * read-side critical section, (2) CPU 1 invokes call_rcu() to register + * an RCU callback, (3) CPU 0 exits the RCU read-side critical section, + * (4) CPU 2 enters a RCU read-side critical section, (5) the RCU + * callback is invoked. This is legal, because the RCU read-side critical + * section that was running concurrently with the call_rcu() (and which + * therefore might be referencing something that the corresponding RCU + * callback would free up) has completed before the corresponding + * RCU callback is invoked. + * + * RCU read-side critical sections may be nested. Any deferred actions + * will be deferred until the outermost RCU read-side critical section + * completes. + * + * It is illegal to block while in an RCU read-side critical section. + */ #define rcu_read_lock() preempt_disable() + +/** + * rcu_read_unlock - marks the end of an RCU read-side critical section. + * + * See rcu_read_lock() for more information. + */ #define rcu_read_unlock() preempt_enable() + +/* + * So where is rcu_write_lock()? It does not exist, as there is no + * way for writers to lock out RCU readers. This is a feature, not + * a bug -- this property is what provides RCU's performance benefits. + * Of course, writers must coordinate with each other. The normal + * spinlock primitives work well for this, but any other technique may be + * used as well. RCU does not care how the writers keep out of each + * others' way, as long as they do so. 
+ */ + +/** + * rcu_read_lock_bh - mark the beginning of a softirq-only RCU critical section + * + * This is equivalent of rcu_read_lock(), but to be used when updates + * are being done using call_rcu_bh(). Since call_rcu_bh() callbacks + * consider completion of a softirq handler to be a quiescent state, + * a process in RCU read-side critical section must be protected by + * disabling softirqs. Read-side critical sections in interrupt context + * can use just rcu_read_lock(). + * + */ #define rcu_read_lock_bh() local_bh_disable() -#define rcu_read_unlock_bh() local_bh_enable() +/* + * rcu_read_unlock_bh - marks the end of a softirq-only RCU critical section + * + * See rcu_read_lock_bh() for more information. + */ +#define rcu_read_unlock_bh() local_bh_enable() + extern void rcu_init(void); extern void rcu_check_callbacks(int cpu, int user); extern void rcu_restart_cpu(int cpu); diff -puN kernel/rcupdate.c~rcu-api-doc kernel/rcupdate.c --- linux-2.6.8-rc3-mm1/kernel/rcupdate.c~rcu-api-doc 2004-08-07 15:29:49.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/kernel/rcupdate.c 2004-08-07 15:29:49.000000000 +0530 @@ -73,14 +73,15 @@ static DEFINE_PER_CPU(struct tasklet_str static int maxbatch = 10; /** - * call_rcu - Queue an RCU update request. + * call_rcu - Queue an RCU callback for invocation after a grace period. * @head: structure to be used for queueing the RCU updates. * @func: actual update function to be invoked after the grace period * - * The update function will be invoked as soon as all CPUs have performed - * a context switch or been seen in the idle loop or in a user process. - * The read-side of critical section that use call_rcu() for updation must - * be protected by rcu_read_lock()/rcu_read_unlock(). + * The update function will be invoked some time after a full grace + * period elapses, in other words after all currently executing RCU + * read-side critical sections have completed. RCU read-side critical + * sections are delimited by rcu_read_lock() and rcu_read_unlock(), + * and may be nested. */ void fastcall call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu)) @@ -98,17 +99,20 @@ void fastcall call_rcu(struct rcu_head * } /** - * call_rcu_bh - Queue an RCU update request for which softirq handler - * completion is a quiescent state. + * call_rcu_bh - Queue an RCU for invocation after a quicker grace period. * @head: structure to be used for queueing the RCU updates. * @func: actual update function to be invoked after the grace period * - * The update function will be invoked as soon as all CPUs have performed - * a context switch or been seen in the idle loop or in a user process - * or has exited a softirq handler that it may have been executing. - * The read-side of critical section that use call_rcu_bh() for updation must - * be protected by rcu_read_lock_bh()/rcu_read_unlock_bh() if it is - * in process context. + * The update function will be invoked some time after a full grace + * period elapses, in other words after all currently executing RCU + * read-side critical sections have completed. call_rcu_bh() assumes + * that the read-side critical sections end on completion of a softirq + * handler. This means that read-side critical sections in process + * context must not be interrupted by softirqs. This interface is to be + * used when most of the read-side critical sections are in softirq context. 
+ * RCU read-side critical sections are delimited by rcu_read_lock() and + * rcu_read_unlock(), * if in interrupt context or rcu_read_lock_bh() + * and rcu_read_unlock_bh(), if in process context. These may be nested. */ void fastcall call_rcu_bh(struct rcu_head *head, void (*func)(struct rcu_head *rcu)) @@ -439,8 +443,13 @@ static void wakeme_after_rcu(struct rcu_ } /** - * synchronize-kernel - wait until all the CPUs have gone - * through a "quiescent" state. It may sleep. + * synchronize_kernel - wait until a grace period has elapsed. + * + * Control will return to the caller some time after a full grace + * period has elapsed, in other words after all currently executing RCU + * read-side critical sections have completed. RCU read-side critical + * sections are delimited by rcu_read_lock() and rcu_read_unlock(), + * and may be nested. */ void synchronize_kernel(void) { _ ^ permalink raw reply [flat|nested] 7+ messages in thread
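As a worked example of the semantics documented above -- again a sketch with made-up names (struct config, cur_config and the two functions), not code from the patch -- an updater publishes a new version and uses the blocking synchronize_kernel() instead of call_rcu(), relying on the guarantee that all read-side critical sections that began before the call have completed when it returns. Updaters are assumed to serialize among themselves, as the note about the absent rcu_write_lock() requires.

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <asm/system.h>		/* smp_wmb() */

struct config {
	int threshold;
};

static struct config *cur_config;

/* Reader: may run concurrently with set_threshold() below. */
static int read_threshold(void)
{
	struct config *c;
	int t = 0;

	rcu_read_lock();
	c = cur_config;
	smp_read_barrier_depends();
	if (c)
		t = c->threshold;
	rcu_read_unlock();
	return t;
}

/* Updater (callers serialize with their own lock): publish the new
 * version, wait for a full grace period, then free the old one. */
static int set_threshold(int t)
{
	struct config *new, *old;

	new = kmalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return -ENOMEM;
	new->threshold = t;
	smp_wmb();			/* order init before publication */

	old = cur_config;
	cur_config = new;

	synchronize_kernel();		/* all pre-existing readers are done */
	kfree(old);
	return 0;
}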
* Re: RCU : Abstracted RCU dereferencing [5/5] 2004-08-07 19:21 ` RCU : Document RCU api [4/5] Dipankar Sarma @ 2004-08-07 19:24 ` Dipankar Sarma 0 siblings, 0 replies; 7+ messages in thread From: Dipankar Sarma @ 2004-08-07 19:24 UTC (permalink / raw) To: Andrew Morton Cc: Rusty Russell, Paul E. McKenney, linux-kernel, Robert Olsson, netdev Use abstracted RCU API to dereference RCU protected data. Hides barrier details. Patch from Paul McKenney. Thanks Dipankar This patch introduced an rcu_dereference() macro that replaces most uses of smp_read_barrier_depends(). The new macro has the advantage of explicitly documenting which pointers are protected by RCU -- in contrast, it is sometimes difficult to figure out which pointer is being protected by a given smp_read_barrier_depends() call. Signed-off-by: Paul McKenney <paulmck@us.ibm.com> arch/x86_64/kernel/mce.c | 8 +++----- fs/dcache.c | 10 ++++------ include/linux/list.h | 27 ++++++++++++++------------- include/linux/rcupdate.h | 16 ++++++++++++++++ ipc/util.c | 11 +++++------ net/bridge/br_input.c | 3 +-- net/core/dev.c | 3 +-- net/core/netfilter.c | 3 +-- net/decnet/dn_route.c | 17 ++++++++--------- net/ipv4/icmp.c | 3 +-- net/ipv4/ip_input.c | 3 +-- net/ipv4/route.c | 24 ++++++++++-------------- net/ipv6/icmp.c | 3 +-- net/ipv6/ip6_input.c | 3 +-- 14 files changed, 67 insertions(+), 67 deletions(-) diff -puN arch/x86_64/kernel/mce.c~rcu-use-deref-macro arch/x86_64/kernel/mce.c --- linux-2.6.8-rc3-mm1/arch/x86_64/kernel/mce.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/arch/x86_64/kernel/mce.c 2004-08-07 15:30:07.000000000 +0530 @@ -49,8 +49,7 @@ static void mce_log(struct mce *mce) mce->finished = 0; smp_wmb(); for (;;) { - entry = mcelog.next; - read_barrier_depends(); + entry = rcu_dereference(mcelog.next); /* When the buffer fills up discard new entries. Assume that the earlier errors are the more interesting. */ if (entry >= MCE_LOG_LEN) { @@ -340,9 +339,8 @@ static ssize_t mce_read(struct file *fil int i, err; down(&mce_read_sem); - next = mcelog.next; - read_barrier_depends(); - + next = rcu_dereference(mcelog.next); + /* Only supports full reads right now */ if (*off != 0 || usize < MCE_LOG_LEN*sizeof(struct mce)) { up(&mce_read_sem); diff -puN fs/dcache.c~rcu-use-deref-macro fs/dcache.c --- linux-2.6.8-rc3-mm1/fs/dcache.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/fs/dcache.c 2004-08-07 15:30:07.000000000 +0530 @@ -620,7 +620,7 @@ void shrink_dcache_parent(struct dentry * * Prune the dentries that are anonymous * - * parsing d_hash list does not read_barrier_depends() as it + * parsing d_hash list does not hlist_for_each_rcu() as it * done under dcache_lock. 
* */ @@ -977,11 +977,10 @@ struct dentry * __d_lookup(struct dentry rcu_read_lock(); - hlist_for_each (node, head) { + hlist_for_each_rcu(node, head) { struct dentry *dentry; struct qstr *qstr; - smp_read_barrier_depends(); dentry = hlist_entry(node, struct dentry, d_hash); smp_rmb(); @@ -1008,8 +1007,7 @@ struct dentry * __d_lookup(struct dentry if (dentry->d_parent != parent) goto next; - qstr = &dentry->d_name; - smp_read_barrier_depends(); + qstr = rcu_dereference(&dentry->d_name); if (parent->d_op && parent->d_op->d_compare) { if (parent->d_op->d_compare(parent, qstr, name)) goto next; @@ -1062,7 +1060,7 @@ int d_validate(struct dentry *dentry, st spin_lock(&dcache_lock); base = d_hash(dparent, dentry->d_name.hash); hlist_for_each(lhp,base) { - /* read_barrier_depends() not required for d_hash list + /* hlist_for_each_rcu() not required for d_hash list * as it is parsed under dcache_lock */ if (dentry == hlist_entry(lhp, struct dentry, d_hash)) { diff -puN include/linux/list.h~rcu-use-deref-macro include/linux/list.h --- linux-2.6.8-rc3-mm1/include/linux/list.h~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/include/linux/list.h 2004-08-07 16:36:41.000000000 +0530 @@ -423,11 +423,11 @@ static inline void list_splice_init(stru */ #define list_for_each_rcu(pos, head) \ for (pos = (head)->next, prefetch(pos->next); pos != (head); \ - pos = pos->next, ({ smp_read_barrier_depends(); 0;}), prefetch(pos->next)) + pos = rcu_dereference(pos->next), prefetch(pos->next)) #define __list_for_each_rcu(pos, head) \ for (pos = (head)->next; pos != (head); \ - pos = pos->next, ({ smp_read_barrier_depends(); 0;})) + pos = rcu_dereference(pos->next)) /** * list_for_each_safe_rcu - iterate over an rcu-protected list safe @@ -442,7 +442,7 @@ static inline void list_splice_init(stru */ #define list_for_each_safe_rcu(pos, n, head) \ for (pos = (head)->next, n = pos->next; pos != (head); \ - pos = n, ({ smp_read_barrier_depends(); 0;}), n = pos->next) + pos = rcu_dereference(n), n = pos->next) /** * list_for_each_entry_rcu - iterate over rcu list of given type @@ -458,8 +458,8 @@ static inline void list_splice_init(stru for (pos = list_entry((head)->next, typeof(*pos), member), \ prefetch(pos->member.next); \ &pos->member != (head); \ - pos = list_entry(pos->member.next, typeof(*pos), member), \ - ({ smp_read_barrier_depends(); 0;}), \ + pos = rcu_dereference(list_entry(pos->member.next, \ + typeof(*pos), member)), \ prefetch(pos->member.next)) @@ -475,7 +475,7 @@ static inline void list_splice_init(stru */ #define list_for_each_continue_rcu(pos, head) \ for ((pos) = (pos)->next, prefetch((pos)->next); (pos) != (head); \ - (pos) = (pos)->next, ({ smp_read_barrier_depends(); 0;}), prefetch((pos)->next)) + (pos) = rcu_dereference((pos)->next), prefetch((pos)->next)) /* * Double linked lists with a single pointer list head. @@ -581,12 +581,9 @@ static inline void hlist_add_head(struct * or hlist_del_rcu(), running on this same list. * However, it is perfectly legal to run concurrently with * the _rcu list-traversal primitives, such as - * hlist_for_each_entry(), but only if smp_read_barrier_depends() - * is used to prevent memory-consistency problems on Alpha CPUs. - * Regardless of the type of CPU, the list-traversal primitive - * must be guarded by rcu_read_lock(). - * - * OK, so why don't we have an hlist_for_each_entry_rcu()??? + * hlist_for_each_rcu(), used to prevent memory-consistency + * problems on Alpha CPUs. 
Regardless of the type of CPU, the + * list-traversal primitive must be guarded by rcu_read_lock(). */ static inline void hlist_add_head_rcu(struct hlist_node *n, struct hlist_head *h) @@ -631,6 +628,10 @@ static inline void hlist_add_after(struc for (pos = (head)->first; pos && ({ n = pos->next; 1; }); \ pos = n) +#define hlist_for_each_rcu(pos, head) \ + for ((pos) = (head)->first; pos && ({ prefetch((pos)->next); 1; }); \ + (pos) = rcu_dereference((pos)->next)) + /** * hlist_for_each_entry - iterate over list of given type * @tpos: the type * to use as a loop counter. @@ -696,7 +697,7 @@ static inline void hlist_add_after(struc for (pos = (head)->first; \ pos && ({ prefetch(pos->next); 1;}) && \ ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;}); \ - pos = pos->next, ({ smp_read_barrier_depends(); 0; }) ) + pos = rcu_dereference(pos->next)) #else #warning "don't include kernel headers in userspace" diff -puN include/linux/rcupdate.h~rcu-use-deref-macro include/linux/rcupdate.h --- linux-2.6.8-rc3-mm1/include/linux/rcupdate.h~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/include/linux/rcupdate.h 2004-08-07 15:30:07.000000000 +0530 @@ -221,6 +221,22 @@ static inline int rcu_pending(int cpu) * See rcu_read_lock_bh() for more information. */ #define rcu_read_unlock_bh() local_bh_enable() + +/** + * rcu_dereference - fetch an RCU-protected pointer in an + * RCU read-side critical section. This pointer may later + * be safely dereferenced. + * + * Inserts memory barriers on architectures that require them + * (currently only the Alpha), and, more importantly, documents + * exactly which pointers are protected by RCU. + */ + +#define rcu_dereference(p) ({ \ + typeof(p) _________p1 = p; \ + smp_read_barrier_depends(); \ + (_________p1); \ + }) extern void rcu_init(void); extern void rcu_check_callbacks(int cpu, int user); diff -puN ipc/util.c~rcu-use-deref-macro ipc/util.c --- linux-2.6.8-rc3-mm1/ipc/util.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/ipc/util.c 2004-08-07 15:30:07.000000000 +0530 @@ -100,7 +100,7 @@ int ipc_findkey(struct ipc_ids* ids, key int max_id = ids->max_id; /* - * read_barrier_depends is not needed here + * rcu_dereference() is not needed here * since ipc_ids.sem is held */ for (id = 0; id <= max_id; id++) { @@ -171,7 +171,7 @@ int ipc_addid(struct ipc_ids* ids, struc size = grow_ary(ids,size); /* - * read_barrier_depends() is not needed here since + * rcu_dereference()() is not needed here since * ipc_ids.sem is held */ for (id = 0; id < size; id++) { @@ -220,7 +220,7 @@ struct kern_ipc_perm* ipc_rmid(struct ip BUG(); /* - * do not need a read_barrier_depends() here to force ordering + * do not need a rcu_dereference()() here to force ordering * on Alpha, since the ipc_ids.sem is held. */ p = ids->entries[lid].p; @@ -515,13 +515,12 @@ struct kern_ipc_perm* ipc_lock(struct ip * Note: The following two read barriers are corresponding * to the two write barriers in grow_ary(). They guarantee * the writes are seen in the same order on the read side. - * smp_rmb() has effect on all CPUs. read_barrier_depends() + * smp_rmb() has effect on all CPUs. rcu_dereference() * is used if there are data dependency between two reads, and * has effect only on Alpha. 
*/ smp_rmb(); /* prevent indexing old array with new size */ - entries = ids->entries; - read_barrier_depends(); /*prevent seeing new array unitialized */ + entries = rcu_dereference(ids->entries); out = entries[lid].p; if(out == NULL) { rcu_read_unlock(); diff -puN net/bridge/br_input.c~rcu-use-deref-macro net/bridge/br_input.c --- linux-2.6.8-rc3-mm1/net/bridge/br_input.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/bridge/br_input.c 2004-08-07 15:30:07.000000000 +0530 @@ -56,8 +56,7 @@ int br_handle_frame_finish(struct sk_buf dest = skb->mac.ethernet->h_dest; rcu_read_lock(); - p = skb->dev->br_port; - smp_read_barrier_depends(); + p = rcu_dereference(skb->dev->br_port); if (p == NULL || p->state == BR_STATE_DISABLED) { kfree_skb(skb); diff -puN net/core/dev.c~rcu-use-deref-macro net/core/dev.c --- linux-2.6.8-rc3-mm1/net/core/dev.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/core/dev.c 2004-08-07 15:30:07.000000000 +0530 @@ -1332,8 +1332,7 @@ int dev_queue_xmit(struct sk_buff *skb) * also serializes access to the device queue. */ - q = dev->qdisc; - smp_read_barrier_depends(); + q = rcu_dereference(dev->qdisc); #ifdef CONFIG_NET_CLS_ACT skb->tc_verd = SET_TC_AT(skb->tc_verd,AT_EGRESS); #endif diff -puN net/core/netfilter.c~rcu-use-deref-macro net/core/netfilter.c --- linux-2.6.8-rc3-mm1/net/core/netfilter.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/core/netfilter.c 2004-08-07 15:30:07.000000000 +0530 @@ -783,13 +783,12 @@ void nf_log_packet(int pf, nf_logfn *logfn; rcu_read_lock(); - logfn = nf_logging[pf]; + logfn = rcu_dereference(nf_logging[pf]); if (logfn) { va_start(args, fmt); vsnprintf(prefix, sizeof(prefix), fmt, args); va_end(args); /* We must read logging before nf_logfn[pf] */ - smp_read_barrier_depends(); logfn(hooknum, skb, in, out, prefix); } else if (!reported) { printk(KERN_WARNING "nf_log_packet: can\'t log yet, " diff -puN net/decnet/dn_route.c~rcu-use-deref-macro net/decnet/dn_route.c --- linux-2.6.8-rc3-mm1/net/decnet/dn_route.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/decnet/dn_route.c 2004-08-07 15:30:07.000000000 +0530 @@ -1,4 +1,3 @@ - /* * DECnet An implementation of the DECnet protocol suite for the LINUX * operating system. 
DECnet is implemented using the BSD Socket @@ -1175,8 +1174,8 @@ static int __dn_route_output_key(struct if (!(flags & MSG_TRYHARD)) { rcu_read_lock_bh(); - for(rt = dn_rt_hash_table[hash].chain; rt; rt = rt->u.rt_next) { - smp_read_barrier_depends(); + for(rt = rcu_dereference(dn_rt_hash_table[hash].chain); rt; + rt = rcu_dereference(rt->u.rt_next)) { if ((flp->fld_dst == rt->fl.fld_dst) && (flp->fld_src == rt->fl.fld_src) && #ifdef CONFIG_DECNET_ROUTE_FWMARK @@ -1454,8 +1453,8 @@ int dn_route_input(struct sk_buff *skb) return 0; rcu_read_lock(); - for(rt = dn_rt_hash_table[hash].chain; rt != NULL; rt = rt->u.rt_next) { - read_barrier_depends(); + for(rt = rcu_dereference(dn_rt_hash_table[hash].chain); rt != NULL; + rt = rcu_dereference(rt->u.rt_next)) { if ((rt->fl.fld_src == cb->src) && (rt->fl.fld_dst == cb->dst) && (rt->fl.oif == 0) && @@ -1648,8 +1647,9 @@ int dn_cache_dump(struct sk_buff *skb, s if (h > s_h) s_idx = 0; rcu_read_lock_bh(); - for(rt = dn_rt_hash_table[h].chain, idx = 0; rt; rt = rt->u.rt_next, idx++) { - smp_read_barrier_depends(); + for(rt = rcu_dereference(dn_rt_hash_table[h].chain), idx = 0; + rt; + rt = rcu_dereference(rt->u.rt_next), idx++) { if (idx < s_idx) continue; skb->dst = dst_clone(&rt->u.dst); @@ -1692,9 +1692,8 @@ static struct dn_route *dn_rt_cache_get_ static struct dn_route *dn_rt_cache_get_next(struct seq_file *seq, struct dn_route *rt) { - struct dn_rt_cache_iter_state *s = seq->private; + struct dn_rt_cache_iter_state *s = rcu_dereference(seq->private); - smp_read_barrier_depends(); rt = rt->u.rt_next; while(!rt) { rcu_read_unlock_bh(); diff -puN net/ipv4/icmp.c~rcu-use-deref-macro net/ipv4/icmp.c --- linux-2.6.8-rc3-mm1/net/ipv4/icmp.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/ipv4/icmp.c 2004-08-07 15:30:07.000000000 +0530 @@ -705,8 +705,7 @@ static void icmp_unreach(struct sk_buff read_unlock(&raw_v4_lock); rcu_read_lock(); - ipprot = inet_protos[hash]; - smp_read_barrier_depends(); + ipprot = rcu_dereference(inet_protos[hash]); if (ipprot && ipprot->err_handler) ipprot->err_handler(skb, info); rcu_read_unlock(); diff -puN net/ipv4/ip_input.c~rcu-use-deref-macro net/ipv4/ip_input.c --- linux-2.6.8-rc3-mm1/net/ipv4/ip_input.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/ipv4/ip_input.c 2004-08-07 15:30:07.000000000 +0530 @@ -231,10 +231,9 @@ static inline int ip_local_deliver_finis if (raw_sk) raw_v4_input(skb, skb->nh.iph, hash); - if ((ipprot = inet_protos[hash]) != NULL) { + if ((ipprot = rcu_dereference(inet_protos[hash])) != NULL) { int ret; - smp_read_barrier_depends(); if (!ipprot->no_policy && !xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) { kfree_skb(skb); diff -puN net/ipv4/route.c~rcu-use-deref-macro net/ipv4/route.c --- linux-2.6.8-rc3-mm1/net/ipv4/route.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/ipv4/route.c 2004-08-07 15:30:07.000000000 +0530 @@ -237,9 +237,8 @@ static struct rtable *rt_cache_get_first static struct rtable *rt_cache_get_next(struct seq_file *seq, struct rtable *r) { - struct rt_cache_iter_state *st = seq->private; + struct rt_cache_iter_state *st = rcu_dereference(seq->private); - smp_read_barrier_depends(); r = r->u.rt_next; while (!r) { rcu_read_unlock_bh(); @@ -1004,10 +1003,9 @@ void ip_rt_redirect(u32 old_gw, u32 dadd rthp=&rt_hash_table[hash].chain; rcu_read_lock(); - while ((rth = *rthp) != NULL) { + while ((rth = rcu_dereference(*rthp)) != NULL) { struct 
rtable *rt; - smp_read_barrier_depends(); if (rth->fl.fl4_dst != daddr || rth->fl.fl4_src != skeys[i] || rth->fl.fl4_tos != tos || @@ -1259,9 +1257,8 @@ unsigned short ip_rt_frag_needed(struct unsigned hash = rt_hash_code(daddr, skeys[i], tos); rcu_read_lock(); - for (rth = rt_hash_table[hash].chain; rth; - rth = rth->u.rt_next) { - smp_read_barrier_depends(); + for (rth = rcu_dereference(rt_hash_table[hash].chain); rth; + rth = rcu_dereference(rth->u.rt_next)) { if (rth->fl.fl4_dst == daddr && rth->fl.fl4_src == skeys[i] && rth->rt_dst == daddr && @@ -1864,8 +1861,8 @@ int ip_route_input(struct sk_buff *skb, hash = rt_hash_code(daddr, saddr ^ (iif << 5), tos); rcu_read_lock(); - for (rth = rt_hash_table[hash].chain; rth; rth = rth->u.rt_next) { - smp_read_barrier_depends(); + for (rth = rcu_dereference(rt_hash_table[hash].chain); rth; + rth = rcu_dereference(rth->u.rt_next)) { if (rth->fl.fl4_dst == daddr && rth->fl.fl4_src == saddr && rth->fl.iif == iif && @@ -2232,8 +2229,8 @@ int __ip_route_output_key(struct rtable hash = rt_hash_code(flp->fl4_dst, flp->fl4_src ^ (flp->oif << 5), flp->fl4_tos); rcu_read_lock_bh(); - for (rth = rt_hash_table[hash].chain; rth; rth = rth->u.rt_next) { - smp_read_barrier_depends(); + for (rth = rcu_dereference(rt_hash_table[hash].chain); rth; + rth = rcu_dereference(rth->u.rt_next)) { if (rth->fl.fl4_dst == flp->fl4_dst && rth->fl.fl4_src == flp->fl4_src && rth->fl.iif == 0 && @@ -2464,9 +2461,8 @@ int ip_rt_dump(struct sk_buff *skb, str if (h > s_h) s_idx = 0; rcu_read_lock_bh(); - for (rt = rt_hash_table[h].chain, idx = 0; rt; - rt = rt->u.rt_next, idx++) { - smp_read_barrier_depends(); + for (rt = rcu_dereference(rt_hash_table[h].chain), idx = 0; rt; + rt = rcu_dereference(rt->u.rt_next), idx++) { if (idx < s_idx) continue; skb->dst = dst_clone(&rt->u.dst); diff -puN net/ipv6/icmp.c~rcu-use-deref-macro net/ipv6/icmp.c --- linux-2.6.8-rc3-mm1/net/ipv6/icmp.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/ipv6/icmp.c 2004-08-07 15:30:07.000000000 +0530 @@ -530,8 +530,7 @@ static void icmpv6_notify(struct sk_buff hash = nexthdr & (MAX_INET_PROTOS - 1); rcu_read_lock(); - ipprot = inet6_protos[hash]; - smp_read_barrier_depends(); + ipprot = rcu_dereference(inet6_protos[hash]); if (ipprot && ipprot->err_handler) ipprot->err_handler(skb, NULL, type, code, inner_offset, info); rcu_read_unlock(); diff -puN net/ipv6/ip6_input.c~rcu-use-deref-macro net/ipv6/ip6_input.c --- linux-2.6.8-rc3-mm1/net/ipv6/ip6_input.c~rcu-use-deref-macro 2004-08-07 15:30:07.000000000 +0530 +++ linux-2.6.8-rc3-mm1-dipankar/net/ipv6/ip6_input.c 2004-08-07 15:30:07.000000000 +0530 @@ -167,10 +167,9 @@ resubmit: ipv6_raw_deliver(skb, nexthdr); hash = nexthdr & (MAX_INET_PROTOS - 1); - if ((ipprot = inet6_protos[hash]) != NULL) { + if ((ipprot = rcu_dereference(inet6_protos[hash])) != NULL) { int ret; - smp_read_barrier_depends(); if (ipprot->flags & INET6_PROTO_FINAL) { struct ipv6hdr *hdr; _ ^ permalink raw reply [flat|nested] 7+ messages in thread
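The shape of the conversion, shown on a hypothetical chain (struct item, item_head and the two functions are illustration only, not from the patch). Both versions must be called under rcu_read_lock() -- or rcu_read_lock_bh() for call_rcu_bh()-managed data -- and the returned pointer is only valid until the read-side critical section ends.

#include <linux/rcupdate.h>

struct item {
	struct item *next;
	int key;
};

static struct item *item_head;

/* Before: the dependency barrier is explicit, and it is not obvious
 * which pointer load it is ordering against. */
static struct item *find_item_old(int key)
{
	struct item *p;

	for (p = item_head; p; p = p->next) {
		smp_read_barrier_depends();
		if (p->key == key)
			return p;
	}
	return NULL;
}

/* After: rcu_dereference() wraps each load of an RCU-protected pointer,
 * supplying the barrier where needed (Alpha) and documenting exactly
 * which pointers RCU protects. */
static struct item *find_item_new(int key)
{
	struct item *p;

	for (p = rcu_dereference(item_head); p;
	     p = rcu_dereference(p->next)) {
		if (p->key == key)
			return p;
	}
	return NULL;
}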
* Re: RCU : various patches [0/5]
  2004-08-07 19:15 RCU : various patches [0/5] Dipankar Sarma
  2004-08-07 19:17 ` RCU : clean up code [1/5] Dipankar Sarma
@ 2004-08-09  1:17 ` Rusty Russell
  1 sibling, 0 replies; 7+ messages in thread

From: Rusty Russell @ 2004-08-09 1:17 UTC (permalink / raw)
To: Dipankar Sarma; +Cc: Andrew Morton, Paul E. McKenney, Robert Olsson, netdev

On Sun, 2004-08-08 at 05:15, Dipankar Sarma wrote:
> This is a series of patches that are currently in my tree.

Like the cleanup. The call_rcu_bh looks solid too.

Thanks,
Rusty.
-- 
Anyone who quotes me in their signature is an idiot -- Rusty Russell

^ permalink raw reply	[flat|nested] 7+ messages in thread
end of thread, other threads: [~2004-08-09  1:17 UTC | newest]

Thread overview: 7+ messages
2004-08-07 19:15 RCU : various patches [0/5] Dipankar Sarma
2004-08-07 19:17 ` RCU : clean up code [1/5] Dipankar Sarma
2004-08-07 19:18 ` RCU : Introduce call_rcu_bh() [2/5] Dipankar Sarma
2004-08-07 19:20 ` RCU : Use call_rcu_bh() in route cache [3/5] Dipankar Sarma
2004-08-07 19:21 ` RCU : Document RCU api [4/5] Dipankar Sarma
2004-08-07 19:24 ` RCU : Abstracted RCU dereferencing [5/5] Dipankar Sarma
2004-08-09  1:17 ` RCU : various patches [0/5] Rusty Russell