* [PATCH] PER_CPU [4/4] - PER_CPU-irq_stat
@ 2004-07-09 9:54 Shai Fultheim
2004-07-12 0:02 ` Anton Blanchard
0 siblings, 1 reply; 6+ messages in thread
From: Shai Fultheim @ 2004-07-09 9:54 UTC (permalink / raw)
To: 'Andrew Morton'
Cc: 'Linux Kernel ML', 'Jes Sorensen', mort
[-- Attachment #1: Type: text/plain, Size: 5892 bytes --]
[SECOND SUBMITTAL - Thanks for all the comments]
Andrew,
Please find below one of a collection of patches that move NR_CPUS-sized array
variables to the per-cpu area. Please consider applying; any comments will be
highly appreciated.
The patches (applied together) were tested with make allmodconfig and
defconfig, and boot my system nicely.
1/4. PER_CPU-cpu_gdt_table
2/4. PER_CPU-init_tss
3/4. PER_CPU-cpu_tlbstate
4/4. PER_CPU-irq_stat
PER_CPU-irq_stat:
arch/i386/kernel/apic.c | 2 +-
arch/i386/kernel/io_apic.c | 2 +-
arch/i386/kernel/irq.c | 3 ++-
arch/i386/kernel/nmi.c | 4 ++--
arch/i386/kernel/process.c | 2 +-
include/linux/irq_cpustat.h | 4 ++--
kernel/softirq.c | 4 ++--
7 files changed, 11 insertions(+), 10 deletions(-)
Signed-off-by: Martin Hicks <mort@wildopensource.com>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
=================================================================================
# This is a BitKeeper generated diff -Nru style patch.
#
# ChangeSet
# 2004/07/09 01:33:31-07:00 shai@compile.(none)
# softirq.c, irq_cpustat.h, process.c, nmi.c, irq.c, io_apic.c, apic.c:
# PER_CPU-irq_stat
# Convert irq_stat into a per_cpu variable.
#
# Signed-off-by: Martin Hicks <mort@wildopensource.com>
# Signed-off-by: Shai Fultheim <shai@scalex86.org>
#
# kernel/softirq.c
# 2004/07/09 01:29:29-07:00 shai@compile.(none) +2 -2
# PER_CPU-irq_stat
#
# include/linux/irq_cpustat.h
# 2004/07/09 01:29:29-07:00 shai@compile.(none) +2 -2
# PER_CPU-irq_stat
#
# arch/i386/kernel/process.c
# 2004/07/09 01:29:29-07:00 shai@compile.(none) +1 -1
# PER_CPU-irq_stat
#
# arch/i386/kernel/nmi.c
# 2004/07/09 01:29:29-07:00 shai@compile.(none) +2 -2
# PER_CPU-irq_stat
#
# arch/i386/kernel/irq.c
# 2004/07/09 01:29:29-07:00 shai@compile.(none) +2 -1
# PER_CPU-irq_stat
#
# arch/i386/kernel/io_apic.c
# 2004/07/09 01:29:29-07:00 shai@compile.(none) +1 -1
# PER_CPU-irq_stat
#
# arch/i386/kernel/apic.c
# 2004/07/09 01:29:29-07:00 shai@compile.(none) +1 -1
# PER_CPU-irq_stat
#
diff -Nru a/arch/i386/kernel/apic.c b/arch/i386/kernel/apic.c
--- a/arch/i386/kernel/apic.c 2004-07-09 01:34:13 -07:00
+++ b/arch/i386/kernel/apic.c 2004-07-09 01:34:13 -07:00
@@ -1093,7 +1093,7 @@
/*
* the NMI deadlock-detector uses this.
*/
- irq_stat[cpu].apic_timer_irqs++;
+ per_cpu(irq_stat, cpu).apic_timer_irqs++;
/*
* NOTE! We'd better ACK the irq immediately,
diff -Nru a/arch/i386/kernel/io_apic.c b/arch/i386/kernel/io_apic.c
--- a/arch/i386/kernel/io_apic.c 2004-07-09 01:34:13 -07:00
+++ b/arch/i386/kernel/io_apic.c 2004-07-09 01:34:13 -07:00
@@ -273,7 +273,7 @@
#define IRQ_DELTA(cpu,irq) (irq_cpu_data[cpu].irq_delta[irq])
#define IDLE_ENOUGH(cpu,now) \
- (idle_cpu(cpu) && ((now) - irq_stat[(cpu)].idle_timestamp > 1))
+ (idle_cpu(cpu) && ((now) - per_cpu(irq_stat, (cpu)).idle_timestamp > 1))
#define IRQ_ALLOWED(cpu, allowed_mask) cpu_isset(cpu, allowed_mask)
diff -Nru a/arch/i386/kernel/irq.c b/arch/i386/kernel/irq.c
--- a/arch/i386/kernel/irq.c 2004-07-09 01:34:13 -07:00
+++ b/arch/i386/kernel/irq.c 2004-07-09 01:34:13 -07:00
@@ -187,7 +187,8 @@
seq_printf(p, "LOC: ");
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
- seq_printf(p, "%10u ", irq_stat[j].apic_timer_irqs);
+ seq_printf(p, "%10u ",
+ per_cpu(irq_stat, j).apic_timer_irqs);
seq_putc(p, '\n');
#endif
seq_printf(p, "ERR: %10u\n", atomic_read(&irq_err_count));
diff -Nru a/arch/i386/kernel/nmi.c b/arch/i386/kernel/nmi.c
--- a/arch/i386/kernel/nmi.c 2004-07-09 01:34:13 -07:00
+++ b/arch/i386/kernel/nmi.c 2004-07-09 01:34:13 -07:00
@@ -106,7 +106,7 @@
printk(KERN_INFO "testing NMI watchdog ... ");
for (cpu = 0; cpu < NR_CPUS; cpu++)
- prev_nmi_count[cpu] = irq_stat[cpu].__nmi_count;
+ prev_nmi_count[cpu] = per_cpu(irq_stat, cpu).__nmi_count;
local_irq_enable();
mdelay((10*1000)/nmi_hz); // wait 10 ticks
@@ -469,7 +469,7 @@
*/
int sum, cpu = smp_processor_id();
- sum = irq_stat[cpu].apic_timer_irqs;
+ sum = per_cpu(irq_stat, cpu).apic_timer_irqs;
if (last_irq_sums[cpu] == sum) {
/*
diff -Nru a/arch/i386/kernel/process.c b/arch/i386/kernel/process.c
--- a/arch/i386/kernel/process.c 2004-07-09 01:34:13 -07:00
+++ b/arch/i386/kernel/process.c 2004-07-09 01:34:13 -07:00
@@ -147,7 +147,7 @@
if (!idle)
idle = default_idle;
- irq_stat[smp_processor_id()].idle_timestamp = jiffies;
+ __get_cpu_var(irq_stat).idle_timestamp = jiffies;
idle();
}
schedule();
diff -Nru a/include/linux/irq_cpustat.h b/include/linux/irq_cpustat.h
--- a/include/linux/irq_cpustat.h 2004-07-09 01:34:13 -07:00
+++ b/include/linux/irq_cpustat.h 2004-07-09 01:34:13 -07:00
@@ -18,8 +18,8 @@
*/
#ifndef __ARCH_IRQ_STAT
-extern irq_cpustat_t irq_stat[]; /* defined in asm/hardirq.h */
-#define __IRQ_STAT(cpu, member) (irq_stat[cpu].member)
+DECLARE_PER_CPU(irq_cpustat_t, irq_stat); /* defined in kernel/softirq.c */
+#define __IRQ_STAT(cpu, member) (per_cpu(irq_stat, cpu).member)
#endif
/* arch independent irq_stat fields */
diff -Nru a/kernel/softirq.c b/kernel/softirq.c
--- a/kernel/softirq.c 2004-07-09 01:34:13 -07:00
+++ b/kernel/softirq.c 2004-07-09 01:34:13 -07:00
@@ -36,8 +36,8 @@
*/
#ifndef __ARCH_IRQ_STAT
-irq_cpustat_t irq_stat[NR_CPUS] ____cacheline_aligned;
-EXPORT_SYMBOL(irq_stat);
+DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_maxaligned_in_smp;
+EXPORT_PER_CPU_SYMBOL(irq_stat);
#endif
static struct softirq_action softirq_vec[32] __cacheline_aligned_in_smp;
=================================================================================
-----------------
Shai Fultheim
Scalex86.org
* Re: [PATCH] PER_CPU [4/4] - PER_CPU-irq_stat
2004-07-09 9:54 [PATCH] PER_CPU [4/4] - PER_CPU-irq_stat Shai Fultheim
@ 2004-07-12 0:02 ` Anton Blanchard
2004-07-12 4:44 ` Shai Fultheim
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Anton Blanchard @ 2004-07-12 0:02 UTC (permalink / raw)
To: Shai Fultheim
Cc: 'Andrew Morton', 'Linux Kernel ML',
'Jes Sorensen', mort
Hi,
> Please find below one of a collection of patches that move NR_CPUS-sized
> array variables to the per-cpu area. Please consider applying; any
> comments will be highly appreciated.
...
> diff -Nru a/kernel/softirq.c b/kernel/softirq.c
> --- a/kernel/softirq.c 2004-07-09 01:34:13 -07:00
> +++ b/kernel/softirq.c 2004-07-09 01:34:13 -07:00
> @@ -36,8 +36,8 @@
> */
>
> #ifndef __ARCH_IRQ_STAT
> -irq_cpustat_t irq_stat[NR_CPUS] ____cacheline_aligned;
> -EXPORT_SYMBOL(irq_stat);
> +DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_maxaligned_in_smp;
> +EXPORT_PER_CPU_SYMBOL(irq_stat);
> #endif
Is there a need for the cacheline alignment? We want to keep that per
cpu data area as packed as possible; we only want to explicitly pad when
we need to (e.g. when other CPUs access that variable a lot).
Also it looks like we will have to push the above change into the other
architectures.
Anton
* RE: [PATCH] PER_CPU [4/4] - PER_CPU-irq_stat
2004-07-12 0:02 ` Anton Blanchard
@ 2004-07-12 4:44 ` Shai Fultheim
[not found] ` <200407120444.i6C4iAws031156@fire-2.osdl.org>
[not found] ` <20040712044410.46293162B72@lists.samba.org>
2 siblings, 0 replies; 6+ messages in thread
From: Shai Fultheim @ 2004-07-12 4:44 UTC (permalink / raw)
To: 'Anton Blanchard'
Cc: 'Andrew Morton', 'Linux Kernel ML',
'Jes Sorensen', mort
> > diff -Nru a/kernel/softirq.c b/kernel/softirq.c
> > --- a/kernel/softirq.c 2004-07-09 01:34:13 -07:00
> > +++ b/kernel/softirq.c 2004-07-09 01:34:13 -07:00
> > @@ -36,8 +36,8 @@
> > */
> >
> > #ifndef __ARCH_IRQ_STAT
> > -irq_cpustat_t irq_stat[NR_CPUS] ____cacheline_aligned;
> > -EXPORT_SYMBOL(irq_stat);
> > +DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_maxaligned_in_smp;
> > +EXPORT_PER_CPU_SYMBOL(irq_stat);
> > #endif
>
> Is there a need for the cacheline alignment? We want to keep that per
> cpu data area as packed as possible; we only want to explicitly pad when
> we need to (e.g. when other CPUs access that variable a lot).
>
> Also it looks like we will have to push the above change into the other
> architectures.
>
> Anton
>
IMHO, we want to keep irq_stat aligned for performance reasons. You
can never know whether the data before and after it in the per-cpu area
will be cached (and therefore crossing a cache-line boundary will cost
us more).
Anyhow, since it is also accessed by other CPUs (not a lot...), I think
it's better to keep it aligned (the utilization of per-cpu areas is so
low right now that it doesn't really matter).
--shai
* Re: [PATCH] PER_CPU [4/4] - PER_CPU-irq_stat
[not found] ` <200407120444.i6C4iAws031156@fire-2.osdl.org>
@ 2004-07-12 5:11 ` Andrew Morton
2004-07-12 5:19 ` Shai Fultheim
0 siblings, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2004-07-12 5:11 UTC (permalink / raw)
To: Shai Fultheim; +Cc: anton, linux-kernel, jes, mort
"Shai Fultheim" <shai@scalex86.org> wrote:
>
> > > diff -Nru a/kernel/softirq.c b/kernel/softirq.c
> > > --- a/kernel/softirq.c 2004-07-09 01:34:13 -07:00
> > > +++ b/kernel/softirq.c 2004-07-09 01:34:13 -07:00
> > > @@ -36,8 +36,8 @@
> > > */
> > >
> > > #ifndef __ARCH_IRQ_STAT
> > > -irq_cpustat_t irq_stat[NR_CPUS] ____cacheline_aligned;
> > > -EXPORT_SYMBOL(irq_stat);
> > > +DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_maxaligned_in_smp;
> > > +EXPORT_PER_CPU_SYMBOL(irq_stat);
> > > #endif
> >
> > Is there a need for the cacheline alignment? We want to keep that per
> > cpu data area as packed as possible, we only want to explicitly pad if
> > we need to (eg other cpus are accessing that variable a lot).
> >
> > Also it looks like we will have to push the above change into the other
> > architectures.
> >
> > Anton
> >
>
> IMHO, we want to keep irq_stat aligned for performance reasons. You
> can never know whether the data before and after it in the per-cpu area
> will be cached (and therefore crossing a cache-line boundary will cost
> us more).
>
> Anyhow, since it is also accessed by other CPUs (not a lot...), I think
> it's better to keep it aligned (the utilization of per-cpu areas is so
> low right now that it doesn't really matter).
>
That seems a bit debatable.
But anyway, as Anton points out, the patch breaks !x86 architectures.
* RE: [PATCH] PER_CPU [4/4] - PER_CPU-irq_stat
2004-07-12 5:11 ` Andrew Morton
@ 2004-07-12 5:19 ` Shai Fultheim
0 siblings, 0 replies; 6+ messages in thread
From: Shai Fultheim @ 2004-07-12 5:19 UTC (permalink / raw)
To: 'Andrew Morton'; +Cc: anton, linux-kernel, jes, mort
"Andrew Morton" <akpm@osdl.org>:
>
> "Shai Fultheim" <shai@scalex86.org> wrote:
> >
> > > > diff -Nru a/kernel/softirq.c b/kernel/softirq.c
> > > > --- a/kernel/softirq.c 2004-07-09 01:34:13 -07:00
> > > > +++ b/kernel/softirq.c 2004-07-09 01:34:13 -07:00
> > > > @@ -36,8 +36,8 @@
> > > > */
> > > >
> > > > #ifndef __ARCH_IRQ_STAT
> > > > -irq_cpustat_t irq_stat[NR_CPUS] ____cacheline_aligned;
> > > > -EXPORT_SYMBOL(irq_stat);
> > > > +DEFINE_PER_CPU(irq_cpustat_t, irq_stat) ____cacheline_maxaligned_in_smp;
> > > > +EXPORT_PER_CPU_SYMBOL(irq_stat);
> > > > #endif
> > >
> > > Is there a need for the cacheline alignment? We want to keep that per
> > > cpu data area as packed as possible; we only want to explicitly pad
> > > when we need to (e.g. when other CPUs access that variable a lot).
> > >
> > > Also it looks like we will have to push the above change into the
> > > other architectures.
> > >
> > > Anton
> > >
> >
> > IMHO, we want to keep irq_stat aligned for performance reasons. You
> > can never know whether the data before and after it in the per-cpu
> > area will be cached (and therefore crossing a cache-line boundary
> > will cost us more).
> >
> > Anyhow, since it is also accessed by other CPUs (not a lot...), I
> > think it's better to keep it aligned (the utilization of per-cpu
> > areas is so low right now that it doesn't really matter).
> >
>
> That seems a bit debatable.
>
> But anyway, as Anton points out, the patch breaks !x86 architectures.
>
It seems that I missed that; I will send an update soon.
-----------------
Shai Fultheim
Scalex86.org
* Re: [PATCH] PER_CPU [4/4] - PER_CPU-irq_stat
[not found] ` <20040712044410.46293162B72@lists.samba.org>
@ 2004-07-12 8:57 ` Anton Blanchard
0 siblings, 0 replies; 6+ messages in thread
From: Anton Blanchard @ 2004-07-12 8:57 UTC (permalink / raw)
To: Shai Fultheim
Cc: 'Andrew Morton', 'Linux Kernel ML',
'Jes Sorensen', mort
> Anyhow, since that also accesses by other CPUs (not a lot...), I think it's
> better to keep it aligned (the utilization of per-cpu areas is so low now
> that it doesn't really matter).
I've seen the per-cpu data area exceed 32kB on ppc64. Considering the L1
dcache on POWER4 is only 32kB, I'd prefer not to bloat it any more than
necessary.
Anton