public inbox for linux-ia64@vger.kernel.org
* Uncached memory allocator for ia64.
@ 2004-09-14 15:16 Robin Holt
  2004-09-15  8:23 ` David Mosberger
                   ` (10 more replies)
  0 siblings, 11 replies; 12+ messages in thread
From: Robin Holt @ 2004-09-14 15:16 UTC (permalink / raw)
  To: linux-ia64

In an effort to get the SGI Special Memory driver into the kernel,
Christoph Hellwig pointed me at a discussion of a general purpose uncached
memory allocator.  The thread is here:

http://www.gelato.unsw.edu.au/linux-ia64/0307/6218.html

I would like to reopen this discussion to determine the scope of work
that would need to be done.

I would like to start with the general question: what are we trying to
solve?  I cannot think of a single reason, aside from the previously
discussed min-state area, for the kernel ever to need to work with
uncached memory.

Assuming there is no reason, can we pare this discussion back to a page
based allocator?  That would be much simpler to work with and would not
need to recombine fragments.

Given a page based allocator, can we just use the code that is in the
fetchop driver?  It does a per-node page based allocation.  Can that
code be renamed to no longer mention mspec?  Where in the tree should
this functionality live?  Would it be acceptable to always work with
physical addresses?
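For concreteness, the page-based, per-node scheme in question could be
modeled as a simple bitmap over one granule.  This is only a user-space
sketch for discussion: the names (uc_granule, uc_page_alloc, uc_page_free)
and the sizes are assumptions, not the fetchop driver's actual interface.

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

#define PAGES_PER_GRANULE 16   /* stand-in for IA64_GRANULE_SIZE / PAGE_SIZE */

/* One granule's worth of uncached pages, tracked by a bitmap. */
struct uc_granule {
	unsigned long paddr;   /* physical base of the granule */
	unsigned long bits[PAGES_PER_GRANULE /
			   (sizeof(unsigned long) * CHAR_BIT) + 1];
};

/* Return the physical address of a free page, or 0 if none are left. */
static unsigned long uc_page_alloc(struct uc_granule *g,
				   unsigned long page_size)
{
	for (int i = 0; i < PAGES_PER_GRANULE; i++) {
		unsigned long mask =
			1UL << (i % (sizeof(unsigned long) * CHAR_BIT));
		unsigned long *w =
			&g->bits[i / (sizeof(unsigned long) * CHAR_BIT)];
		if (!(*w & mask)) {
			*w |= mask;   /* mark the page in use */
			return g->paddr + (unsigned long)i * page_size;
		}
	}
	return 0;
}

/* Return a previously allocated page to the bitmap. */
static void uc_page_free(struct uc_granule *g, unsigned long paddr,
			 unsigned long page_size)
{
	int i = (paddr - g->paddr) / page_size;
	g->bits[i / (sizeof(unsigned long) * CHAR_BIT)] &=
		~(1UL << (i % (sizeof(unsigned long) * CHAR_BIT)));
}
```

Working purely in physical addresses, as asked above, keeps the allocator
independent of whether the caller wants an uncached or write-combined
mapping of the page.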

Thanks,
Robin Holt


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
@ 2004-09-15  8:23 ` David Mosberger
  2004-09-15 11:04 ` Robin Holt
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: David Mosberger @ 2004-09-15  8:23 UTC (permalink / raw)
  To: linux-ia64

>>>>> On Tue, 14 Sep 2004 10:16:29 -0500, Robin Holt <holt@sgi.com> said:

  Robin> I would like to start with the general question: what are we
  Robin> trying to solve?  I cannot think of a single reason, aside
  Robin> from the previously discussed min-state area, for the kernel
  Robin> ever to need to work with uncached memory.

Uh, what about device drivers that want to map physical memory with
write-combine?  Isn't that effectively what your fetchop driver does?

  Robin> Assuming there is no reason, can we pare this discussion back
  Robin> to a page based allocator?  That would be much simpler to
  Robin> work with and would not need to recombine fragments.

Quite possibly.  It certainly seems reasonable to start that way.

	--david


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
  2004-09-15  8:23 ` David Mosberger
@ 2004-09-15 11:04 ` Robin Holt
  2004-09-15 11:14 ` David Mosberger
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Robin Holt @ 2004-09-15 11:04 UTC (permalink / raw)
  To: linux-ia64

On Wed, Sep 15, 2004 at 01:23:55AM -0700, David Mosberger wrote:
> >>>>> On Tue, 14 Sep 2004 10:16:29 -0500, Robin Holt <holt@sgi.com> said:
> 
>   Robin> I would like to start with the general question: what are we
>   Robin> trying to solve?  I cannot think of a single reason, aside
>   Robin> from the previously discussed min-state area, for the kernel
>   Robin> ever to need to work with uncached memory.
> 
> Uh, what about device drivers that want to map physical memory with
> write-combine?  Isn't that effectively what your fetchop driver does?

That is exactly what it does, but I was wondering if there are other
examples of drivers that do this.  If not, I would still like to
push for doing the minimum necessary to keep from designing something
that has no users.

> 
>   Robin> Assuming there is no reason, can we pare this discussion back
>   Robin> to a page based allocator?  That would be much simpler to
>   Robin> work with and would not need to recombine fragments.
> 
> Quite possibly.  It certainly seems reasonable to start that way.
> 
> 	--david


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
  2004-09-15  8:23 ` David Mosberger
  2004-09-15 11:04 ` Robin Holt
@ 2004-09-15 11:14 ` David Mosberger
  2004-09-17 14:34 ` Robin Holt
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: David Mosberger @ 2004-09-15 11:14 UTC (permalink / raw)
  To: linux-ia64

>>>>> On Wed, 15 Sep 2004 06:04:51 -0500, Robin Holt <holt@sgi.com> said:

  Robin> That is exactly what it does, but I was wondering if there are other
  Robin> examples of drivers that do this.  If not, I would still like to
  Robin> push for doing the minimum necessary to keep from designing something
  Robin> that has no users.

There certainly have been requests for that in the past.  I think one
such request came from someone building cluster interconnects.  Also,
on some platforms it is helpful to map the AGP memory via
write-combine, though we fixed that particular issue for ia64 by
requiring write-back mapping support.

	--david


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
                   ` (2 preceding siblings ...)
  2004-09-15 11:14 ` David Mosberger
@ 2004-09-17 14:34 ` Robin Holt
  2004-09-23 21:09 ` Robin Holt
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Robin Holt @ 2004-09-17 14:34 UTC (permalink / raw)
  To: linux-ia64


I have done a little testing on the uncached allocator.  I think the
problem may be bigger than I originally expected.

I made a simple driver.  On load, it allocated an entire granule and, I
think, correctly did all the flushes called for in the processor manual
including the PAL call.  A user could then mmap the entire chunk as
uncached and work with it.  I did not get any sort of MCAs from this run.

I then started the same app, which referenced the first word of each page
uncached.  I added a timer interrupt which scanned all the page structs
on the node from which the granule was allocated and referenced the page
inside an impossible if statement (next to impossible, as the machine
would have to be up for a large number of years).  This, I believe,
resulted in the cache line being speculatively read and marked dirty.
By running this test for about 8 minutes, I was able to cause an MCA due
to having both a cached and an uncached reference to the same cache line
on the FSB.

Note, I was scanning all the page structs for the node, not just the
ones for this granule.
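The shape of that never-taken reference can be modeled in user space
roughly as below.  All names here are illustrative, not the real test
driver, and of course a user-space run cannot reproduce the ia64
speculation itself; it only shows the structure of the scan.

```c
#include <assert.h>
#include <stdint.h>

/* The branch body never executes, but on ia64 the CPU may still
 * speculatively read the cache line named inside it, pulling in a cached
 * copy of memory that is also mapped uncached elsewhere. */

static uint64_t fake_rtc;   /* stand-in for a clock whose top bit would
                             * take many years of uptime to become set */
static int body_ran;        /* records whether the branch was ever taken */

static void scan_pages(volatile char *pages, int npages, int page_size)
{
	for (int i = 0; i < npages; i++) {
		if (fake_rtc >> 63) {   /* "impossible" condition */
			pages[(long)i * page_size] = 1;
			body_ran = 1;
		}
	}
}
```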

Based on this test, I was wondering if it is safe to reuse a granule and
leave the page structs in place.  Is this test representative of events
which could happen?  Can we destroy the page structs on a running system?

Thank you in advance for any direction anybody can give me.

Thanks,
Robin Holt


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
                   ` (3 preceding siblings ...)
  2004-09-17 14:34 ` Robin Holt
@ 2004-09-23 21:09 ` Robin Holt
  2004-09-23 22:12 ` Luck, Tony
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Robin Holt @ 2004-09-23 21:09 UTC (permalink / raw)
  To: linux-ia64

On Fri, Sep 17, 2004 at 09:34:58AM -0500, Robin Holt wrote:
> 
> I have done a little testing on the uncached allocator.  I think the
> problem may be bigger than I originally expected.

> I made a simple driver.  On load, it allocated an entire granule and,
> I think, correctly did all the flushes called for in the processor
> manual including the PAL call.  A user could then mmap the entire
> chunk as uncached and work with it.  I did not get any sort of MCAs
> from this run.

To be a little more specific, I was following section 4.4.6.1, Disabling
Prefetch and Removing Cacheability.  Jack Steiner made a comment to the
effect that there were additional steps that he knew someone had
determined were necessary.  Unfortunately, he is on vacation now.

> 
> I then started the same app, which referenced the first word of each
> page uncached.  I added a timer interrupt which scanned all the page
> structs on the node from which the granule was allocated and referenced
> the page inside an impossible if statement (next to impossible, as the
> machine would have to be up for a large number of years).  This, I
> believe, resulted in the cache line being speculatively read and marked
> dirty.  By running this test for about 8 minutes, I was able to cause
> an MCA due to having both a cached and an uncached reference to the
> same cache line on the FSB.

> Note, I was scanning all the page structs for the node, not just the
> ones for this granule.

> Based on this test, I was wondering if it is safe to reuse a granule
> and leave the page structs in place.  Is this test representative
> of events which could happen?  Can we destroy the page structs on a
> running system?

> Thank you in advance for any direction anybody can give me.

I am not sure what will be acceptable at this point.  Should I write
an uncached allocator which grabs the granules at boot time before they
are ever initialized for cacheable use?  If so, would it be acceptable
to just shrink each efi memory map entry by a command line specified
size during the efi_memmap_walk callout?  At this point I am so vague
on what I should be doing that I am afraid to do much of anything.
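A command-line-specified reservation of the kind floated above might be
parsed along these lines.  This is a user-space sketch using strtoul; the
option name `uncached_size=` and the rounding policy are assumptions for
discussion (a kernel version would hang off __setup() instead).

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define GRANULE_SIZE (16UL * 1024 * 1024)   /* 16MB ia64 granule */

/* Return the requested reservation rounded up to whole granules,
 * or 0 if the hypothetical option is absent from the command line. */
static unsigned long parse_uncached_size(const char *cmdline)
{
	const char *p = strstr(cmdline, "uncached_size=");
	if (!p)
		return 0;
	unsigned long bytes =
		strtoul(p + strlen("uncached_size="), NULL, 0);
	/* Reserve whole granules, since uncached conversion is
	 * granule-granular. */
	return (bytes + GRANULE_SIZE - 1) & ~(GRANULE_SIZE - 1);
}
```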

Thanks again,
Robin Holt


* RE: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
                   ` (4 preceding siblings ...)
  2004-09-23 21:09 ` Robin Holt
@ 2004-09-23 22:12 ` Luck, Tony
  2004-09-23 23:01 ` Robin Holt
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Luck, Tony @ 2004-09-23 22:12 UTC (permalink / raw)
  To: linux-ia64


>I am not sure what will be acceptable at this point.  Should I write
>an uncached allocator which grabs the granules at boot time before they
>are ever initialized for cacheable use?  If so, would it be acceptable
>to just shrink each efi memory map entry by a command line specified
>size during the efi_memmap_walk callout?  At this point I am so vague
>on what I should be doing that I am afraid to do much of anything.

We already make adjustments to the efi memory map (to trim sections to
granule boundaries) ... but another hack on top of the layers of ugliness
there already is going to make things worse.  Perhaps someday we can
delete it all and start over.

Grabbing your memory out of the map before any of the rest of Linux
ever sees it sounds like a good idea ... it avoids wasting memory
for page structures in mem_map that can only get you into trouble
if anyone ever looks at them.

If your allocator only needs a small number of pages from each node, then
it is possible that you'd be able to feed it the trimmed off scraps
from incomplete granules, rather than pull a whole granule.  So you
might want to run your scan through the efi map before anyone else
messes with it.

-Tony


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
                   ` (5 preceding siblings ...)
  2004-09-23 22:12 ` Luck, Tony
@ 2004-09-23 23:01 ` Robin Holt
  2004-09-25 12:40 ` Robin Holt
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Robin Holt @ 2004-09-23 23:01 UTC (permalink / raw)
  To: linux-ia64

On Thu, Sep 23, 2004 at 03:12:40PM -0700, Luck, Tony wrote:
> 
> >I am not sure what will be acceptable at this point.  Should I write
> >an uncached allocator which grabs the granules at boot time before they
> >are ever initialized for cacheable use?  If so, would it be acceptable
> >to just shrink each efi memory map entry by a command line specified
> >size during the efi_memmap_walk callout?  At this point I am so vague
> >on what I should be doing that I am afraid to do much of anything.
> 
> We already make adjustments to the efi memory map (to trim sections to
> granule boundaries) ... but another hack on top of the layers of ugliness
> there already is going to make things worse.  Perhaps someday we can
> delete it all and start over.
> 
> Grabbing your memory out of the map before any of the rest of Linux
> ever sees it sounds to be a good idea ... it avoids wasting memory
> for page structures in mem_map that can only get you into trouble
> if anyone ever looks at them.
> 
> If your allocator only needs a small number of pages from each node, then
> it is possible that you'd be able to feed it the trimmed off scraps
> from incomplete granules, rather than pull a whole granule.  So you
> might want to run your scan through the efi map before anyone else
> messes with it.

Sounds like I have a direction.  I will try to put together a patch
tomorrow morning to at least get the discussion started.

Thanks,
Robin


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
                   ` (6 preceding siblings ...)
  2004-09-23 23:01 ` Robin Holt
@ 2004-09-25 12:40 ` Robin Holt
  2004-09-29 14:24 ` David Mosberger
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Robin Holt @ 2004-09-25 12:40 UTC (permalink / raw)
  To: linux-ia64

On Thu, Sep 23, 2004 at 03:12:40PM -0700, Luck, Tony wrote:
> 
> >I am not sure what will be acceptable at this point.  Should I write
> >an uncached allocator which grabs the granules at boot time before they
> >are ever initialized for cacheable use?  If so, would it be acceptable
> >to just shrink each efi memory map entry by a command line specified
> >size during the efi_memmap_walk callout?  At this point I am so vague
> >on what I should be doing that I am afraid to do much of anything.
> 
> We already make adjustments to the efi memory map (to trim sections to
> granule boundaries) ... but another hack on top of the layers of ugliness
> there already is going to make things worse.  Perhaps someday we can
> delete it all and start over.
> 
> Grabbing your memory out of the map before any of the rest of Linux
> ever sees it sounds like a good idea ... it avoids wasting memory
> for page structures in mem_map that can only get you into trouble
> if anyone ever looks at them.
> 
> If your allocator only needs a small number of pages from each node, then
> it is possible that you'd be able to feed it the trimmed off scraps
> from incomplete granules, rather than pull a whole granule.  So you
> might want to run your scan through the efi map before anyone else
> messes with it.
> 

I have a first pass at this.  This has not even been compiled yet.
It is only a check to ensure I am on the right track.

Robin

--------   uncached_allocator   --------
Index: linux-2.6/arch/ia64/mm/discontig.c
===================================================================
--- linux-2.6.orig/arch/ia64/mm/discontig.c	2004-09-24 10:47:03.000000000 -0500
+++ linux-2.6/arch/ia64/mm/discontig.c	2004-09-25 07:11:17.000000000 -0500
@@ -544,6 +544,58 @@
 	printk("%d free buffer pages\n", nr_free_buffer_pages());
 }
 
+struct node_uncached_regions {
+        long            paddr;
+        int             uncached_pages;
+        unsigned long   bits[1];        /* Bitmap for managing pages. */
+};
+
+static struct node_uncached_regions       *node_uncached_regions[MAX_COMPACT_NODES];
+
+/* Just for discussion */
+#define UNCACHED_GRANULES_PER_NODE	2
+
+/* I am assuming start is granule aligned.  I need to verify that further. */
+void reserve_uncached_memory(unsigned long start, unsigned long len, void *arg, int nid)
+{
+	void (*func)(unsigned long, unsigned long, int);
+
+	func = arg;
+
+	if ((UNCACHED_GRANULES_PER_NODE == 0) ||
+	    (UNCACHED_GRANULES_PER_NODE * IA64_GRANULE_SIZE >= len)) {
+		(*func)(start, len, nid);
+		return;
+	}
+
+	if (node_uncached_regions[nid] == NULL) {
+		unsigned long grs;
+		int bytes, uncached_pages;
+		struct node_uncached_regions *uncached_region;
+
+		uncached_pages = UNCACHED_GRANULES_PER_NODE * IA64_GRANULE_SIZE / PAGE_SIZE;
+		bytes = sizeof(struct node_uncached_regions) + uncached_pages/8;
+		uncached_region = alloc_bootmem_node(NODE_DATA(nid), bytes);
+		if (uncached_region == NULL) {
+			(*func)(start, len, nid);
+			return;
+		}
+		memset(uncached_region, 0, bytes);
+		uncached_region->paddr = start;
+		uncached_region->uncached_pages = uncached_pages;
+		node_uncached_regions[nid] = uncached_region;
+	}
+
+	if ((node_uncached_regions[nid] != NULL) &&
+	    (node_uncached_regions[nid]->paddr == start)) {
+		start += UNCACHED_GRANULES_PER_NODE * IA64_GRANULE_SIZE;
+		len -= UNCACHED_GRANULES_PER_NODE * IA64_GRANULE_SIZE;
+	}
+		
+	(*func)(start, len, nid);
+}
+
+
 /**
  * call_pernode_memory - use SRAT to call callback functions with node info
  * @start: physical start of range
@@ -560,7 +612,6 @@
 void call_pernode_memory(unsigned long start, unsigned long len, void *arg)
 {
 	unsigned long rs, re, end = start + len;
-	void (*func)(unsigned long, unsigned long, int);
 	int i;
 
 	start = PAGE_ALIGN(start);
@@ -568,12 +619,10 @@
 	if (start >= end)
 		return;
 
-	func = arg;
-
 	if (!num_node_memblks) {
 		/* No SRAT table, so assume one node (node 0) */
 		if (start < end)
-			(*func)(start, end - start, 0);
+			reserve_uncached_memory(start, end-start, arg, 0);
 		return;
 	}
 
@@ -583,7 +632,7 @@
 			 node_memblk[i].size);
 
 		if (rs < re)
-			(*func)(rs, re - rs, node_memblk[i].nid);
+			reserve_uncached_memory(rs, re - rs, arg, node_memblk[i].nid);
 
 		if (re == end)
 			break;


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
                   ` (7 preceding siblings ...)
  2004-09-25 12:40 ` Robin Holt
@ 2004-09-29 14:24 ` David Mosberger
  2004-09-29 15:43 ` Robin Holt
  2004-09-29 16:02 ` David Mosberger
  10 siblings, 0 replies; 12+ messages in thread
From: David Mosberger @ 2004-09-29 14:24 UTC (permalink / raw)
  To: linux-ia64

>>>>> On Fri, 17 Sep 2004 09:34:58 -0500, Robin Holt <holt@sgi.com> said:

  Robin> I have done a little testing on the uncached allocator.  I
  Robin> think the problem may be bigger than I originally expected.

  Robin> I made a simple driver.  On load, it allocated an entire
  Robin> granule and, I think, correctly did all the flushes called
  Robin> for in the processor manual including the PAL call.  A user
  Robin> could then mmap the entire chunk as uncached and work with
  Robin> it.  I did not get any sort of MCAs from this run.

OK.

  Robin> I then started the same app, which referenced the first word
  Robin> of each page uncached.  I added a timer interrupt which
  Robin> scanned all the page structs on the node from which the
  Robin> granule was allocated and referenced the page inside an
  Robin> impossible if statement (next to impossible, as the machine
  Robin> would have to be up for a large number of years).  This, I
  Robin> believe, resulted in the cache line being speculatively read
  Robin> and marked dirty.

So you had something along the lines of:

 if (never_really_true)
   *(char *) page_address(pg) = 42;

and you got an MCA when "pg" was something pointing to the uncached
memory area?  I don't see how that would be possible unless there
already were a WB TLB entry for the granule that contains
page_address(pg).

Can you make the test-program available?  I'd be interested in trying
it out on a local machine.

	--david


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
                   ` (8 preceding siblings ...)
  2004-09-29 14:24 ` David Mosberger
@ 2004-09-29 15:43 ` Robin Holt
  2004-09-29 16:02 ` David Mosberger
  10 siblings, 0 replies; 12+ messages in thread
From: Robin Holt @ 2004-09-29 15:43 UTC (permalink / raw)
  To: linux-ia64

On Wed, Sep 29, 2004 at 07:24:47AM -0700, David Mosberger wrote:
> >>>>> On Fri, 17 Sep 2004 09:34:58 -0500, Robin Holt <holt@sgi.com> said:
> 
>   Robin> I have done a little testing on the uncached allocator.  I
>   Robin> think the problem may be bigger than I originally expected.
> 
>   Robin> I made a simple driver.  On load, it allocated an entire
>   Robin> granule and, I think, correctly did all the flushes called
>   Robin> for in the processor manual including the PAL call.  A user
>   Robin> could then mmap the entire chunk as uncached and work with
>   Robin> it.  I did not get any sort of MCAs from this run.
> 
> OK.
> 
>   Robin> I then started the same app, which referenced the first word
>   Robin> of each page uncached.  I added a timer interrupt which
>   Robin> scanned all the page structs on the node from which the
>   Robin> granule was allocated and referenced the page inside an
>   Robin> impossible if statement (next to impossible, as the machine
>   Robin> would have to be up for a large number of years).  This, I
>   Robin> believe, resulted in the cache line being speculatively read
>   Robin> and marked dirty.
> 
> So you had something along the lines of:
> 
>  if (never_really_true)
>    *(char *) page_address(pg) = 42;
> 
> and you got an MCA when "pg" was something pointing to the uncached
> memory area?  I don't see how that would be possible unless there
> already were a WB TLB entry for the granule that contains
> page_address(pg).
> 
> Can you make the test-program available?  I'd be interested in trying
> it out on a local machine.

That was exactly what I did.  Unfortunately, I blew that away over a
week ago.  Sorry.  I guess I could try to recreate it.

One other thing that was going on was page zeroing of the last page
in the previous granule.  It was always an MCA on the first page of the
uncached region.  I had forgotten about that little tidbit before.  Sorry.
The more I think about it, the zeroing of the previous page may have
been the key to this failure.  Inside the timer, it would run through
all the pages exactly as you indicated.  I would then call memset on
the previous page.

For my never_really_true, I would use the upper bit of the SN2 Real-time
clock.  It remained zero throughout the tests.

Do you want me to attempt to recreate this test for you?

Thanks,
Robin


* Re: Uncached memory allocator for ia64.
  2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
                   ` (9 preceding siblings ...)
  2004-09-29 15:43 ` Robin Holt
@ 2004-09-29 16:02 ` David Mosberger
  10 siblings, 0 replies; 12+ messages in thread
From: David Mosberger @ 2004-09-29 16:02 UTC (permalink / raw)
  To: linux-ia64

>>>>> On Wed, 29 Sep 2004 10:43:23 -0500, Robin Holt <holt@sgi.com> said:

  Robin> One other thing that was going on was page zeroing of the
  Robin> last page in the previous granule.  It was always an MCA on
  Robin> the first page of the uncached region.  I had forgotten about
  Robin> that little tidbit before.  Sorry.  The more I think about
  Robin> it, the zeroing of the previous page may have been the key
  Robin> to this failure.  Inside the timer, it would run through all
  Robin> the pages exactly as you indicated.  I would then call memset
  Robin> on the previous page.

  Robin> Do you want me to attempt to recreate this test for you?

Well, I really think we need to get to the root of this.  What you did
_should_ work and if it doesn't, we need to understand why not.

The Alternate DTLB handler never installs a TLB-entry for speculative
accesses or for non-access instructions (such as lfetch), so it would
take an outright bug if memset() were to cause a WB TLB-entry to be
inserted for the uncached granule.  I don't think that's likely (if
memset() were broken in this way, we should have noticed _much_
earlier) but if memset() was indeed the culprit, we definitely would
want to know.

So yeah, if you could reproduce the test-case and see if it was the
memset(), that would be great!

	--david


end of thread, other threads:[~2004-09-29 16:02 UTC | newest]

Thread overview: 12+ messages
2004-09-14 15:16 Uncached memory allocator for ia64 Robin Holt
2004-09-15  8:23 ` David Mosberger
2004-09-15 11:04 ` Robin Holt
2004-09-15 11:14 ` David Mosberger
2004-09-17 14:34 ` Robin Holt
2004-09-23 21:09 ` Robin Holt
2004-09-23 22:12 ` Luck, Tony
2004-09-23 23:01 ` Robin Holt
2004-09-25 12:40 ` Robin Holt
2004-09-29 14:24 ` David Mosberger
2004-09-29 15:43 ` Robin Holt
2004-09-29 16:02 ` David Mosberger
