* [RFC] on-chip coherent memory API for DMA
@ 2004-06-29 14:21 James Bottomley
2004-07-01 12:43 ` Jamey Hicks
2004-07-01 14:12 ` David Brownell
0 siblings, 2 replies; 9+ messages in thread
From: James Bottomley @ 2004-06-29 14:21 UTC (permalink / raw)
To: Ian Molton; +Cc: Linux Kernel, tony, jamey.hicks, joshua, david-b, Russell King
The purpose of this API is to find a way of declaring on-chip memory as
a pool for dma_alloc_coherent() that can be useful to all architectures.
The proposed API is:
int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
dma_addr_t device_addr, size_t size, int flags)
This API basically declares a region of memory to be handed out by
dma_alloc_coherent when it's asked for coherent memory for this device.
bus_addr is the physical address to which the memory is currently
assigned in the bus's responding region.
device_addr is the physical address the device needs to be programmed
with to actually address this memory.
size is the size of the area (must be a multiple of PAGE_SIZE).
The flags are where all the magic is. They can be OR'd together and are:
DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writeable.
DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
One or both of these flags must be present.
DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).
DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.
The return value would be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO if
only DMA_MEMORY_MAP were passed in) or zero for failure.
I think also, it's reasonable only to have a single declared region per
device.
Implementation details
Obviously, the big change is that dma_alloc_coherent() may now be
handing out memory that can't be directly written to (in the case of a
DMA_MEMORY_IO return).
I envisage implementing an internal per-device resource allocator in
drivers/base into which each platform allocator can plug to do the heavy
allocation lifting (they'd still get to do the platform magic to make
the returned region visible as memory if necessary).
The API would be platform optional, with platforms not wishing to
implement it simply hard coding a return 0.
There would also be a corresponding
void dma_release_declared_memory(struct device *dev)
whose job would be to clean up unconditionally (and not check if the
memory were still in use).
I already have this coded up on x86 and implemented for a SCSI card I
have with four channels and a shared 2MB memory area on chip. I'll post
the implementations when I get it cleaned up.
James
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [RFC] on-chip coherent memory API for DMA
2004-06-29 14:21 [RFC] on-chip coherent memory API for DMA James Bottomley
@ 2004-07-01 12:43 ` Jamey Hicks
2004-07-01 14:12 ` David Brownell
1 sibling, 0 replies; 9+ messages in thread
From: Jamey Hicks @ 2004-07-01 12:43 UTC (permalink / raw)
To: James Bottomley
Cc: Ian Molton, Linux Kernel, tony, joshua, david-b, Russell King
James Bottomley wrote:
>The purpose of this API is to find a way of declaring on-chip memory as
>a pool for dma_alloc_coherent() that can be useful to all architectures.
>
>The proposed API is:
>
>int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
>dma_addr_t device_addr, size_t size, int flags)
>
>This API basically declares a region of memory to be handed out by
>dma_alloc_coherent when it's asked for coherent memory for this device.
>
>bus_addr is the physical address to which the memory is currently
>assigned in the bus responding region
>
>device_addr is the physical address the device needs to be programmed
>with to actually address this memory.
>
>size is the size of the area (must be a multiple of PAGE_SIZE).
>
>The flags are where all the magic is. They can be OR'd together and are
>
>DMA_MEMORY_MAP - request that the memory returned from
>dma_alloc_coherent() be directly writeable.
>
>DMA_MEMORY_IO - request that the memory returned from
>dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
>
>One or both of these flags must be present
>
>DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
>dma_alloc_coherent of any child devices of this one (for memory residing
>on a bridge).
>
>DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
>Do not allow dma_alloc_coherent() to fall back to system memory when
>it's out of memory in the declared region.
>
>The return value would be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
>must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO if
>only DMA_MEMORY_MAP were passed in) or zero for failure.
>
>I think also, it's reasonable only to have a single declared region per
>device.
>
>Implementation details
>
>Obviously, the big change is that dma_alloc_coherent() may now be
>handing out memory that can't be directly written to (in the case of a
>DMA_MEMORY_IO return).
>
>I envisage implementing an internal per device resource allocator in
>drivers/base into which each platform allocator can plug to do the heavy
>allocation lifting (they'd still get to do the platform magic to make
>the returned region visible as memory if necessary).
>
>The API would be platform optional, with platforms not wishing to
>implement it simply hard coding a return 0.
>
>There would also be a corresponding
>
>void dma_release_declared_memory(struct device *dev)
>
>whose job would be to clean up unconditionally (and not check if the
>memory were still in use).
>
>I already have this coded up on x86 and implemented for a SCSI card I
>have with four channels and a shared 2MB memory area on chip. I'll post
>the implementations when I get it cleaned up.
>
>
>
The proposed API looks like it will solve our problem. I will work with
Ian to test this with our drivers.
Jamey
* Re: [RFC] on-chip coherent memory API for DMA
2004-06-29 14:21 [RFC] on-chip coherent memory API for DMA James Bottomley
2004-07-01 12:43 ` Jamey Hicks
@ 2004-07-01 14:12 ` David Brownell
2004-07-01 14:26 ` James Bottomley
1 sibling, 1 reply; 9+ messages in thread
From: David Brownell @ 2004-07-01 14:12 UTC (permalink / raw)
To: James Bottomley
Cc: Ian Molton, Linux Kernel, tony, jamey.hicks, joshua, Russell King
James Bottomley wrote:
>
> dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
> dma_addr_t device_addr, size_t size, int flags)
>
> ...
>
> The flags are where all the magic is. They can be OR'd together and are
>
> DMA_MEMORY_MAP - request that the memory returned from
> dma_alloc_coherent() be directly writeable.
>
> DMA_MEMORY_IO - request that the memory returned from
> dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
The API looked OK except this part didn't make sense to me, since
as I understand things dma_alloc_coherent() is guaranteed to have
the DMA_MEMORY_MAP semantics at all times ... the CPU virtual address
returned may always be directly written. That's certainly how all
the code I've seen using dma_alloc_coherent() works.
It'd make more sense if the routine were "dma_declare_memory()", and
DMA_MEMORY_MAP meant it was OK to return from dma_alloc_coherent(),
while DMA_MEMORY_IO meant the dma_alloc_coherent() would always fail.
If I understand what you're trying to do, DMA_MEMORY_IO supports a
new kind of DMA memory, and is necessary to work on those IBM boxes
you were talking about ... where dma_alloc_coherent() can't work,
and the "indirectly accessible" memory would need to be allocated
using some new alloc/free API. Or were you maybe trying to get at
that "can be mmapped to userspace" distinction?
Also in terms of implementation, I noticed that if there's a
dev->dma_mem, the GFP_* flags are ignored. For __GFP_NOFAIL
that seems buglike, but not critical. (Just looked at x86.)
Might be worth just passing the flags down so that behavior
can be upgraded later.
- Dave
* Re: [RFC] on-chip coherent memory API for DMA
2004-07-01 14:12 ` David Brownell
@ 2004-07-01 14:26 ` James Bottomley
2004-07-01 14:45 ` David Brownell
0 siblings, 1 reply; 9+ messages in thread
From: James Bottomley @ 2004-07-01 14:26 UTC (permalink / raw)
To: David Brownell
Cc: Ian Molton, Linux Kernel, tony, jamey.hicks, joshua, Russell King
On Thu, 2004-07-01 at 09:12, David Brownell wrote:
> The API looked OK except this part didn't make sense to me, since
> as I understand things dma_alloc_coherent() is guaranteed to have
> the DMA_MEMORY_MAP semantics at all times ... the CPU virtual address
> returned may always be directly written. That's certainly how all
> the code I've seen using dma_alloc_coherent() works.
> It'd make more sense if the routine were "dma_declare_memory()", and
> DMA_MEMORY_MAP meant it was OK to return from dma_alloc_coherent(),
> while DMA_MEMORY_IO meant the dma_alloc_coherent() would always fail.
You need an allocator paired with IO memory. If the driver allows for
DMA_MEMORY_IO then it's not unreasonable to expect it to have such
memory returned by dma_alloc_coherent() rather than adding yet another
allocator API.
> If I understand what you're trying to do, DMA_MEMORY_IO supports a
> new kind of DMA memory, and is necessary to work on those IBM boxes
> you were talking about ... where dma_alloc_coherent() can't work,
> and the "indirectly accessible" memory would need to be allocated
> using some new alloc/free API. Or were you maybe trying to get at
> that "can be mmapped to userspace" distinction?
No, this IO vs MAP distinction is about whether the memory can be mapped
directly into *kernel* space, i.e. whether the driver can use it via
ordinary dereferencing or has to use the I/O memory accessor functions.
> Also in terms of implementation, I noticed that if there's a
> dev->dma_mem, the GFP_* flags are ignored. For __GFP_NOFAIL
> that seems buglike, but not critical. (Just looked at x86.)
> Might be worth just passing the flags down so that behavior
> can be upgraded later.
Actually, there's no point respecting the flags for the on chip region.
Either the memory is there or it isn't. If it isn't there, then you
either fall through to the ordinary allocator (where the flags are
respected) or fail if the DMA_MEMORY_EXCLUSIVE flag was specified.
James
* Re: [RFC] on-chip coherent memory API for DMA
2004-07-01 14:26 ` James Bottomley
@ 2004-07-01 14:45 ` David Brownell
2004-07-01 18:04 ` James Bottomley
0 siblings, 1 reply; 9+ messages in thread
From: David Brownell @ 2004-07-01 14:45 UTC (permalink / raw)
To: James Bottomley
Cc: Ian Molton, Linux Kernel, tony, jamey.hicks, joshua, Russell King
James Bottomley wrote:
> On Thu, 2004-07-01 at 09:12, David Brownell wrote:
>
>>The API looked OK except this part didn't make sense to me, since
>>as I understand things dma_alloc_coherent() is guaranteed to have
>>the DMA_MEMORY_MAP semantics at all times ... the CPU virtual address
>>returned may always be directly written. That's certainly how all
>>the code I've seen using dma_alloc_coherent() works.
>
>
>>It'd make more sense if the routine were "dma_declare_memory()", and
>>DMA_MEMORY_MAP meant it was OK to return from dma_alloc_coherent(),
>>while DMA_MEMORY_IO meant the dma_alloc_coherent() would always fail.
>
>
> You need an allocator paired with IO memory. If the driver allows for
> DMA_MEMORY_IO then it's not unreasonable to expect it to have such
> memory returned by dma_alloc_coherent() rather than adding yet another
> allocator API.
Seems unreasonable to me, unless there's also an API to change
the mode of the dma_alloc_coherent() memory from the normal
"CPU can read/write as usual" to the exotic "need to use special
memory accessors". (And another to report what mode the API is
in at the current moment.)
And I don't like modal APIs like that, which is why it'd make
more sense to me to have a new allocator API for this new
kind of DMA memory. (Which IS for that IBM processor, yes?)
Alternatively, modify dma_alloc_coherent() to say what kind
of address it must return. Since this is a "generic" DMA
API, the caller of dma_alloc_coherent() shouldn't need to be
guessing how they may actually use the memory returned.
That new "must guess" requirement will break some code...
>>Also in terms of implementation, I noticed that if there's a
>>dev->dma_mem, the GFP_* flags are ignored. For __GFP_NOFAIL
>>that seems buglike, but not critical. (Just looked at x86.)
>>Might be worth just passing the flags down so that behavior
>>can be upgraded later.
>
>
> Actually, there's no point respecting the flags for the on chip region.
> Either the memory is there or it isn't. If it isn't there, then you
> either fall through to the ordinary allocator (where the flags are
> respected) or fail if the DMA_MEMORY_EXCLUSIVE flag was specified.
So -- you're saying it's not a bug that a __GFP_NOFAIL|__GFP_WAIT
allocation can fail? Curious. I'd have thought the API
was clear about that. Allocating 128 MB from a 1 MB region must
of course fail, but allocating one page just needs a wait/wakeup.
- Dave
* Re: [RFC] on-chip coherent memory API for DMA
2004-07-01 14:45 ` David Brownell
@ 2004-07-01 18:04 ` James Bottomley
2004-07-01 20:14 ` David Brownell
0 siblings, 1 reply; 9+ messages in thread
From: James Bottomley @ 2004-07-01 18:04 UTC (permalink / raw)
To: David Brownell
Cc: Ian Molton, Linux Kernel, tony, jamey.hicks, joshua, Russell King
On Thu, 2004-07-01 at 09:45, David Brownell wrote:
> Seems unreasonable to me, unless there's also an API to change
> the mode of the dma_alloc_coherent() memory from the normal
> "CPU can read/write as usual" to the exotic "need to use special
> memory accessors". (And another to report what mode the API is
> in at the current moment.)
No. That's why you specify how you'd like the memory to be treated. If
you don't want the memory to be accessible only via IO accessors, then
you specify DMA_MEMORY_MAP and take the failure if the platform can't
handle it.
> And I don't like modal APIs like that, which is why it'd make
> more sense to me to have a new allocator API for this new
> kind of DMA memory. (Which IS for that IBM processor, yes?)
There is no "new" kind of memory. This is currently how *all* I/O
memory is accessed. DMA_MEMORY_MAP is actually the new bit since it
allows I/O memory to be treated as normal memory.
> Alternatively, modify dma_alloc_coherent() to say what kind
> of address it must return. Since this is a "generic" DMA
> API, the caller of dma_alloc_coherent() shouldn't need to be
> guessing how they may actually use the memory returned.
> That new "must guess" requirement will break some code...
There is no "must guess". Either the driver specifies DMA_MEMORY_MAP or
DMA_MEMORY_IO. If you specify DMA_MEMORY_IO then you have to use
accessors for dma_alloc_coherent() memory. If you never wish to bother
with it, simply specify DMA_MEMORY_MAP and take the failure on platforms
which cannot comply.
> So -- you're saying it's not a bug that a __GFP_NOFAIL|__GFP_WAIT
> allocation can fail? Curious. I'd have thought the API
> was clear about that. Allocating 128 MB from a 1 MB region must
> of course fail, but allocating one page just needs a wait/wakeup.
It can *only* happen if you specify DMA_MEMORY_EXCLUSIVE; that preempts
the GFP_ flags and the application must be coded with this in mind.
Otherwise it will respect the GFP_ flags.
James
* Re: [RFC] on-chip coherent memory API for DMA
2004-07-01 18:04 ` James Bottomley
@ 2004-07-01 20:14 ` David Brownell
2004-07-01 20:48 ` James Bottomley
0 siblings, 1 reply; 9+ messages in thread
From: David Brownell @ 2004-07-01 20:14 UTC (permalink / raw)
To: James Bottomley
Cc: Ian Molton, Linux Kernel, tony, jamey.hicks, joshua, Russell King
James Bottomley wrote:
> On Thu, 2004-07-01 at 09:45, David Brownell wrote:
>
>>Seems unreasonable to me, unless there's also an API to change
>>the mode of the dma_alloc_coherent() memory from the normal
>>"CPU can read/write as usual" to the exotic "need to use special
>>memory accessors". (And another to report what mode the API is
>>in at the current moment.)
>
>
> No. That's why you specify how you'd like the memory to be treated. If
> you don't want the memory to be accessible only via IO accessors, then
> you specify DMA_MEMORY_MAP and take the failure if the platform can't
> handle it.
That can work when the scope of "DMA" knowledge is just
one driver ... but when that driver is plugging into a
framework, it's as likely to be some other code (maybe
a higher level driver) that just wants RAM address space
which, for whatever reasons, is DMA-coherent. And hey,
the way to get this is dma_alloc_coherent ... or in some
cases, pci_alloc_consistent.
Which is why my comment was that the new feature of
returning some kind of memory cookie usable on that one
IBM box (etc) should just use a different allocator API.
It doesn't allocate RAM "similarly to __get_free_pages";
it'd be returning something drivers can't treat as RAM.
>>And I don't like modal APIs like that, which is why it'd make
>>more sense to me to have a new allocator API for this new
>>kind of DMA memory. (Which IS for that IBM processor, yes?)
>
>
> There is no "new" kind of memory. This is currently how *all* I/O
> memory is accessed. DMA_MEMORY_MAP is actually the new bit since it
> allows I/O memory to be treated as normal memory.
This isn't I/O address space, needing PIO I/O accessors,
and as returned by the new DMA_MEMORY_IO mode. (And why
wouldn't ioremap already handle that?)
This is how to allocate DMA-ready buffers with certain
characteristics that aren't useful only to the lowest-level
drivers in the stack. Drivers depend on alloc_coherent to
behave in the way you (originally) said DMA_MEMORY_MAP works.
The more detailed API specs (DMA-mapping.txt not DMA-API.txt)
are very clear that the behavior is like RAM.
>>So -- you're saying it's not a bug that a __GFP_NOFAIL|__GFP_WAIT
>>allocation can fail? Curious. I'd have thought the API
>>was clear about that. Allocating 128 MB from a 1 MB region must
>>of course fail, but allocating one page just needs a wait/wakeup.
>
>
> It can *only* happen if you specify DMA_MEMORY_EXCLUSIVE; that preempts
> the GFP_ flags and the application must be coded with this in mind.
> Otherwise it will respect the GFP_ flags.
You'd have to change the spec to allow that. Wouldn't it be
a lot simpler to just pass the GFP flags down to that lowlevel
code, so it can eventually start doing what the highlevel code
told it to do? :)
Special cases make for fragile systems.
- Dave
* Re: [RFC] on-chip coherent memory API for DMA
2004-07-01 20:14 ` David Brownell
@ 2004-07-01 20:48 ` James Bottomley
2004-07-02 3:07 ` David Brownell
0 siblings, 1 reply; 9+ messages in thread
From: James Bottomley @ 2004-07-01 20:48 UTC (permalink / raw)
To: David Brownell
Cc: Ian Molton, Linux Kernel, tony, jamey.hicks, joshua, Russell King
On Thu, 2004-07-01 at 15:14, David Brownell wrote:
> That can work when the scope of "DMA" knowledge is just
> one driver ... but when that driver is plugging into a
> framework, it's as likely to be some other code (maybe
> a higher level driver) that just wants RAM address space
> which, for whatever reasons, is DMA-coherent. And hey,
> the way to get this is dma_alloc_coherent ... or in some
> cases, pci_alloc_consistent.
If the driver can't cope then you *only* use DMA_MEMORY_MAP.
> Which is why my comment was that the new feature of
> returning some kind of memory cookie usable on that one
> IBM box (etc) should just use a different allocator API.
> It doesn't allocate RAM "similarly to __get_free_pages";
> it'd be returning something drivers can't treat as RAM.
Well, I don't believe it will be necessary. However, when an actual
user comes along, we'll find out.
> This isn't I/O address space, needing PIO I/O accessors,
> and as returned by the new DMA_MEMORY_IO mode. (And why
> wouldn't ioremap already handle that?)
The purpose of the API is to allow a mode of operation on all platforms
Linux supports.
> This is how to allocate DMA-ready buffers that have certain
> characteristics aren't useful only to the lowest level
> drivers in the stack. Drivers depend on alloc_coherent to
> behave in the way you (originally) said DMA_MEMORY_MAP works.
> The more detailed API specs (DMA-mapping.txt not DMA-API.txt)
> are very clear that the behavior is like RAM.
It is no longer real memory once you use this API. Even if the
processor can treat DMA_MEMORY_MAP memory as "real", you'll probably
find that a device off on another bus cannot even see it. However, as
long as you keep the memory between the processor and the device then
you can treat it identically to RAM.
> > It can *only* happen if you specify DMA_MEMORY_EXCLUSIVE; that preempts
> > the GFP_ flags and the application must be coded with this in mind.
> > Otherwise it will respect the GFP_ flags.
>
> You'd have to change the spec to allow that. Wouldn't it be
> a lot simpler to just pass the GFP flags down to that lowlevel
> code, so it can eventually start doing what the highlevel code
> told it to do? :)
>
> Special cases make for fragile systems.
The intention of the flags option for dma_alloc_coherent() was only for
memory allocation instructions; the allocation can fail for reasons
other than unavailability of memory, depending on how the API is
implemented, so __GFP_NOFAIL doesn't actually work now in every case.
Thus this doesn't actually represent a departure.
James
* Re: [RFC] on-chip coherent memory API for DMA
2004-07-01 20:48 ` James Bottomley
@ 2004-07-02 3:07 ` David Brownell
0 siblings, 0 replies; 9+ messages in thread
From: David Brownell @ 2004-07-02 3:07 UTC (permalink / raw)
To: James Bottomley
Cc: Ian Molton, Linux Kernel, tony, jamey.hicks, joshua, Russell King
James Bottomley wrote:
> On Thu, 2004-07-01 at 15:14, David Brownell wrote:
>
>>That can work when the scope of "DMA" knowledge is just
>>one driver ... but when that driver is plugging into a
>>framework, it's as likely to be some other code (maybe
>>a higher level driver) that just wants RAM address space
>>which, for whatever reasons, is DMA-coherent. And hey,
>>the way to get this is dma_alloc_coherent ... or in some
>>cases, pci_alloc_consistent.
>
>
> If the driver can't cope then you *only* use DMA_MEMORY_MAP
That would be the norm for all those low-level drivers,
certainly. Except maybe on that one mysterious box,
where the CPU can't access that memory directly ... ;)
>>Which is why my comment was that the new feature of
>>returning some kind of memory cookie usable on that one
>>IBM box (etc) should just use a different allocator API.
>>It doesn't allocate RAM "similarly to __get_free_pages";
>>it'd be returning something drivers can't treat as RAM.
>
>
> Well, I don't believe it will be necessary. However, when an actual
> user comes along, we'll find out.
OK, I can easily view DMA_MEMORY_IO as an API experiment.
> It is no-longer real memory once you use this API. Even if the
> processor can treat DMA_MEMORY_MAP memory as "real", you'll probably
> find that a device off on another bus cannot even see it. However, as
> long as you keep the memory between the processor and the device then
> you can treat it identical to RAM.
I'm not sure I see what you're saying. The only guarantees on
the memory are that "the" CPU and the device can both access
it like memory. Other devices are out-of-scope, as is location
(anywhere both can access it like normal memory, not just stuff
that's "between" the two on some bus). It's DMA_MEMORY_IO that
you said would not be RAM-like ("directly writable"), and would
need I/O memory accessors like readl/writel/etc ... to the
device it looks like normal RAM, but not to the host.
> The intention of the flags option for dma_alloc_coherent() was only for
> memory allocation instructions; the allocation can fail for other
> reasons that unavailability of memory depending on how the API is
> implemented, so __GFP_NOFAIL doesn't actually work now in every case.
I personally think __GFP_WAIT is the most important one, but
some folk have other priorities. Regardless, I _was_ talking
about passing flags down to the memory allocator, so it sounds
like it was just an oversight in this initial version.
- Dave