public inbox for linux-kernel@vger.kernel.org
* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-01 19:21 Adam J. Richter
  2003-01-01 19:48 ` James Bottomley
  0 siblings, 1 reply; 34+ messages in thread
From: Adam J. Richter @ 2003-01-01 19:21 UTC (permalink / raw)
  To: akpm, James.Bottomley; +Cc: david-b, linux-kernel

James Bottomley wrote:
> void *
> dma_alloc_coherent(struct device *dev, size_t size,
>-                            dma_addr_t *dma_handle)
>+                            dma_addr_t *dma_handle, int flag)

	I thought Andrew Morton's request for a gfp flag was for
allocating memory from a pool (for example, a "read ahead" will want
to abort if memory is unavailable rather than wait).

	The big DMA allocations, however, will always occur during
initialization, and I think will always have the intermediate policy
of "I can block, but I should fail rather than block for more than a
few seconds and potentially deadlock from loading too many drivers on
a system with very little memory."

	Can someone show me or invent an example of two different uses
of dma_alloc_coherent that really should use different policies on
whether to block or not?

Adam J. Richter     __     ______________   575 Oroville Road
adam@yggdrasil.com     \ /                  Milpitas, California 95035
+1 408 309-6081         | g g d r a s i l   United States of America
                         "Free Software For The Rest Of Us."

* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-02 22:07 Adam J. Richter
  2003-01-03  0:20 ` Russell King
                   ` (3 more replies)
  0 siblings, 4 replies; 34+ messages in thread
From: Adam J. Richter @ 2003-01-02 22:07 UTC (permalink / raw)
  To: david-b; +Cc: akpm, James.Bottomley, linux-kernel

>Note that this "add gfp flags to dma_alloc_coherent()" issue is a tangent
>to the dma_pool topic ... it's a different "generic device DMA" issue.

	I think we understand each other, but let's walk through
the three levels of it: mempool_alloc, {dma,pci}_pool_alloc, and
{dma,pci}_alloc_consistent.

	We agree that mempool_alloc should take gfp_flags so that
it can fail rather than block on optional IO such as read-aheads.
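
	A rough userspace sketch of that policy split (illustrative
names only, not the real kernel API; the "sleep for memory" step is
faked by a refill):

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 4

static void *pool[POOL_SIZE];
static int pool_free;
static int slow_refills;	/* stands in for "the caller slept for memory" */

static void pool_init(void)
{
	static char storage[POOL_SIZE][64];
	int i;

	for (i = 0; i < POOL_SIZE; i++)
		pool[i] = storage[i];
	pool_free = POOL_SIZE;
}

/*
 * allow_fail != 0 models an optional-IO caller (e.g. read-ahead):
 * when the pool is empty it gets NULL and drops the work.
 * allow_fail == 0 models a GFP_KERNEL-style caller: the "sleep until
 * memory frees up" step is faked here by refilling the pool.
 */
static void *pool_alloc(int allow_fail)
{
	if (pool_free == 0) {
		if (allow_fail)
			return NULL;
		slow_refills++;
		pool_init();	/* pretend reclaim produced buffers */
	}
	return pool[--pool_free];
}
```

The point of the flag is that the same allocator serves both callers;
only the caller knows whether dropping the work is acceptable.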

	Whether pci_pool_alloc (which does not guarantee a reserve of
memory) should continue to take gfp_flags is not something we were
discussing, but is an interesting question for the future.  I see
that pci_pool_alloc is only called with GFP_ATOMIC in two places:

	uhci_alloc_td in drivers/usb/host/uhci-hcd.c
	alloc_safe_buffer in arch/arm/mach-sa1100/sa1111-pcibuf.c

	The use in uhci_alloc_td means that USB IO can fail under
memory load.  Conceivably, swapping could even deadlock this way.
uhci_alloc_td should be going through a mempool, which individual
drivers would expand and shrink as they are added and removed, so
that transfers sent with GFP_KERNEL are guaranteed not to fail for
lack of memory (and gfp_flags should be passed down so that transfers
like read-aheads can be aborted rather than blocking, but that's not
essential).

	The pci_pool_alloc in sa1111-pcibuf.c is more interesting.
alloc_safe_buffer is used to implement an unusual version of
pci_map_single.  It is unclear to me whether this approach is optimal.
I'll look into this more.

	Anyhow, the question that we actually were talking about is
whether dma_alloc_consistent should take gfp_flags.  So far, the only
potential example that I'm aware of is the question of whether the
call to pci_pool_alloc in sa1111-pcibuf.c needs to call down to
{pci,dma}_alloc_consistent with GFP_ATOMIC.  It's something that I
just noticed and will look into.


>We already have the pci_pool allocator that knows how to cope (timeout and
>retry) with the awkward semantics of pci_alloc_consistent() ...

	pci_pool_alloc currently implements mempool's GFP_KERNEL
semantics (block indefinitely, never fail), but, unlike mempool, it
does not guarantee a minimum memory allocation, so there is no way
to guarantee that it will ever return.  It can hang if there is
insufficient memory.  It looks like you cannot even control-C out
of it.
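
	That block-indefinitely behavior amounts to a retry loop over
an allocator that may fail.  In this userspace sketch (illustrative
names, not the pci_pool source), the hang is visible in the
structure: if try_alloc() never starts succeeding, the loop never
returns, and nothing lets the caller bail out:

```c
#include <assert.h>
#include <stddef.h>

static int failures_left = 3;	/* pretend memory frees up after 3 tries */
static int retries;

/* Stand-in for an underlying allocation that can fail under pressure. */
static void *try_alloc(void)
{
	static char buf[64];

	if (failures_left > 0) {
		failures_left--;
		return NULL;
	}
	return buf;
}

/* GFP_KERNEL-style "never fail" wrapper: retry until success.  With
 * no reserved memory behind it, nothing bounds how long this spins. */
static void *blocking_alloc(void)
{
	void *p;

	while ((p = try_alloc()) == NULL) {
		retries++;
		/* real code would sleep between retries */
	}
	return p;
}
```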

	By the way, since you call pci_alloc_consistent's GFP_KERNEL
semantics "awkward", are you saying that you think it should wait
indefinitely for memory to become available?

>likewise,
>we can tell that those semantics are problematic, both because they need
>that kind of workaround and because they complicate reusing "better"
>allocator code (like mm/slab.c) on top of the DMA page allocators.

	Maybe it's irrelevant, but I don't understand what you mean by
"better" here.


>> 	Let me clarify or revise my request.  By "show me or invent an
>> example" I mean describe a case where this would be used, as in
>> specific hardware devices that Linux has trouble supporting right now,
>> or specific programs that can't be run efficiently under Linux, etc.

>That doesn't really strike me as a reasonable revision, since that
>wasn't an issue an improved dma_alloc_coherent() syntax was intended
>to address directly.

	It is always reasonable to ask for an example of where a
requested change would be used, as in specific hardware devices that
Linux has trouble supporting right now, or specific programs that
can't be run efficiently under Linux, or some other real external
benefit, because that is the purpose of software.  This applies to
every byte of code in Linux.

	Don't start by positing some situation in the middle of a
call graph.  Instead, I'm asking you to show an example starting
at the top of the call graph.  What *actual* use will cause this
parameter to be useful in reality?  By what external metric
will users benefit?

	I'll look into the sa1111-pcibuf.c case.  That's the only
potential example that I've seen so far.


>To the extent that it's reasonable, you should also be considering
>this corresponding issue:  ways that removing gfp_flags from the
>corresponding generic memory allocator, __get_free_pages(), would be
>improving those characteristics of Linux.  (IMO, it wouldn't.)

	There are other users of __get_free_pages.  If it turns out
that there is no true use for gfp_mask in dma_alloc_consistent then it
might be worth exploring in the future, but we don't have to decide to
eliminate gfp_mask from __get_free_pages just to decide not to add it
to dma_alloc_consistent.


>> 	I have trouble understanding why, for example, a USB hard
>> disk driver would want anything more than a fixed size pool of
>> transfer descriptors.  At some point you know that you've queued
>> enough IO so that the driver can be confident that it will be
>> called again before that queue is completely emptied.

>For better or worse, that's not how it works today and it's not
>likely to change in the 2.5 kernels.  "Transfer Descriptors" are
>resources inside host controller drivers (bus drivers), allocated
>dynamically (GFP_KERNEL in many cases, normally using a pci_pool)
>when USB device drivers (like usb-storage) submit requests (URBs).

>They are invisible to usb device drivers, except implicitly as
>another reason that submitting an urb might trigger -ENOMEM.

	It might not be trivial to fix that but it would be
straightforward to do so.  In the future, submit_urb(...GFP_KERNEL)
should be guaranteed never to fail with -ENOMEM (see my comments
about uhci_alloc_td above on how to fix this).

	This existing, straightforwardly fixable bug does not justify
changing the DMA API just to encrust it further.  Users will be better
off if we fix the bug (they will be able to swap reliably on USB
disks, or at least be closer to it if there are other bugs in the
way).

	In practice, I think that if we just reserved one, maybe two,
URBs by default for every endpoint when a device is added, that would
reduce the number of drivers needing to reserve more than that to few
or none.

>And for that matter, the usb-storage driver doesn't "fire and
>forget" as you described; that'd be a network driver model.
>The storage driver needs to block till its request completes.

	That only makes it easier to calculate the number of
URBs that need to be guaranteed to be available.

	By the way, as far as I can tell, usb_stor_msg_common in
drivers/usb/storage/transport.c only waits for the urb to be
transferred, not for the operation to complete.


>> 	Also, your use of the term GFP_KERNEL is potentially
>> ambiguous.  In some cases GFP_KERNEL seems to mean "wait indefinitely
>> until memory is available; never fail."  In other cases it means
>> "perhaps wait a second or two if memory is unavailable, but fail if
>> memory is not available by then."

>Hmm, I was unaware that anyone expected GFP_KERNEL (or rather,
>__GFP_WAIT) to guarantee that memory was always returned.  It's
>not called __GFP_NEVERFAIL, after all.

	mempool_alloc does.  That's the point of it.  You calculate
how many objects you need in order to guarantee no deadlocks and
reserve that number in advance (the initial reservation can fail).

>I've always seen such fault returns documented as unrelated to
>allocator parameters ...

	By design, mempool_alloc(GFP_KERNEL) never fails (although
mempool_alloc(GFP_ATOMIC) can).
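
	The reserve that makes this guarantee work can be sketched in
userspace C (illustrative names only; a real mempool sleeps where this
mock returns NULL):

```c
#include <assert.h>
#include <stddef.h>

#define MIN_NR 2	/* elements reserved up front at pool creation */

static void *reserve[MIN_NR];
static int nreserve;
static int backing_exhausted;	/* simulate extreme VM pressure */

static void *backing_alloc(void)
{
	static char bufs[8][64];
	static int used;

	if (backing_exhausted || used == 8)
		return NULL;
	return bufs[used++];
}

/* The initial reservation is the only allocation allowed to fail. */
static void pool_create(void)
{
	for (nreserve = 0; nreserve < MIN_NR; nreserve++)
		reserve[nreserve] = backing_alloc();
}

static void *pool_alloc(void)
{
	void *p = backing_alloc();

	if (p)
		return p;
	/* Fall back to the reserve; a real mempool would sleep when
	 * the reserve is empty instead of returning NULL. */
	return nreserve ? reserve[--nreserve] : NULL;
}

/* Freed elements re-arm the reserve before going back to the system;
 * that is what keeps the guarantee deadlock-free. */
static void pool_free_elem(void *p)
{
	if (nreserve < MIN_NR)
		reserve[nreserve++] = p;
}
```

As long as callers eventually return elements, a blocking allocation
can always be satisfied from the reserve.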

	By the way, pci_pool_alloc(GFP_KERNEL) also never returns
failure, but I consider that a bug as I explained above.

>and treated callers that break on fault
>returns as buggy, regardless of GFP_* parameters in use.

	It is not a bug for mempool_alloc(GFP_KERNEL) callers to assume
success upon return.  It is a waste of kernel footprint for them to
check for failure.

>A
>random sample of kernel docs agrees with me on that:  both
>in Documentation/* (like the i2c stuff) or the O'Reilly book
>on device drivers.

	mempool really needs a file in Documentation/.  The only reference
to it there is in linux-2.5.54/Documentation/block/biodoc.txt, line 663 ff:
| This makes use of Ingo Molnar's mempool implementation, which enables
| subsystems like bio to maintain their own reserve memory pools for guaranteed
| deadlock-free allocations during extreme VM load. [...]


* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-02 17:04 Adam J. Richter
  0 siblings, 0 replies; 34+ messages in thread
From: Adam J. Richter @ 2003-01-02 17:04 UTC (permalink / raw)
  To: James.Bottomley; +Cc: akpm, david-b, linux-kernel

James Bottomley wrote:
>adam@yggdrasil.com said:
>> 	Let me clarify or revise my request.  By "show me or invent an
>> example" I mean describe a case where this would be used, as in
>> specific hardware devices that Linux has trouble supporting right now,
>> or specific programs that can't be run efficiently under Linux, etc.
>> What device would need to do this kind of allocation?  Why haven't I
>> seen requests for this from people working on real device drivers?
>> Where is this going to make the kernel smaller, more reliable, faster,
>> more maintainable, able to make a computer do something it could do
>> before under Linux, etc.?

>I'm not really the right person to be answering this.  For any transfer you 
>set up (which encompasses all of the SCSI stuff bar target mode and AENs) you 
>should have all the resources ready and waiting in the interrupt, and so never 
>require an in_interrupt allocation.

>However, for unsolicited transfer requests---the best example I can think of 
>would be incoming network packets---it does make sense:  You allocate with 
>GFP_ATOMIC, if the kernel can fulfil the request, fine; if not, you drop the 
>packet on the floor.  Now, whether there's an unsolicited transfer that's 
>going to require coherent memory, that I can't say.  It does seem to be 
>possible, though.

	When a network device driver receives a packet, it hands the
packet buffer that it pre-allocated to the higher layers with
netif_rx() and then allocates a new one with dev_alloc_skb(), but
that is non-consistent "streaming" memory.  The consistent memory is
generally for the DMA gather-scatter stub(s), which are generally
reused once the receive for the packet that arrived has completed.
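
	That receive-side pattern can be sketched in userspace C (stub
names, not the real skb API): the driver never blocks in the hot path,
and if the atomic replacement allocation fails, it keeps the old
buffer and drops the packet:

```c
#include <assert.h>
#include <stddef.h>

static int delivered, dropped;
static int alloc_ok = 1;	/* toggle to simulate memory pressure */

/* Stand-in for dev_alloc_skb(GFP_ATOMIC): may fail, never blocks. */
static void *alloc_skb_atomic(void)
{
	static char bufs[16][1536];
	static int used;

	return (alloc_ok && used < 16) ? bufs[used++] : NULL;
}

/* Stand-in for netif_rx(): hand a received buffer to upper layers. */
static void netif_rx_stub(void *skb)
{
	(void)skb;
	delivered++;
}

/* One ring slot: deliver and re-arm with a fresh buffer, or drop the
 * packet on the floor and keep the old buffer for the next receive. */
static void *rx_one(void *current_skb)
{
	void *fresh = alloc_skb_atomic();

	if (!fresh) {
		dropped++;
		return current_skb;
	}
	netif_rx_stub(current_skb);
	return fresh;
}
```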


* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-02  4:13 Adam J. Richter
  2003-01-02 16:41 ` James Bottomley
  2003-01-02 18:26 ` David Brownell
  0 siblings, 2 replies; 34+ messages in thread
From: Adam J. Richter @ 2003-01-02  4:13 UTC (permalink / raw)
  To: david-b; +Cc: akpm, James.Bottomley, linux-kernel

David Brownell wrote:
>James Bottomley wrote:
>> adam@yggdrasil.com said:
>> 
>>>	Can someone show me or invent an example of two different uses of
>>>dma_alloc_coherent that really should use different policies on
>>>whether to block or not? 
>> 
>> 
>> The obvious one is allocations from interrupt routines, which must be 
>> GFP_ATOMIC (ignoring the issue of whether a driver should be doing a memory 
>> allocation in an interrupt).  Allocating large pools at driver initialisation 
>> should probably be GFP_KERNEL as you say.

>More:  not just "from interrupt routines", also "when a spinlock is held".
>Though one expects that if a dma_pool were in use, it'd probably have one
>of that kind of chunk already available (ideally, even initialized).

	Let me clarify or revise my request.  By "show me or invent an
example" I mean describe a case where this would be used, as in
specific hardware devices that Linux has trouble supporting right now,
or specific programs that can't be run efficiently under Linux, etc.
What device would need to do this kind of allocation?  Why haven't I
seen requests for this from people working on real device drivers?
Where is this going to make the kernel smaller, more reliable, faster,
more maintainable, able to make a computer do something it could do
before under Linux, etc.?

	I have trouble understanding why, for example, a USB hard
disk driver would want anything more than a fixed size pool of
transfer descriptors.  At some point you know that you've queued
enough IO so that the driver can be confident that it will be
called again before that queue is completely emptied.
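
	The fixed-pool reasoning can be made concrete with a small
userspace sketch (illustrative, not a real driver): descriptors come
from a fixed reserve sized to the queue depth, and a submitter that
finds the reserve empty waits for a completion instead of allocating:

```c
#include <assert.h>

#define QUEUE_DEPTH 8	/* enough in-flight IO to keep the device busy */

static int in_flight;

/* Claim a transfer descriptor from the fixed reserve, or refuse. */
static int submit_io(void)
{
	if (in_flight == QUEUE_DEPTH)
		return 0;	/* caller waits for a completion instead */
	in_flight++;
	return 1;
}

/* The completion path returns the descriptor to the reserve. */
static void complete_io(void)
{
	assert(in_flight > 0);
	in_flight--;
}
```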


>Many task-initiated allocations would use GFP_KERNEL; not just major driver
>activation points like open().

	Here too, it would help me to understand if you would try
to construct an example.

	Also, your use of the term GFP_KERNEL is potentially
ambiguous.  In some cases GFP_KERNEL seems to mean "wait indefinitely
until memory is available; never fail."  In other cases it means
"perhaps wait a second or two if memory is unavailable, but fail if
memory is not available by then."

	I don't dispute that adding a memory allocation class argument
to dma_alloc_consistent would make it capable of being used in more
cases.  I just don't know if any of those cases actually come up, in
which case the Linux community is probably better off with the smaller
kernel footprint.


* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-01  0:02 Adam J. Richter
  0 siblings, 0 replies; 34+ messages in thread
From: Adam J. Richter @ 2003-01-01  0:02 UTC (permalink / raw)
  To: akpm; +Cc: david-b, James.Bottomley, linux-kernel

Andrew Morton wrote:
>The existing mempool code can be used to implement this, I believe.  The
>pool->alloc callback is passed an opaque void *, and it returns
>a void * which can point at any old composite caller-defined blob.

	That field is the same for all allocations (its value
is stored in the struct mempool), so you can't use it.

	David's example would work because it happens to store a copy
of the DMA address in the structure being allocated, and the mempool
code currently does not overwrite the contents of any chunks of memory
when it holds a freed chunk to give out later.

	We could generalize David's technique for other data
structures by using a wrapper to store the DMA address and
adopting the convention of having the DMA allocator initially
stuff the DMA address in the beginning of the chunk being
allocated, like so:


/* Set mempool.alloc to this */
void *dma_mempool_alloc_callback(int gfp_mask, void *mempool_data)
{
	struct dma_pool *dma_pool = mempool_data;
	dma_addr_t dma_addr;
	void *result = dma_pool_alloc(dma_pool, gfp_mask, &dma_addr);
	if (result)
		memcpy(result, &dma_addr, sizeof(dma_addr_t));
	return result;
}

/* Set mempool.free to this */
void dma_mempool_free_callback(void *element, void *mempool_data)
{
	struct dma_pool *dma_pool = mempool_data;
	dma_pool_free(dma_pool, element, *(dma_addr_t *) element);
}

void *dma_mempool_alloc(mempool_t *pool, int gfp_mask, dma_addr_t *dma_addr)
{
	void *result = mempool_alloc(pool, gfp_mask);
	if (result)
		memcpy(dma_addr, result, sizeof(dma_addr_t));
	return result;
}

void dma_mempool_free(void *element, mempool_t *pool, dma_addr_t dma_addr)
{
	/* We rely on mempool_free not trashing the first
	   sizeof(dma_addr_t) bytes of element if it is going to
	   give this element to another caller rather than freeing it.
	   Currently, the mempool code does not know the size of the
	   elements, so this is safe to do, but it would be nice if,
	   in the future, the mempool code could use some of the
	   remaining bytes to maintain its free list. */
	memcpy(element, &dma_addr, sizeof(dma_addr_t));
	mempool_free(element, pool);
}



	So, I guess there is no need to change the mempool interface,
at least if we guarantee that mempool is not going to overwrite any
freed memory chunks that it is holding in reserve, although this also
means that there will continue to be a small but unnecessary memory
overhead in the mempool allocator.  I guess that could be addressed
later.


* Re: [PATCH] generic device DMA (dma_pool update)
@ 2002-12-31 23:38 Adam J. Richter
  0 siblings, 0 replies; 34+ messages in thread
From: Adam J. Richter @ 2002-12-31 23:38 UTC (permalink / raw)
  To: akpm; +Cc: david-b, James.Bottomley, linux-kernel

Andrew Morton wrote:
>"Adam J. Richter" wrote:
>> 
>> David Brownell wrote:
>> 
>> >struct dma_pool *dma_pool_create(char *, struct device *, size_t)
>> >void dma_pool_destroy (struct dma_pool *pool)
>> >void *dma_pool_alloc(struct dma_pool *, int mem_flags, dma_addr_t *)
>> >void dma_pool_free(struct dma_pool *, void *, dma_addr_t)
>> 
>>         I would like to be able to have failure-free, deadlock-free
>> blocking memory allocation, such as we have with the non-DMA mempool
>> library so that we can guarantee that drivers that have been
>> successfully initialized will continue to work regardless of memory
>> pressure, and reduce error branches that drivers have to deal with.
>> 
>>         Such a facility could be layered on top of your interface
>> perhaps by extending the mempool code to pass an extra parameter
>> around.  If so, then you should think about arranging your interface
>> so that it could be driven with as little glue as possible by mempool.
>> 

>What is that parameter?  The size, I assume.

	No, dma address.  All allocations in a memory pool (in both
the mempool and pci_pool sense) are the same size.


* Re: [PATCH] generic device DMA (dma_pool update)
@ 2002-12-31 22:02 Adam J. Richter
  2002-12-31 22:41 ` Andrew Morton
  2002-12-31 23:35 ` David Brownell
  0 siblings, 2 replies; 34+ messages in thread
From: Adam J. Richter @ 2002-12-31 22:02 UTC (permalink / raw)
  To: david-b, James.Bottomley, linux-kernel

David Brownell wrote:

>struct dma_pool *dma_pool_create(char *, struct device *, size_t)
>void dma_pool_destroy (struct dma_pool *pool)
>void *dma_pool_alloc(struct dma_pool *, int mem_flags, dma_addr_t *)
>void dma_pool_free(struct dma_pool *, void *, dma_addr_t)

	I would like to be able to have failure-free, deadlock-free
blocking memory allocation, such as we have with the non-DMA mempool
library so that we can guarantee that drivers that have been
successfully initialized will continue to work regardless of memory
pressure, and reduce error branches that drivers have to deal with.

	Such a facility could be layered on top of your interface
perhaps by extending the mempool code to pass an extra parameter
around.  If so, then you should think about arranging your interface
so that it could be driven with as little glue as possible by mempool.

	I think that the term "pool" is more descriptively used by
mempool and more misleadingly used by the pci_pool code, as there is
no guaranteed pool being reserved in the pci_pool code.  Alas, I don't
have a good alternative term to suggest at the moment.


* Re: [RFT][PATCH] generic device DMA implementation
@ 2002-12-27 21:40 James Bottomley
  2002-12-28  1:56 ` David Brownell
  0 siblings, 1 reply; 34+ messages in thread
From: James Bottomley @ 2002-12-27 21:40 UTC (permalink / raw)
  To: David Brownell; +Cc: James Bottomley, linux-kernel

david-b@pacbell.net said:
> - Implementation-wise, I'm rather surprised that the generic
>    version doesn't just add new bus driver methods rather than
>    still insisting everything be PCI underneath. 

You mean dma-mapping.h in asm-generic?  The reason for that is to provide an 
implementation that functions now for non x86 (and non-parisc) archs without 
having to write specific code for them all.  Since all the other arch's now 
function in terms of the pci_ API, that was the only way of sliding the dma_ 
API in without breaking them all.

Bus driver methods have been advocated before, but it's not clear to me that 
they should be exposed in the *generic* API.

>    It's not clear to me how I'd make, for example, a USB device
>    or interface work with dma_map_sg() ... those "generic" calls
>    are going to fail (except on x86, where all memory is DMA-able)
>    since USB != PCI.

Actually, they should work on parisc out of the box as well because of the way 
its DMA implementation is built in terms of the generic dma_ API.

As far as implementing this generically, just adding a case for the 
usb_bus_type in asm-generic/dma-mapping.h will probably get you where you need 
to be. (the asm-generic is, after all, only intended as a stopgap.  Fully 
coherent platforms with no IOMMUs will probably take the x86 route to 
implementing the dma_ API, platforms with IOMMUs will probably (eventually) do 
similar things to parisc).


>    (The second indirection:  the usb controller hardware does the
>    mapping, not the device or hcd.  That's usually PCI.) 

Could you clarify this a little.  I tend to think of "mapping" as something 
done by the IO MMU managing the bus.  I think you mean that the usb controller 
will mark a region of memory to be accessed by the device.  If such a region 
were also "mapped" by an IOMMU, it would be done outside the control of the 
USB controller, correct? (the IOMMU would translate between the address the 
processor sees and the address the USB controller thinks it's responding to)

Is the problem actually that the USB controller needs to be able to allocate 
coherent memory in a range much more narrowly defined than the current 
dma_mask allows?

> - There's no analogue to pci_pool, and there's nothing like
>    "kmalloc" (likely built from N dma-coherent pools). 

I didn't want to build another memory pool re-implementation.  The mempool API 
seems to me to be flexible enough for this, is there some reason it won't work?

I did consider wrappering mempool to make it easier, but I couldn't really 
find a simplifying wrapper that wouldn't lose flexibility.

James




Thread overview: 34+ messages (newest: 2003-01-03  6:44 UTC)

2003-01-01 19:21 [PATCH] generic device DMA (dma_pool update) Adam J. Richter
2003-01-01 19:48 ` James Bottomley
2003-01-02  2:11   ` David Brownell
  -- strict thread matches above, loose matches on Subject: below --
2003-01-02 22:07 Adam J. Richter
2003-01-03  0:20 ` Russell King
2003-01-03  4:50 ` David Brownell
2003-01-03  6:11 ` David Brownell
2003-01-03  6:46 ` David Brownell
2003-01-03  6:52   ` William Lee Irwin III
2003-01-02 17:04 Adam J. Richter
2003-01-02  4:13 Adam J. Richter
2003-01-02 16:41 ` James Bottomley
2003-01-02 18:26 ` David Brownell
2003-01-01  0:02 Adam J. Richter
2002-12-31 23:38 Adam J. Richter
2002-12-31 22:02 Adam J. Richter
2002-12-31 22:41 ` Andrew Morton
2002-12-31 23:23   ` David Brownell
2002-12-31 23:27     ` Andrew Morton
2002-12-31 23:44       ` David Brownell
2002-12-31 23:47     ` James Bottomley
2003-01-01 17:10   ` James Bottomley
2002-12-31 23:35 ` David Brownell
2002-12-27 21:40 [RFT][PATCH] generic device DMA implementation James Bottomley
2002-12-28  1:56 ` David Brownell
2002-12-30 23:11   ` [PATCH] generic device DMA (dma_pool update) David Brownell
2002-12-31 15:00     ` James Bottomley
2002-12-31 17:04       ` David Brownell
2002-12-31 17:23         ` James Bottomley
2002-12-31 18:11           ` David Brownell
2002-12-31 18:44             ` James Bottomley
2002-12-31 19:29               ` David Brownell
2002-12-31 19:50                 ` James Bottomley
2002-12-31 21:17                   ` David Brownell
2002-12-31 16:36     ` James Bottomley
2002-12-31 17:32       ` David Brownell
